Friday 30 June 2017

calculus - Finishing proof of identity $\sum_{k=b}^{n} \binom{n}{k} \binom{k}{b} = 2^{n-b} \binom{n}{b}$



The identity




$$
\sum_{k=b}^{n} \binom{n}{k} \binom{k}{b} = 2^{n-b} \binom{n}{b}
$$



is one of a few combinatorial identities I have been trying to prove, and it has taken me way too long. I am using the principles most familiar to me (algebra and some basic combinatorial identities, but not differentiation or proof by bijection).



First I tried to see whether finding an identity for $\sum\limits_{k=0}^n \binom{n}{k}$ leads anywhere.



$$\begin{align}

&\sum_{k=0}^{n} \binom{n}{k} = \sum_{0 \le k \lt b} \binom{n}{k} + \sum_{b \le k \le n} \binom{n}{k} \tag{1} \\
\\
\end{align}$$



But it didn't for me, so I started over and next tried



$$\begin{align}
&\sum_{k=b}^{n} \binom{n}{k} \binom{k}{b} = \sum_{k=b}^{n} \left( \frac{n!}{k! (n-k)! } \right) \left( \frac{k!}{b!\,(k-b)!} \right) \tag{2} \\
\\
\end{align}$$




but this also fell short of a proof.



It is really hard for me to step away from the problem. I was just hoping for a really big hint on how to proceed.


Answer



Try thinking of this in terms of what it's counting, instead of algebraically. If we view this as counting committees, we're picking some big committee (of size at least b) of our n employees, and then choosing a subcommittee of that with b employees. However, what if we try picking the b employees first, and then choosing some extra people for the bigger committee?
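
If you want a quick numerical sanity check of the identity before completing the counting argument, here is a minimal sketch (assuming Python 3.8+ for `math.comb`):

```python
# Numerically check sum_{k=b}^{n} C(n,k) C(k,b) == 2^(n-b) C(n,b).
from math import comb

def lhs(n, b):
    # choose a committee of size k >= b, then a subcommittee of size b
    return sum(comb(n, k) * comb(k, b) for k in range(b, n + 1))

def rhs(n, b):
    # choose the b subcommittee members first, then any subset of the rest
    return 2 ** (n - b) * comb(n, b)

assert all(lhs(n, b) == rhs(n, b) for n in range(15) for b in range(n + 1))
print("identity verified for all 0 <= b <= n < 15")
```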


Elegant proofs that similar matrices have the same characteristic polynomial?



It's a simple exercise to show that two similar matrices have the same eigenvalues and eigenvectors (my favorite way is noting that they represent the same linear transformation in different bases).




However, to show that two matrices have the same characteristic polynomial it does not suffice to show that they have the same eigenvalues and eigenvectors - one needs to say something smart about the algebraic multiplicities of the eigenvalues. Moreover, we might be working over a field which is not algebraically closed and hence simply "don't have" all the eigenvalues. This can be overcome, of course, by working in the algebraic closure of the field, but it complicates the explanation.



I'm looking for a proof that is as simple and self-contained as possible (the goal is writing an expository article about the subject, so clarity is the most important thing, not efficiency).


Answer



If you define the characteristic polynomial of a matrix $A$ to be $\det(xI - A)$, then for $M$ invertible we have:



$\det(xI - M^{-1} A M)$



$= \det(M^{-1} xI M - M^{-1} A M)$




$= \det(M^{-1} (xI-A) M)$



$= \det (M^{-1}) \det(xI-A) \det(M)$



$=\det(xI - A)$
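
As a sanity check of this argument (not part of the original answer), one can verify on a concrete example that the two characteristic polynomials coincide. A sketch assuming SymPy, with an arbitrary matrix pair:

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[1, 2, 0], [0, 3, 1], [1, 0, 1]])
M = sp.Matrix([[1, 1, 0], [0, 1, 0], [2, 0, 1]])  # invertible: det(M) = 1

B = M.inv() * A * M                       # a matrix similar to A
pA = (x * sp.eye(3) - A).det().expand()   # det(xI - A)
pB = (x * sp.eye(3) - B).det().expand()   # det(xI - M^{-1} A M)
assert sp.simplify(pA - pB) == 0
print(pA)  # the common characteristic polynomial
```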


elementary set theory - Fixed points and cardinal exponentiation



Let the function $F: On \rightarrow On$ be defined by the following recursion:



$F(0) = \aleph_0$



$F(\alpha+1) = 2^{F(\alpha)}$ (cardinal exponentiation)



$F(\lambda) = \sup\{F(\alpha):\alpha \in \lambda\}$ for $\lambda$ a limit ordinal




Prove that there is a fixed point for $F$, i.e. an ordinal $\kappa$ with $F(\kappa) = \kappa$.



Are such fixed points always cardinals?



Thoughts:
So I can see that such a fixed point is going to have to be a limit ordinal, since the function is strictly increasing at successor ordinals.



$F(\lambda) = \sup\{\aleph_{\alpha}: \alpha \in \lambda\}$




I feel as if $\aleph_{\omega}$ might be a fixed point and suspect that any fixed points have to be cardinals, but I don’t have a justification for either.



I’m not sure how to go about proving a fixed point exists and whether it has to always be a cardinal.


Answer



What you have defined is in fact the $\beth$ function.



It is a continuous function, namely, it is increasing and the limit is evaluated as the limit of previous results. Therefore there is a proper class of fixed points. The fact that cardinal exponentiation returns a cardinal, and the supremum of cardinals is a cardinal ensures that such fixed points will necessarily be cardinals.



$\aleph_\omega$ is never a fixed point, though. Even if $\aleph_\omega$ is a strong limit (namely, $2^{\aleph_n}<\aleph_\omega$ for all $n<\omega$), it is not the case that $\aleph_\omega=F(\aleph_\omega)$. The first fixed point, with (and certainly without) assuming some relatively tame continuum function, is going to be unfathomably larger than $\aleph_\omega$. How large is it going to be? Well, just think about it. It will be an ordinal which satisfies $\alpha=F(\alpha)$. It will have $\alpha$ cardinals which are strictly smaller than itself; $\aleph_\omega$ has only $\omega$ (well, $\omega+\omega$ if you count the finite cardinals).




There is no way to describe it "nicely" from below in any reasonable way.


calculus - Solve this limit without using L'Hôpital's rule



I am given the following limit to solve, presumably without knowledge of L'Hôpital's rule:




$$\lim_{x\to0}\left({\frac{x}{1-\cos x}}\right)^2$$



I tried using trigonometric identities (namely Pythagorean) to solve it, but with no luck.


Answer



$\displaystyle \lim_{x\to0}\left({\frac{x}{1-\cos x}}\right)^2=$



$\displaystyle = \lim_{x\to0}\left({\frac{x}{2\sin^2 \frac x2}}\right)^2=$



$\displaystyle = \lim_{x\to0}\left({\frac{ \frac{x}{2}}{\sin^2 \frac x2}}\right)^2=$




$\displaystyle = \lim_{x\to0}\left({\frac{ \frac{x}{2}}{\sin \frac x2}} \cdot \frac{1}{\sin \frac x2}\right)^2=$



$\displaystyle =\left(1\cdot \frac{1}{0} \right)^2=$



$=\infty$


Thursday 29 June 2017

algebra precalculus - How do I show that $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$



According to wolfram alpha this is true: $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$



But how do you show this? I know of no rules that work with addition inside square roots.



I noticed I could do this:



$\sqrt{24} = 2\sqrt{3}\sqrt{2}$




But I still don't see how I should show this since $\sqrt{5+2\sqrt{3}\sqrt{2}} = \sqrt{3}+\sqrt{2}$ still contains that addition


Answer



Hint: Since they are both positive numbers, they are equal if, and only if, their squares are equal.
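
Spelling the hint out: both sides are positive, and squaring the right-hand side gives

$$\left(\sqrt{3}+\sqrt{2}\right)^2 = 3 + 2\sqrt{3}\sqrt{2} + 2 = 5 + 2\sqrt{6} = 5 + \sqrt{24},$$

which is exactly what squaring the left-hand side produces, so the two sides are equal.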


notation for first and second derivatives of a power series



I have a power series
$$\sum_{k=0}^\infty\frac{c_k}{k!}x^k$$
where $c_k$ is an arbitrary $k$-th term of some sequence. Then

$$\frac{d^2}{dx^2}\left[\sum_{k=0}^\infty\frac{c_k}{k!}x^k\right]=\frac{d}{dx}\left[\sum_{k=0}^\infty\frac{k\cdot c_k}{k!}x^{k-1}\right]=\sum_{k=0}^\infty\frac{k(k-1)\cdot c_k}{k!}x^{k-2}$$
Now, is it okay to leave this like this? I know that the first two terms are $0$, which removes the $1/x^2 $ term when $k=0$ and $1/x$ term when $k=1$. I figure this is no problem, but perhaps I am wrong. Should I re-index or can I leave it like this?



EDIT: One of the reasons I'm posting this is I have an expression with a polynomial in front of the power series. For example,
$$(x-x^2)\sum_{k=0}^\infty\frac{k\cdot c_k}{k!}x^{k-1}$$
In this example, I have not re-indexed as has been mentioned in the previous posts and comments. Now, since I haven't re-indexed, and if I distribute I get
$$\sum_{k=0}^\infty\frac{k\cdot c_k}{k!}x^{k}+\sum_{k=0}^\infty\frac{k\cdot c_k}{k!}x^{k+1}$$
Now, given that there is a polynomial coefficient in front of the derivative of a power series, can I NOT re-index and distribute first?


Answer



We should keep in mind the following:





If there is a valid expression then




  • re-indexing by adding one or more zeros or skipping one or more zeros does not alter the value of the expression


  • index shifting does not alter the value of the expression






Therefore we conclude:





  • Re-indexing and index shifting of an expression are only referring to presentational aspects and so they are always only a matter of convenience.




So whether re-indexing or index shifting is convenient is entirely a matter of your own judgment. But as you already know, a proper presentation is often beneficial, since it can be used to






  • simplify the expression


  • clarify aspects you want to indicate


  • make further transformations of the expression easier (or possible :-))


  • etc.





So, let's have a look at your first question:





Is it ok to leave the power series like this?



\begin{align*}
\sum_{k=0}^\infty\frac{k(k-1)c_k}{k!}x^{k-2} \tag{1}
\end{align*}



The answer is: Of course it's ok, since nothing is wrong with it.





BUT, presumably the presentation is not convenient because as you correctly noted, the first two summands are
\begin{align*}
\frac{0\cdot(-1)\cdot c_0}{0!}x^{-2}=0&\qquad\qquad(k=0)\\
\frac{1\cdot0\cdot c_1}{1!}x^{-1}=0&\qquad\qquad(k=1)\\
\end{align*}
Therefore we could simplify the expression (1) by re-indexing, index shifting and cancelling and also indicate thereby which summands are more interesting




\begin{align*}

\sum_{k=0}^\infty\frac{k(k-1)c_k}{k!}x^{k-2}&=\sum_{k=2}^\infty\frac{k(k-1)c_k}{k!}x^{k-2}\\
&=\sum_{k=2}^\infty\frac{c_k}{(k-2)!}x^{k-2}\\
&=\sum_{k=0}^\infty\frac{c_{k+2}}{k!}x^{k}\tag{2}\\
\end{align*}



Observe that the presentation (2) represents exactly the same value as (1), and so they are completely interchangeable.




Now let's go on with your other question regarding:





Can you distribute without re-indexing the following:
\begin{align*}
(x-x^2)\sum_{k=0}^\infty\frac{k\cdot c_k}{k!}x^{k-1}=
\sum_{k=0}^\infty\frac{k\cdot c_k}{k!}x^{k}-\sum_{k=0}^\infty\frac{k\cdot c_k}{k!}x^{k+1}
\end{align*}
The answer is again: Of course you can, since multiplication with $x$ and $x^2$ as well as applying the distributive law are valid transformations regarding this expression.



Whether you re-index or shift the index first or second, or not at all, simply doesn't matter, since this is only a presentational issue and is entirely a matter of convenience.









(Important) Note: While re-indexing and index-shifting is harmless, we should be careful when cancellation comes into play. Observe:
\begin{align*}
\sum_{k=0}^\infty\frac{k(k-1)c_k}{k!}x^{k-2} \neq \sum_{k=0}^\infty\frac{c_k}{(k-2)!}x^{k-2}\tag{3}\\
\sum_{k=2}^\infty\frac{k(k-1)c_k}{k!}x^{k-2} = \sum_{k=2}^\infty\frac{c_k}{(k-2)!}x^{k-2}\tag{4}
\end{align*}
Do you see the crucial difference? While cancellation in (4) causes no problems, since $\frac{k(k-1)}{k!}=\frac{1}{(k-2)!}$ is well defined for $k\geq 2$, the cancellation of the first two summands with $k=0$ and $k=1$ in (3) is not valid! The cancellation transforms these summands into





\begin{align*}
\frac{0\cdot(-1)\cdot c_0}{0!}x^{-2}=0&\qquad\longrightarrow\qquad\frac{c_0}{(-2)!}x^{-2}&\qquad(k=0)\\
\frac{1\cdot0\cdot c_1}{1!}x^{-1}=0&\qquad\longrightarrow\qquad\frac{c_1}{(-1)!}x^{-1}&\qquad(k=1)\\
\end{align*}




The crucial fact here is that cancellation removes the factor $0$, leaving an invalid expression. Using these invalid summands erroneously in further calculations may lead to strange results!









(Harmless) Note: You may expect that we always skip zeros in power series in order to simplify sums. This is not the case. Sometimes it's in fact convenient to add zeros to gain a better presentation. As an example you may have a look at my answer to this question; see number (3).



Wednesday 28 June 2017

integration - Evaluating $\int\frac{\cos^2x}{1+\tan x}\,dx$



I'd like to evaluate the following integral:



$$\int \frac{\cos^2 x}{1+\tan x}dx$$



I tried integration by substitution, but I was not able to proceed.


Answer




$\displaystyle \int \frac{\cos^2 x}{1+\tan x}dx = \int\frac{\cos^3 x}{\sin x+\cos x}dx = \frac{1}{2}\int\frac{(\cos^3 x+\sin ^3 x)+(\cos^3 x-\sin ^3 x)}{\cos x+\sin x}dx$



$\displaystyle = \frac{1}{4}\int (2-\sin 2x)dx+\frac{1}{4}\int\frac{(2+\sin 2x)(\cos x-\sin x)}{\cos x+\sin x}dx$



put $\cos x+\sin x= t$, so that $dt = (\cos x - \sin x)\,dx$ and $1+\sin 2x = t^2$, in the second integral


elementary set theory - Show that $\mathbb{R}$ cross $\mathbb{R}$ is equinumerous to $\mathbb{R}$

I need help showing that $\mathbb{R}$ cross $\mathbb{R}$ is equinumerous to $\mathbb{R}$.



I know that you need to show a bijection, however need help on that part.

cardinals - Cardinality of infinite sequences of $0$ and $1$ $\geq |\mathbb{R}|$

Think of all infinite sequences of $0$s and $1$s. Let the set be $S$. I want to prove that the cardinality $|S|$ is greater than or equal to $|\mathbb{R}|$. I think it is useful to use the fact that the set $T$ of reals in $(0,1)$ has the same cardinality as $\mathbb{R}$. If I can create an injective function from $S$ to $T$ then it would imply $|S|=|T|=|\mathbb{R}|$. I think of taking an infinite sequence in $S$, say $10011001\ldots$, and mapping it to the number in $T$ with that decimal expansion, so $0.10011001\ldots$. Then wouldn't that be an injection?

combinatorics - Finding Summation with Binomial Expansion



Having trouble figuring out this question. Any help would be appreciated! Thanks.



$\sum_{k=2}^n k(k-1)\binom{n}{k}$


Answer




First, we can substitute in the factorial form for the binomial coefficient and simplify to:



$$\sum_{k=2}^n \frac{n!}{(k-2)!(n-k)!}$$



If we then make the substitution $m = k-2$:



$$\sum_{m=0}^{n-2} \frac{n(n-1)(n-2)!}{m!((n-2)-m)!}$$



We can then bring out constant factors:




$$n(n-1)\sum_{m=0}^{n-2} \binom{n-2}{m}$$



Finally, we can note that the sum part is the expansion of $(1+1)^{n-2}$ (or from Pascal's Triangle), which means that the result is:



$$n(n-1)2^{n-2}$$



Note that these types of simplification problem often appear, and the strategy is frequently to manipulate them into a form that looks like a binomial expansion of some kind.
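
If you want to double-check the closed form numerically, here is a minimal sketch (assuming Python 3.8+):

```python
# Check sum_{k=2}^{n} k(k-1) C(n,k) == n(n-1) 2^(n-2).
from math import comb

def lhs(n):
    return sum(k * (k - 1) * comb(n, k) for k in range(2, n + 1))

def rhs(n):
    return n * (n - 1) * 2 ** (n - 2)

assert all(lhs(n) == rhs(n) for n in range(2, 20))
print("matches for 2 <= n < 20")
```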


induction - Prove that for all positive integers $n$, $(1-\sqrt 5)^n$ can be written in the form $a-b\sqrt 5$ where $a$ and $b$ are positive integers



Prove, by induction, that for all positive integers $n$, $(1-{\sqrt 5})^n$ can be written in the form $a-b{\sqrt 5}$ where $a$ and $b$ are positive integers.



I understand the idea of proof by induction, but this was a different type of question than I'm used to and I wasn't sure how to approach it, as I'm not entirely confident with proving things by induction yet.



Any help would be appreciated.


Answer




1. Base step: for $n=1$, $\left( 1-\sqrt{5} \right)^{1}=a-b\sqrt{5}$ with $a=1$ and $b=1$.
2. Inductive step: assume that $\left( 1-\sqrt{5} \right)^{n}=a-b\sqrt{5}$ for positive integers $a,b$, and consider $\left( 1-\sqrt{5} \right)^{n+1}$:



$\begin{align}
& {{\left( 1-\sqrt{5} \right)}^{n+1}}=\left( 1-\sqrt{5} \right){{\left( 1-\sqrt{5} \right)}^{n}} \\
& \quad \quad \quad \quad \ \ =\left( 1-\sqrt{5} \right)\left( a-\sqrt{5}b \right) \\
& \quad \quad \quad \quad \ \ =a-a\sqrt{5}+5b-b\sqrt{5} \\
& \quad \quad \quad \quad \ \ =\left( a+5b \right)-\left( a+b \right)\sqrt{5} \\
\end{align}$




So the inductive case holds, and $a+5b$ and $a+b$ are again positive integers. By induction, the statement is true for all positive integers $n$.
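
A small numerical trace of the induction, as a sketch (assuming Python 3; the pair update $(a,b)\mapsto(a+5b,\,a+b)$ is read off from the last line of the computation above):

```python
from math import isclose, sqrt

a, b = 1, 1  # base case: (1 - sqrt(5))^1 = 1 - 1*sqrt(5)
for n in range(1, 10):
    assert isclose(a - b * sqrt(5), (1 - sqrt(5)) ** n)
    a, b = a + 5 * b, a + b  # inductive step: coefficients for n + 1
print("checked n = 1..9; a and b remain positive integers throughout")
```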


analysis - Prove that $f'$ exists for all $x$ in $R$ if $f(x+y)=f(x)f(y)$ and $f'(0)$ exists

A function $f$ is defined on $\mathbb R$, and $f'(0)$ exists.
Let $f(x+y)=f(x)f(y)$; then prove that $f'$ exists for all $x$ in $\mathbb R$.




I think I have to use two facts:
$f'(0)$ exists;
$f(x+y)=f(x)f(y)$.
How can I combine these two things to prove that statement?

Tuesday 27 June 2017

calculus - Imaginary part of $f_m$ tends to zero

Does anybody have an idea how to show that, for $|x|< \pi$, the imaginary part of the following sequence of functions $f_m$ tends to zero as $m \rightarrow \infty$?




$$f_m(x):=\left( \frac{e^{-ix}-1}{-ix} \right)^m \left( \sum_{l \in \mathbb{Z}} \frac{\left|e^{-ix}-1 \right|^{2m}}{|x+ 2 \pi l |^{2m}} \right)^{-\frac{1}{2}}$$



So I want to show that $\operatorname{Im}(f_m)|_{(-\pi,\pi)} \rightarrow 0.$ Unfortunately, I don't get anywhere because I don't know how to estimate the imaginary part of this product there.



The question is part of a longer proof in analysis.

abstract algebra - If $n\mid m$ prove that the canonical surjection $\pi: \mathbb Z_m \rightarrow \mathbb Z_n$ is also surjective on units



Not sure if this is the right proof (i found it online):



Since $n\mid m$, if we factor $m = p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$, then $n = p_1^{\beta_1}p_2^{\beta_2}\cdots p_k^{\beta_k} $ with $\beta_i \leq \alpha_i$. On the other hand, by the Chinese Remainder Theorem: $$(\mathbb Z_m)^\times \cong (\mathbb Z_{p_1^{\alpha_1}})^\times \times (\mathbb Z_{p_2^{\alpha_2}})^\times \times\dots \times (\mathbb Z_{p_k^{\alpha_k}})^ \times $$
$$(\mathbb Z_n)^\times \cong (\mathbb Z_{p_1^{\beta_1}})^\times \times (\mathbb Z_{p_2^{\beta_2}})^\times \times \dots \times(\mathbb Z_{p_k^{\beta_k}})^\times$$




Now, if we define:
$$\pi: \mathbb Z_m \longrightarrow \mathbb Z_n,\,\,\, a+(m) \mapsto a+(n)$$



then: $$\pi: \mathbb Z_{p_1^{\alpha_1}} \times \mathbb Z_{p_2^{\alpha_2}} \times \dots \times\mathbb Z_{p_k^{\alpha_k}} \longrightarrow \mathbb Z_{p_1^{\beta_1}} \times \mathbb Z_{p_2^{\beta_2}} \times \dots \times\mathbb Z_{p_k^{\beta_k}}$$
is given by: $(a+(p_1^{\alpha_1}),a+(p_2^{\alpha_2}), \dots, a+(p_k^{\alpha_k})) \mapsto (a+(p_1^{\beta_1}),a+(p_2^{\beta_2}), \dots, a+(p_k^{\beta_k}))$



It suffices to show that the statement holds for $n =p^{\beta}$ and $m = p^{\alpha}$ with $\beta \leq \alpha.$ First, we notice that $(a,p^{\alpha})=1 \Leftrightarrow (a, p^{\beta}) =1$, both means that $p \nmid a$. Now, the projection: $$\pi: \mathbb Z/p^{\alpha}\mathbb Z \longrightarrow \mathbb Z/p^{\beta}\mathbb Z$$ maps $(\mathbb Z/p^{\alpha}\mathbb Z)^ \times$ to $(\mathbb Z/p^{\beta}\mathbb Z)^\times$, but $a+(p^{\alpha}) \in (\mathbb Z/p^{\alpha}\mathbb Z)^ \times$ iff $(a,p^{\alpha})=1 \Rightarrow (a, p^{\beta}) =1 $, that is $a+(p^{\beta}) \in (\mathbb Z/p^{\beta}\mathbb Z)^ \times$.



Now it is onto since if $\pi(a+(p^{\alpha})) = a+(p^{\beta}) \in (\mathbb Z/p^{\beta}\mathbb Z)^ \times$, then $(a, p^{\beta}) =1$ and this implies that $(a,p^{\alpha})=1$, so $a+(p^{\alpha}) \in (\mathbb Z/p^{\alpha}\mathbb Z)^ \times$




Then the natural surjective ring projection is also surjective on the units.


Answer



This has been asked on mathoverflow and received a couple of answers:



MO/32875: Lifting units from modulus n to modulus mn.



MO/31495: When does a ring surjection imply a surjection of the group of units?



Your proof already has the right idea: Using the Chinese Remainder Theorem, one easily reduces to the case of powers of a prime $p$. Now if $n \leq m$, then $\mathbb{Z}/p^m \twoheadrightarrow \mathbb{Z}/p^n$ is clearly surjective on unit groups since $z \bmod p^n$ is a unit iff $p \nmid z$.
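
A brute-force illustration of the surjectivity on units for one small pair $n \mid m$ (a sketch, assuming Python 3; the choice $m=72$, $n=12$ is arbitrary):

```python
from math import gcd

def units(m):
    # residues coprime to m, i.e. the unit group of Z/mZ
    return {a for a in range(m) if gcd(a, m) == 1}

m, n = 72, 12  # n divides m
image = {u % n for u in units(m)}
assert image == units(n)
print(sorted(image))  # every unit mod 12 lifts to a unit mod 72
```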


linear algebra - Eigenvalues of a general block hermitian matrix

I have a question regarding the eigenvalues of a block Hermitian matrix as a function of the eigenvalues of the diagonal block matrices and the off-diagonal matrices. In particular, I am interested in the 2x2 block case.



I have checked some previous posts [1]: Eigenvalues of certain block hermitian matrix and in Wikipedia, and it is clear to me that the solution for the case $M_1=\left(\begin{array}{} A & B\\ B &A \end{array}\right)$, where $M_1$, $A$ and $B$ are Hermitian, can be derived.



Nevertheless, I would like to know if it is possible, in the following case: $M_2=\left(\begin{array}{} A & B\\ B^{H} &C \end{array}\right)$ where $M_2$, $A$ and $C$ are Hermitian and $B$ corresponds to the off-diagonal block, to say something about the eigenvalues of $M_2$ as a function of the eigenvalues of $A$ and $C$ and the matrix $B$.



Best regards.

sequences and series - Prove that $\sum_{n=1}^\infty{\cos(nx) \over n^2}$ converges for all real $x$.




Prove that $$\sum_{n=1}^\infty{\cos(nx) \over n^2}$$ converges for all real $x$.




We know that for all $t\in\mathbb R$ that $|\cos(t)| \le 1$. Does it suffice to show that $$\sum_{k=1}^n\left|{1\over k^2}\cos(kx)\right| = \sum_{k=1}^n{1\over k^2}|\cos(kx)| \le \sum_{k=1}^n{1\over k^2}(1) \le \sum_{k=1}^\infty{1\over k^2} < \infty?$$



If we can show absolute convergence, then this will imply convergence of the original infinite series. However, does this implication still hold for all $x \in \mathbb R$? My intuition says yes, but we know that for all $k\in\mathbb N$ $$\sum_{n=1}^k\cos(nx) = {\sin\left[\left(k+{1\over2}\right)x\right] - \sin({x\over2})\over2\sin({x\over2})},$$ which is bounded where $x$ is not an integer multiple of $2\pi$. I understand the sequence of partial sums could be different between different series, but different enough that an unbounded sequence for some values of $x$ becomes bounded for all $x$?



Answer



$$(\forall n>0)\;\;(\forall x\in \mathbb R \; ) \;|\frac{\cos(nx)}{n^2}|\leq \frac{1}{n^2}$$



$\implies$



$$\sum \frac{\cos(nx)}{n^2}$$



normally convergent on $\mathbb R$ since $\sum \frac{1}{n^2} \;$ is absolutely convergent.



$\implies$




it converges uniformly on $\mathbb R$.



$\implies$



$$(\forall x\in \mathbb R )\; \sum_{n=1}^{+\infty}\frac{\cos(nx)}{n^2} \in \mathbb R$$


induction - Prove or disprove that every Boolean function can be expressed by using only the operator ↓

I know that the ↓ operator means "nor", but how do I prove or disprove that every Boolean function can be expressed using only this operator? Induction? Contradiction? I have no idea where to begin. Help would be much appreciated.

binomial coefficients - What is the sum of the second numbers in the first $100$ rows of Pascal's triangle (excluding the first row)?


What is the sum of the second numbers in the first $100$ rows of Pascal's triangle (excluding the first row, the row containing a single $1$)? The sum should be from the second to the hundredth row.





Pascal's triangle



Starting from the second row, I initially thought this meant you take the second number from the left in each row. So it would be $$1+2+3+4+\cdots+99$$ This means I get $4950$. I thought this would be too simple a solution.



Could someone tell me if the addition I did above is all the question is asking from me?

abstract algebra - Constructing a multiplication table for a finite field



Let $f(x)=x^3+x+1\in\mathbb{Z}_2[x]$ and let $F=\mathbb{Z}_2(\alpha)$, where $\alpha$ is a root of $f(x)$. Show that $F$ is a field and construct a multiplication table for $F$.




Can you please help me approach this problem? I've tried searching around, but I don't really know what I'm looking for!



Thanks.

linear algebra - Eigenvalues of symmetric matrix

Well for a symmetric matrix ($A^T = A$), is there an easy algorithm to get the eigenvalues by hand?



Especially for moderately sized matrices like $6\times 6$ (where calculating/solving the determinant becomes infeasible).

Polynomial with one rational root or one imaginary root

In my textbook there is an example where we have to find all the roots of $2x^3-5x^2+4x-1$. After applying the Rational Root Theorem we can conclude that $1$ and $1/2$ are two solutions to this equation. Now we have to find the third root.



It says that we can exclude irrational or imaginary numbers as the third root, since a polynomial cannot have just one irrational or one imaginary root.



But why is it so?




(It turns out that $x = 1$ is a double root.)

Monday 26 June 2017

linear algebra - Finding Null Space Basis over a Finite Field

I have more of a systems background, but I have a math-y type question so I figured I'd give it a shot here... This is more of an implementation question; I don't need to prove anything at the moment.



I'm trying to find the basis of the null space over $\mathbb{F}_2^N$ for a given basis quickly and I was hoping someone here would know how.



For example if I have the following basis in $\mathbb{F}_2^{16}$:



$$

\left(
\begin{array}{cccccccccccccccc}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\
\end{array}
\right)
$$




How would I find the null space basis for this matrix?



If I put my basis into reduced row echelon form, I could find it easily, but for my particular problem I cannot do that. I know there are exhaustive search methods, but the matrices I'm dealing with can be quite large, which make those impractical.



BEGIN EDIT



@Neil de Beaudrap, It has to do with the fact that I am actually splitting up the vector space and using part of it for another purpose. If I change this matrix with elementary row operations and put it into reduced-row-echelon form it messes things up....



I am unfamiliar with column operations, could you explain in a bit more detail what you are talking about? Thanks!




END EDIT

calculus - Must a Function be Bounded for the Antiderivative to Exist over a Given Interval?




In my Calculus class, we were given the following definition of antiderivative:




Let $f$ be a function defined on an interval.



An antiderivative of $f$ is any function $F$ such that $F' = f$.



The collection of all antiderivatives of $f$ is denoted $\displaystyle \int f(x) dx$.





My question is, don't we have to say that $f$ should be bounded in the definition?



If not, then $f$ is not integrable by definition, so we can't say anything about $\displaystyle \int f(x) dx$, right?



I'm not sure if I'm thinking about this the right way.


Answer



The existence of an antiderivative and being integrable are distinct (although related) concepts.



Take $$f\colon[0,1]\longrightarrow\mathbb R,\qquad f(x)=\begin{cases}x^2\sin\left(\frac1{x^2}\right)&\text{ if }x>0\\0&\text{ if }x=0.\end{cases}$$ Then $f$ is differentiable, but $f'$ is unbounded. But, in particular, $f$ is an antiderivative of $f'$.
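
To see the unboundedness numerically: for $x>0$ the derivative is $f'(x)=2x\sin(1/x^2)-\tfrac{2}{x}\cos(1/x^2)$, and the second term blows up near $0$. A quick sketch (assuming NumPy):

```python
import numpy as np

def fprime(x):
    # derivative of x^2 sin(1/x^2) for x > 0
    return 2 * x * np.sin(1 / x**2) - (2 / x) * np.cos(1 / x**2)

for x in (1e-1, 1e-2, 1e-3):
    print(x, fprime(x))  # the 2/x term typically dominates as x -> 0
```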


set theory - What is $\aleph_0$ powered to $\aleph_0$?




By definition $\aleph_1 = 2 ^{\aleph_0}$. And since $2 < \aleph_0$, then $2^{\aleph_0} = {\aleph_1} \le \aleph_0 ^ {\aleph_0}$. However, I do not know what exactly $\aleph_0 ^ {\aleph_0}$ is or how I could compute it.


Answer



No. By definition $\aleph_1$ is the least uncountable $\aleph$ number. $2^{\aleph_0}$ can be quite a large $\aleph$, or it could be $\aleph_1$. For example, many forcing axioms (e.g. the proper forcing axiom) prove that $2^{\aleph_0}=\aleph_2$.



The assertion $2^{\aleph_0}=\aleph_1$ is known as The Continuum Hypothesis and was proven unprovable from the usual axioms of set theory. We can therefore add axioms which decide the continuum hypothesis, e.g. itself or the aforementioned forcing axiom.



On the other hand:



$$2^{\aleph_0}\leq\aleph_0^{\aleph_0}\leq (2^{\aleph_0})^{\aleph_0}= 2^{\aleph_0\cdot\aleph_0}=2^{\aleph_0}$$







To read more:



Here are some links to answers discussing the cardinality of the continuum:




  1. Can the real numbers be forced to have arbitrary cardinality?

  2. bound on the cardinality of the continuum? I hope not

  3. Implications of continuum hypothesis and consistency of ZFC


  4. How do we know an $ \aleph_1 $ exists at all?


calculus - Evaluate integral



How do I evaluate the following integral? The answer according to Wolfram Alpha is $2$, but I keep on getting $0$ after using integration by parts.$$\frac12\int_{-\infty}^\infty x^2e^{-|x|}\ dx$$


Answer



$$\frac12\int_{-\infty}^\infty x^2e^{-|x|}dx
= \frac12\int_{-\infty}^0x^2e^{x}dx +\frac12\int_{0}^\infty x^2e^{-x}dx
\\

= \int_{0}^\infty x^2e^{-x}dx = [x^2(-e^{-x})]_{0}^\infty + 2\int_{0}^\infty xe^{-x} dx
\\
=0+ 2[x(-e^{-x})]_{0}^\infty + 2\int_{0}^\infty e^{-x}dx = 0+0+2[(-e^{-x})]_{0}^\infty =2
$$
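
A numerical cross-check of the result (a sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

value, err = quad(lambda x: 0.5 * x**2 * np.exp(-abs(x)), -np.inf, np.inf)
print(value)  # ~2.0, matching the integration-by-parts computation
```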


Sunday 25 June 2017

Long form of an arithmetic sequence formula



I've been studying arithmetic sequences and am finding that I can do the formulas, but can't truly understand until I can do a long-form version of the formula.



Let's take the below:




$a_5 = 2+2(5-1)$



I can do $2+2+2+2+2$ to get to $10$, or, following the sequence step by step, I can do $2, 4, 6, 8, 10$. But how would I calculate these answers on something like a basic calculator?



Edit: In case unclear, I'm asking for the non-formulaic version to calculate what, given the above, would be written as $a_n = a_1 + d(n - 1)$.



Edit 2: Okay, say I give you any arithmetic sequence formula such as shown above, and I hand you a calculator as basic as the one shown in this image. How do you solve it? You don't just do $a_1 + a_2 + a_3 \ldots$ ad infinitum. How do you solve for it? All the more advanced calculators and services online provide none of the step-by-step process, so you get the final answer without understanding a thing, and there has to be a more effective way than just adding or subtracting to the nth term.



Basic image of calculator



Answer



So the formula $$a_n = a_1 + d(n-1)$$ is the explicit formula, or closed form. You don't need to calculate $a_4$ to calculate $a_5$. For example, if we have the same formula as in your post, $a_n = 2 + 2(n-1)$, then if you want to calculate $a_{10}$ you just plug $10$ into the equation and compute it, using a calculator if you like: $$a_{10}= 2 + 2(10-1) = 20$$
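
As a sketch of the difference between the two approaches (assuming Python; this mirrors what you would key into a basic calculator versus repeated addition):

```python
def a_closed(a1, d, n):
    return a1 + d * (n - 1)          # the closed form: one multiply, one add

def a_long(a1, d, n):
    term = a1
    for _ in range(n - 1):           # the "long form": add d, n-1 times
        term += d
    return term

print(a_closed(2, 2, 5), a_long(2, 2, 5))  # both print 10
```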


probability - show $\mathbb{E} \vert X \vert = \int_0^\infty \mathbb{P}(\{\vert X \vert>y\})\,dy \leq \sum_{n=0}^\infty\mathbb P \{\vert X \vert>n\}$.



Looking for a hint to show $\mathbb{E} \vert X \vert = \int_0^\infty \mathbb{P}(\{\vert X \vert>y\})\,dy \leq \sum_{n=0}^\infty\mathbb P \{\vert X \vert>n\} $.



This is from Theorem 2.3.7 in Durrett (Probability: Theory and examples)




The first equality makes sense by Fubini and the definition of expectation (Durrett Lemma 2.2.8). I'm having a hard time showing the second, though. My gut intuition makes me feel like it should be the other direction.


Answer



As you mention, the first equation is essentially Fubini, along with rewriting $\mathbb{P}\{\lvert X\rvert > y\}$ as an integral.



For the inequality on the right: observe that
$$\begin{align}
\int_0^\infty \mathbb{P}\{ \lvert X\rvert > y\} dy
&= \sum_{n=0}^\infty \int_n^{n+1} \mathbb{P}\{ \lvert X\rvert > y\} dy \leq
\sum_{n=0}^\infty \int_n^{n+1} \mathbb{P}\{ \lvert X\rvert > n\} dy \\
&= \sum_{n=0}^\infty \mathbb{P}\{ \lvert X\rvert > n\}

\end{align}$$
using the fact that for $y \geq x$, $\mathbb{P}\{ \lvert X\rvert > y\} \leq \mathbb{P}\{ \lvert X\rvert > x\}$ since $\{ \lvert X\rvert > y\} \subseteq \{ \lvert X\rvert > x\}$.
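
As a concrete illustration (not part of the proof), take $X\sim\mathrm{Exp}(1)$: then $\mathbb E\lvert X\rvert = 1$, and $\mathbb P\{\lvert X\rvert>n\}=e^{-n}$, so the bounding sum is the geometric series $\sum_{n\ge0}e^{-n}=1/(1-e^{-1})\approx1.582\ge1$. A one-line check (assuming Python 3):

```python
from math import exp
print(sum(exp(-n) for n in range(1000)), 1 / (1 - exp(-1)))  # both ~1.5820
```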


complex analysis - Finding contour integral $\int_\gamma \frac{\mathrm{Im} (z)}{z - \alpha}\, dz$

I'm trying to find the contour integral
$$

\int_\gamma \frac{\mathrm{Im} (z)}{z - \alpha} dz
$$

where $ \alpha $ is a complex number such that $ 0 < |\alpha| < 2 $ and $ \gamma $ is the circle of radius $3$ centred at the origin, oriented in the positive sense.



I can find that
$$
\int_\gamma \frac{\mathrm{Im} (z)}{z - \alpha} dz
=
\int_0^{2\pi} \frac{e^{it}-e^{-it}}{2i} \frac{1}{e^{it}-\alpha} i e^{it} \, dt
$$


but the denominator is making it difficult to find the value of the contour integral. How can I proceed in this?

D'Alembert functional equation

The D'Alembert functional equation is $f(x+y)+f(x-y)=2f(x)f(y)$.
Let $f:\mathbb{R}\rightarrow\mathbb{R}$ satisfy the functional equation for all $x,y\in\mathbb{R}$. It's well known that $f$ is of the form $f(x)=\frac{E(x)+E^∗(x)}{2}$, for some $E:\mathbb{R}\rightarrow\mathbb{C}$.
How can I use this functional equation to solve the following problem?




Let $\lambda$ be a nonzero real constant. Find all functions $f,g:\mathbb{R}\rightarrow\mathbb{R}$ that satisfy the functional equation $f(x+y)+g(x-y)=\lambda f(x)g(y)$
for all $x,y\in\mathbb{R}$.


general topology - bijective continuous function on $\mathbb R^n$ not homeomorphism?



Suppose we have a bijective continuous map $\mathbb{R}^n\to\mathbb{R}^n$ (relative to the standard topology). Must this map be a homeomorphism?



I have little doubt about this: I think it's true, and I've heard it is true, but I cannot prove it.


Answer



Every such map is open according to invariance of domain and is therefore a homeomorphism. However, invariance of domain is highly nontrivial to prove with elementary topological methods. The slickest way would be via algebraic topology.



statistics - Find cumulative distribution function of random variable



Let the random variable $Y$ be defined as $Y = aX$ where $a$ is some non-zero constant (may be a positive or negative real number) and $X$ is some known random variable. How would we find the cumulative distribution function of $Y$?



We can say...
$$F_Y(y) = \mathbb P(Y \le y)$$

$$ = \mathbb P(aX \le y)$$
$ $
$$\text{if } a > 0\text{, then}$$
$$ = \mathbb P(X \le y / a)$$
$ $
$$\text{if } a < 0 \text{, then}$$
$$ = \mathbb P(X \ge y / a) \approx 1 - \mathbb P(X \le y / a)$$
$ $
$$\text{So thus...}$$
$$\text{if } a > 0\text{, then}$$

$$F_Y(y) = F_X(y / a)$$
$ $
$$\text{if } a < 0 \text{, then}$$
$$F_Y(y) = 1 - F_X(y / a)$$



Is my formulation correct, and if so how can we make this disjointed function into a single line?



EDIT:
Could we say something like...
$$F_Y(y) = \bigl(0.5 - 0.5\,\operatorname{sgn}(a)\bigr) + \operatorname{sgn}(a)\, F_X(y / a)$$



Answer



Yes, that is correct if $X$ is a continuous random variable. In this case your $\approx$ sign is actually an $=$ sign. If $X$ is discrete, though, then the $\approx$ is not an $=$, and you should instead have $\mathbb{P}(X \geq y/a) = 1-\mathbb{P}(X \leq y/a) + \mathbb{P}(X=y/a)= 1-\lim_{x \to (y/a)^-}F_X(x)$.



I would not recommend trying to "make this disjointed function into a single line" the way you did. Your one-line formula really obscures what's going on in the two cases. If anything, I would write a "piece-wise" function:
$$F_Y(y) = \begin{cases}F_X(y/a) & a>0 \\ 1-F_X(y/a) & a<0 \end{cases}$$
(in the continuous case).
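
A sketch of this piecewise formula in code, with a Monte Carlo sanity check (assumptions: SciPy/NumPy available, and $X$ standard normal as an arbitrary example of a continuous random variable):

```python
import numpy as np
from scipy.stats import norm

def F_Y(y, a):
    # CDF of Y = aX for continuous X with CDF norm.cdf, a != 0
    return norm.cdf(y / a) if a > 0 else 1 - norm.cdf(y / a)

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
a, y = -2.0, 1.0
print(F_Y(y, a), np.mean(a * x <= y))  # ~0.6915 for both
```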


elementary number theory - Verifying that $2^{44}-1$ is divisible by $89$



As the title, I was asked to show that $2^{44}-1$ is divisible by $89$.




My first attempt is to change the expression into $(2^{22}-1)(2^{22}+1)$.



Then I further simplified it into $(2^{11}-1)(2^{11}+1)(2^{22}+1)$. I used my calculator and was able to show that $2^{11}-1$ is divisible by $89$, but I don't know how to show it with modular arithmetic. I do think it is quite similar to a form where we can use Fermat's little theorem: $(\sqrt{2})^{88}-1$. (Though I do understand Fermat's little theorem can only be applied to integers.)



Can someone tell me whether I can take square root in modular arithmetic as well? I am still pretty new to the modular arithmetic. Thank you very much.


Answer



Hint: By Fermat's Theorem, we have $2^{88}\equiv 1\pmod{89}$.



So $(2^{44}-1)(2^{44}+1)\equiv 0 \pmod{89}$.




If we can show that $2^{44}+1\not\equiv 0\pmod{89}$ we will be finished.



One way to do this is to use the fact that $2$ is a quadratic residue of $89$, since $89$ is of the shape $8k+1$.



Remark: Your direct computational approach is perfectly fine. However, it may be that you are expected to make use of "theory," as in the approach described above.
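
For what it's worth, the direct computation is a one-liner with fast modular exponentiation (Python's built-in three-argument pow):

```python
print(pow(2, 44, 89))    # 1, so 89 | 2^44 - 1
print(pow(2, 11, 89))    # also 1, consistent with 2^11 - 1 = 2047 = 23 * 89
print((2**44 - 1) % 89)  # 0
```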


integration - Evaluating $\int_{-\infty}^{\infty}\frac{e^{-\alpha x}}{(1+e^{-x})^{1+\lambda}}\,\mathrm{d}x$ for $\alpha,\lambda>0$





For $\alpha>0,\lambda>0$, I am trying to evaluate the integral



$$I(\alpha,\lambda)=\int_{\mathbb R}\frac{e^{-\alpha y}}{(1+e^{-y})^{1+\lambda}}\,\mathrm{d}y$$




Basically I am struggling to find a proper substitution here.



I can write




$$I(\alpha,\lambda)=\int_{\mathbb R}e^{(1-\alpha)y}\frac{e^{-y}}{(1+e^{-y})^{1+\lambda}}\,\mathrm{d}y$$



Changing variables $$z=\frac{1}{(1+e^{-y})^{(1+\lambda)/2}}$$



, so that $$\mathrm{d}z=\left(\frac{1+\lambda}{2}\right)(1+e^{-y})^{\frac{\lambda-1}{\lambda+1}}\frac{e^{-y}}{(1+e^{-y})^{1+\lambda}}\,\mathrm{d}y\tag{1}$$



Now,



\begin{align}
&\qquad\quad z(1+e^{-y})^{(1+\lambda)/2}=1

\\&\implies \ln z+\left(\frac{1+\lambda}{2}\right)\ln(1+e^{-y})=0
\\&\implies \ln(1+e^{-y})=-\frac{2}{1+\lambda}\ln z
\\&\implies e^{-y}=\exp\left(-\frac{2}{1+\lambda}\ln z\right)-1=z^{-2/(1+\lambda)}-1
\end{align}



From $(1)$, I get $$\frac{e^{-y}}{(1+e^{-y})^{1+\lambda}}\,\mathrm{d}y=\left(\frac{2}{1+\lambda}\right)z^{\frac{\lambda-1}{\lambda+1}}\,\mathrm{d}z$$



And



$$e^{(1-\alpha)y}=(e^{-y})^{-(1-\alpha)}=\left(z^{-2/(1+\lambda)}-1\right)^{\alpha-1}$$




So finally,



$$I(\alpha,\lambda)=\frac{2}{1+\lambda}\int_0^1 \left(z^{-2/(1+\lambda)}-1\right)^{\alpha-1}z^{\frac{\lambda-1}{\lambda+1}}\,\mathrm{d}z$$



I transformed the domain of integration to $(0,1)$ so that I can use the Beta function and hence get an answer in terms of Gamma functions. But looks like I have made an error somewhere.






I also tried writing $$\frac{1}{(1+e^{-y})^{1+\lambda}}=\sum_{j=0}^\infty \binom{-(1+\lambda)}{j}e^{-yj}$$




, so that



\begin{align}
I(\alpha,\lambda)&=\int_{\mathbb R} e^{-\alpha y}\sum_{j=0}^\infty \binom{-(1+\lambda)}{j}e^{-yj}\,\mathrm{d}y
\\&\stackrel{?}{=}\sum_{j=0}^\infty \binom{-(1+\lambda)}{j}\int_{\mathbb R}e^{-(\alpha+j)y}\,\mathrm{d}y
\end{align}



But this is not correct either.







According to Mathematica, I should get



$$I(\alpha,\lambda)=\frac{\Gamma(\alpha)\Gamma(1+\lambda-\alpha)}{\Gamma(1+\lambda)}\quad,\text{ if }\alpha<1+\lambda$$


Answer



You can use $x = \frac{1}{1+\mathrm{e}^{-y}} ~ \Leftrightarrow ~ \mathrm{e}^{-y} = \frac{1-x}{x}$ here. Then
$$ I(\alpha, \lambda) = \int \limits_0^1 x^{\lambda-\alpha} (1-x)^{\alpha-1} \, \mathrm{d} x = \operatorname{B} (1+\lambda-\alpha,\alpha) = \frac{\Gamma(\alpha) \Gamma(1+\lambda-\alpha)}{\Gamma(1+\lambda)} \, . $$
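
A numerical confirmation of this closed form for sample parameters (a sketch assuming SciPy; any $\alpha,\lambda>0$ with $\alpha<1+\lambda$ should work, and the integrand is written in log form to avoid overflow for very negative $y$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, lam = 1.3, 2.1  # arbitrary test values with alpha < 1 + lam

def integrand(y):
    # e^{-alpha y} / (1 + e^{-y})^{1+lam}, computed as exp(log ...)
    return np.exp(-alpha * y - (1 + lam) * np.logaddexp(0, -y))

I, _ = quad(integrand, -np.inf, np.inf)
closed = gamma(alpha) * gamma(1 + lam - alpha) / gamma(1 + lam)
print(I, closed)  # agree to quadrature accuracy
```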


calculus - Find the limit of the sequence $\left( \sqrt {2n^{2}+n}-\sqrt {2n^{2}+2n}\right) _{n\in \mathbb{N}}$



My answer is as follows, but I'm not sure about it:
$\lim _{n\rightarrow \infty }\dfrac {\sqrt {2n^{2}+n}}{\sqrt {2n^{2}+2n}}=\lim _{n\rightarrow \infty }\left( \dfrac {2n^{2}+n}{2n^{2}+2n}\right) ^{\dfrac {1}{2}}$




$\lim _{n\rightarrow \infty }\dfrac {2n^{2}+n}{2n^{2}+2n}=\lim _{n\rightarrow \infty }\dfrac {2+\dfrac {1}{n}}{2+\dfrac {2}{n}}$



since $\lim _{n\rightarrow \infty }\dfrac {1}{n}=0$, $\lim _{n\rightarrow \infty }\dfrac {2n^{2}+n}{2n^{2}+2n}=1$



hence $\lim _{n\rightarrow \infty }\left( \dfrac {2n^{2}+n}{2n^{2}+2n}\right) ^{\dfrac {1}{2}}=\left( 1\right) ^{\dfrac {1}{2}}=1$ (by composite rule)



hence $\sqrt {2n^{2}+n}=\sqrt {2n^{2}+2n}$ as $n\rightarrow \infty $



so $\lim _{n\rightarrow \infty }\left( \sqrt {2n^{2}+n}-\sqrt {2n^{2}+2n}\right) =0$


Answer




You may write, as $n \to \infty$,
$$
\begin{align}
\sqrt {2n^{2}+n}-\sqrt {2n^{2}+2n}&=\frac{(2n^{2}+n)-(2n^{2}+2n)}{\sqrt {2n^{2}+n}+\sqrt {2n^{2}+2n}}
\\\\&=\frac{-n}{\sqrt {2n^{2}+n}+\sqrt {2n^{2}+2n}}
\\\\&=\frac{-1}{\sqrt {2+1/n}+\sqrt {2+2/n}}
\\\\&\to-\frac{1}{2\sqrt{2}}
\end{align}
$$
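
Plugging in a few large $n$ confirms the value (a quick sketch, assuming Python 3):

```python
from math import sqrt

for n in (10, 1_000, 100_000):
    print(n, sqrt(2 * n**2 + n) - sqrt(2 * n**2 + 2 * n))
print("limit:", -1 / (2 * sqrt(2)))  # ~ -0.353553
```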


sequences and series - Limit $\lim_{n\to\infty}n\frac{\sin\frac{1}{n}-\frac{1}{n}}{1+\frac{1}{n}}$

I need to find a limit of a sequence:
$$\lim_{n\to\infty}n\frac{\sin\frac{1}{n}-\frac{1}{n}}{1+\frac{1}{n}}$$



I tried to divide numerator and denominator by n, but it didn't help, as the limit became $\frac{0}{0}$. I tried other things, but always got an indefinite limit. I know that the limit is 0, but I just don't know how to show it. It's probably something really simple, but I'm totally stuck.

integration - How to evaluate $I=\int_0^{\pi/2}x^2\ln(\sin x)\ln(\cos x)\, \mathrm dx$



Find the value of
$I=\displaystyle\int_0^{\pi/2}x^2\ln(\sin x)\ln(\cos x)\ \mathrm dx$



We have the information that
$J=\displaystyle\int_0^{\pi/2}x\ln(\sin x)\ln(\cos x)\ \mathrm dx=\dfrac{(\pi\ln{2})^2}{8}-\dfrac{\pi^4}{192}$


Answer



Tools Needed

$$
\begin{align}
\frac1{k(j-k)^2}&=\frac1{j^2k}-\frac1{j^2(k-j)}+\frac1{j(k-j)^2}\tag{1}\\
\frac1{k(j+k)^2}&=\frac1{j^2k}-\frac1{j^2(k+j)}-\frac1{j(k+j)^2}\tag{2}\\
\log(\sin(x))&=-\log(2)-\sum_{k=1}^\infty\frac{\cos(2kx)}{k}\tag{3}\\
\log(\cos(x))&=-\log(2)-\sum_{k=1}^\infty(-1)^k\frac{\cos(2kx)}{k}\tag{4}\\
\cos(2jx)\cos(2kx)&=\frac12\Big[\cos(2(j-k)x)+\cos(2(j+k)x)\Big]\tag{5}\\
\end{align}
$$
$$

\int_0^{\pi/2}x^2\cos(2kx)\,\mathrm{d}x=\left\{
\begin{array}{}
(-1)^k\frac\pi{4k^2}&\text{if }k\ne0\\
\frac{\pi^3}{24}&\text{if }k=0
\end{array}\right.\tag{6}\\
$$






Tool Use

$$
\begin{align}
&\int_0^{\pi/2}x^2\log(\sin(x))\log(\cos(x))\,\mathrm{d}x\\[12pt]
&=\int_0^{\pi/2}x^2\left(\log(2)+\sum_{k=1}^\infty\frac{\cos(2kx)}{k}\right)\left(\log(2)+\sum_{k=1}^\infty(-1)^k\frac{\cos(2kx)}{k}\right)\,\mathrm{d}x\\[12pt]
&=\log(2)^2\int_0^{\pi/2}x^2\,\mathrm{d}x
+\log(2)\sum_{k=1}^\infty\frac1k\int_0^{\pi/2}x^2\cos(4kx)\,\mathrm{d}x\\
&+\sum_{j=1}^\infty\sum_{k=1}^\infty\frac{(-1)^k}{2jk}\int_0^{\pi/2}x^2\Big[\cos(2(j-k)x)+\cos(2(j+k)x)\Big]\,\mathrm{d}x\\[12pt]
&=\frac{\pi^3}{24}\log(2)^2+\log(2)\frac\pi{16}\zeta(3)\\
&+\frac\pi8\sum_{j=1}^\infty\sum_{k=1}^\infty\frac{(-1)^j}{jk}\left[\mathrm{iif}\left(j=k,\frac{\pi^2}{6},\frac1{(j-k)^2}\right)+\frac1{(j+k)^2}\right]\\[12pt]
&=\frac{\pi^3}{24}\log(2)^2+\log(2)\frac\pi{16}\zeta(3)\\

&+\frac\pi8\sum_{j=1}^\infty\frac{(-1)^j}{j}\sum_{k=1}^{j-1}\frac1{k(j-k)^2}
+\frac\pi8\sum_{j=1}^\infty\frac{(-1)^j}{j^2}\frac{\pi^2}{6}
+\frac\pi8\sum_{j=1}^\infty\frac{(-1)^j}{j}\sum_{k=j+1}^\infty\frac1{k(j-k)^2}\\
&+\frac\pi8\sum_{j=1}^\infty\frac{(-1)^j}{j}\sum_{k=1}^\infty\frac1{k(j+k)^2}\\[12pt]
&=\frac{\pi^3}{24}\log(2)^2+\log(2)\frac\pi{16}\zeta(3)\\
&+\frac\pi8\sum_{j=1}^\infty\frac{(-1)^j}{j}\left(\frac2{j^2}H_{j-1}+\frac1jH_{j-1}^{(2)}\right)
-\frac{\pi^5}{576}
+\frac\pi8\sum_{j=1}^\infty\frac{(-1)^j}{j}\left(-\frac1{j^2}H_j+\frac1j\frac{\pi^2}{6}\right)\\
&+\frac\pi8\sum_{j=1}^\infty\frac{(-1)^j}{j}\left(\frac1{j^2}H_j-\frac1j\frac{\pi^2}{6}+\frac1jH_j^{(2)}\right)\\[12pt]
&=\frac{\pi^3}{24}\log(2)^2+\log(2)\frac\pi{16}\zeta(3)\\

&+\frac\pi8\sum_{j=1}^\infty\frac{(-1)^j}{j}\left(\frac2{j^2}H_j+\frac2jH_j^{(2)}-\frac3{j^3}\right)
-\frac{\pi^5}{576}\\[12pt]
&=\frac{\pi^3}{24}\log(2)^2+\log(2)\frac\pi{16}\zeta(3)+\frac{11\pi^5}{5760}
+\frac\pi4\sum(-1)^j\left(\frac1{j^3}H_j+\frac1{j^2}H_j^{(2)}\right)\\[12pt]
&=\frac{\pi^3}{24}\log(2)^2+\log(2)\frac\pi{16}\zeta(3)-\frac{\pi^5}{960}
-\frac\pi{16}\sum_{j=1}^\infty\frac1{j^3}H_{2j}\tag{7}
\end{align}
$$
Numerically, $(7)$ matches the integral. I'm working on the last harmonic sum. Both numerical integration and $(7)$ yield $0.0778219793722938643380944$.




Mathematica Help



Thanks to Artes' answer on Mathematica, I have verified that these agree to 100 places.


linear algebra - Row equivalent matrices and row spaces




Let $A$ and $R$ be row equivalent $m \times n$ matrices. Let the row vectors of $A$ be $a_1, a_2, a_3, \ldots, a_m$ and the row vectors of $R$ be $r_1, r_2, r_3,\ldots, r_m$. Matrices $A, R$ are row equivalent, therefore the $r$ row vectors are obtained from the $a$ row vectors by elementary row operations. This means that every $r$ row vector is a linear combination of the $a$ row vectors. Therefore the row space of matrix $A$ lies in the row space of matrix $R$.




Is the bolded part in the quote saying $R$ is a subset of row space of $A$? How does that show that row space of $A$ is in the row space of $R$?


Answer




It doesn't make any sense to say that "$R$ is a subset of row space of $A$", since $R$ is a matrix and the row space of $A$ is a subspace; they are different objects.



The bolded part implies that every $r$ row vector is in the row space for $A$. That in turn implies that every linear combination of row vectors from $R$ is in the row space for $A$, since the row space of $A$ is a subspace and hence closed under linear combinations. Therefore, the row space of $R$ is in the row space for $A$.



Since we can flip the roles of $R$ and $A$ with no changes to the above, we can also say that the row space of $A$ is in the row space of $R$. Hence, the row space of $R$ and the row space of $A$ are the same.


Deriving area of sector through calculus



The area of a sector of a circle is the area of the triangle plus an additional portion, which is $\int_{r\cos\theta}^r \sqrt{r^2 - x^2}\, dx$



In order to integrate this, a trig substitution is used: $x = r\sin\theta$, $dx = r\cos\theta\, d\theta$. But by making that substitution the limits of integration would change from $\frac{\pi}{4}$ to $\frac{\pi}{2}$, since $r = r\sin\theta$ and $\sin^{-1}(1) = \frac{\pi}{2}$, and for the lower limit we would have $\cos\theta = \sin\theta$, which gives $\theta = \frac{\pi}{4}$




But that doesn't make it any easier to solve for the area formula. What is the proper derivation of the area of a sector using calculus?


Answer



The angle $\theta$ is fixed, it is given to you.



When you are integrating $\sqrt{r^2-x^2}$ using a trig substitution, you must not use $\theta$, that's taken.



There are plenty of letters left, Greek if you like; let $x=r\sin \phi$. Or maybe use $x=r\sin t$. Then everything will work nicely.



Remark: This is a very time consuming way to find the area of a sector with angle $\theta$. For the area of the sector, if $\theta$ is given in radians, is $\dfrac{\theta}{2\pi}$ times the area of the circle.




That gives area $\dfrac{\theta}{2}r^2$.


Saturday 24 June 2017

Dense subset of $[0,1]$ with Lebesgue measure $\epsilon$




We wish to find a Lebesgue measurable subset of $[0,1]$ that is in dense in $[0,1]$ with measure exactly $\epsilon$, where $\epsilon \in (0,1)$. My idea is to let $I=(0,\epsilon)$ and let $I'=\mathbb{Q} \cap (\epsilon, 1)$. Then set $A = I \cup I'$. Is this correct? If so, is there another such set?


Answer



Yes, that works fine. For an open dense set, put intervals of width $2^{-k-100} \epsilon$ around the $k$-th rational number (according to your favorite enumeration of $\mathbb{Q} \cap (0, 1)$), adjusted appropriately if the interval doesn't lie in $[0,1]$. Then lengthen the interval around $1/2$ as needed to make up the remaining measure.


definite integrals - Integrating $\displaystyle\int_0^{\pi/2} {\sin^2x \over 1 + \sin x\cos x}\,dx$



We need to evaluate $\displaystyle \int_0^{\pi/2} {\sin^2x \over 1 + \sin x\cos x}dx$
and some solution to this starts as,




$\displaystyle\int_0^{\pi/2} {\sin^2x \over 1 + \sin x\cos x}dx =
\int_0^{\pi/2} {\{\sin(\pi/2 -x)\}^2 \over 1 + \sin (\pi/2 -x)\cos (\pi/2 -x)}dx$.



We fail to understand how this step has been carried out. Even variable substitution
does not seem to work.



Do you think that you could give us a hint?


Answer



Using $\displaystyle\int_a^bf(x)\ dx=\int_a^bf(a+b-x)\ dx,$




$\displaystyle I=\int_0^\frac\pi2\frac{\sin^2x}{1+\sin x\cos x}dx=\int_0^\frac\pi2\frac{\cos^2x}{1+\sin x\cos x}dx$



$\displaystyle2I=\int_0^\frac\pi2\frac1{1+\sin x\cos x}dx=\int_0^\frac\pi2\frac{\sec^2x}{1+\tan^2x+\tan x}dx$



Setting $\tan x=t$



$\displaystyle2I=\int_0^\infty\frac{dt}{1+t+t^2}=4\int_0^\infty\frac{dt}{(2t+1)^2+(\sqrt3)^2}$



Set $2t+1=\sqrt3\tan\theta$
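
For completeness, a sketch of the finish (the arctangent antiderivative can also be read off directly, without the final tangent substitution):

$$2I=4\int_0^\infty\frac{dt}{(2t+1)^2+3}=\frac{2}{\sqrt3}\left[\arctan\frac{2t+1}{\sqrt3}\right]_0^\infty=\frac{2}{\sqrt3}\left(\frac\pi2-\frac\pi6\right)=\frac{2\pi}{3\sqrt3},$$

so $I=\dfrac{\pi}{3\sqrt3}$.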


algebra precalculus - can any identity involving integers be proved by mathematical induction

Hello mathematics community,



Today I was studying mathematical induction which is an axiom.



I was wondering





  1. Can "ANY" identity or inequality involving integers which is already proven can also be proved by mathematical induction?


  2. Are there any theorems which can only be proved using mathematical induction?




  3. As far as I know we have the first principle of mathematical induction and the second principle of mathematical induction. Do we have an $n$th principle of mathematical induction as well? If yes, can I see problems involving it ($n$ being as large as $10$ or even more)?




I don't know what tags to use for this question...



Thank you for your valuable time.



EDIT



I have found the answer to the third question; an example of such a problem is to prove that the number of triangles in a triangulation of a polygon with $n$ sides is $n-2$. Here is the link https://www.youtube.com/watch?v=Z9sYIWHIvNc

real analysis - Discuss the convergence of the sequence: $a_1=1, a_{n+1}=\sqrt{2+a_n} \quad \forall n \in \mathbb{N}$




Computing the first few terms $$a_1=1, a_2=\sqrt{3}=1.732\ldots, a_3=1.9318\ldots, a_4=1.9828\ldots$$ I feel that $(a_n)_{n\in \mathbb{N}}$ is bounded above by $2$, although I have no logical reasoning for this. Since $(a_n)_{n\in \mathbb{N}}$ is a monotone increasing sequence, it must converge by the monotone convergence theorem, and it converges to $2$.



Can anyone help me to make this more formal? Besides, I would really appreciate if anyone could shed some light on how to find the bound and limit of such sequences (that are not in closed form but in recursion).



Answer



Hints:



$$a_1\le a_2\;\;\text{and}\;\;a_{n+1}:=\sqrt{2+a_n}\stackrel{\text{induction}}\le\sqrt{2+a_{n+1}}=:a_{n+2}$$



$$a_1\le 2\;\;\text{and}\;\; a_{n+1}:=\sqrt{2+a_n}\stackrel{\text{induction}}\le\sqrt{2+2}=2$$



The above shows your sequence is a monotone ascending one and bounded above, so its limit exists, say it is $\;w\;$, and now a little arithmetic of limits:



$$w\leftarrow a_{n+1}=\sqrt{2+a_n}\rightarrow\sqrt{2+w}$$




so $\;w=\ldots\ldots?$
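
Numerically iterating the recursion shows the monotone climb toward the fixed point (a sketch, assuming Python 3):

```python
from math import sqrt

a = 1.0
for n in range(1, 8):
    print(n, a)        # 1.0, 1.732..., 1.9318..., 1.9828..., -> 2
    a = sqrt(2 + a)
```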


Friday 23 June 2017

calculus - The intermediate value property of functions



The intermediate value property (IVP): A function $f$ has the intermediate value property on an interval $[a,b]$ if for all $x < y$ in $[a,b]$ and any $c$ between $f(x)$ and $f(y)$, there is some $z$ between $x$ and $y$ with $f(z)=c$.

I think that



$f$ has IVP if and only if $f^{-1}$ maps connected sets to connected sets.



Please, can you tell me if what I think is correct?


Answer




Take $f(x)=x^2$. Then $f$ is continuous, and thus it has the IVP property but $$f^{-1}\big([1,4]\big)=[-2,-1]\cup[1,2].$$


divisibility - Derive a formula to find the number of trailing zeroes in $n!$







I know that I have to find the number of factors of $5$, $25$, $125$, etc. in order to do this. But how can you derive such a formula for any number $n$?
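
For reference, the counting idea stated in the question translates directly into code; this is a sketch of the resulting formula (counting multiples of $5, 25, 125, \dots$), checked against a direct factorial computation:

```python
from math import factorial

def trailing_zeros(n: int) -> int:
    count, p = 0, 5
    while p <= n:
        count += n // p  # multiples of the current power of 5 up to n
        p *= 5
    return count

for n in (10, 25, 100):
    s = str(factorial(n))
    assert len(s) - len(s.rstrip('0')) == trailing_zeros(n)
print(trailing_zeros(100))  # 24
```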

integration - Evaluating the integral $\int_0^{\infty}\frac{x^3}{x^2+a^2}\,\mathrm{d}x$



If $\displaystyle\int_0^{\infty}\dfrac{x^3}{x^2+a^2}\,\mathrm{d}x=\large\displaystyle\dfrac1{ka^6}$, then find the value of $\displaystyle\dfrac{k}{8}$.




I tried a lot but finally stuck at an intermediate form :



$$\begin{align}
&\int_0^{\infty}\dfrac{x^3}{x^2+a^2}\,\mathrm{d}x, \text{with}\, {x^2=t},{2x~\mathrm{d}x=\mathrm{d}t}\\
&=\frac12\int_0^{\infty}\dfrac{(x^2)(2x)}{x^2+a^2}\,\mathrm{d}x=\frac12\int_0^{\infty}\dfrac{t}{t+a^2}\,\mathrm{d}t=\frac12\int_0^{\infty}\dfrac{t+a^2-a^2}{t+a^2}\,\mathrm{d}t\\
&=\frac12\left[\int_0^{\infty}\mathrm{d}t-\int_0^{\infty}\dfrac{a^2}{t+a^2}\,\mathrm{d}t\right]=\frac12\left[t|_0^{\infty}-a^2\ln(a^2+t)|_0^{\infty}\right]
\end{align}$$

exponentiation - Simplifying exponents



I've been refreshing my maths over the last couple of weeks, and it's been a challenge since it has been a long time since I was actively using it (20+ years). Anyways, Khan Academy and old textbooks have been a lot of help, but I am stuck on a few things, so I hope you don't mind asking a few basic-ish questions.



I have gone over through the rules of exponents and it seems I got a good grasp on it, as far as challenges go on Khan Academy, but then I opened up an example from the old textbook (has no solution) and I am not sure about it. I need your help here.



This is the task - it says to simplify it:




$$\frac{5^{2n-1} - 25^{n-1}}{125^{n-1}-5^{3n-2}} $$



(original screenshot)



So, what I started doing was making $25^{n-1}$ in the numerator into ${(5^2)}^{n-1}$, which I then got as $5^{2n-1}$; from that the numerator is $0$, and I didn't go through the rest, since a numerator of $0$ makes the result $0$.



But, I have a strong feeling this isn't right and that I made a mistake, but since I have no one to ask and textbook isn't of much help I have to ask you guys for help, guidance here.



Wolfram alpha reports simplified/alternative form as ${(-5)}^{1-n}$ but without further guidance, step-by-step or anything basically.



Answer



As noted by Musafa you have:
$$
\dfrac{5^{2n-1}-5^{2n-2}}{5^{3n-3}-5^{3n-2}}
$$
Now note that $5^{2n-1}=5\times 5^{2n-2}$ and likewise $5^{3n-2}=5 \times 5^{3n-3}$. Using distributivity you can simplify the fraction, and finally you find Wolfram's result (if you have any problems I can help).
$$
\dfrac{5^{2n-1}-5^{2n-2}}{5^{3n-3}-5^{3n-2}}=\dfrac{5^{2n-2}(5-1)}{5^{3n-3}(1-5)}=
$$
$$

=5^{2n-2-3n+3} (-1)= - (5^{1-n})
$$
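
A quick symbolic spot-check of the simplification (a sketch, assuming SymPy; we test integer values of $n$):

```python
import sympy as sp

n = sp.symbols('n')
expr = (5**(2*n - 1) - 25**(n - 1)) / (125**(n - 1) - 5**(3*n - 2))
for k in range(2, 8):
    assert expr.subs(n, k) == -sp.Rational(5)**(1 - k)
print("matches -5^(1-n) for n = 2..7")
```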


calculus - Is there any sequence of functions containing functions growing slower than any smooth function?



The function



$f(x)= \begin{cases}

e^{-\frac{1}{x}} & x > 0 \\
0 & x\leq 0 \\
\end{cases}$



is a smooth function which grows, at $0$, slower than any function of the form $x^n$.



My question is, does there exist a sequence of functions $(f_n)$, infinitely differentiable at $0$ and with $f_n(0)=0$ and $f_n(x)\neq 0$ when $x\neq 0$ for all $n$, with the following property?




For any function $f$ infinitely differentiable at 0 where $f(0)=0$ and $f(x)\neq 0$ when $x\neq 0$ there exists a natural number $N$ such that for all $n\geq N$, we have $\lim_{x\rightarrow 0}\frac{f_n(x)}{f(x)}=0$.





Or is there no such sequence of functions, i.e. for any sequence of functions does there exist a function which grows more slowly than all the functions in the sequence?


Answer



There is no such $C^\infty$ function. To prove this, let $f_n$ satisfy the conditions you specify. WLOG, we can assume all $f_n>0$ on $\mathbb R\setminus \{0\}.$ If that is not true, go to the functions $f_n^2.$ We will construct an even $f\in C^\infty(\mathbb R),$ positive on $\mathbb R\setminus \{0\}$ and strictly increasing on $[0,\infty),$ such that for every $n,$



$$\tag 1\lim_{x\to 0} \frac{f_n(x)}{f(x)}=\infty.$$



For any $g\in C^l(\mathbb R),$ define




$$\|g\|_l = \sum_{j=0}^{l}\sup_{\mathbb R} |D^jg|.$$



If $g_k\in C^\infty(\mathbb R), k = 1,2,\dots,$ and $\sum_{k=1}^{\infty} \|g_k\|_l <\infty$ for each $l,$ then $\sum_{k=1}^{\infty} g_k \in C^\infty(\mathbb R).$ Furthermore, for each $l,$



$$D^l\left(\sum_{k=1}^{\infty} g_k\right ) = \sum_{k=1}^{\infty} D^lg_k.$$



Choose a sequence $a_1>a_2 > \cdots \to 0.$ For each $k,$ set $E_k=\{x:a_{k+1}\le |x|\le a_k\}$ and define $m_k$ to be the smallest of the numbers



$$\min_{E_k} f_1,\min_{E_k} f_2,\dots,\min_{E_k} f_k.$$




Now choose a positive sequence $c_k\to 0$ such that $c_km_k$ strictly decreases to $0.$



For each $k$ we can choose $g_k\in C^\infty(\mathbb R)$ with $g_k>0$ on $(a_{k+1},a_k)$ and $g_k=0$ elsewhere. By multiplying by small enough positive constants we can also require



$$\|g_k\|_k < 1/2^k \text { and } \int_{a_{k+1}}^{a_k} g_k < c_km_k - c_{k+1}m_{k+1}.$$



We then define



$$g = \sum_{k=1}^{\infty}g_k.$$




This $g\in C^\infty(\mathbb R).$



Finally we get to define $f$ (almost): Define



$$f(x) = \int_0^x g,\,\, x\in \mathbb R.$$



Then $f\in C^\infty(\mathbb R)$ and $f$ is strictly increasing on $[0,\infty).$ Fix $n.$ Claim: If $x\in [a_{k+1},a_k]$ and $k\ge n,$ then



$$\tag 2\frac{f_n(x)}{f(x)} \ge \frac{1}{c_k}.$$




Since $c_k\to 0^+,$ $(2)$ proves $(1),$ at least as $x\to 0^+.$ To prove $(2),$ note $f(x)\le f(a_k).$ Now



$$f(a_k) = \int_0^{a_k} g =\sum_{j=k}^{\infty}\int_{a_{j+1}}^{a_j} g_j < \sum_{j=k}^{\infty}(c_jm_j - c_{j+1}m_{j+1}) = c_km_k.$$



So



$$\frac{f_n(x)}{f(x)}\ge \frac{f_n(x)}{f(a_k)} > \frac{m_k}{c_km_k}=\frac{1}{c_k},$$



proving $(2).$




We're done except for a minor detail. This $f=0$ on $(-\infty,0].$ All we need to do is redefine $f$ for $x\le 0$ by setting $f(x)=f(-x).$ Because all $D^lf(0)=0$ for the original $f,$ the redefined $f\in C^\infty(\mathbb R ).$ Since the $m_k$ took into account the behavior of the $f_n$ to the left of $0,$ we have our counterexample as claimed.


calculus - Conceptual question on substitution in integration





In calculus we learn about the substitution method for integrals, but I haven't been able to prove that it works. I mainly don't see how manipulation of differentials is justified, i.e. how $dy/dx = f(x)$ means that $dy = f(x)\,dx$, so that $f(x)\,dx$ can be substituted for $dy$, since I thought that $dy/dx$ is merely notation and that $dy$ and $dx$ don't actually exist.


Answer



Essentially we want to show:
$$\int_a^bf(y)dy=\int_{g^{-1}(a)}^{g^{-1}(b)}f(g(x))g'(x)dx$$
for a strictly increasing, continuously differentiable function $g$ (if $g$ is decreasing, we take $-g$ and absorb the minus sign into swapping the limits). If $\{y_0,\ldots,y_n\}$ is a partition of $[a,b]$, $y_{j-1}\leq y_j^*\leq y_j$, then a Riemann sum for the left integral is

$$\sum_{j=1}^nf(y^*_j)(y_j-y_{j-1}).$$
Put $y_j:=g(x_j)$. So $\{x_0,\ldots,x_n\}$ is a partition of $[g^{-1}(a),g^{-1}(b)]$. By the mean value theorem, there exists $x_j^*\in[x_{j-1},x_j]$ such that
$$g'(x_j^*)=\frac{g(x_j)-g(x_{j-1})}{x_j-x_{j-1}}\iff y_j-y_{j-1}=g'(x_j^*)(x_j-x_{j-1}).$$
Now remember that all Riemann sums converge to the integral (by definition of a function being Riemann integrable), so we may choose $y_j^*$ so that $x_j^*=g^{-1}(y_j^*)$. Hence we have
$$\sum_{j=1}^nf(y^*_j)(y_j-y_{j-1})=\sum_{j=1}^nf(g(x_j^*))g'(x_j^*)(x_j-x_{j-1}).$$
Now we simply recognise that the right hand side is a Riemann sum for the right integral and that $\max|y_j-y_{j-1}|\to0$ as $\max|x_j-x_{j-1}|\to0$.



Sorry for such a lengthy answer - the basic takeaway is that you can prove it, and the notation we use is just a shorthand. While $dy$ and $dx$ are not actual objects as you rightly point out, they act similarly enough in a lot of ways that we often treat them as such.
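To make this concrete, here is a quick numeric sanity check (a minimal Python sketch, with $f(y)=\cos y$ and $g(x)=x^2$ on $[0,1]$ as an illustrative choice of mine); both sides should equal $\sin 1 \approx 0.84147$:

from math import cos, sin

def riemann(h, a, b, n=100000):
    # midpoint Riemann sum of h over [a, b]
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

lhs = riemann(cos, 0, 1)                          # integral of f(y) dy
rhs = riemann(lambda x: cos(x**2) * 2 * x, 0, 1)  # integral of f(g(x)) g'(x) dx
print(lhs, rhs, sin(1))                           # all three agree to many digits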


abstract algebra - Showing that $x^n -2$ is irreducible in $\mathbb{Q}[X]$



I'm trying to show that the polynomial $X^n -2$ ($n \in \mathbb{N}$) is irreducible in $\mathbb{Q}[X]$ but am a bit stuck. Methods I know to show irreducibility:





  • Gauss' lemma - which says that if I can show it is irreducible in $\mathbb{Z}[X]$ then it will be irreducible in $\mathbb{Q}[X]$.

  • This would allow me to check reduction modulo a prime.



However this doesn't work in this case, because if I reduce mod 2, then the polynomial is just $X^n$ which is reducible.



I'm also aware of Eisenstein's criterion: if a prime divides all the coefficients except the leading one, and its square doesn't divide the constant coefficient, then the polynomial is irreducible.



None of these methods are working for this polynomial so help would be much appreciated! Also are there any general methods to look out for when trying to show polynomials are irreducible?




Thanks


Answer



Let $$P(x)=x^{n}-2=(x^k+\cdots+a)(x^{n-k}+\cdots+b);~~1\le k \le n-1.$$
On the other hand $$P(x)=(x-\sqrt[n]{2}\varepsilon_0)(x-\sqrt[n]{2}\varepsilon_1)\cdots(x-\sqrt[n]{2}\varepsilon_{n-1}),$$
where $\varepsilon_j=e^{\frac{2\pi i j}{n}},\ j=0,1,\ldots,n-1.$ Therefore $$a=(-1)^{k}\sqrt[n]{2^{k}}\varepsilon_{i_0}\varepsilon_{i_1}\cdots\varepsilon_{i_{k-1}}.$$ Since each $\varepsilon_{i_j}$ has modulus $1$, this gives $|a|=2^{k/n}$, and $1<2^{k/n}<2$ for $1\le k\le n-1$, so $a$ can't be an integer.


integration - Why does $\dfrac{1}{1+x^2}=1-x^2+x^4-x^6+x^8\dots$?



I apologize if this is a duplicate.

I was taught how to prove that $\dfrac{\pi}{4}=1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7}\dots$, and one of the steps was to write the equality:
$$\int \dfrac{1}{1+x^2} \ dx = \int \sum\limits_{n=0}^\infty x^{2n}\cdot(-1)^n \ dx = \int 1-x^2+x^4-x^6+x^8\dots \ dx$$
Why does $\dfrac{1}{1+x^2}=1-x^2+x^4-x^6+x^8\dots$? I have no idea on how to proceed with this. Could someone please point me in the right direction? Thanks in advance.


Answer




First, use this basic fact from geometric series:
$$
\frac1{1-x}=\sum_{n=0}^\infty x^n.
$$
Substitute $-x^2$ for $x$ (valid for $|x|<1$, where the series converges) to obtain
$$
\frac{1}{1+x^2}=\sum_{n=0}^\infty (-1)^nx^{2n}.
$$



EDIT: to elucidate the first bit, suppose we have the infinite series

$$
a+r\cdot a+r^2\cdot a+\ldots
$$
where $|r|<1$. Let $L$ be this sum, supposing it exists; in fact $|r|<1$ is necessary and sufficient for convergence (when $a\neq 0$). Then $L-a=r\cdot L$ by construction, so we have
$$
a=(1-r)L\implies L=\frac a{1-r}.
$$
In our case, we have $a=1$, $r=x$.
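As a quick numeric sanity check (a small Python sketch), the partial sums do approach $\frac{1}{1+x^2}$ for $|x|<1$:

x = 0.7
partial = sum((-1)**n * x**(2*n) for n in range(200))
print(partial, 1 / (1 + x**2))  # both print 0.67114093...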


Thursday 22 June 2017

calculus - Does L'Hôpital's work the other way?



Hello fellows,



As referred in Wikipedia (see the specified criteria there), L'Hôpital's rule says,



$$
\lim_{x\to c}\frac{f(x)}{g(x)}=\lim_{x\to c}\frac{f'(x)}{g'(x)}
$$




As



$$
\lim_{x\to c}\frac{f'(x)}{g'(x)}=
\lim_{x\to c}\frac{\int f'(x)\ dx}{\int g'(x)\ dx}
$$



Just out of curiosity, can you integrate instead of taking a derivative?
Does




$$
\lim_{x\to c}\frac{f(x)}{g(x)}=
\lim_{x\to c}\frac{\int f(x)\ dx}{\int g(x)\ dx}
$$



work? (given the specifications in Wikipedia only the other way around: the function must be integrable by some method, etc.) When? Would it have any practical use? I hope this doesn't sound stupid, it just occurred to me, and I can't find the answer myself.





Edit




(In response to the comments and answers.)



Take 2 functions $f$ and $g$. When is



$$
\lim_{x\to c}\frac{f(x)}{g(x)}=
\lim_{x\to c}\frac{\int_x^c f(a)\ da}{\int_x^c g(a)\ da}
$$






true?



Not saying that it always works; however, it sometimes may help. Sometimes one can apply l'Hôpital's rule even when an indeterminate form isn't reached. Maybe this only works in exceptional cases.



Most functions are simplified by taking their derivative, but it may happen with integration as well (say $\int \frac1{x^2}\ dx=-\frac1x+C$, which is simpler). In a few of those cases, integrating the functions in both numerator and denominator may simplify.



What do those (hypothetical) functions have to make it work? And even in those cases, is it ever useful? How? Why/why not?


Answer




I recently came across a situation where it was useful to go through exactly this process, so (although I'm certainly late to the party) here's an application of L'Hôpital's rule in reverse:



We have a list of distinct real numbers $\{x_0,\dots, x_n\}$.
We define the $(n+1)$th nodal polynomial as
$$
\omega_{n+1}(x) = (x-x_0)(x-x_1)\cdots(x-x_n)
$$
Similarly, the $n$th nodal polynomial is
$$
\omega_n(x) = (x-x_0)\cdots (x-x_{n-1})

$$
Now, suppose we wanted to calculate $\omega_{n+1}'(x_i)/\omega_{n}'(x_i)$ when $0 \leq i \leq n-1$. We could calculate $\omega_{n}'(x_i)$ and $\omega_{n+1}'(x_i)$ explicitly and go through some tedious algebra, or we could note that because these derivatives are non-zero, we have
$$
\frac{\omega_{n+1}'(x_i)}{\omega_{n}'(x_i)} =
\lim_{x\to x_i} \frac{\omega_{n+1}'(x)}{\omega_{n}'(x)} =
\lim_{x\to x_i} \frac{\omega_{n+1}(x)}{\omega_{n}(x)} =
\lim_{x\to x_i} (x-x_{n}) = x_i-x_{n}
$$
It is important that both $\omega_{n+1}$ and $\omega_n$ are zero at $x_i$, so that in applying L'Hôpital's rule, we intentionally produce an indeterminate form. It should be clear though that doing so allowed us to cancel factors and thus (perhaps surprisingly) saved us some work in the end.
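As a sanity check, here is a small Python/SymPy sketch with made-up example nodes (the nodes and variable names are mine, purely for illustration); the derivative ratio does come out to $x_i - x_n$:

import sympy as sp

x = sp.symbols('x')
nodes = [0, 1, 3, 6]                          # hypothetical nodes x_0,...,x_3, so n = 3
w_np1 = sp.Mul(*[x - t for t in nodes])       # omega_{n+1}
w_n = sp.Mul(*[x - t for t in nodes[:-1]])    # omega_n
xi = nodes[1]                                 # check at x_1
ratio = (sp.diff(w_np1, x) / sp.diff(w_n, x)).subs(x, xi)
print(sp.simplify(ratio), xi - nodes[-1])     # both print -5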




So would this method have practical use? It certainly did for me!






PS: If anyone is wondering, this was a handy step in proving a recursive formula involving Newton's divided differences.


elementary number theory - Prove that $\gcd(2^{a}+1,2^{\gcd(a,b)}-1)=1$


Let $a$ and $b$ be two odd positive integers. Prove that $\gcd(2^{a}+1,2^{\gcd(a,b)}-1)=1$.




I tried rewriting it to get $\gcd(2^{2k+1}+1,2^{\gcd(2k+1,2n+1)}-1)$, but I didn't see how this helps.

calculus - Test $x_n = (n+i\pi)^n n^{-n + 1/n}$ for convergence and give its limit if possible.

Test $x_n = (n+i\pi)^n n^{-n + 1/n}$ for convergence and give its limit if possible.







I'm not really sure what to do here. My first instinct was to rewrite the sequence as $x_n= (n+i\pi)^n n^{-n} n^{1/n}$ and evaluate the limits, but I'm left with $\lim_{n\rightarrow\infty} n^{1/n}=1$ and $\lim_{n\rightarrow\infty} n^{-n}=0$, which leaves me with nothing really. Can somebody help out?

Wednesday 21 June 2017

elementary number theory - Linear Diophantine Equations: Integer Solutions $x,y$ exist for $ax+by=c$, but how do I find them by hand?



I'm trying to find which of $133x+203y=38$, $133x+203y=40$, $133x+203y=42$, and $133x+203y=44$ have integer solutions. I know that only the third equation does, because $\gcd(133,203)=7$ and, of the four right-hand sides, only $42$ is divisible by $7$; but how do I find the solution with smallest possible $x\geq 0$ for $133x+203y=42$?



How do I find solutions $a,b$ to the equation $19a+29b=1$ by hand?


Answer



Correct. Note $\rm\ 19x\!+\!29y=6\, \Rightarrow\, mod\ 19\!:\ y \equiv \dfrac{6}{29}\equiv \dfrac{6}{10}\equiv \dfrac{12}{20}\equiv \dfrac{12}{1}\equiv -7\,\ $ so $\rm\,\ \color{#c00}{y = -7\!+\!19n},\:$ therefore $\rm\ x\, =\, \dfrac{6\!-\!29\,\color{#c00}y}{19} = \dfrac{6\!-\!29(\color{#c00}{-7+19n}))}{19}\, =\, 11\!-\!29n.$




Beware $\ $ One can employ fractions $\rm\ x\equiv b/a\ $ in modular arithmetic (as above) only when the fractions have denominator $ $ coprime $ $ to the modulus $ $ (else the fraction may not (uniquely) exist, $ $ i.e. the equation $\rm\: ax\equiv b\,\ (mod\ m)\:$ might have no solutions, or more than one solution). The reason why such fraction arithmetic works here (and in analogous contexts) will become clearer when one learns about the universal properties of fraction rings (localizations).



The above method is a special case of Gauss's algorithm for computing inverses. This always works for prime moduli, but may fail for composite moduli (in which case one may employ the extended Euclidean algorithm to compute inverses). Gauss's algorithm is often more efficient for manual calculations with small numbers.
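For completeness, here is a minimal Python sketch of the extended Euclidean algorithm mentioned above:

def egcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

print(egcd(19, 29))    # (1, -3, 2): 19*(-3) + 29*2 = 1
print(egcd(133, 203))  # (7, -3, 2): 133*(-3) + 203*2 = 7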


real analysis - Examples of bijective map from $\mathbb{R}^3\rightarrow \mathbb{R}$




Could any one give an example of a bijective map from $\mathbb{R}^3\rightarrow \mathbb{R}$?




Thank you.


Answer



First, note that it is enough to find a bijection $f:\Bbb R^2\to \Bbb R$, since then $g(x,y,z) = f(f(x,y),z)$ is automatically a bijection from $\Bbb R^3$ to $\Bbb R$.



Next, note that since there is a bijection from $[0,1]\to\Bbb R$ (see appendix), it is enough to find a bijection from the unit square $[0,1]^2$ to the unit interval $[0,1]$. By constructions in the appendix, it does not really matter whether we consider $[0,1]$, $(0,1]$, or $(0,1)$, since there are easy bijections between all of these.



Mapping the unit square to the unit interval



There are a number of ways to proceed in finding a bijection from the unit square to the unit interval. One approach is to fix up the "interleaving" technique I mentioned in the comments, writing $\langle 0.a_1a_2a_3\ldots, 0.b_1b_2b_3\ldots\rangle$ to $0.a_1b_1a_2b_2a_3b_3\ldots$. This doesn't quite work, as I noted in the comments, because there is a question of whether to represent $\frac12$ as $0.5000\ldots$ or as $0.4999\ldots$. We can't use both, since then $\left\langle\frac12,0\right\rangle$ goes to both $\frac12 = 0.5000\ldots$ and to $\frac9{22} = 0.40909\ldots$ and we don't even have a function, much less a bijection. But if we arbitrarily choose the second representation, then there is no element of $[0,1]^2$ that is mapped to $\frac12$, and if we choose the first there is no element that is mapped to $\frac9{22}$, so either way we fail to have a bijection.




This problem can be fixed.



(In answering this question, I tried many web searches to try to remember the fix, and I was amazed at how many sources I found that ignored the problem, either entirely, or by handwaving. I never did find it; I had to remember it. Sadly, I cannot remember where I saw it first.)



First, we will deal with $(0,1]$ rather than with $[0,1]$; bijections between these two sets are well-known, or see the appendix. For real numbers with two decimal expansions, such as $\frac12$, we will agree to choose the one that ends with nines rather than with zeroes. So for example we represent $\frac12$ as $0.4999\ldots$.



Now instead of interleaving single digits, we will break each input number into chunks, where each chunk consists of some number of zeroes (possibly none) followed by a single non-zero digit. For example, $\frac1{200} = 0.00499\ldots$ is broken up as $004\ 9\ 9\ 9\ldots$, and $0.01003430901111\ldots$ is broken up as $01\ 003\ 4\ 3\ 09\ 01\ 1\ 1\ldots$.



This is well-defined since we are ignoring representations that contain infinite sequences of zeroes.




Now instead of interleaving digits, we interleave chunks. To interleave $0.004999\ldots$ and $0.01003430901111\ldots$, we get $0.004\ 01\ 9\ 003\ 9\ 4\ 9\ldots$. This is obviously reversible. It can never produce a result that ends with an infinite sequence of zeroes, and similarly the reverse mapping can never produce a number with an infinite sequence of trailing zeroes, so we win. A problem example similar to the one from a few paragraphs ago is resolved as follows: $\frac12 = 0.4999\ldots$ is the unique image of $\langle 0.4999\ldots, 0.999\ldots\rangle$ and $\frac9{22} = 0.40909\ldots$ is the unique image of $\langle 0.40909\ldots, 0.0909\ldots\rangle$.
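To see the chunking in action, here is a minimal Python sketch operating on finite prefixes of the digit strings (the function names are mine; it assumes the convention above that no expansion ends in all zeroes):

import re

def chunks(digits):
    # split a digit string into chunks: zero or more 0s followed by a nonzero digit
    return re.findall(r'0*[1-9]', digits)

def interleave(a, b):
    # interleave the chunks of two digit strings (the parts after "0.")
    out = []
    for ca, cb in zip(chunks(a), chunks(b)):
        out += [ca, cb]
    return ''.join(out)

print(chunks('00499'))                    # ['004', '9', '9']
print(interleave('004999', '010034309'))  # '0040190039493'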



This is enough to answer the question posted, but I will give some alternative approaches.



Continued fractions



According to the paper "Was Cantor Surprised?" by Fernando Q. Gouvêa, Cantor originally tried interleaving the digits himself, but Dedekind pointed out the problem of nonunique decimal representations. Cantor then switched to an argument like the one Robert Israel gave in his answer, based on continued fraction representations of irrational numbers. He first constructed a bijection from $(0,1)$ to its irrational subset (see this question for the mapping Cantor used and other mappings that work), and then from pairs of irrational numbers to a single irrational number by interleaving the terms of the infinite continued fractions. Since Cantor dealt with numbers in $(0,1)$, he could guarantee that every irrational number had an infinite continued fraction representation of the form $$x = x_0 + \dfrac{1}{x_1 + \dfrac{1}{x_2 + \ldots}}$$



where $x_0$ was zero, avoiding the special-case handling for $x_0$ in Robert Israel's solution.




Cantor-Schröder-Bernstein mappings



The Cantor-Schröder-Bernstein theorem takes an injection $f:A\to B$ and an injection $g:B\to A$, and constructs a bijection between $A$ and $B$.



So if we can find an injection $f:[0,1)^2\to[0,1)$ and an injection $g:[0,1)\to[0,1)^2$, we can invoke the CSB theorem and we will be done.



$g$ is quite trivial; $x\mapsto \langle x, 0\rangle$ is one of many obvious injections.



For $f$ we can use the interleaving-digits trick again, and we don't have to be so careful because we need only an injection, not a bijection. We can choose the representation of the input numbers arbitrarily; say we will take the $0.5000\ldots$ representation rather than the $0.4999\ldots$ representation. Then we interleave the digits of the two input numbers. There is no way for the result to end with an infinite sequence of nines, so we are guaranteed an injection.




Then we apply CSB to $f$ and $g$ and we are done.



Appendix




  1. There is a bijection from $(-\infty, \infty)$ to $(0, \infty)$. The map $x\mapsto e^x$ is an example.


  2. There is a bijection from $(0, \infty)$ to $(0, 1)$. The map $x\mapsto \frac2\pi\tan^{-1} x$ is an example, as is $x\mapsto{x\over x+1}$.


  3. There is a bijection from $[0,1]$ to $(0,1]$. Have $0\mapsto \frac12, \frac12\mapsto\frac23,\frac23\mapsto\frac34,$ and so on. That takes care of $\left\{0, \frac12, \frac23, \frac34,\ldots\right\}$. For any other $x$, just map $x\mapsto x$. (A code sketch of this map appears after the list.)


  4. Similarly, there is a bijection from $(0,1]$ to $(0,1)$.
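Referring to item 3, here is a minimal Python sketch of that bijection from $[0,1]$ to $(0,1]$, restricted to rational inputs and using exact fractions to avoid floating-point equality issues:

from fractions import Fraction

def shift(x):
    # 0 -> 1/2 and (n-1)/n -> n/(n+1); every other point is fixed
    if x == 0:
        return Fraction(1, 2)
    if isinstance(x, Fraction) and x.numerator + 1 == x.denominator:
        return Fraction(x.denominator, x.denominator + 1)
    return x

print(shift(Fraction(0)), shift(Fraction(1, 2)), shift(Fraction(2, 3)), shift(Fraction(3, 5)))
# prints: 1/2 2/3 3/4 3/5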




general topology - Closure of node




What is the relation between the topological notion of closure and the closure of a node in a graph? Here, by the closure of a node I mean its closed neighborhood. I came across this definition on p. 662 of Murphy's Machine Learning: A Probabilistic Perspective (an excerpt containing the relevant page is available here).



I am aware of two ways to define a topology on a graph:



(1) Define a metric by $d(u,v)$ = (length of shortest path between $u$ and $v$), and then take the metric topology, which in this case is just the power set of the vertices (since for every node $u$ we have, for example, $B_{1/2}(u)=\{u\}$). (Note: this metric only makes sense for connected graphs, but for unconnected nodes $u$ and $v$ you could just define $d(u,v)=\infty$ and generate the topology the usual way.)



(2) Take as a subbasis the set of open neighborhoods of the graph (see this page).



In neither of these topologies is the closed neighborhood of a node always the closure of the node. In the first case, the closure of a node is always itself, so the closure is the closed neighborhood if and only if the node has no neighbors. In the second case, the closed neighborhood is also not always the closure. For example, say the graph is $(\{1,2,3\},\{23\})$. Then our subbasis is $\{\emptyset,2,3\}$, so our topology is $\{\emptyset,123,2,3,23\}$, so our closed sets are $\{\emptyset,123,1,12,13\}$; thus, the closure of $1$ is just itself.




Murphy, Kevin P., Machine learning: A probabilistic perspective, Cambridge, MA: MIT Press (ISBN 978-0-262-01802-9/hbk; 978-0-262-30616-4/ebook). xxix, 1067 p. (2012). ZBL1295.68003.


Answer



I think you've given a good argument that there's not a very close relationship. Another argument: topological closure is always a closure operator; in particular, the closure of a set equals the closure of the closure. This is not the case for closed neighborhoods in graphs, except in very simple cases. For instance, in the path graph $1-2-3$, the closed neighborhood of $\{1\}$ is $\{1,2\}$, but the closed neighborhood of $\{1,2\}$ is the whole vertex set $\{1,2,3\}$, so the operation is not idempotent.


What is the proof for $\sqrt{-a}\times\sqrt{-b}\neq\sqrt{ab},\text{ where }a,b\in \mathbb{R}$



Having just learned about $i$ in my 10$^{th}$ grade classroom I'm interested in the proofs underlying the rules for algebraic manipulations with imaginary numbers; such an understanding will create a greater appreciation for these foundations mathematics.



Without given a reason, we were told that $\sqrt{-a}*\sqrt{-b}\neq\sqrt{ab},\text{ where }a,b\in \mathbb{R}$.



I've proved on my own (I don't know if my attempt is correct, and I recognize how relatively rudimentary it is, though I'm just starting out) that the rule holds for nonnegative reals. (The proof attempt was attached as an image in the original post.)



I need to prove $\sqrt{-a}*\sqrt{-b}\neq\sqrt{ab},\text{ where }a,b\in \mathbb{R}$. So I begin by assuming the rule holds true for all $b_1,b_2\in \mathbb{R}$, and not just $\mathbb{R}^{+}$, and try to prove by way of contradiction that this won't work. But from what I see, it does work. So where am I going wrong?




Maybe it's that once imaginary numbers exist this system falls apart because $\sqrt{-1}\times\sqrt{-1}=-1$ so you perform some sort of pattern matching?



Obviously $-1$ is some special case.



I'm just not clear on how to resolve this. Some clarity would be much appreciated.


Answer



In line three you use the fact that for positive reals $a,b$, from $a^n=b^n$ it follows that $a=b$. This is no longer true over the complex numbers (it's not even true over the real numbers). For example, $i^4=1^4$, but certainly we don't have $i=1$.



Also, showing that the above proof doesn't work for complex numbers does not prove that the theorem is wrong. To show that the theorem is wrong you just have to give a counterexample. As you already noted, $a=b=-1$ would do the job.



algebra precalculus - TI-nspire calculator: how to create equations that use variables that are also equations



I am trying to get my calculator to work out how to work out equations that rely on several other equations which I have defined as a variable. For example:



\begin{align}w&=3\\
x&=2y\\
y&=w+2\\
z&=2y+4x\end{align}




In the calculator I have created the variable $3→w$ which sets $w$ as 3. If I type $w$ and hit enter I get $3$. Now to enter the equation for $x$ I have to type:



$$2y→x(w)$$



Now if I want to find $x$ I have to type $x(3)$. I want to be able to just type $x$ and the calculator automatically realises that $x$ is $6$. In this instance it isn't so bad, I just have to type $x(3)$, but it makes the whole process of entering the variable $w$ redundant as I have to enter it in subsequent equations anyway.



When we get to the equation $z$ it gets very messy. I have to enter each variable as a function of such and such. So my question is, how do I get my calculator to save the answer to each variable so it will automatically calculate subsequent equations that use those variables, instead of typing each value for them manually?


Answer



This is only possible in a document, not in the calculator application itself.




Create a new document and add the calculator app (press 1).



Now open the menu (menu button) and select "Functions and Programs" (press 9), open the program editor (press 1) and create a new file (press 1).



Name your first function, in your case that would be y. Change the type to "Function" and click OK.



You are now presented with this text





Define y()=



Func



EndFunc




You can now write the function definition between "Func" and "EndFunc" and have it output with the "Return" statement. In your case you would write





Return w+2




Save and check your function with Ctrl+B. You can now use it in your calculator app, which should be empty for now. If you now write y() it will return w+2.



But if you save a value to w, in your case $3→w$, and write y() again, it will return 5.



For x() you can do the same and since you want to use y(), you can define the function like this to access y():





Return 2*y()




Writing x() should now return 10.
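For z() the same pattern applies; a definition along the lines of

Return 2*y()+4*x()

should then make z() return 50 (with $3→w$ stored as above).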



As a side note: You have to write the brackets after x and y, because the brackets indicate that a function has to be called, while x and y without brackets would indicate that they are stored values. Also you can't use the normal calculator app, since all the functions in there are pure functions and can't access stored variables. You have to stay within the document.


Limit of $5\cdot (\tan x)^{\sin x}$ at $0^+$, and indeterminate forms

I am looking for the right hand limit
$$
\lim_{x\to 0^+} 5\cdot (\tan x)^{\sin x}
$$




I realize that I need to apply l'Hopital's rule but I'm having trouble getting the indeterminate form.

convergence divergence - If a sequence $(b_n)$ converges to $b$, $b \ne 0$, prove that $\left(\frac{1}{b_n}\right)$ converges to $\frac{1}{b}$



$(b_n) \rightarrow b$ if $\forall \epsilon > 0$ $\exists N \in \mathbb{N}$ such that if $n > N$, then $|b_n - b| < \epsilon$.




$|b_n - b| < \epsilon$



$|b_n b| \left|\dfrac{1}{b_n} - \dfrac{1}{b}\right| < \epsilon$



$\left|\dfrac{1}{b_n} - \dfrac{1}{b}\right| < \dfrac{\epsilon}{|b_n| |b|}$



If $(b_n)$ is convergent, then it is also bounded : $\exists M > 0 : M \ge |b_n| \forall n \in \mathbb{N}$, this leads to



$\left|\dfrac{1}{b_n} - \dfrac{1}{b}\right| < \dfrac{\epsilon}{M |b|}$




Convergence proofs that I have read so far try to manipulate the expression above into



$\left|\dfrac{1}{b_n} - \dfrac{1}{b}\right| < \epsilon$



Is this necessary? $\dfrac{\epsilon}{M |b|}$ is just another "$\epsilon$" for the $\left|\dfrac{1}{b_n} - \dfrac{1}{b}\right|$ sequence.



I could say: choose such $N$ so that $|b_n - b| < \epsilon M |b|$, which results in



$\left|\dfrac{1}{b_n} - \dfrac{1}{b}\right| <\dfrac{\epsilon M |b|}{M |b|}$




$\left|\dfrac{1}{b_n} - \dfrac{1}{b}\right| < \epsilon$



However, nothing in the convergence theorem states that $\epsilon$ of two such sequences need to be the same. My question: Is it valid to state



$\dfrac{\epsilon}{M |b|} = \epsilon'$ and use the convergence definition?



If $\forall \epsilon' > 0$ $\exists N \in \mathbb{N}$ such that if $n > N$, then $\left|\dfrac{1}{b_n} - \dfrac{1}{b}\right| < \epsilon'$, then $\left(\dfrac{1}{b_n} - \dfrac{1}{b}\right)$ converges.



Since




$\left|\dfrac{1}{b_n} - \dfrac{1}{b}\right| < \dfrac{\epsilon}{M |b|} = \epsilon'$ is equivalent to



$|b_n - b| < \epsilon$



this proves the statement.


Answer



I stopped reading when I saw "this leads to ..."; the inequality there is wrong.



Indeed, to prove the statement, let $b_{n},b \neq 0$ for all $n \geq 1$. If $n \geq 1$, then
$$

\bigg| \frac{1}{b_{n}} - \frac{1}{b} \bigg| = \frac{|b_{n}-b|}{|b_{n}||b|};
$$
there is some $N_{1} \geq 1$ such that $||b_{n}| - |b|| \leq |b_{n}-b| < |b|/2$ for all $n \geq N_{1}$ by convergence of $(b_{n})$, implying that $|b|/2 < |b_{n}|$ for all $n \geq N_{1}$, implying that
$$
\frac{|b_{n} -b|}{|b_{n}||b|} \leq \frac{2|b_{n}-b|}{b^{2}}
$$
for all $n \geq N_{1}$; moreover, given any $\varepsilon > 0$, by convergence of $(b_{n})$ again there is further some $N_{2} \geq 1$ such that $n \geq N_{2}$ implies $|b_{n}-b| < b^{2}\varepsilon/2$, showing that
$2|b_{n}-b|/b^{2} < \varepsilon$ for all $n \geq N_{2}$. Putting all the previous things together, we conclude that, for every $\varepsilon > 0$, having $n \geq \max \{N_{1}, N_{2} \}$ implies
$$
\bigg| \frac{1}{b_{n}} - \frac{1}{b} \bigg| < \varepsilon.

$$


complex numbers - How to compute $\sqrt{i + 1}$











I'm currently playing with complex numbers and I realized that I don't understand how to compute $\sqrt{i + 1}$. My math program, sage, returns



$$1.09868411346781 + 0.455089860562227*i$$



How does sage compute that number? I don't even see how one could rewrite $\sqrt{i + 1}$ as a number of the form $a+bi$.


Answer




$i + 1 = \sqrt2 \left ( {1 \over \sqrt 2} + {1 \over \sqrt 2} i\right ) \\
= \sqrt 2 \left( \cos \left( \pi \over 4 \right) + i \sin \left( \pi \over 4 \right) \right ) \\
= \sqrt 2 e^{i \pi \over 4}$



$\sqrt{i +1} =\left( 2\right)^{1 \over 4}e^{i \pi \over 8} = \left( 2\right)^{1 \over 4}\left( \cos \left( \pi \over 8 \right) + i \sin \left( \pi \over 8 \right)\right)$



Well, this is how Wolframalpha calculates.
The other root would be $\left( 2\right)^{1 \over 4}\left( \cos \left( 9\pi \over 8 \right) + i \sin \left(9 \pi \over 8 \right)\right)$
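As a numerical cross-check (a small Python sketch), the closed form matches the value sage reported, which is also the principal square root:

import cmath
from math import cos, sin, pi

closed_form = 2**0.25 * complex(cos(pi/8), sin(pi/8))
print(closed_form)         # approximately 1.09868411 + 0.45508986j
print(cmath.sqrt(1 + 1j))  # the principal root; same value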


complex analysis - Is $\int_{\gamma} \sec^2 z \, \mathrm{d}z=0$?




Let $\gamma = \gamma(0;2)$. Is
$$\int_{\gamma} \sec ^2z \ \mathrm{d}z$$
equal to $0$?




I'm trying to answer this question using only tools like Cauchy's Theorem or the Deformation Theorem, since contour integration is treated later in the book I took this exercise from.




So I know that



$$\sec^2z = \frac{1}{\cos^2z}$$
so the points where holomorphy might fail are the zeros of $\cos z$, and



$$\cos z =0 \iff z= \frac{1}{2}(2k+1)\pi$$
for $k \in \mathbb{Z}$. Now, in my path, I have two such points, $-\pi/2$ and $\pi/2$, and I don't see how creating a new path around either of them can help me out here. Any help will be highly appreciated. Thanks in advance!



EDIT: I just realized that the Fundamental Theorem of Calculus applies here to the function $\tan z$, so the integral is indeed zero.



Answer



Are you allowed to use path/line integrals?



If yes, then just write the parametrization for $\gamma$: $\gamma(t) = 2 \exp(it).$



Now,
$$
\int_\gamma \sec^2(z) \, \rm{d} z = \int_0^{2\pi} \sec^2(2 \exp(it)) \cdot 2i \exp(it) \, \rm{d} t = \left. \tan(2 \exp(it)) \right|_{t=0}^{2\pi} = 0.
$$




I guess that's what you mean by the Fundamental Theorem of Calculus (you don't need any complex analysis).
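As a numerical cross-check (a small Python sketch of the parametrized integral above), a Riemann sum over the circle comes out essentially zero:

from cmath import cos, exp
from math import pi

def integrand(t):
    z = 2 * exp(1j * t)            # gamma(t) = 2 e^{it}
    dz = 2j * exp(1j * t)          # gamma'(t)
    return (1 / cos(z) ** 2) * dz  # sec^2(z) dz

N = 4096
total = sum(integrand(2 * pi * k / N) for k in range(N)) * (2 * pi / N)
print(abs(total))                  # ~0, up to discretization error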


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...