Saturday 30 April 2016

combinatorics - Proof that $\sum_{k=0}^n \binom{k}{p} = \binom{n+1}{p+1}$




I am currently doing a little self study on difference sequences and they use the following identity $$\sum_{k=0}^n \binom{k}{p}= \binom{n+1}{p+1}$$
I have only ever seen this identity stated without proof, as if it were obvious. Am I missing something? Is this an obvious result, perhaps a consequence of some other theorem? If not, could someone provide some intuition as to why this identity is true, or even a proof? Any help is appreciated; thanks in advance.


Answer




It is more or less famously known as the hockey-stick identity. If you recall Pascal's triangle, notice that each number is the sum of the two above it. Now take the orange number $84$ below. It is the sum of the two above it: $84=56+28$. Similarly, $28$ is the sum of the two numbers above it, then the same applies again, and so on, so that we end up with



$$\sum_{k=0}^8\binom k5=\binom96=84$$



which more or less uses the definition of the binomial coefficient through Pascal's triangle.



[Image: Pascal's triangle with the hockey-stick pattern of entries highlighted]







If this is not so obvious, an induction proof isn't too hard:



$$\sum_{k=0}^{n+1}\binom kp=\binom{n+1}p+\sum_{k=0}^n\binom kp=\binom{n+1}p+\binom{n+1}{p+1}=\binom{n+2}{p+1}$$



The last step uses Pascal's rule: the sum of two adjacent binomial coefficients equals the coefficient directly below them in the triangle.



It is then clear that this holds for $n=1$ regardless of $p$.
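For what it's worth, the identity is also easy to sanity-check numerically. Below is a hedged sketch in Python (not part of the original answer); note that math.comb returns $0$ whenever the lower index exceeds the upper one, matching the convention used above.

from math import comb

# Check sum_{k=0}^{n} C(k, p) == C(n+1, p+1) for small n and p.
for n in range(10):
    for p in range(10):
        assert sum(comb(k, p) for k in range(n + 1)) == comb(n + 1, p + 1)
print("hockey-stick identity verified for 0 <= n, p < 10")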


calculus - Gauss Divergence Theorem in the Plane..

Let $F:\mathbb R^2\longrightarrow \mathbb R^2$ be a $C^1$ vector field, $F(x, y)=(P(x, y), Q(x, y))$. Consider the smooth curve $\gamma:[0, 2\pi]\longrightarrow \mathbb R^2$ given by $\gamma(t)=(\sin(t), \cos(t))$ and let $B=\{(x, y)\in\mathbb R^2: x^2+y^2\leq 1\}$. How can I show that $$-\int_{\gamma} F\cdot \nu \ ds=\int\int_B\textrm{div}(F)\ dA,$$ where $\nu$ is the unit normal vector to $\gamma(t)$.



My Attempt: Firstly notice $\nu(\gamma(t))=(\sin(t), \cos(t))$ and $\|\gamma^{'}(t)\|=1$. Hence,
\begin{align*}
\displaystyle -\int_{\gamma}F\cdot \nu\ ds&=-\int_{0}^{2\pi} \langle F(\gamma(t)), \nu(\gamma(t))\rangle\|\gamma^{'}(t)\|\ dt\\
&=-\int_{0}^{2\pi} P(\gamma(t))\sin(t)+Q(\gamma(t))\cos(t)\ dt\\
&=-\int_{\gamma} P\ dx+Q\ dy\\

&=-\int\int_{B} \left(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\right)\ dA\\
&=\int\int_B\left(\frac{\partial P}{\partial y}+\frac{\partial Q}{\partial x}\right)\ dA,
\end{align*}
but I don't know how to finish the proof.. Can anyone help me?

calculus - Finding limit of $\lim\limits_{h\to 0} \frac1{h}\left(\frac1{\sqrt{x+h-2}}-\frac1{\sqrt{x-2}}\right)$

As expected, if you plug $h=0$ into the initial expression, the answer is undefined or indeterminate. I tried multiplying the numerator and the denominator by the conjugate $\frac1{\sqrt{x+h-2}}+\frac1{\sqrt{x-2}}$, but I couldn't simplify the expression enough to avoid the indeterminate value.




$$\lim_{h\to 0} \dfrac{\frac1{\sqrt{x+h-2}}-\frac1{\sqrt{x-2}}}{h}$$

arithmetic - Number game and divisibility: what is the proof that this trick (divisible by 8) works?



The number game is part of the long-running French game show "Des chiffres et des lettres", which in the UK is also known as "Countdown".



Short description: you have 6 tiles corresponding to numbers, and you must use all 4 operations (add, multiply, subtract, divide) to obtain a randomly chosen three digit number.



I play this game quite often and stumbled upon a page which gave this trick to tell whether a (three digit) number is divisible by 8. Say the three digits are named H, T and U (for Hundreds, Tens and Units), then to find if HTU is divisible by 8, you can do this:




  • if H is even, then TU must be divisible by 8;


  • if H is odd, then TU must be divisible by 4, but not 8.



And indeed this works (insofar as I didn't find any counterexample); for instance 544 is divisible by 8 (8 * 68) since 5 is odd, and 44 is divisible by 4 but not 8.



But what is the proof?


Answer



Note that $1000$ is divisible by $8$, so we can just consider $H$, $T$, and $U$.



In fact, let's group $T$ and $U$ together into $R$, i.e. $R=10T+U$.




So, we now need to consider $100H+R$.



If $R$ is divisible by $8$ and $H$ is odd, then the remainder mod $8$ comes from $100H = 100(2n+1) = 200n+100 \to 100 \to 4$ (reducing mod $8$ at each arrow), so $100H+R$ is not divisible by $8$.



If $R$ is divisible by $4$ but not by $8$, and $H$ is odd, then we can write $R=4(2p+1)$ and $H=2h+1$, so $100H+R = 200h+100+8p+4=8(25h+p+13) \to 0$, i.e. the number is divisible by $8$.



One can consider the other two cases to prove the algorithm.
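As a sanity check, the whole trick can be brute-forced in a few lines; here is a hedged sketch in Python (my own, not from the answer):

# H even: TU must be divisible by 8; H odd: TU must be divisible by 4
# but not by 8, i.e. TU % 8 == 4.
for n in range(100, 1000):
    h, tu = divmod(n, 100)
    rule = (tu % 8 == 0) if h % 2 == 0 else (tu % 8 == 4)
    assert (n % 8 == 0) == rule
print("trick verified for all three-digit numbers")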


Friday 29 April 2016

functions - Partition of Set: Proof




Let $f : A\to B$. If $\{B_1,B_2,\dots,B_n\}$ is a partition of $B$, prove that $\{f^{-1}(B_1),f^{-1}(B_2),\dots,f^{-1}(B_n)\}$ is partition of $A$.





I approached it like following:
Since $f^{-1}$ exists, $f$ must be one-one and onto. So for each $x$ in $A$ there is one distinct image in $B$ under $f$. Conversely, "for each $y$ in $B$ there is a distinct pre-image under $f$". This implies that we can define a set $A_i \subseteq A$ which is the set of all elements of $A$ whose image lies in $B_i$ under $f$, i.e., $A_i=f^{-1}(B_i)$. Since $f$ is onto, $f$ covers all of $B$ and hence the union of the $A_i$s is equal to $A$.
Since $B_i$ is a non-empty set, there exists a $y$ in $B_i$ such that $f(x)=y$ for some $x$. This means that $x$ is a pre-image of $y$ under $f$. Hence $x$ belongs to $A_i$, so $A_i$ is a non-empty set.
Now since $B_i \cap B_j$ is empty if $i\ne j$, there is no $y$ that belongs to both $B_i$ and $B_j$. Now $f$ is one-one, so there is no pre-image $x$ that belongs to both $A_i$ and $A_j$. So $A_i \cap A_j$ is empty. Hence the set of all $A_i$s forms a partition of $A$.



I think my logic is correct, but can someone help me write it in a formal way? Also, since there is a one-to-one correspondence between equivalence relations and partitions, how would one approach this using relations?


Answer



No, here $f$ is not necessarily a bijection. This is my hint: show that if $X, Y\subset B$ then
$$f^{-1}(X \cap Y) = f^{-1}(X) \cap f^{-1}(Y)\quad\text{and}\quad f^{-1}(X \cup Y) = f^{-1}(X) \cup f^{-1}(Y)$$
where the preimage $f^{-1}(X):=\{a\in A: f(a)\in X\}$.
Can you take it from here?




Edit. As regards the intersection-property, see how to prove $f^{-1}(B_1 \cap B_2) = f^{-1}(B_1) \cap f^{-1}(B_2)$ . The union-property can be shown in a similar way.


real analysis - About the differentiability of $\sum_{n=1}^\infty\frac{\mu(n)\sin(\sqrt{2n^3}x)}{\sqrt{2n^3}}e^{-i\sqrt{n^3}x}$

Combining the identity $(1)$ from [1] with the specialization $a_n=\sqrt{2n^3}$ and $b_n=i\sqrt{n^3}$ (here $i$ denotes the imaginary unit, thus $i^2=-1$) for integers $n\geq 1$, and with the criterion from Fubini's theorem (see [2] if you need it), I can prove that $$\frac{1}{\zeta(3)}=\int_0^\infty\sum_{n=1}^\infty\frac{\mu(n)\sin(\sqrt{2n^3}x)}{\sqrt{2n^3}}e^{-i\sqrt{n^3}x}dx,\tag{A}$$
where $\mu(n)$ is the Möbius function.





Question. I was wondering about questions involving the function $$f(x):=\sum_{n=1}^\infty\frac{\mu(n)\sin(\sqrt{2n^3}x)}{\sqrt{2n^3}}e^{-i\sqrt{n^3}x}$$ defined on $[0,\infty)$, some of which I know how to solve and some of which I don't.



I know that the series defining $f(x)$ converges uniformly and absolutely on $[0,\infty)$, but what about differentiability? I am asking what relevant facts we can deduce about the differentiability of our function $f(x)$ for real numbers $0\leq x<\infty$.



Many thanks.




Feel free, if you prefer, to add hints for the previous question instead of a full answer.




References, both from Mathematics Stack Exchange:



[1] See the answer by D'Aurizio for Cantarini's lemma, identity $(1)$ from: Find the closed form for $\int_{0}^{\infty}\cos{x}\ln\left({1+e^{-x}\over 1-e^{-x}}\right)dx=\sum_{n=0}^{\infty}{1\over n^2+(n+1)^2}$.



[2] See the second paragraph of the answer by Eldredge: When can a sum and integral be interchanged?

proof verification - Prove using induction that $(2^n)+1<3^n$

I needed help proving this statement, this is what I have tried so far




Base case:



$n = 2$



$5 < 9 \to \text{True}$



Inductive step:



Assume that for some $k\geq 2$, $(2^k)+1<3^k$; show that $P(k+1)$ holds.




-> 2^(k+1) + 1
-> 2*(2^k) + 1
-> (1+1)*(2^k) + 1
-> 2^k + 2^k + 1
< 2^k + 3^k


This is where I get stuck; I am not sure where to go from there or how to manipulate that to get 3^(k+1).



Any help would be appreciated thank you
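For what it's worth, here is one way to finish the estimate (a sketch, not the only route): since $2^k<3^k$ for every $k\geq 1$,

$$2^{k+1}+1=2^k+2^k+1<2^k+3^k<3^k+3^k=2\cdot 3^k<3\cdot 3^k=3^{k+1}.$$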

elementary number theory - Is this proof for $11^{10} \equiv 1 \pmod{100}$ correct?



Show that $11^{10} \equiv 1\pmod{100}$



Here I asked a question about this same problem, and all the answers say that I should prove this by binomial expansion.



But doing a binomial expansion to show this is the same as computing $11^{10}$ mod $\ 100$ on a calculator; no property is being used to prove this.



But I think this proof might be right.




$\gcd(11, 100) = 1$



$11^{\varphi(100)} \equiv 1 \pmod{100}$



$11^{40} \equiv 1 \pmod{100} \implies$



$11^{10} *11^{10} *11^{10}* 11^{10}\equiv 1\pmod{100}$



by looking at everything mod $\ 100$, I can say that $11^{10} \equiv 1\pmod{100}$



Answer



This doesn't work because the solutions to $x^4 \equiv 1 \pmod{100}$ are $$x \equiv 1, 7, 43, 49, 51, 57, 93, 99 \pmod{100}.$$ So from $$ (11^{10})^4 \equiv 1 \pmod{100} $$ the best you can say is that $$ 11^{10} \equiv 1, 7, 43, 49, 51, 57, 93, \text{ or } 99 \pmod{100}. $$
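Both claims in this answer are easy to check computationally; here is a hedged sketch in Python, using the built-in three-argument pow for modular exponentiation:

# Fourth roots of 1 modulo 100, and the actual residue of 11^10.
print([x for x in range(100) if pow(x, 4, 100) == 1])
# -> [1, 7, 43, 49, 51, 57, 93, 99]
print(pow(11, 10, 100))
# -> 1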


Thursday 28 April 2016

number theory - Factorization of polynomial with prime coefficients



I'm interested in the problem linked with this answer.



Let $ f(x) = a_n + a_1 x + \dots + a_{n-1} x^{n-1} $ be a polynomial with distinct $a_i$ which are primes.




(A polynomial like that for $n=4$: $f(x) = 7 + 11 x + 17 x^2 + 19 x^3$)




  • Is it possible, for some $n$, that $x^n-1$ and $f(x)$ have a common divisor?



(A negative answer would mean that it is possible to generate non-singular circulant matrices from any prime numbers)



In other words





  • $x^n-1$ has roots which lie (as complex vectors) symmetrically on the unit circle in the complex plane; can such a root also be a root of $f(x) = a_n + a_1 x + \dots + a_{n-1} x^{n-1}$ in the general case where the $a_i$ are constrained as above?


Answer



This is certainly possible. The easiest way is to use pairs of twin primes and $n=4$. Such as
$$
f(x)=7+5x+11x^2+13x^3
$$

where $f(-1)=0$ and $x+1$ is a common factor of $f(x)$ and $x^4-1.$






Extending the same idea to third roots of unity. Consider
$$
f(x)=7+5x+17x^2+29x^3+31x^4+19x^5.
$$
Because $7+29=5+31=17+19=36$ we easily see that the third roots of unity $\omega=e^{\pm 2\pi i/3}$ are zeros of $f(x)$ as $f(\omega)=36(1+\omega+\omega^2)=0$. Therefore $f(x)$ has a common factor $\Phi_3(x)=x^2+x+1$ with $x^3-1$ and therefore also with $x^6-1$.







For an example of an odd $n$ I found the following. As
$$53=3+13+37=5+17+31=11+19+23$$ is the sum of three disjoint triples of primes, we can, as above, show that
$$
f(x)=3+5x+11x^2+13x^3+17x^4+19x^5+37x^6+31x^7+23x^8
$$
has the common factor $x^2+x+1$ with $x^9-1$.
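These examples are straightforward to verify numerically; the following is a hedged sketch in Python (the variable names are mine):

import cmath

f4 = lambda x: 7 + 5*x + 11*x**2 + 13*x**3
f6 = lambda x: 7 + 5*x + 17*x**2 + 29*x**3 + 31*x**4 + 19*x**5
f9 = lambda x: sum(c * x**k for k, c in
                   enumerate([3, 5, 11, 13, 17, 19, 37, 31, 23]))
w = cmath.exp(2j * cmath.pi / 3)  # a primitive third root of unity

print(f4(-1))             # 0, so (x + 1) divides f4 and x^4 - 1
print(abs(f6(w)) < 1e-9)  # True, so x^2 + x + 1 divides f6
print(abs(f9(w)) < 1e-9)  # True, so x^2 + x + 1 divides f9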


calculus - Solving a limit with L'hospital rule or without it

So I have a limit that I want to solve:



$$\lim_{x\to0}\bigg(\frac{1+\ln(1-x^2)}{2x+1-\sin(x)}\bigg)^{\frac{1}{x^2}}$$



So I thought of using L'Hospital's rule, but it's not the $\displaystyle\frac{0}{0}$ situation.



Or can it go for a different L'Hospital's rule situation, like $\displaystyle\frac{\infty}{\infty}$, except that I get $1$ in the numerator?

Can I still use it, or should I first manipulate the fraction somehow so that the rule becomes applicable, or is there another trick to use?



Any help would be appreciated.
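One standard way in (a hedged hint, since the question is left open here): a $1^\infty$ form can be converted into a $\frac{0}{0}$ form by exponentiating,

$$\lim_{x\to0}\bigg(\frac{1+\ln(1-x^2)}{2x+1-\sin(x)}\bigg)^{\frac{1}{x^2}}=\exp\left(\lim_{x\to0}\frac{1}{x^2}\ln\frac{1+\ln(1-x^2)}{2x+1-\sin(x)}\right),$$

and the limit inside the exponential is of the form $\frac{0}{0}$, to which L'Hospital's rule (or Taylor expansions) applies.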

abstract algebra - Do polynomials in two variables always factor in linear terms?

Consider a polynomial of one variable over $\Bbb C$:
$$p(x)=a_0+a_1x+\cdots+a_nx^n,\quad a_i\in\Bbb C.$$
We know from the Fundamental Theorem of Algebra that there exists $c,\alpha_i\in\Bbb C$ such that
$$p(x)=c(x-\alpha_1)\cdots(x-\alpha_n),$$

i.e. we can factor $p$ in linear terms.



Now, what about polynomials $p(x,y)$ in two variables?




Is it still true that we can factor $p(x,y)$ in linear terms? I.e. can we always write
$$p(x,y)=c(\alpha_1x+\beta_1y+\gamma_1)\cdots(\alpha_nx+\beta_ny+\gamma_n)$$
for some $c,\alpha_i,\beta_i,\gamma_i\in\Bbb C$?
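An aside that may help (a sketch, not part of the original post): the answer turns out to be no in general. For example,

$$p(x,y)=y-x^2$$

cannot be written as a product of linear factors: such a product vanishes on a union of lines in $\mathbb C^2$, whereas the zero set of $y-x^2$ is a parabola, which contains no line.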


Why can we assume a statement is true for $n = k$, when using induction?

I know the principle of mathematical induction. The only thing that causes my confusion is that we suppose a statement is true for $n=k$ and then we prove the statement is also true for $n=k+1$; but how can we suppose the case $n=k$ to be true? What if a statement is true for $n=k+1$ but not true for $n=k$? Does $k$ start from $1$, or from $2$ if in the base case we prove the statement to be valid for $n=1$? Please help me with this confusion.

linear algebra - Generating a random matrix with all eigenvalues equal to one



I want to generate a random matrix whose all eigenvalues are equal to one. How can I do it?
I know one method to generate matrices with given eigenvalues is to generate a random orthogonal matrix $Q$ and then construct some diagonal matrix $A$ with desired eigenvalues on the diagonal, then $Q^T A Q$ will be a random matrix with desired eigenvalues. However if I want all eigenvalues equal to 1, then $A$ must be an identity matrix, and then $Q^T A Q$ will also always generate an identity matrix, which is not random at all.


Answer



Let $A$ be an upper triangular matrix with $1$s along the diagonal.
If you have a complete set of eigenvectors, then $A$ must be the identity matrix. But an upper triangular matrix with $1$s along the diagonal doesn't have a complete set of eigenvectors.
The eigenvalues are all $1$, and that will still be true for $Q^TAQ$ because
$$\det(A-\lambda I)=\det(Q^T(A-\lambda I)Q)=\det(Q^TAQ-\lambda I)$$
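A small numerical illustration of this construction, as a hedged sketch in Python with NumPy (names and seed are mine):

import numpy as np

n = 5
rng = np.random.default_rng(0)

# Random unit upper triangular matrix: 1s on the diagonal,
# random entries above it.
A = np.triu(rng.standard_normal((n, n)), k=1) + np.eye(n)

# Random orthogonal Q via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

M = Q.T @ A @ Q
# All eigenvalues cluster around 1; defective eigenvalues are
# numerically sensitive, so expect scatter on the order of 1e-3.
print(np.linalg.eigvals(M))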


real analysis - There is no polynomial $f(x)$ with integer coefficients such that $f(n)$ is prime for all non-negative integers $n$.



I am currently working on a homework problem from my Real Analysis class:



[Image: the statement of the homework problem, asking to show there is no nonconstant polynomial $f(x)$ with integer coefficients such that $f(n)$ is prime for all non-negative integers $n$]




So far, I was able to prove part (a), that is, $p \mid f(kp)$. I'm quite confused as to how I can use this result to prove part (b). Also, I don't see the intuition in how $f(x)-p$ would be of any help to reach a contradiction. Any suggestions would be helpful.


Answer



Like paw88789, I'm assuming that the problem asks you to prove that there is no nonconstant $f$ that satisfies the described property. So suppose otherwise that there is a nonconstant $f(x)=\sum_{i=0}^r a_ix^i$ such that $f(k)$ is prime for all nonnegative integers $k$.



From (a), $f(kp)$ is divisible by $p$. But for $k\geq 0$, $f(kp)$ is prime, so that $f(kp)=p$ for all $k\geq 0$. (In particular $f(0)=a_0=p$, so the constant term of $f(xp)-p$ vanishes.) But this means the polynomial $g(x)=f(xp)-p$ has infinitely many roots $x=0,1,2,\ldots$ By the theorem that you are allowed to assume, we infer that $g(x)$ is identically $0$. But
$$
g(x)=\sum_{i=1}^r(a_ip^i)x^i
$$
so we have $a_i=0$ for all $i\geq 1$. This means $f(x)$ is the constant polynomial $f(x)=p$. Contradiction.



Wednesday 27 April 2016

linear algebra - Does the set of matrix commutators form a subspace?



The following is an interesting problem from Linear Algebra 2nd Ed - Hoffman & Kunze (3.5 Q17).



Let $W$ be the subspace spanned by the commutators of $M_{n\times n}\left(F\right)$:
$$C=\left[A, B\right] = AB-BA$$
Prove that $W$ is exactly the subspace of matrices with zero trace.



Assuming this is true, one can construct $n^2 - 1$ linearly independent matrices, in particular

$$[e_{i,n}, e_{n,i}]\ \text{for $1\le i\le n-1$}$$
$$[e_{i,n}, e_{n,j}]\ \text{for $i\neq j$}$$
where the $e_{i,j}$ are the standard basis matrices, with $0$ entries everywhere except a $1$ in row $i$, column $j$; these commutators span the space of traceless matrices.



However, I have trouble showing (or rather, believing, since this fact seems to be given) that the set of commutators form a subspace. In particular, I am having difficulty showing that the set is closed under addition. Can anyone shed some light?


Answer



The set of matrix commutators is in fact a subspace, as every commutator has trace zero (fairly easy to prove) and every matrix with trace zero is a commutator (stated here but I know of no elementary proof), and the set of traceless matrices is clearly closed under linear combinations.



However, the problem is talking about the subspace spanned by the set of matrix commutators, which means the set of linear combinations of matrix commutators. This is by definition a subspace. This is probably because the proof that every matrix with trace zero is a commutator is difficult (although I'm not sure that this is the case).




Hope that clears things up a bit. If not, just ask!


discrete mathematics - Strong mathematical induction without basis step



I'm reading about strong mathematical induction in Susanna Epp's Discrete Mathematics, and here's the principle as it's stated in the textbook:






  1. P(a), P(a + 1), . . . , and P(b) are all true. (basis step)

  2. For any integer k ≥ b, if P(i) is true for all integers i from a through k, then P(k + 1) is true. (inductive step)




The principle is followed by the text that's confusing me:




Strictly speaking, the principle of strong mathematical induction can be written without a basis step if the inductive step is changed to “∀k ≥ a − 1, if P(i) is true for all integers i from a through k, then P(k + 1) is true.” The reason for this is that the statement “P(i) is true for all integers i from a through k” is vacuously true for k = a − 1. Hence, if the implication in the inductive step is true, then the conclusion P(a) must also be true,∗ which proves the basis step.



∗If you have proved that a certain if-then statement is true and if you also know that the hypothesis
is true, then the conclusion must be true.





I understand why $k = a − 1$ makes the statement $\forall i \in \mathbb Z\, ((a \leq i \leq k) \rightarrow P(i))$ vacuously true, but I can't grasp why replacing $k \geq b$ (and hence $k \geq a$, since $b \geq a$) with $k \geq a-1$ proves the basis step implicitly. Why is it?


Answer



Because the statements $ P(a), ..., P(a-1) $ are all true, since there are no statements in this list. The author is using a somewhat confusing but common convention with ellipses: when you list $ firstElement ... lastElement $ where $lastElement$ precedes $firstElement$, you interpret that as an empty list. I will add, the author should have written the basis step as "for all $i$ with $a \leq i \leq b $, $P(i)$ is true," so that the interval of integers was more clear.


How do these two summations equate?



Apparently, the summation

$$
\sum_{j = i + 1}^n \frac{1}{j - i + 1}
$$
is equal to the summation
$$
\sum_{k=1}^{n - i} \frac{1}{k + 1}
$$
I don't grasp the intuition behind why.


Answer



Set $k=j-i$ so when $j=i+1$ then $j-i=1$ so $k=1$. Now when $j=n$ we have $k=j-i=n-i$.



real analysis - Evaluate the limit using only the following results





Let $u_2>u_1>0$ and also let $u_{n+1}=\sqrt{u_n u_{n-1}}$ for all $n \geq 2$. Then prove that the sequence $\{u_n\}$ converges.




For this, use of only the following results is permissible,




  1. Archimedean Property.


  2. For two sequences $\{x_n\}$ and $\{y_n\}$





$$\displaystyle\lim_{n\to\infty}\left(x_n+y_n\right)=\displaystyle\lim_{n\to\infty}x_n+\displaystyle\lim_{n\to\infty}y_n$$



$$\displaystyle\lim_{n\to\infty}\left(x_n\cdot y_n\right)=\left(\displaystyle\lim_{n\to\infty}x_n\right)\cdot\left(\displaystyle\lim_{n\to\infty}y_n\right)$$



provided they exists.




  3. For a sequence $\{z_n\}$, saying that the limit $\displaystyle\lim_{n\to\infty}z_n$ doesn't exist is equivalent to saying that $$\exists \varepsilon>0\mid \left\lvert z_n-l\right\rvert\geq\varepsilon \ \forall l \in \mathbb{R} \land \forall n\geq n_0 (\in \mathbb {N})$$




I have been able to prove that $\displaystyle\lim_{n\to\infty}\left(u_n u_{n+1}^2\right)=u_1u_2^2$. Now one can conclude from 1 and 2 that the limit must be $\sqrt[3]{u_1u_2^2}$ but that happens only if we can prove that the limits exist. And this is exactly where I am stuck. Using only the three mentioned results I can't prove that. Any help will be appreciated.


Answer



This is similar to what you were trying, but if it helps... (apologies for my poor English)



Supposing you are allowed to use induction, then
$ u_{n+1}^2=u_n u_{n-1}\implies u_{n+1}^2 u_n=u_{n}^2 u_{n-1}=...=u_2^2 u_1 $



Making $ u_2^2 u_1=a $ you have $ u_{n+1}=\sqrt{a/u_n}=\sqrt[4]{a u_{n-1}} $ and again by induction $ u_{n+1}=a^{s_n} u_i^{c_n} $

$ i\in \{1,2\} $, $ s_n $ is a partial sum of a geometric series, and $ c_n $ a power of $\frac{1}{4}$, all depending on the parity of $ n $.



Now you can use the result number $ (3) $ to prove that each element on the right member of the last equality has a limit (I think this is an easier problem than the initial), and then use result $ (2) $ to conclude.


calculus - Nonsensical result in the midst of calculating an integral via substitution.




I was just calculating an integral via a trigonometric substitution and ended up with $\color{red}{ \text{something pretty nonsensical} }$ but $\color{blue}{ \text{reversing the substitution} }$ seemed to clean it up.
$$\begin{aligned} \int_{0}^{\frac{\pi}{2}} \dfrac{\text{d}\theta}{3+5\cos \theta} \ & \overset{t=\tan \frac{\theta}{2}}= \ \dfrac{1}{4} \color{red}{ \log \left| \dfrac{2+t}{2-t} \right| \Bigg|}_{\color{red}{0}}^{ \color{purple}{\infty }} \\ & \ \ = \dfrac{1}{4} \color{blue}{ \log \left| \dfrac{2+\tan\frac{\theta}{2}}{2-\tan\frac{\theta}{2}} \right| \Bigg|_{0}^{\frac{\pi}{2}} } \end{aligned}$$
Why is that the case? Is it something to do with the nature of the substitution?
Is there something I'm not considering when performing the substitution?



Any help would be much appreciated.



Thank you.




Edit: It turns out that $\color{purple}{\tan\frac{\pi}{4} =1}$. Problem solved!


Answer



When $\theta$ goes from $0$ to $\frac{\pi}2$, $\frac{\theta}2$ goes from $0$ to $\frac{\pi}4$, so $\tan\frac{\theta}2$ goes from $0$ to $1$, not from $0$ to $\infty$.
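For completeness, with the corrected upper limit the original computation finishes cleanly (a sketch):

$$\int_{0}^{\frac{\pi}{2}} \dfrac{\text{d}\theta}{3+5\cos \theta}=\frac{1}{4}\log\left|\frac{2+t}{2-t}\right|\Bigg|_{0}^{1}=\frac{1}{4}\log 3.$$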


Tuesday 26 April 2016

algebra precalculus - How to express $\max(x+y,0)$ in terms of $\max(\pm x, 0)$ and $\max(\pm y, 0)$



Suppose we define $X^+, X^-$ as $\max(X, 0)$ and $\max(-X, 0)$ respectively. Then, given $Z = X + Y$, I've been trying to figure out how to express $Z^+$ and $Z^-$ in terms of $X^\pm$ and $Y^\pm$, which is supposedly possible.



I know that $\max(x, y) = \frac{x+y+|x-y|}2$, and so $Z^+ = X^+ + Y^+ + \frac{|X+Y|-|X|-|Y|}{2}$, but I'm unsure what to do with this remaining term; I can't seem to figure out how to express it in terms of the other quantities. I have considered breaking the domain of $X, Y$ up into regions where $X+Y\ge 0$, $X\ge 0$ and $Y\ge 0$ and flip-flopping the signs, but this seemed like too many cases to be the true solution.



How exactly do you do this? I can't seem to see it.



Answer



You cannot express $(X+Y)^+$ alone in terms of $X^\pm$ and $Y^\pm$, and likewise $(X+Y)^-$, but you can express the two of them together:
$$
(X+Y)^+ - (X+Y)^- = X + Y = (X^+ - X^-) + (Y^+ - Y^-).
$$

(Part (b) of that exercise in Rosenthal's book kind of gives it away.)


sequences and series - Looking for a closed form for $a_m =\sum_{k=0}^{\infty}\binom{k+m}{m}\frac{1}{4^{k}(2(k+m))!}$



I have this sequence
$$
a_m =\sum_{k=0}^{\infty}\binom{k+m}{m}\frac{1}{4^{k}(2(k+m))!}
$$
and there seems to be a pattern arising when it is evaluated by WA. It involves $\cosh(1/2)$ and $\sinh(1/2)$. Below are the first few values:
$$

\begin{align*}
a_0 &= \cosh\left(\frac{1}{2}\right)\\
a_1 &= \sinh\left(\frac{1}{2}\right)\\
a_2 &= \frac{1}{2}\left(\cosh\left(\frac{1}{2}\right )-2\sinh\left(\frac{1}{2} \right ) \right )\\
&\vdots\\
\end{align*}
$$
but $\cosh(1/2)$ and $\sinh(1/2)$ keeps showing up for $a_3$, $a_4$, $\cdots$



Could anyone find a general expression for $a_m$ involving $\cosh(1/2)$ and $\sinh(1/2)$.




I'd appreciate any help, thanks.


Answer



$$a_m = \sum_{k\geq 0}\frac{(k+m)! 4^{-k}}{m! k! (2k+2m)!}=\frac{4^m}{m!}\cdot\left.\frac{d^m}{dx^m}\sum_{k\geq 0}\frac{(x/4)^{k+m}}{(2k+2m)!}\right|_{x=1} $$
and now you may recognize in the last series part of the Taylor series of $\cosh(\sqrt{x}/2)$.
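A quick numerical check of the first two values, as a hedged sketch in Python (truncating the series after 30 terms, which is far more than enough here):

from math import comb, factorial, cosh, sinh

def a(m, terms=30):
    return sum(comb(k + m, m) / (4**k * factorial(2*(k + m)))
               for k in range(terms))

print(a(0), cosh(0.5))  # both ~ 1.1276259652...
print(a(1), sinh(0.5))  # both ~ 0.5210953054...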


calculus - Baby Rudin ("Principles of Mathematical Analysis"), chapter 3 exercise 3




If $s_1=\sqrt 2$,
and $$s_{n+1}=\sqrt {2+\sqrt{s_n}}$$




Prove that $\{s_n\}$ converges, and that $s_n<2$ for all $n=1,2,\ldots$



My attempt to use mathematical induction: if $s_n<2$ then I use
$s_{n+1}=\sqrt {2+\sqrt s_n}$, to prove $s_{n+1}<\sqrt {2+2}=2$.
I guess $s_n$ is an increasing sequence, I can use the $s_n$ is bounded and increasing, and then prove that $s_n$ converges.


Answer



To give a name to your idea, you want to use the monotone convergence theorem: a bounded, increasing sequence converges to the supremum of its set of values. Indeed, a standard approach is to use induction.





Lemma: $s_n$ is bounded




Base case:
$n=1$



$$s_1= \sqrt{2} <2$$



Suppose now for arbitrary $k$




$$ s_k< 2$$



Now observe as the inductive step:
$$ s_{k+1}=\sqrt{2 + \sqrt{s_k}}< \sqrt{2+2}=2$$



By the principle of mathematical induction we have found that $2$ is a bound for all $n\in \mathbb N$.




Lemma: $s_n$ is increasing





Base case:
$$s_2-s_1>0$$
$$\sqrt{2 + \sqrt{\sqrt{2}}}- \sqrt{2}>\sqrt{2 + 0}- \sqrt{2}=0 $$



Now suppose for arbitrary $k$ we have:
$$ s_{k}-s_{k-1}>0 \implies s_k > s_{k-1}\implies \sqrt{s_k}> \sqrt{s_{k-1}}$$



We now make our inductive step:

$$ s_{k+1}-s_k=\sqrt{2 + \sqrt{s_k}}-\sqrt{2 + \sqrt{s_{k-1}}}>\sqrt{2 + \sqrt{s_{k-1}}}-\sqrt{2 + \sqrt{s_{k-1}}}=0$$
Hence we have an increasing sequence for all $n \in \mathbb N$ this completes the proof and our sequence is indeed convergent $\square$.


elementary set theory - proof: The set N of natural numbers is an infinite set



DEFINITION 1: A set $S$ is finite with cardinality $n \in\mathbb N$ if there is a bijection from the set $\left\{0, 1, ..., n-1 \right\}$ to $S$. A set is infinite if its not finite.



THEOREM 1: The set $\mathbb N$ of natural numbers is an infinite set.



Proof: Consider the injection $f :\mathbb N \rightarrow \mathbb N$ defined as $f(x) = 3x$. The range of $f$ is a subset of the domain of $f$.




I understand that $f(x) = 3x$ is not surjective and thus not bijective, since for example the range does not contain the number $2$. But what would happen if we were to define $f: \mathbb N\rightarrow \mathbb N$ as $f(x) = x$? It is a bijective function. Doesn't that make the set of natural numbers finite according to the definition? What am I missing? Can somebody please tell me?


Answer



No. The definition of finite is $f:\{0,1,...,n-1\}\to S$ is bijective.



We know $f:\mathbb N\to\mathbb N$ via $f(n) = n$ is bijective, but this maps $\mathbb N$ onto $\mathbb N$. It does not map $\{0,1,...,n-1\}$ onto $\mathbb N$.



Basically, this proves "$\mathbb N$ is finite if $\mathbb N$ is finite".


probability - Coupon Collector Problem with multiple copies and X amount of coupons already collected

I have a variant of the coupon collector problem where there are $n$ different coupons being collected, drawn with equal probability and with replacement, but 2 copies of each coupon are needed. Now say I have a total of $X$ relevant coupons collected of the $2n$ total coupons needed for a full set, and I want to know how many extra, unneeded coupons I should expect to have drawn. What would be an equation for that?
I have found on Wikipedia an equation for the number of draws needed to get a full set with one of each coupon, and for the number of draws needed to get a full set with multiple copies of each coupon.
I have found on Wikipedia an equation for the number of draws needed to get a full set of one of each coupon, and number of draws needed to get a full set of multiple copies of each coupon.



$E(T) = n\log{n} + \gamma n + {\frac{1}{2}} + O({\frac{1}{n}})$, where $\gamma \approx 0.5772156649$



$E(T_m) = n\log{n} + (m-1)n\log\log{n} + O(n)$, as $n\to\infty$




where $m$ is the number of copies of each coupon to be collected, so 2 in this case, and $T_m$ is the first time $m$ copies of each coupon are collected.



I also found this from a previous question. Coupon Collector's Problem with X amount of coupons already collected.




The probability $p_i$ of selecting a new coupon thus equals $\frac{n-i+1}{n}$, and the expected number of draws needed to draw a new coupon equals $\frac{1}{p_i} = \frac{n}{n-i+1}$. As such, the expected value for the time needed to draw all $n$ coupons can be calculated as:



$$E[T] = \frac{n}{n} + \frac{n}{n-1} + \ldots + \frac{n}{1} = n \sum_{k=1}^{n}{\frac{1}{k}}$$



In this case, however, we have already drawn $X$ unique coupons. As such, the estimated number of draws needed to find all $n$ coupons equals:




$$E[T] = E[t_{X+1}] + E[t_{X+2}] + \ldots + E[t_n] = n \sum_{k=1}^{n-X} \frac{1}{k}$$




So to find the total number of drawn coupons upon collecting the $X^{th}$ unique coupon, the equation would be
$$n \sum_{k=1}^{n}{\frac{1}{k}}-n \sum_{k=1}^{n-X} \frac{1}{k} $$



The total number of just unneeded duplicate coupons would be
$$n \sum_{k=1}^{n}{\frac{1}{k}}- n \sum_{k=1}^{n-X} \frac{1}{k} - X $$




I'm not sure how to combine these two equations, $E(T_m) = n\log{n} + (m-1)n\log\log{n} + O(n)$ as $n\to\infty$ and $n \sum_{k=1}^{n}{\frac{1}{k}}-n \sum_{k=1}^{n-X} \frac{1}{k}$, to get the total number of coupons drawn upon having $X$ relevant coupons collected towards a full collection of 2 of each coupon. With that equation I would then just subtract $X$ (the number of relevant coupons collected) from the total to get the number of unneeded coupons. I will admit I'm not in a math profession and haven't done any higher-level math lately, but if I understand correctly the first equation is more of an approximation of the value, while the second equation is exact. So I'm not sure how to combine them, or whether they can be easily combined.



Also, while I somewhat understand $O(n)$ and its use, I'm not sure how to input it with the rest of the equation into WolframAlpha or even Excel, the end goal of where I want to use this equation. The maximum number of coupons would be about 100, if that helps. If it would be easier, the number of coupons with only 1 copy collected and the number with 2 copies collected could be used as inputs instead of the total number of relevant coupons collected.
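Since exact formulas for this variant get messy, a Monte Carlo estimate may be the most practical route. Here is a hedged sketch in Python; the function name, interface and trial count are my own, not from any reference:

import random

def expected_wasted(n, x, m=2, trials=10000):
    # Estimate the expected number of unneeded (already-complete) draws
    # made by the time x relevant coupons have been collected, when each
    # of n coupon types is needed m times.
    total = 0
    for _ in range(trials):
        counts = [0] * n
        relevant = wasted = 0
        while relevant < x:
            i = random.randrange(n)
            if counts[i] < m:
                counts[i] += 1
                relevant += 1
            else:
                wasted += 1
        total += wasted
    return total / trials

print(expected_wasted(n=100, x=150))  # prints the estimated count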

calculus - What's wrong with this argument? (Limits)

In order to compute $\displaystyle \lim_{x \to \infty} \sqrt{9x^2 + x} - 3x$ we can multiply by the conjugate and eventually arrive at a limit value $1/6$.



But what about the line of reasoning below, what is wrong with the argument and why? I can't think of a simple explanation, I had one involving the limit definition but I believe there should be a less complicated one.



Here's the argument:
Clearly for large $x$ we can say $\sqrt{9x^2 + x} \approx \sqrt{9x^2} = 3x$. Hence $$ \lim_{x \to \infty} \sqrt{9x^2 + x} - 3x = \lim_{x \to \infty} 3x - 3x = 0 \ . $$ So the limit ought to be zero, easy!



What goes wrong and why?
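For reference, the conjugate computation (a sketch) gives the value and also shows exactly what the shortcut throws away:

$$\sqrt{9x^2+x}-3x=\frac{(9x^2+x)-9x^2}{\sqrt{9x^2+x}+3x}=\frac{x}{\sqrt{9x^2+x}+3x}=\frac{1}{\sqrt{9+\frac1x}+3}\longrightarrow\frac16.$$

The flaw is that $\sqrt{9x^2+x}\approx 3x$ only says the ratio of the two quantities tends to $1$; it says nothing about their difference, which is what the limit asks for.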

algebra precalculus - Why does $2+2=5$ in this example?





I stumbled across the following computation proving $2+2=5$



[Image: the step-by-step calculation purporting to prove $2+2=5$]




Clearly it doesn't, but where is the mistake? I expect that it's a simple one, but I'm even simpler and don't really understand the application of the binomial form to this...


Answer



The error is in the step where the derivation goes from
$$\left(4-\frac{9}{2}\right)^2 = \left(5-\frac{9}{2}\right)^2$$
to
$$\left(4-\frac{9}{2}\right) = \left(5-\frac{9}{2}\right)$$



In general, if $a^2=b^2$ it is not necessarily true that $a=b$; all you can conclude is that either $a=b$ or $a=-b$. In this case, the latter is true, because $\left(4-\frac{9}{2}\right) = -\frac{1}{2}$ and $\left(5-\frac{9}{2}\right) = \frac{1}{2}$. Once you have written down the (false) equation $-\frac{1}{2} = \frac{1}{2}$ it is easy to derive any false conclusion you want.


Monday 25 April 2016

calculus - Integral $\int_0^1\left(-\frac{\operatorname{li} x}{x}\right)^a dx$

Let $\operatorname{li} x$ denote the logarithmic integral
$$\operatorname{li} x=\int_0^x\frac{dt}{\ln t}.$$

Consider the following parameterized integral:
$$I(a)=\int_0^1\left(-\frac{\operatorname{li} x}x\right)^adx.$$




Can we find a closed form for this integral?




We can find some special values of this integral:
$$I(0)=1,\,\,I(1)=1,\,\,I(2)=\frac{\pi^2}6,\,\,I(3)\stackrel?=\frac{7\zeta(3)}2$$
The last value was suggested by numeric computations, and I do not yet have a proof for it.





Can we prove the conjectured value of $I(3)$?




One could expect that $I(4)$ might be a simple rational (or at least algebraic) multiple of $\pi^4$ but I could not find such a form.




Can we find closed forms for $I(4),I(5)$ and other small integer arguments?
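A hedged numerical check of the conjecture (a sketch using mpmath; the integrand has only a mild logarithmic singularity at $x=1$, which tanh-sinh quadrature normally copes with, but the tolerance may need care):

from mpmath import mp, li, quad, pi, zeta

mp.dps = 25
I = lambda a: quad(lambda x: (-li(x) / x)**a, [0, 1])
print(I(2), pi**2 / 6)        # should agree
print(I(3), 7 * zeta(3) / 2)  # should agree, per the conjecture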


integration - Find $\lim_{n \rightarrow \infty}\frac{1}{n} \int_{1}^{\infty} \frac{\mathrm dx}{x^2 \log{(1+ \frac{x}{n})}}$



Find:



$$\lim_{n \rightarrow \infty} \frac{1}{n} \int_{1}^{\infty} \frac{\mathrm dx}{x^2 \log{(1+ \frac{x}{n})}}$$



The sequence $\frac{1}{nx^2 \log{(1+ \frac{x}{n})}}=\frac{1}{x^3 \frac{\log{(1+ \frac{x}{n})}}{\frac{x}{n}}}$ converges pointwise to $\frac{1}{x^3}$. So if we could apply Lebesgue's Dominated Convergence Theorem, we have:




$\lim_{n \rightarrow \infty} \frac{1}{n} \int_{1}^{\infty} \frac{\mathrm dx}{x^2 \log{(1+ \frac{x}{n})}}=\lim_{n \rightarrow \infty} \int_{1}^{\infty} \frac{\mathrm dx}{x^3}=\frac{1}{2}$



I have a problem with finding a majorant. Could someone give me a hint?


Answer



I showed in THIS ANSWER, using only Bernoulli's Inequality, that the sequence $\left(1+\frac xn\right)^n$ is monotonically increasing for $x>-n$.



Then, we can see that for $x\ge 1$ and $n\ge1$, the sequence $f_n(x)$ given by



$$f_n(x)=n\log\left(1+\frac xn\right)$$




is also monotonically increasing. Therefore, a suitable dominating function is provided simply by the inequality



$$\frac{1}{n\log\left(1+\frac xn\right)}\le \frac{1}{\log(1+x)}\le \frac{1}{\log(2)}$$



Therefore, we have



$$\frac{1}{nx^2\log\left(1+\frac xn\right)}\le \frac{1}{x^2\log(2)}$$



Using the dominated convergence theorem, we can assert that




$$\begin{align}
\lim_{n\to \infty}\int_1^\infty \frac{1}{nx^2\log\left(1+\frac xn\right)}\, dx&=\int_1^\infty \lim_{n\to \infty}\left(\frac{1}{nx^2\log\left(1+\frac xn\right)}\right)\,dx\\\\
&=\int_1^\infty\frac{1}{x^3}\,dx\\\\
&=\frac12.
\end{align}$$


notation - Data Storage Question Help

Apologies if this question is in the wrong area, I'm fairly new here!
I'm currently studying a Computer Science degree and I'm so bad at Maths I should probably be ashamed. I'm learning, but since I'm teaching myself, this next part I'm totally confused as to whether I'm right or wrong. Any help would be really advantageous!




Question: An average of 48.8 million public photos were uploaded to the Flickr photo sharing website each month during 2013. If the average size of an uploaded photo is 3 MB, how much storage space does a month of uploaded public photos take up? Give your answer to 3.s.f in both Terabytes (TB) and bytes (b).





I know I have to use scientific notation for this question, but trying to work it out is confusing me! Here's what I've got so far!



1MB = 2^20 or 1,048,576
3MB = (2^20 x 3) or 3,145,728


So, to work out the above question




(2^20 x 3MB) x 48,800,000 (pictures)
=153,511,530,000,000 MB


To give my answer in TB (this is where I'm starting to get confused)



1 MB = 9.537109375E-7 TB
153,511,530,000,000 MB = 146405625.1934 TB



I suspect that the answer is right, I'm just not sure what the scientific notation is. I have no idea what E-7 means and I found that conversion from a data conversion website.



Then, when trying to convert it into bytes, I got some ridiculous number like



160968506081300000000 


Basically, I'm one very confused lady right now!



I'd be very grateful for some insight and criticism into where I'm going wrong!




Many thanks in advance,
Chloe
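For what it's worth, here is the arithmetic worked through under the decimal (SI) convention, where 1 MB = 10^6 bytes and 1 TB = 10^12 bytes, as a hedged sketch in Python. (The E-7 you saw is just scientific notation: 9.537109375E-7 means 9.537109375 × 10^-7.)

photos = 48.8e6        # photos per month
mb_per_photo = 3

total_mb = photos * mb_per_photo   # 146,400,000 MB
total_bytes = total_mb * 10**6     # 1.464e14 bytes
total_tb = total_bytes / 10**12    # 146.4 TB

print(f"{total_tb:.3g} TB")        # 146 TB
print(f"{total_bytes:.3g} bytes")  # 1.46e+14 bytes

Under the binary convention (1 MB = 2^20 bytes) the byte count would instead be 48.8e6 × 3 × 2^20 ≈ 1.54e14. The earlier figure of 146405625.1934 "TB" came from mixing the two conventions: the 153,511,530,000,000 was already a count of bytes, not megabytes.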

calculus - Proving the smoothness (indefinitely differentiability) of $2 cos^{-1}(1-x)$.

I am trying to prove that $\frac{d^n\theta}{dx^n}$, where $x = 1-\cos\frac{\theta}{2}$, exists $\forall n > 0$. I found that $\frac{d\theta}{dx} = \frac{2}{\sin\frac{\theta}{2}}$ and based on this answer I think I only need to prove that $\sin\frac{\theta}{2}$ is smooth (notice it is a function of $\theta$, and its first derivatives look like fun).



I also found that $\cos^{-1}(x) = \frac{\pi}{2} + \sum_{k=0}^\infty \frac{x^{1+2k}\left(\frac{1}{2}\right)_k}{k!+2kk!}$ (at Wolfram Alpha). But I am not sure that the existence of its Taylor series implies its smoothness. I thought that by linearity of differentiation, and because all terms in the series are polynomials (smooth functions), there could be a way to make it work.



Is there a simple (elegant) way to prove its smoothness? Until now my only approach is to find a pattern of the n-th derivative, however I am actively trying to avoid that because of this.



edit:
I am trying to prove that $\theta = 2 \cos^{-1}(1-x)$ is smooth (in other words, that $\frac{d^n\theta}{dx^n}$ exists for all $n>0$).




I tried different approaches, but based on the Taylor Expansion of $\cos^{-1}$ I found that $\theta = 2 \cos^{-1}(1-x)$ can be expressed by polynomials (which are smooth functions) and therefore its derivatives.



I have never worked with Taylor Series, so I am not really confident about this last statement.



Side note: My original expression was $x=1-\cos\frac{\theta}{2},\, -1 \le x \le 1$ (which implies $\theta = 2 \cos^{-1}(1-x)$) and because of that I derived $\frac{d\theta}{dx} = \frac{2}{\sin\frac{\theta}{2}}$ applying $\frac{d}{dx}$ to both sides.

reference request - Value in retracing mathematicians' steps (specifically Galois)?

So I'd like to learn Galois Theory, which I am probably not "qualified" for in an ordinary sense (I've never done abstract algebra, and I'm just now learning linear algebra in my vector calculus course), but the way I'd like to approach it is to try to sort of retrace Galois' steps, which, my thinking is, necessarily implies that I wouldn't need abstract algebra, because Galois certainly didn't have Dummit and Foote open for reference while he was messing around with permutation groups.



There are a couple of reasons why I'd like to do this, and a couple of reasons why it might be a bad idea, so I'm just trying to get the opinions of people who actually know what Galois theory entails. Here are the reasons why I want to try this:




1) I feel a gap in my education where high school "algebra" is concerned. You don't (or I should say, "I didn't") really get introduced to the idea of proofs and the abstract "why it works" of mathematics until calculus, so most of my algebra education consists of "here's how you can factor this particular quadratic equation," or "here's this neat trick for xyz." And this is all stuff that Galois, Abel, Gauss, etc., were wrestling with on the advanced, abstract level that I'd like to acquaint myself with.



1a) I expect that I will really like abstract algebra/group theory and that algebra, in some form, could very well be where I end up mathematically, so it seems like this is one of the most important areas to clear up and really understand on a deeper level.



2) (This is not all that distinct from 1 or 1a.) I get the impression that the "discovering" part of mathematics is essential to the abstract viewpoint. That is, in much the same way that Wittgenstein wrote that he doubts people will understand the Tractatus unless they've already thought the thoughts in it, I get the feeling that abstract mathematics is best understood after you've come up with the raw thoughts on your own. So the idea would be that, given that group theory at least partially had its start in Galois' ideas, I feel like this would be a good way to get that background for group theory in the general sense.



3) This is less relevant for the actual question so feel free to ignore it in your response: I'm a little tired of just being walked through mathematics (when I say "walked through" I picture a dog on a leash being walked through a maze without being given a chance to explore and find the dead ends and whatnot). I like what I'm learning, but I feel like neither in college (currently a freshman) nor high school are we given the time (or are expected) to stop and follow a line of thought or just really understand the concepts we are learning. I realize that math classes are structured like this because it's probably a more efficient way to accumulate a wide array of important tools for later. But I don't want to miss the part that's drawing me to math: the understanding part, the finding-the-structures part.



(So (3), is probably outside of the realm of this question but I just thought I'd give some context.)




For (1-2), my main question is "where to start?":



I was thinking of reading some of Lagrange's stuff ("Résolution algébrique des équations" and maybe some other stuff from his oeuvres... It's all - incredibly - online... seriously astounding that you can find all of it online...) and maybe something from Abel's oeuvres (as per Spivak's recommendation in Calculus), but will this be sufficient to get anywhere? (At the "end" of all of this, I'd probably read Emil Artin's Galois Theory, or something more modern on the subject.) Is there other mathematics that I'll need but don't have right now (like I said, I'm in the early portion of a vector calc/linear algebra course, and in high school I just went up to calculus and I read Spivak's this past summer)? Am I overlooking some massive obstacle that will ultimately prevent me from getting anywhere (or that will make this project absolutely useless in some way)?



Note that I'm not intending to fully "rediscover" this stuff. I know that it'd be pretty presumptuous to even imagine that I could make the jumps that Galois did. I just want to build a sort of scaffolding for the real thing.



This ended up being way longer than I intended, so, if you got this far (even if you don't respond), thanks for taking time to read all of it.

Sunday 24 April 2016

Indeterminate forms



Is $\dfrac{1}{\frac{0}{0}-1}$ an indeterminate form? I thought only $\dfrac 00,\,\dfrac\infty\infty$ and any form that can be represented in those two are indeterminate. Moreover, how do we know if a form is indeterminate?



P.S.



For people who says that this question doesn't reflect OP's effort, I came through it when calculating $$\lim_{t\rightarrow 0} \frac{1}{\frac{2\sin^2\frac{t}{2}\sin t}{(1-\cos t)(t-\sin t)}-1}.$$ I went to a Wikipedia page on indeterminate forms and there wasn't any mention of it.


Answer




Let us take the main purpose of this topic to be solving the limit, and discuss the method of solving it.



As I mentioned in the comments, the use of L'Hopital's rule is not determined by some or other "indeterminate forms". The theorem has very specific assumptions. Furthermore, I do not know what it means to be "completely indeterminate" and how that would specify the condition of being "indeterminate".



Suppose we needed to find $$\lim _{t\to 0} \frac{f(t)}{g(t)} $$
If $\lim _{t\to 0} f(t) = \lim_{t\to 0} g(t) =0$ OR $\lim_{t\to 0} f(t) =\pm\infty$ and $\lim _{t\to 0} g(t) =\pm\infty$, then
$$\lim _{t\to 0} \frac{f(t)}{g(t)} = \lim _{t\to 0} \frac{f'(t)}{g'(t)} $$
Of course, $f$ and $g$ must be differentiable around $t=0$, but that is not an obstacle for this particular problem.


Total order on complex numbers




The common argument against the existence of a total order on complex numbers that observes the known axioms on real numbers is that 0 < i or 0 < -i. But why? 0 is not even an imaginary number.




There's a possibility of treating $ni < -\infty$ for all $n$ in the set of natural numbers, such that we have the real line on the right and the imaginary line on the left, where a comparison operation compares the imaginary part before the real part.



The thing is, both negative numbers and imaginary numbers are not scalars that satisfy the usual axioms of multiplication. Why must imaginary numbers satisfy what even negative numbers cannot satisfy? In order to establish the 2-D complex plane, we just have to see the set of complex numbers as a cross product of 2 sets of real numbers, where the imaginary part comes before the real part, the opposite way of writing a complex number conventionally, namely, ai + b. Well, we've been using cross products to establish multiple dimensions for a long time. I don't see a problem here.



------------ Appendix ------------



Recall that imaginary numbers are derived to be more negative than negative numbers in the first place, namely, $i \cdot i = -1$ while $(-1)(-1) = 1$. So, it makes intuitive sense to treat them as numbers less than negative infinity. However, I forgot that they also cancel out like real numbers, namely, $i + (-i) = 0 \cdot i = 0$. The key is to not treat $0 \cdot i = 0$. The problem is that they are designed to be like real numbers, except for the $i$ sign, which is similar to the $-$ sign.



In any case, I was trying to define an INumber interface, but not being able to compare numbers is really inconvenient. It means that they cannot participate in many known formal verification and optimization techniques. The reason why I ask this question here is that if mathematicians can agree on this definition, it will be easy to adopt it in ISO, IEC and IEEE standards. Of course, I can define anything I want, but who will use my definition? If the new definition works, we will have safer and faster computer programs. It makes perfect economic sense.




I know I can simply try not to use complex numbers at all, but why invent them in the first place? Plus, it's really convenient to express wave functions in terms of complex numbers. Nowadays, what is not a wave in physics and engineering? By the way, we mostly deal with equations and not inequalities or any sort. So, it's not as meaningful as I pitch it here. Anyways.



I will work out the addition requirements before posting another question. Thank you all for your participation.


Answer



Note that the proof does not prove that there is no total ordering of the complex numbers. It is easy to order the complex numbers totally.



What it does prove is that there is no total ordering of the complex numbers which is compatible with their algebraic structure.



You say you want to have both $i<0$ and $-i<0$, but that is not compatible with the algebraic structure. Namely, if we have $i<0$ then adding $-i$ to both sides gives $0<-i$. Since you're asserting $-i<0$, which contradicts this, your ordering doesn't obey the rule
$$a<b \implies a+c<b+c,$$
which is one of the axioms for an ordered field.


Saturday 23 April 2016

Exercise about field extensions




Consider $a_1,\ldots,a_n\in \mathbb Z$.



i) Suppose $a_1,\ldots, a_n$ are pairwise relatively prime. I have to see by induction on n that $[\mathbb Q(\sqrt a_1,\ldots,\sqrt a_n):\mathbb Q]=2^n$




Once I proved the equality is true for $n=1$, I suppose it is true for $n-1$, so let's prove it for $n$:
Applying the tower law:
$[\mathbb Q(\sqrt a_1,\ldots,\sqrt a_n):\mathbb Q]=[\mathbb Q(\sqrt a_1,\ldots,\sqrt a_n):\mathbb Q(\sqrt a_1,\ldots,\sqrt a_{n-1})] [\mathbb Q(\sqrt a_1,\ldots,\sqrt a_{n-1}):\mathbb Q]$



By induction, $[\mathbb Q(\sqrt a_1,\ldots,\sqrt a_{n-1}):\mathbb Q]=2^{n-1}$



So we only have to see that $[\mathbb Q(\sqrt a_1,\ldots,\sqrt a_n):\mathbb Q(\sqrt a_1,...,\sqrt a_{n-1})]=2$



$[\mathbb Q(\sqrt a_1,\ldots,\sqrt a_n):\mathbb Q(\sqrt a_1,\ldots,\sqrt a_{n-1})]=[\mathbb Q(\sqrt a_1,\ldots,\sqrt a_{n-1})(\sqrt a_n):\mathbb Q(\sqrt a_1,\ldots,\sqrt a_{n-1})]=\deg(\operatorname{Irr}(\sqrt a_n,\mathbb Q(\sqrt a_1,\ldots,\sqrt a_{n-1})))$




$Irr(\sqrt a_n,\mathbb Q(\sqrt a_1,\ldots,\sqrt a_{n-1}))=X^2-a_n??$



I have to see that $\sqrt a_n \notin Q(\sqrt a_1\ldots,\sqrt a_{n-1})$. By contradiction,



if $\sqrt a_n \in \mathbb Q(\sqrt a_1,\ldots,\sqrt a_{n-1})$, then $\sqrt a_n= a+b\sqrt a_{n-1}$ where $a,b\in \mathbb Q(\sqrt a_1,\ldots,\sqrt a_{n-2})$. How can I get to a contradiction???



ii) Let $P$ be the set of prime numbers and let $F$ be the extension of $\mathbb Q$ given by $F=\mathbb Q (\sqrt p : p\in P)$. What is the degree of $F/\mathbb Q$? Is it finitely generated?



Could you help me with this problem please?




Thank you for your time and help.


Answer



I think that the most elegant proof of this question, which is a consequence of the fact that



Square roots of different square free positive integers are linearly independent over $\mathbb Q$,



can be found in:



http://www.thehcmr.org/issue2_1/mfp.pdf



complex analysis - Describe the set whose points satisfy the following relation: $|z^2 - 1| < 1$



There is a hint which states to use polar coordinates, but I feel like that complicates the problem more. As far as trying it myself, I get lost very early on. If we take $z = r(\cos{\theta} + i\sin{\theta})$, then we have



$|r^2(\cos{2\theta} + i\sin{2\theta}) - 1| < 1$




But I have no idea how to find the modulus of this point with that extra $-1$ in there.


Answer



Hint



Taking the square, we get after simplifying



$r^2<2\cos(2\theta)$



The set is inside the curve defined by its polar equation




$r=\sqrt{2\cos(2\theta)}$
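Filling in the squaring step (a sketch): with $z=r(\cos\theta+i\sin\theta)$,

$$|z^2-1|^2=(r^2\cos 2\theta-1)^2+r^4\sin^2 2\theta=r^4-2r^2\cos 2\theta+1<1\iff r^2<2\cos 2\theta\ \ (r>0),$$

so the set is the interior of the lemniscate $r=\sqrt{2\cos(2\theta)}$.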


Friday 22 April 2016

analysis - Improper integral convergence: $\int_{0}^{1}{\frac{\mid{\log(x)}\mid^a}{\sqrt{1-x^2}}}dx$



As its told on the title I want to check the convergence/divergence of the improper integral when $a\in \mathbb{R}$: \begin{equation}
\int_{0}^{1}{\frac{\mid{\log(x)}\mid^a}{\sqrt{1-x^2}}}dx
\end{equation}
So, it's improper at $x=0$ and $x=1$, so I split the integral in :
\begin{equation}
\int_{0}^{\frac{1}{2}}{\frac{\mid{\log(x)}\mid^a}{\sqrt{1-x^2}}}dx + \int_{\frac{1}{2}}^{1}{\frac{\mid{\log(x)}\mid^a}{\sqrt{1-x^2}}}dx
\end{equation}

I see that on the first one the integrand behaves like $\mid\log(x)\mid^a$ by the limit comparison test, but I don't know how to prove that $\int\mid{\log(x)}\mid^a dx $ converges.



Hopefully you can help me. Much thanks!


Answer



Note that



$$\int_{0}^{\frac{1}{2}}{\frac{\mid{\log(x)}\mid^a}{\sqrt{1-x^2}}}dx$$



converges $\forall a$ by comparison test with $\frac1{\sqrt x}$.




For the second part



$$\int_{\frac{1}{2}}^{1}{\frac{\mid{\log(x)}\mid^a}{\sqrt{1-x^2}}}dx
$$



let $1-x^2=y^2$



$$\int_{\frac{\sqrt 3}{2}}^{0}{\frac{\mid{\log(\sqrt{1-y^2}}\mid^a}{y}}\cdot \left(\frac{-y}{\sqrt{1-y^2}}\right)dy =
\int_{0}^{\frac{\sqrt 3}{2}}{\frac{\mid{\log(\sqrt{1-y^2}}\mid^a}{\sqrt{1-y^2}}}dy
$$




and note that for $y\to 0$



$$\mid\log\left(\sqrt{1-y^2}\right)\mid\sim \frac{y^2}{2}$$



thus the integral converges for $-2a<1$ that is $a>-\frac12$ and diverges for $-2a\ge1$ that is $a\le-\frac12$ by comparison with $y^{2a}$.


calculus - Quadratic Formula Returns Different Root Signs



My question is as follows:



I am currently working on a problem set for "Integration of Rational Functions By Partial Fractions" and I came across the following problem:




$$\int_0^1 \frac{2}{2x^2+3x+1} \,dx$$



Now, the issue I have is with factoring the denominator.



When I used the quadratic formula:



$$\begin{equation*} x = \frac{-b \pm \sqrt{b^{2} -4ac}}{2a} \end{equation*}$$



I came up with the roots:




$$\biggl(x-\frac{1}{2}\biggl)\biggl(x-1\biggl)$$



I then multiplied the left-hand factor by 2:
$$(2x-1)(x-1)$$



However, when I multiplied out these two roots, I came out with:
$$2x^2-3x+1$$



I know I am most likely doing something wrong, but I have looked for other questions similar to this one and I have been having a hard time finding out an accurate explanation of what my error is in this problem.




(edit)



The format for quadratic roots is as follows:
$$(x-a)(x-b)$$



Therefore, if either $a$ or $b$ is negative, then the sign inside the corresponding factor comes out positive.



As a result, the two roots would therefore be:




$$(2x+1)(x+1)$$



Thanks


Answer



HINT: it is $$2x^2+3x+1=(x+1)(2x+1)$$
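Spelled out (a sketch), the quadratic formula gives the negative roots, which is where the sign flipped:

$$x=\frac{-3\pm\sqrt{3^2-4\cdot 2\cdot 1}}{2\cdot 2}=\frac{-3\pm 1}{4}\in\left\{-\tfrac12,\,-1\right\},$$

so the factors are $\left(x+\tfrac12\right)(x+1)$, i.e. $(2x+1)(x+1)$ after clearing the fraction.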


algebra precalculus - Need solutions of the equation $(2+\sqrt{3})^{x/2} + (2-\sqrt{3})^{x/2} = 2^x$

$$\left(2+{3}^{1/2}\right)^{x/2} + \left(2-{3}^{1/2}\right)^{x/2} = 2^x.$$
Clearly $x = 2$ is a solution. I need others, if there are any. Please help.
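A possible line of attack (a hedged sketch, not a verified solution): divide both sides by $2^x$ to get

$$\left(\frac{2+\sqrt3}{4}\right)^{x/2}+\left(\frac{2-\sqrt3}{4}\right)^{x/2}=1.$$

Both bases lie in $(0,1)$, so the left-hand side is strictly decreasing in $x$; hence the equation has at most one real solution, and $x=2$ is therefore the only one.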

Generic triangular numbers sequence formula

I know that I can get the nth element of the following sequence




1   3   6  10  15  21


With the formula



(n * (n + 1)) / 2


where n is the index of the element I want. How can I generalise the formula to get the nth element of the following sequences (by "following sequences" I mean):




1 -> 1   3   6  10  15  21
2 -> 2   5   9  14  20
3 -> 4   8  13  19
4 -> 7  12  18
5 -> 11  17
6 -> 16
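One possible generalisation, as a hedged sketch in Python (the indexing is my own): if r is the sequence number and k the position within it, the rows above are the anti-diagonals of a grid, and the k-th element of sequence r appears to be T(r+k-1) - (r-1), where T is the triangular-number formula already given.

def element(r, k):
    # k-th element (1-based) of sequence r: the triangular number
    # T(r + k - 1) shifted down by r - 1.
    n = r + k - 1
    return n * (n + 1) // 2 - (r - 1)

# Spot checks against the table above.
assert [element(1, k) for k in range(1, 7)] == [1, 3, 6, 10, 15, 21]
assert [element(2, k) for k in range(1, 6)] == [2, 5, 9, 14, 20]
assert element(6, 1) == 16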

integration - Is each "elementary + finite functions" function "elementary + finite functions"-integrable?

It is known that there exist elementary functions which are not elementary integrable, i.e. there exists no elementary anti derivative. Example: $f(x) = e^{-x^2}$.



Let $A$ be the set of elementary functions $f: \mathbb{R} \rightarrow \mathbb{R}$. Then:





  1. Add finitely many Riemann integrable functions $f: \mathbb{R} \rightarrow \mathbb{R}$ to $A$.

  2. Add all compositions of $A$-functions to $A$.

  3. Repeat 2. until the set does not grow anymore. (Is this process guaranteed to end after finitely many steps? If not, 1. should only allow functions which lead to an end.)



Let $f \in A, g: \mathbb{R} \rightarrow \mathbb{R}, g(x) = \int_0^x f(s) \mathrm{d}s$. My question: Does there exist a set of functions (to be chosen in 1.) so that $f \in A \Rightarrow g \in A$ is always true?



PS: Sorry for the wording of the title, I failed to come up with something better.




Edit: If it makes the task easier, the functions in 1. may have a finite number of parameters. For example adding all the functions $f_k: x \mapsto \int_0^x e^{t^k} \mathrm{d}t$ is also allowed now.

Thursday 21 April 2016

elementary number theory - System of Linear Congruences

Find all $x$ such that



\begin{align}
x&\equiv 1 \pmod {12}\\
x&\equiv 4 \pmod {21}\\
x&\equiv 18 \pmod {35}
\end{align}



I'm not quite sure if this system of linear congruences is solvable, since
$\gcd(12,21) =3$, $\gcd (12,35)=1$ and $\gcd(21,35) = 7$, and the CRT states: "If $(m_i, m_j) = 1$, then the system has as its complete solution a single residue class $\pmod{m_1\cdots m_r}$."
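For what it's worth, the pairwise conditions are consistent here ($1\equiv 4\pmod 3$ for the first pair and $4\equiv 18\pmod 7$ for the last), so by the generalized CRT a unique solution exists modulo $\operatorname{lcm}(12,21,35)=420$. A brute-force check, as a hedged sketch in Python (math.lcm needs Python 3.9+):

from math import lcm

congruences = [(1, 12), (4, 21), (18, 35)]
period = lcm(12, 21, 35)  # 420
print([x for x in range(period)
       if all(x % m == r for r, m in congruences)])
# -> [193], i.e. x = 193 + 420k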

linear algebra - Looking for insightful explanation as to why right inverse equals left inverse for square invertible matrices



The simple proof goes:




Let B be the left inverse of A, C the right inverse.



C = (BA)C = B(AC) = B



This proof relies on associativity yet does not give any insight as to why this surprising fact about matrices is true.



AC means a bunch of linear combinations of columns of A.
CA means a bunch of linear combinations of rows of A. Completely different numbers get multiplied in each case.




The proof above is just a bunch of syntactical steps not having much to do with matrices directly, I cannot see how CA=AC=I.



Can anyone shed some light on this?


Answer



In fact, this isn't about matrices per se, but about inverses in general, and perhaps more specifically about inverses of functions. The same argument works for any function that has a left and a right inverse (and for elements of a monoid or ring, though these can also be interpreted as "functions" via an appropriate setting).



If you really want to try to understand the proof in terms of "meaning", then you should not think of matrices as a bunch of columns or a bunch of numbers, but rather as functions, i.e., as linear transformations.



Say $A$ is an $m\times n$ matrix; then $A$ is "really" a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$: the columns of $A$ are the images of the standard basis vectors of $\mathbb{R}^n$ under the transformation. If $B$ is a right inverse of $A$, then $B$ is $n\times m$, and $AB$ acts like the identity transformation on $\mathbb{R}^m$. In particular, $AB$ has to be onto, so the rank of $AB$ is $m$; since the rank of $AB$ is at most the rank of $A$, then the rank of $A$ has to be $m$; since the rank of $A$ is at most $\min(m,n)$, then $m\leq n$. This tells us that $A$ is onto (full rank), and that it has at least as many columns as it has rows.




If $C$ is a left inverse of $A$, then $C$ must be an $n\times m$ matrix, and $CA$ acts like the identity on $\mathbb{R}^n$. Because $CA$ is one-to-one, $A$ has to be one-to-one. In particular, its nullspace is trivial. That means that it cannot have more columns than rows (that would require a nontrivial nullspace, by the Rank-Nullity Theorem); since it has at least as many columns as it has rows, $A$ has exactly the same number of columns as rows, so $m=n$.



Moreover, $A$ is now one-to-one and onto. So it is in fact bijective. So it is in fact invertible. Invertible matrices have unique inverses by definition, so $B$ and $C$ have to be equal: they have no choice in the matter. It isn't about the details of the "recipe", it's about the properties of functions: once you have a function that is one-to-one and onto, it has an inverse and the inverse is unique.






I honestly think that trying to puzzle out the details of the "recipe" is not insightful here: it is staring at the bark of a single tree instead of trying to appreciate the forest.



But if you must (and I really think you shouldn't), then you want to realize that $AC$ and $CA$ are talking in a different language: the columns of $A$ specify a basis, $\gamma$, and tell you how to express the elements of $\gamma$ in terms of the standard basis $\beta$; it provides a "translation" from $\gamma$ to $\beta$. That is, $A=[\mathrm{Id}]_{\gamma}^{\beta}$. The inverse, $C$, explains how to express the elements of the standard basis $\beta$ in terms of the vectors in $\gamma$, $C=[\mathrm{Id}]_{\beta}^{\gamma}$.
$AC$ talks in the language of the standard basis $\beta$, $CA$ talks in the language of $\gamma$. Then it becomes clear why "the same recipe" (not really) should work. It's not really the same recipe, because in $CA$ you "hear" vectors in $\gamma$, translate them into $\beta$, and then translates them back in to $\gamma$. But in $AC$ you "hear" vectors in $\beta$, translate them into $\gamma$, and then back into $\beta$. The "translation recipes" are the same, whether you do $\beta\to\gamma$ first or you do it second (translating English to Russian is the same, whether you are translating something written originally in English into Russian, or something that was translated into English first).




$A$ establishes a bijection between the vectors expressed in terms of $\gamma$, and the vectors expressed in terms of $\beta$. $C$ is the bijection going "the other way". Both $AC$ and $CA$ are the identity, but they are the identity of slightly different structures: $AC$ is the identity of "$\mathbb{R}^n$-with-basis-$\beta$", and $CA$ is the identity of "$\mathbb{R}^n$-with-basis-$\gamma$." Only when you forget that $AC$ and $CA$ are being interpreted as matrices on vector-spaces-with-basis do you realize that in fact you have the same "formula" for the expressions (i.e., that both matrices are "the" identity matrix).



If you want to stare at the bark so intently that you must think of matrices in terms of "linear combinations of rows" or "linear combinations of columns", you are going to miss a lot of important properties of matrices. Matrices are really functions; multiplication of matrices is really composition of functions. They aren't a bunch of vectors or a bunch of numbers thrown into a box that you multiply based on some ad hoc rule. They are functions.



Compare how easy, and, yes, intuitive, it is to realize that matrix multiplication is associative because it is just "composition of functions", vs. figuring it out by expanding the double summations of an entry-by-entry expression of $(AB)C$ and $A(BC)$: you are going in the wrong direction for 'intuitive' understanding. Staring at those summations will not tell you why multiplication is associative, it will only tell you it is. Trying to puzzle out why a right inverse is also a left inverse in terms of "adding columns" and "adding rows" is not going to help either: you need to think of it in terms of functions.
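If a numerical illustration helps (a quick sketch, not part of the argument above): a randomly generated square matrix is invertible with probability $1$, and its numerically computed inverse works on both sides.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))        # a random 4x4 matrix; almost surely invertible

C = np.linalg.inv(A)                   # the unique two-sided inverse

# Identity on both sides, up to floating-point error:
print(np.allclose(A @ C, np.eye(4)))   # True
print(np.allclose(C @ A, np.eye(4)))   # True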


Wednesday 20 April 2016

real analysis - Show that $\sqrt{2+\sqrt{2+\sqrt{2\cdots}}}$ converges to 2




Consider the sequence defined by
$a_1 = \sqrt{2}$, $a_2 = \sqrt{2 + \sqrt{2}}$, so that in general, $a_n = \sqrt{2 + a_{n - 1}}$ for $n > 1$.
I know 2 is an upper bound of this sequence (I proved this by induction). Is there a way to show that this sequence converges to 2? What I think is that the key step is to prove 2 is the least upper bound of this sequence. But how?


Answer



Let $ x = \sqrt {2 + \sqrt {2 + \sqrt {2 + \cdots}}} $, granting for the moment that the limit exists (this is justified below). Then note that $$ x^2 = 2 + \sqrt {2 + \sqrt {2 + \cdots}} = 2 + x \implies x^2 - x - 2 = 0. $$ The two solutions of this equation are $x=2$ and $x=-1$, but since the square root cannot be negative, the limit must be $2$.
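To justify the assumption that the limit exists: the asker already has the bound $a_n\le 2$, so only monotonicity is missing. Note that

$$a_{n+1}^2 - a_n^2 = (2 + a_n) - a_n^2 = (2 - a_n)(1 + a_n) > 0 \quad\text{whenever } 0 < a_n < 2,$$

so the sequence is increasing and bounded above, hence convergent by the monotone convergence theorem; the algebra above then identifies the limit.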


real analysis - Proving an inequality using the mean value theorem



I am trying to show that



$|f(x)-f(y)|<|x-y|$,



for the function $f$ to be defined as $f:[0,+\infty)\mapsto [0,+\infty)$, $f(x)=(1+x^2)^{1/2}$, using the mean value theorem.




I have done this:



Since $f$ is differentiable on $[0,+\infty)$, for any $x<y$ there is a point $x_0$ with $x<x_0<y$ such that

$f(x)-f(y)=(x-y)f'(x_0)$,



by the mean value theorem. Hence,



$|f(x)-f(y)|=|x-y||f'(x_0)|=|x-y||x_0 (1+{x_0} ^2)^{-1/2}|\leq|x-y||x_0|\leq|x-y|M<|x-y|$




where M is a constant.



Can someone tell me if this is correct?


Answer



Hint:




  • First prove the general result: if $f:\mathbb{R}\to \mathbb{R}$ is a differentiable function and $|f'(x)| < M$ for all $x\in\mathbb{R}$ then for all $x,y\in\mathbb{R}$ the inequality $$|f(x)-f(y)| < M|x-y|$$ holds. The proof is very similar to what you have done in the question.


  • Next prove that if $f(x) = \sqrt{1+x^2}$ then $|f'(x)| < 1$. To do this consider $f'(x)^2 = \frac{x^2}{1+x^2}$ (made explicit below the list).



  • Combining the two results above gives the desired result.
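To make the second bullet explicit (a small addition):

$$|f'(x)| = \frac{|x|}{\sqrt{1+x^2}} < 1 \quad\text{for every } x\ge 0,$$

since $x^2 < 1+x^2$. Note that $\sup_x |f'(x)| = 1$ is never attained, which is why the conclusion is the strict inequality $|f(x)-f(y)|<|x-y|$ rather than a bound with a single constant $M<1$.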



summation - sum of this series: $\sum_{n=1}^{\infty}\frac{1}{4n^2-1}$




I am trying to calculate the sum of this infinite series after having read the series chapter of my textbook: $$\sum_{n=1}^{\infty}\frac{1}{4n^2-1}$$



my steps:



$$\sum_{n=1}^{\infty}\frac{1}{4n^2-1}=\sum_{n=1}^{\infty}\frac{2}{4n^2-1}-\sum_{n=1}^{\infty}\frac{1}{4n^2-1}=..help..=sum$$



I am missing some important property; I feel I am close to the right step but cannot quite spit it out.


Answer



Hint: Partial Fraction decomposition:$$\frac{1}{4n^2-1}=\frac{1}{(2n-1)(2n+1)}=\frac12[\frac{1}{2n-1}-\frac{1}{2n+1}]$$

You must then compute the closed form of
$$\sum_{n=1}^k[\frac{1}{2n-1}-\frac{1}{2n+1}]$$
Can you do that?
Note that
$$\sum_{n=1}^k\frac{1}{2n-1}=\frac11+\frac13+...+\frac1{2k-1}=\frac1{2\cdot 0+1}+\frac1{2\cdot 1+1}+...+\frac1{2(k-1)+1}=\sum_{n=0}^{k-1}\frac{1}{2n+1}=\sum_{n=1}^{k}\frac1{2n+1}+\frac{1}{2\cdot 0+1}-\frac1{2k+1}$$
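For completeness, the sum telescopes:

$$\sum_{n=1}^k\left[\frac{1}{2n-1}-\frac{1}{2n+1}\right] = 1 - \frac{1}{2k+1} \longrightarrow 1 \quad (k\to\infty),$$

so $\sum_{n=1}^{\infty}\frac{1}{4n^2-1}=\frac{1}{2}$.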


abstract algebra - On divisors of $p^n-1$

This question arose from discussions around my previous question. It may seem trivial or easy, but I am quite confused and can't see the answer, so I will be grateful for your help.

For a natural number $n$ and a prime number $p$, it can happen that there exists a polynomial $f(x)\in\mathbb{Z}[x]$ of the form $f(x)=\prod_{i=1}^{m}(x^{\lambda_i}-1)$ such that $\deg(f)>n$ and $f(p)\mid p^n-1$. For example, taking $p=3$, $n=2$ and $f(x)=(x-1)^3$, we have $(3^1-1)(3^1-1)(3^1-1)\mid 3^2-1$. However, under the assumptions $p\geq 3$ and $n\geq 4$, I could not find any such polynomial $f$ with $\deg(f)>n$ and $f(p)\mid p^n-1$, which led me to suspect that the following statement might be true:




Let $p\geq 3$ be a prime number and $n\geq 4$. Then for every $f(x)\in\mathbb{Z}[x]$ of the form $f(x)=\prod_{i=1}^{m}(x^{\lambda_i}-1)$ such that $\deg(f)>n$, we have $f(p)\not\mid p^n-1$.




Is the above assertion true? If yes, would you please hint me how to prove it (or refer me to a reference)?
Thank you in advance.
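Not an answer, but a small Python search (an editorial sketch; the bound max_deg on $\sum_i\lambda_i$ is an arbitrary choice of mine) supports the claim for small parameters:

from itertools import combinations_with_replacement
from math import prod

def counterexamples(p, n, max_deg=10):
    """Find f(x) = prod(x^l - 1) with deg f > n and f(p) | p**n - 1."""
    target = p**n - 1
    found = []
    for m in range(1, max_deg + 1):
        for lams in combinations_with_replacement(range(1, max_deg + 1), m):
            if n < sum(lams) <= max_deg and target % prod(p**l - 1 for l in lams) == 0:
                found.append(lams)
    return found

print(counterexamples(3, 2))   # [(1, 1, 1)] -- the example from the question
print(counterexamples(3, 4), counterexamples(5, 5))   # [] [] -- consistent with the claim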

Tuesday 19 April 2016

Can some one explain to me what is going on here - power of complex number



So here is the question and the work to solve it, but I have no idea how one knows to do the first step or what the first step is...



$$ \begin{align}
(6-i\sqrt{12})^{12} &= \left[\sqrt{48}\left(\cos\left(\frac{\pi}{6}\right) - i\sin\left(\frac{\pi}{6}\right)\right)\right]^{12}\\

&= (\sqrt{48})^{12} \left[\cos\left(\frac{12\pi}{6}\right) - i\sin\left(\frac{12\pi}{6}\right)\right]\\
&=48^6
\end{align}
$$


Answer



Step 1 Write your number in polar coordinates, i.e. $z=r(\cos t+i\sin t)$.



Step 2 Use De Moivre's theorem, that $(\cos t+i\sin t)^n=\cos nt+i\sin nt$.
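Both steps are easy to check numerically with Python's standard library (a sketch, for reassurance only):

import cmath, math

z = 6 - 1j * math.sqrt(12)

# Step 1: polar form. r = sqrt(48), argument -pi/6.
r, t = cmath.polar(z)
print(r**2, t, -math.pi / 6)   # 48.0...  -0.5235987...  -0.5235987...

# Step 2: De Moivre. z**12 = r**12 (cos 12t + i sin 12t) = 48**6.
print(z**12)                   # (12230590464... + ~0j); the imaginary part is round-off
print(48**6)                   # 12230590464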


real analysis - Example for a continuous function that has directional derivative at every point but not differentiable at the origin



Can we find a function $f:\mathbb R^n\to\mathbb R$ such that $f$ is continuous and $\partial_v f(p)$ exists for all $p\in\mathbb R^n$ and $v\in\mathbb R^n$, but $f$ is not differentiable at $0$?




Does such a function $f$ exist?



Here is an example of a function that has directional derivatives everywhere but is not continuous at the origin.


Answer



Consider polar coordinates $(r,\Phi)$ in $\mathbb R^2$. The function $f:\mathbb R^2\rightarrow\mathbb R$ which in polar coordinates is given by

$$ f(r,\Phi)\ :=\ r\cdot\sin(3\Phi) $$

is continuous everywhere, is infinitely differentiable outside the origin $O$, has every directional derivative at $O$, but is not differentiable at $O$.
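To spell out the two claims at the origin: along the direction $v=(\cos\theta,\sin\theta)$ one has $f(tv)=t\sin(3\theta)$ for all $t$ (for $t<0$ the polar angle shifts by $\pi$ and $\sin(3\theta+3\pi)=-\sin(3\theta)$), so

$$D_vf(O)=\lim_{t\to 0}\frac{f(tv)-f(O)}{t}=\sin(3\theta).$$

If $f$ were differentiable at $O$, the map $v\mapsto D_vf(O)$ would be linear, i.e. $\sin(3\theta)=a\cos\theta+b\sin\theta$ for some constants $a,b$; but $\theta=0$ forces $a=0$, $\theta=\pi/2$ forces $b=-1$, and then $\theta=\pi/6$ would give $1=-\tfrac12$, a contradiction.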



Monday 18 April 2016

probability - Expected number of die rolls for Tyche's die



Tyche's die is a custom D&D magic item that I'm working on. The part of its functioning which is relevant to this question is as follows:
Tyche's die can have 2, 4, 6, 8, 12, or 20 faces. If not 20-sided, when you roll the maximum score on the die, it transforms into the next stage (coin becomes tetrahedron, tetrahedron becomes cube, and so on). If not a coin, when you roll a 1, it transforms into the previous stage (icosahedron becomes dodecahedron, dodecahedron becomes octahedron, and so on).




The objective being to roll a $20$, and assuming the die starts as a coin, how many times can I expect to roll it before that occurs?



More generally: given a set of $n$ dice $\{d_1,\dots,d_n\}$ with numbers of faces $f_1,\dots,f_n$; given that the probability of changing from die $d_k$ to die $d_{k+1}$ or $d_{k-1}$ is $\frac{1}{f_k}$; and given $f_{k-1}<f_k$ for each $k$: starting with $d_1$, what is the number of die rolls I can expect to make before rolling $f_n$?



I wrote a Python script which simulated the specific version of the problem for me $300,000$ times and it seems to be around $124.4$, with a standard deviation of about $95$. However, I'm not sure how to go about solving this analytically.



I came up with trying to decompose the problem by solving it assuming we only have the two last dice and then adding dice progressively. However, even then I'm not sure how to take into account the probability of undoing progress.


Answer



You have a Markov chain. Let $E(n)$ be the expected number of rolls to get a $20$ given that you are rolling an $n$-sided die. You can set up a set of simultaneous equations. The first is $$E(20)=\frac 1{20}+\frac {18}{20}(E(20)+1)+\frac 1{20}(E(12)+1)$$
because you have $\frac 1{20}$ chance of rolling $20$ and being done, $\frac {18}{20}$ chance of adding one roll and being just where you started, and $\frac 1{20}$ chance of getting a $1$ and dropping back to a $12$-sided die. The next is

$$E(12)=\frac 1{12}(E(20)+1)+\frac {10}{12}(E(12)+1)+\frac 1{12}(E(8)+1)$$
and so on. I fed it to Alpha with $a=E(20)$ and so on and got
$$E(20) = 52, E(12) = 84, E(8) = 104, E(6) = 116, E(4) = 122, E(2) = 124$$
in agreement with your simulation
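For readers who would rather not feed the system to Alpha, here is a small NumPy sketch (the variable names and setup are mine) that builds and solves the same equations:

import numpy as np

faces = [2, 4, 6, 8, 12, 20]            # coin, d4, d6, d8, d12, d20
n = len(faces)
A = np.zeros((n, n))
b = np.ones(n)                          # every state costs one roll

for i, f in enumerate(faces):
    # probability of staying on the same die after one roll
    stay = (f - 1) / f if i == 0 else (f - 2) / f
    A[i, i] = 1 - stay
    if i > 0:
        A[i, i - 1] = -1 / f            # rolling a 1 drops a stage
    if i < n - 1:
        A[i, i + 1] = -1 / f            # rolling the maximum climbs a stage
    # on the d20, rolling the 20 ends the process, so there is no climb term

E = np.linalg.solve(A, b)
print(dict(zip(faces, E)))
# {2: 124.0, 4: 122.0, 6: 116.0, 8: 104.0, 12: 84.0, 20: 52.0}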


complex analysis - Help with improper integral

I need help solving this integral:



$$\int_0 ^\infty \frac{\sin(x)}{x} dx$$




I have a hint that says to try to calculate the integral of $$\frac{e^{iz}}{z}$$ along a "proper path"... but I don't know how to use that. Help!
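While the intended route is the contour integral from the hint, the target value $\pi/2$ is easy to confirm numerically (a sketch assuming SciPy is available; sici returns the sine integral $\operatorname{Si}(x)=\int_0^x \frac{\sin t}{t}\,dt$):

import math
from scipy.special import sici   # sici(x) = (Si(x), Ci(x))

Si, _ = sici(1e8)                # Si(x) -> pi/2 as x -> infinity
print(Si, math.pi / 2)           # both ~ 1.5707963...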

modular arithmetic - How to solve $x^2 + x \equiv 0 \pmod{10^5}$?



I've tried to solve it in the following way:




First let's solve $x^2+x\equiv 0 \pmod{10}$. The solutions are $-1, 0, 4, 5$.



So $-1$ and $0$ are obviously solutions of $x^2 + x \equiv 0 \pmod{10^5}$.
Now let's lift the solutions $4$ and $5$:



Let $x = 10k + 4$ and substitute this $x$ into $x^2 + x \equiv 0 \pmod{100}$. After a few operations we get $k\equiv 2 \pmod{10}$. Now write $k = 10l + 2$, so that $x = 10\cdot (10l + 2) + 4$, and solve the equation modulo $1000$. We keep doing similar steps until a solution modulo $10^5$ is found.



The problem is that we need to work with really big numbers. And we need to do it twice (for both roots $4$ and $5$). Is there a less complicated solution?


Answer




$10 = 2 \times 5$, so you should really do this mod $2^5$ and mod $5^5$ and use CRT.
Since $x^2 + x = x (x + 1)$ and $\gcd(x,x+1) = 1$, $x^2+x \equiv 0 \mod p^n$ (where $p$ is prime) iff either $x \equiv 0 \mod p^n$ or $x \equiv -1 \mod p^n$.
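Following that recipe, a short Python sketch (my own; it leans on the modular inverse built into pow) assembles all four solutions modulo $10^5$:

from itertools import product

M1, M2 = 2**5, 5**5                      # 32 and 3125, coprime

def crt(r1, r2):
    # the unique x mod 10**5 with x = r1 (mod 32) and x = r2 (mod 3125)
    return (r1 * M2 * pow(M2, -1, M1) + r2 * M1 * pow(M1, -1, M2)) % (M1 * M2)

sols = sorted(crt(r1, r2) for r1, r2 in product([0, M1 - 1], [0, M2 - 1]))
print(sols)                              # [0, 9375, 90624, 99999]
assert all((x * x + x) % 10**5 == 0 for x in sols)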


calculus - How to prove that continuous functions are Riemann-integrable?



In other words, how to prove




A continuous function over a closed interval is Riemann-integrable. That is, if a function $f$ is continuous on an interval $[a, b]$, then its definite integral over $[a, b]$ exists.




Edit:




The Definite Integral as a Limit of Riemann Sums :




Let $f(x)$ be a function defined on a closed interval $[a, b]$. We say that a number $I$ is the definite integral of $f$ over $[a, b]$ and that $I$ is the limit of the Riemann sums $\sum \limits_{k=1}^n f(c_k)\Delta x_k$ if the following condition is satisfied:



Given any number $\epsilon \gt 0$, there is a corresponding number $\delta \gt 0$ such that for every partition $P = \{x_0, x_1, ... , x_n\}$ of $[a, b]$ with $\|P \| < \delta$ and any choice of $c_k$ in $[x_{k-1}, x_k]$, we have
$$ \left| \sum_{k=1}^n f(c_k) \Delta x_k - I \ \right| \lt \epsilon .$$



Answer



First note that a precise formulation of your question is:





How do you prove that every continuous function on a closed bounded interval is Riemann (not Darboux) integrable?




You can find a proof in Chapter 8 of these notes.



Here is a rough outline of this handout:



I. I introduce the ("definite") integral axiomatically. One of the axioms is that the set of integrable functions on $[a,b]$ should contain all the continuous functions.
II. I prove that the Fundamental Theorem of Calculus follows (easily) from the axioms.
III. I introduce Riemann integrable functions (which are exactly what you wrote above) and verify that the class of Riemann integrable functions on $[a,b]$ satisfies the axioms of I. In particular:
IV. I prove that every continuous function is Riemann integrable.




Later I talk about the Darboux integral and how it compares to the Riemann integral. But it was an intentional decision to present the Riemann integral first. This is what students are expecting from their previous courses, and it is not so bad to work with, at least for a while.


Sunday 17 April 2016

Finding Fourier series y=|1-x|



I've been struggling with finding the Fourier series of the given function for a while now. (The function was given as an image; judging from the calculations below, it is $f(x)=|1-x|$ on $[-2,2]$, extended with period $4$.)



I've calculated my coefficients using the standard formulas. (The formulas and my worked integrals appeared as images in the original post and are not reproduced here.)
Though my result approximates the function reasonably well (as the plot showed), I'm pretty sure that I did something wrong, as online calculators give me different values for the coefficients, and the sums of some related series (like $\sum(-1)^n/n^2$) do not match the expected values (the integrals themselves are calculated correctly). Is there something wrong with how I approach these integrals for the coefficients, or are there other pitfalls I could have run into? I would really appreciate any help with the calculation. Thank you in advance.






Resulting Fourier series (shown as an image in the original post).


Answer



Here are my calculations. I haven't thoroughly checked them, but hopefully they will help.



$a_n = \frac {1}{2}\left[\int_{-2}^1 (1-x)\cos \frac {n\pi}{2} x\,dx - \int_1^2 (1-x)\cos \frac {n\pi}{2} x\,dx\right]\\
= \frac 12\left[\frac {2(x-1)}{n\pi}\sin\frac {n\pi}{2} x + \frac {4}{n^2\pi^2} \cos\frac {n\pi}{2} x\Big|_1^2 + \frac {2(x-1)}{n\pi}\sin\frac {n\pi}{2} x + \frac {4}{n^2\pi^2} \cos\frac {n\pi}{2} x\Big|_1^{-2}\right]$



Evaluated at $x = 2,$ and $x = -2,$



$\sin\frac {n\pi}{2} x = 0,\qquad \cos\frac {n\pi}{2} x = (-1)^n$



at $x = 1, (x-1) = 0$



$\cos\frac {n\pi}{2} x = 0$ if $n$ is odd, $-1$ if $n\equiv 2 \pmod 4$, $1$ if $n\equiv 0\pmod 4$



$a_n = -\frac {4}{n^2\pi^2},\ \frac {8}{n^2\pi^2},\ -\frac {4}{n^2\pi^2},\ 0$ when $n \equiv 1,2,3,0 \pmod 4$ respectively.



$b_n = \frac {1}{2}\left[\int_{-2}^1 (1-x)\sin \frac {n\pi}{2} x\,dx - \int_1^2 (1-x)\sin \frac {n\pi}{2} x\,dx\right]\\
= \frac12\left[\frac {4}{n^2\pi^2} \sin\frac {n\pi}{2} x - \frac {2(x-1)}{n\pi}\cos\frac {n\pi}{2} x\right]$




Evaluated at:



$x = 2\\
\frac {2}{n\pi}\cos n\pi\\(-1)^n\frac {2}{n\pi}$



$x = -2\\
\frac {6}{n\pi}\cos n\pi\\(-1)^n\frac {6}{n\pi}$



$x = 1\\
\frac {4}{n^2\pi^2} \sin\frac {n\pi}{2}\\
\frac {4}{n^2\pi^2},\ 0,\ -\frac {4}{n^2\pi^2},\ 0$



$b_n = -\frac{4}{n\pi} - \frac {4}{n^2\pi^2},\ \frac{4}{n\pi},\ -\frac{4}{n\pi} + \frac {4}{n^2\pi^2},\ \frac{4}{n\pi}$ when $n \equiv 1,2,3,0 \pmod 4$ respectively.
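Given how many sign conventions are in play, it may help to check everything numerically (a sketch; I am reading the function off the integrals above as $f(x)=|1-x|$ on $[-2,2]$ with period $4$):

import numpy as np
from scipy.integrate import quad

f = lambda x: abs(1 - x)
L = 2                                             # half-period

def a(n):
    return quad(lambda x: f(x) * np.cos(n * np.pi * x / L), -L, L, points=[1])[0] / L

def b(n):
    return quad(lambda x: f(x) * np.sin(n * np.pi * x / L), -L, L, points=[1])[0] / L

for n in range(1, 5):
    print(n, round(a(n), 6), round(b(n), 6))
# e.g. a(1) should match -4/pi^2 ~ -0.405285; any mismatch in b(n)
# points to a sign slip in the hand computation above.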


continuity - Find all multiplicative continuous functions on $(0,\infty)$


If $x>0,y>0$ and if $f(xy)=f(x)f(y)$, then $f=\, ?$




I tried the problem and got $f(x)^n=f(x^n)$. But the answer is $f(x)=x^n$. How?
Note that $f$ is a continuous function.

contest math - Is there a 2019-digit number such that the sum of squares of its digits is a square number?




Is there a number with $2019$ digits, all of them positive, such that the sum of the squares of its digits is a square number?




The case is clear if the number of digits is twice a perfect square: since $n^2\cdot(3^2+4^2)=n^2\cdot5^2$ is a square number, we can take the number $33\dots344\dots4$, where $3$ and $4$ each appear $n^2$ times, which shows that such a number exists whenever the digit count is $2n^2$. For other cases I made some progress, but I could not figure out the case $n=2019=3\cdot673$; I can't see why such a number wouldn't exist.


Answer



Let $a$ count the number of $1$'s in your $2019$-digit number, $b$ the number of $2$'s, $c$ the number of $3$'s, etc., up to $h$ the number of $8$'s and $k$ the number of $9$'s. Then the sum of the squares of the digits is




$$a+4b+9c+16d+25e+36f+49g+64h+81k$$



with



$$a+b+c+d+e+f+g+h+k=2019$$



So all we need is for



$$2019+3b+8c+15d+24e+35f+48g+63h+80k$$




to be a square with $b+c+d+e+f+g+h+k\le2019$



Since $2019+3\cdot2=2025=45^2$, and since non-negative combinations of $3b+8c$ form every integer greater than $13$ all by themselves, you have to go pretty high up to find a square that is not expressible as the sum of the squares of $2019$ nonzero digits. It might be of interest to find the first square that gets missed.
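For instance, taking $b=2$ and all the other counts zero ($2017$ ones and two $2$'s) already works, as a one-line check confirms (a sketch):

digits = [1] * 2017 + [2, 2]    # a 2019-digit number, all digits positive
s = sum(d * d for d in digits)
print(s, 45 * 45)               # 2025 2025 -- a perfect square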


real analysis - bijective measurable map existence



Does there exist a bijective measurable map between $\mathbb{R}$ and $\mathbb{R}^n$?



If so, could you give me an example of that?



Thank you.


Answer




A Polish space is a topological space that is homeomorphic to a complete separable metric space, for example $\Bbb R^n$ for any $n\in \Bbb N$. For a proof of the following fact, see e.g. here.




Any uncountable Polish space is Borel isomorphic (there exists a bimeasurable bijection) to the space of real numbers $\Bbb R$ with standard topology.



elementary number theory - Prove $\sqrt{2k}$ is irrational where $k$ is an odd integer.

Question:



Prove $ \sqrt{2k}$ is irrational where $ k$ is an odd integer.



My attempt:




Proof by contradiction:



Now, assume $\sqrt{2k}$ is rational. Then $\sqrt{2k} = \frac{a}{b}$ where $a,b \in \mathbb Z$, $b \neq 0$, and $a,b$ have no common factors.



$\sqrt{2k} = \frac{a}{b} \implies 2k = \frac{a^{2}}{b^{2}} \implies (b^{2})(2k) = a^{2} \implies 2\mid a^{2} \implies 2\mid a$ (since $2$ is prime) $\implies \exists c \in \mathbb Z$ such that $a = 2c$.



Then, $(2k)(b^{2}) = a^{2} \implies (2k)(b^{2}) = 4c^{2} \implies kb^{2} = 2c^{2} \implies 2\mid kb^{2} \implies 2 \mid b^{2}$ (because $k$ is odd) $\implies 2\mid b$, since $2$ is prime.



So $2\mid a$ and $2\mid b$, a contradiction.

calculus - $f$ convex strictly decreasing function, is $f'(x+\delta)-f'(x)$ convex



Assume you have a strictly decreasing convex differentiable function $f(x)$, $x \in \Bbb R^+$. I am wondering if the increment of the first derivative is also convex; i.e., is the following function convex:

$$g(x) = f'(x+\delta) - f'(x)$$ where $\delta$ is any positive number.



What I concluded:



I can say that $f'(x)$ is a strictly increasing function; also, since $f(x)$ is strictly decreasing, $f'(x)$ is always negative, meaning that it increases and approaches zero as $x \to \infty$. Now I can visualize $f'$ as concave and the difference $f'(x+\delta)-f'(x)$ as decreasing, but I am not sure how to show its convexity (if it holds).


Answer



To complete gerw's comment and to avoid leaving the question unanswered, let me give an explicit counterexample.



The question (letting $g=f'$) is equivalent to the following one: If $g\colon(0,\infty)\to\mathbb{R}$ is continuous, strictly increasing and bounded above, is $g(x+\delta)-g(x)$ convex for $\delta>0$? Here is a counterexample:
$$

g(x)=\frac{x-\sin x}{1+x-\sin x}.
$$
We have
$$
g'(x)=\frac{1-\cos x}{(1+x-\sin x)^2}\ge0,\quad 0\le g(x)\le1.
$$
$g$ is increasing, bounded above and $\lim_{x\to\infty}g(x)=1$, but it is not concave. Here is the graph of $g(x+\delta)-g(x)$ for $\delta=0.1+0.2\,k$, $k=0,1,2,3,4$ (the plot is not reproduced here).



Clearly $g(x+\delta)-g(x)$ is not convex.




If you want a counterexample in terms of $f$ let
$$
f(x)=\int_0^x(g(t)-1)\,dt.
$$
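Since the plot is not reproduced here, a numerical check (my own sketch; the grid and the $\delta$ values are chosen to match the description above) that the second difference of $g(x+\delta)-g(x)$ changes sign, i.e. the function fails convexity:

import numpy as np

g = lambda x: (x - np.sin(x)) / (1 + x - np.sin(x))

x = np.linspace(0.0, 30.0, 3001)
for delta in [0.1, 0.3, 0.5, 0.7, 0.9]:
    d = g(x + delta) - g(x)
    curv = d[2:] - 2 * d[1:-1] + d[:-2]         # discrete second difference
    print(delta, curv.min() < 0 < curv.max())   # True: both signs occur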


Is the function $\frac{\operatorname{Harmonic}(x)}{\operatorname{Airy}(x)}$ a convex function for real numbers $0<x<1$?



For real numbers $0\leq x\leq 1$, I know how to explore some aspects of the graph of the function (I was playing with the Wolfram Alpha online calculator and its plot; see the next paragraph) $$f(x)=\frac{H_x}{\operatorname{Ai}(x)},$$ where $\operatorname{Ai}(x)$ is the Airy function (see this MathWorld entry) and $H_x$ is the harmonic number.




The code in Wolfram Language is



plot Harmonic(x)/Airy(x), for 0<x<1



But I don't know whether it is possible to prove that this function is convex, that is $f''(x)\geq 0$, for $0<x<1$. I computed the second derivative with the Wolfram Alpha online calculator using this code



d^2/dx^2 Harmonic(x)/Airy(x)





Question. Is it possible to prove that $$\frac{H_x}{\operatorname{Ai}(x)}\tag{1}$$ is a convex function for real numbers in the open interval $(0,1)$? Many thanks.



Answer



Mathematica gives
$$f''(0)=\frac{\pi \left(\pi \Gamma \left(\frac{2}{3}\right)^2-4 \sqrt[6]{3} \zeta (3)\right)}{\Gamma \left(\frac{1}{3}\right)} \approx -0.0162334,$$
and continuity implies that the second derivative remains negative near $x=0$. Thus $f$ is not convex over the interval $(0,1)$.
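One can corroborate this numerically (a sketch; it assumes mpmath, whose harmonic, airyai, and diff cover what is needed):

from mpmath import mp, harmonic, airyai, diff

mp.dps = 30
f = lambda x: harmonic(x) / airyai(x)

print(diff(f, mp.mpf("0.001"), 2))   # ~ -0.0162, matching the value of f''(0) above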


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hôpital's rule? I know that when I use l'Hôpital I easily get $$ \lim_{h\rightarrow 0}...