Wednesday 30 April 2014

elementary set theory - Disjoint union with limsup






For any sets $A_n,n\in\mathbb{N}$ consider
$$
A^+:=\limsup_{n\to\infty}A_n:=\bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}A_k,~~~~~E_m:=\bigcup_{n\geq m}A_n.
$$
Show that the sets $E_m, m\geq 1$ can be written as a disjoint union
$$
E_m=A^+\uplus\biguplus_{n\geq m}(E_n\setminus E_{n+1}).
$$






I do not have a working idea. I started with writing $E_m$ as a disjoint union, i.e.
$$
E_m=A_m\uplus\biguplus_{i=m+1}^{\infty}A_i\setminus\bigcup_{j=m}^{i-1}A_j=A_m\uplus\biguplus_{i=m+1}^{\infty}\bigcap_{j=m}^{i-1}A_i\setminus A_j
$$



and additionally I see that
$$
A^+=\bigcap_{n=1}^{\infty}E_n.
$$
But I do not know if this is helpful...



It would be great to get a hint or an answer.



With kind regards,



math12


Answer




Hint: For $x \in E_m$ consider
$$\sup_{A_k \ni x} k.$$
Show that if this supremum is infinite, then $x \in A^{+}$. Otherwise, denote this supremum by $n$ and show that $x \in E_n \setminus E_{n+1}$.


Functional equation for the given function

For instance, there is a functional equation for the Lambert W function: $z=W(z) e^{W(z)}$.
Moreover, there is a differential one: $z(1+W)\frac{dW}{dz}=W$.
At the same time, there is no known functional equation for the Bessel function $J(z)$. Or at least, I don't know of such an equation.



Is there some kind of relation between the function, differential equations and functional equations? Is it possible to prove that every function is a solution of some functional equation? Is it possible to construct the equation from known differential equation or function definition through series?

Binomial coefficient as a summation series proof?




Alright, so I was wondering if the following is a well known identity or if its existence provides any real benefits other than serving as a time-saver when dealing with higher values for combinations.



After screwing around with some basic combinations stuff, I noticed the following:



$$ \sum_{i=1}^{n-1} i = \binom{n}{2}$$



To prove this, I used Gauss' method to simplify the summation, and I wrote n choose 2 in terms of factorials to simplify the right side.



$$ \frac{ (n-1) n } {2} = \frac{ n! } { (2!) (n-2)! } $$




$$ 2!(n-2)!(n-1)(n) = 2n! $$



$$ 2(n-2)!(n-1)(n) = 2n! $$



$$ (n-2)!(n-1)(n) = n! $$



$$ n! = n! $$



I did this on lunch break one day over the summer. I'm in high school, so my math skills are very subpar on this forum, but I was hoping some people might discuss it and/or answer my aforementioned questions. I didn't see anything about it on here or Google, for that matter. If you found this banal or rudimentary, just let me know and I'll refrain from posting until I come up with something more interesting. Regardless, I hope you found it worth your time.


Answer





I was wondering if the following is a well known identity




Not only is it well-known, but it's part of a much larger group. In general, we have




$$\sum_{k=0}^nk~(k+1)~\cdots~(k+p)~=~(p+1)!~{n+p+1\choose n-1}~=~(p+1)!~{n+p+1\choose p+2}$$





The whole idea is to rewrite the summand as $(p+1)!~\displaystyle{p+k\choose p+1}.~$ See also Faulhaber's formulas.
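As a quick sanity check of this identity, here is a small Python sketch (math.prod and math.comb need Python 3.8+):

    # Check: sum_{k=0}^{n} k(k+1)...(k+p) == (p+1)! * C(n+p+1, p+2)
    from math import comb, factorial, prod

    for n in range(1, 8):
        for p in range(4):
            lhs = sum(prod(range(k, k + p + 1)) for k in range(n + 1))
            rhs = factorial(p + 1) * comb(n + p + 1, p + 2)
            assert lhs == rhs, (n, p, lhs, rhs)
    print("identity holds for all tested n and p")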


real analysis - Uniformly convergent implies equicontinuous

I'm trying to prove that if I have a sequence of continuously differentiable functions $f_n$ that converge uniformly on $[a,b]$, then $\{f_n\}$ is equicontinuous.



My idea is to use uniform convergence to deal with the "tail" and then use continuity to deal with the finitely many $f_n$'s left. But I'm having trouble writing it down...
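For reference, the plan can be carried out with a standard $\epsilon/3$ argument (a sketch; note it uses only continuity, not differentiability): given $\epsilon>0$, uniform convergence gives an $N$ with $\sup_{[a,b]}|f_n-f_N|<\epsilon/3$ for all $n\ge N$, and each of $f_1,\dots,f_N$ is uniformly continuous on the compact interval $[a,b]$, so a single $\delta>0$ works for all of them at tolerance $\epsilon/3$. Then for $n\ge N$ and $|x-y|<\delta$,
$$|f_n(x)-f_n(y)|\le |f_n(x)-f_N(x)|+|f_N(x)-f_N(y)|+|f_N(y)-f_n(y)|<\epsilon,$$
while each $f_n$ with $n<N$ satisfies $|f_n(x)-f_n(y)|<\epsilon/3<\epsilon$ by the choice of $\delta$.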

calculus - Given $\lim_{x\to0}\frac{\sin x}{x}=1$, why is it true $\lim_{x\to 0}\frac{\sin(\tan x)}{\tan x} = \lim_{\tan x\to 0}\frac{\sin(\tan x)}{\tan x}=1$?




Given $\lim_{x\to0}\frac{\sin(x)}{x}=1$, why is it true that $\lim_{x\to 0}\frac{\sin(\tan(x))}{\tan(x)} = \lim_{\tan(x)\to 0}\frac{\sin(\tan(x))}{\tan(x)}=1$?





I know the theorem that states that $\lim_{x\to0}\frac{\sin(x)}{x}=1$; nonetheless, I have seen numerous times that, given a function $f$ for which this works, they do something like $\lim_{x\to0}\frac{\sin(f(x))}{f(x)}=1$; the last time I saw something like this was $\lim_{x\to 0}\frac{\sin(\tan(x))}{\tan(x)} = \lim_{\tan(x)\to 0}\frac{\sin(\tan(x))}{\tan(x)}=1$.



I want to know how I can apply this in general, correctly, since up until now I've been applying it informally, along the lines of $\lim_{x\to0}\frac{\sin(\text{something in terms of $x$})}{\text{same something in terms of $x$}}=1$.



Could you help me with this?



Thanks in advance.


Answer




It is true in general that if $\lim_{x\to a}f(x)=0$, and $f(x)\neq0$ close to $a$, then
$$
\lim_{x\to a}\frac{\sin(f(x))}{f(x)}=1
$$

This is rather straightforward to prove from the $\epsilon$-$\delta$ definition of limits.



For your special case, we have $f(x)=\tan(x)$ and $a=0$.


radicals - Can you get any irrational number using square roots?



Given an irrational number, is it possible to represent it using only rational numbers and square roots (or any root, if that makes a difference)?



That is, can you define the irrational numbers in square roots, or is it something much deeper than that? Can pi be represented with square roots?


Answer



The smallest set of numbers closed under the ordinary arithmetic operations and square roots is the set of constructible numbers. The number $\sqrt[3]{2}$ is not constructible, and this was one of the famous Greek problems: the duplication of the cube.




If you allow roots of all orders, then you're talking about equations that can be solved by radicals. Galois theory explains which equations can be solved in this way. In particular, $x^5-x+1=0$ cannot. But it clearly has a real root.


calculus - What is the limit $ \underset{n\to\infty}{\lim} \frac {{n!}^{1/n}}{n} $

What is the following limit equal to and how do I prove it?




$$ \underset{n\to\infty}{\lim} \frac {{n!}^{1/n}}{n}. $$




I've been trying for a while and I can't seem to get it.

inequality - How can I prove that $x-\frac{x^2}{2} < \ln(1+x)$



How can I prove that $$\displaystyle x-\frac {x^2} 2 < \ln(1+x)$$ for any $x>0$




I think it's somehow related to the Taylor expansion of the natural logarithm, where:



$$\displaystyle \ln(1+x)=\color{red}{x-\frac {x^2}2}+\frac {x^3}3 -\cdots$$



Can you please show me how? Thanks.


Answer



Hint:



Prove that $\ln(1 + x) - x + \dfrac{x^2}2$ is strictly increasing for $x > 0$.




edit: to see why this isn't a complete proof, consider $x^2 - 1$ for $x > 0$. It's strictly increasing; does that show that $x^2 > 1$? I hope not, because it's not true!


sequences and series - What is the exact value of $\sum\limits_{x=1}^{\infty}\frac1{x^x}$?



On the internet I found an evaluation of the integral $\displaystyle \int_0^1 x^{-x} \,\mathrm{d}x$ which results in $$\sum_{x=1}^\infty \frac{1}{x^x}.$$
Seeing this graphically I found that the sum does seem to converge to approximately $1.291$. So as it converges shouldn't we be able to find its exact value? Can someone please tell me what this value is or how I may be able to find it?



Edit:




While searching about this I came across Sophomore's Dream and this question which is quite interesting but none of them can quite give a closed value (in terms of known constants like $\pi$, $\phi$ or e). So does $\displaystyle \sum_{x=1}^\infty \frac{1}{x^x} $ or $\displaystyle \int_0^1 x^{-x} \,\mathrm{d}x$ not have such a value? Is it irrational? So many questions.


Answer



The most you can say/prove is that the integral and the series are equal (easy to prove), and that the integral has no elementary anti-derivative (hard to prove, but there's one if you know some Galois theory). There's no known closed form for this value. It's not known whether it's rational/irrational.
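For what it's worth, the numerical agreement is easy to check with mpmath (a sketch):

    import mpmath as mp

    mp.mp.dps = 20
    series = mp.nsum(lambda n: 1 / n**n, [1, mp.inf])
    integral = mp.quad(lambda x: x**(-x), [0, 1])
    print(series)    # 1.291285997062663...
    print(integral)  # agrees with the series to working precision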


Tuesday 29 April 2014

elementary number theory - Linear diophantine equation $100x - 23y = -19$



I need help with this equation: $$100x - 23y = -19.$$ When I plug this into Wolfram|Alpha, one of the integer solutions is $x = 23n + 12$ where $n$ ranges over the integers, but I can't seem to figure out how they got to that answer.


Answer



$100x -23y = -19$ if and only if $23y = 100x+19$, if and only if $100x+19$ is divisible by $23$. Using modular arithmetic, you have

$$\begin{align*}
100x + 19\equiv 0\pmod{23}&\Longleftrightarrow 100x\equiv -19\pmod{23}\\
&\Longleftrightarrow 8x \equiv 4\pmod{23}\\
&\Longleftrightarrow 2x\equiv 1\pmod{23}\\
&\Longleftrightarrow x\equiv 12\pmod{23}.
\end{align*}$$
so $x=12+23n$ for some integer $n$.
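A quick Python check (a sketch): every $x = 12 + 23n$ should make $100x + 19$ divisible by $23$ and give back $100x - 23y = -19$:

    for n in range(-3, 4):
        x = 12 + 23 * n
        y, r = divmod(100 * x + 19, 23)
        assert r == 0
        print(f"n = {n:2d}: x = {x:4d}, y = {y:4d}, 100x - 23y = {100 * x - 23 * y}")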


abstract algebra - Show that either $f$ is irreducible or $f$ factors into irreducible polynomials of degree $3$ over $L$.


Let $f(x)$ be an irreducible polynomial of degree $6$ over field $K$.



If $L$ is a field extension of $K$ and $[L: K]=2$, then show that either $f$ is irreducible or $f$ factors into irreducible polynomials of degree $3$ over $L$.




Attempt:




If $f$ is irreducible then we are done.



Otherwise $f$ factors into either $1+5$ or $2+4$ or $3+3$ degree polynomials.



I am unable to derive a contradiction for the first two cases.



Please give some hints.

real analysis - Calculus Question: Improper integral $int_{0}^{infty}frac{cos(2x+1)}{sqrt[3]{x}}text dx$



How can I evaluate the integral $$\int_{0}^{\infty}\frac{\cos(2x+1)}{\sqrt[3]{x}}\text dx?$$ I tried the substitution $x=u^3$ and I got $3\displaystyle\int_{0}^{\infty}u \cos(2u^3+1)\text du$. After that I tried to use integration by parts, but I don't know the integral $\displaystyle\int \cos(2u^3+1)\text du$. Any ideas? Thanks in advance.


Answer




$$\color{blue}{\mathcal{I}=\frac{\Gamma(\frac{2}{3})\cos(1+\frac{\pi}{3})}{2^{2/3}}\approx-0.391190966503539\cdots}$$






\begin{align}
\int^\infty_0\frac{\cos(2x+1)}{x^{1/3}}{\rm d}x
&=\int^\infty_0\frac{\cos(2x+1)}{\Gamma(\frac{1}{3})}\int^\infty_0t^{-2/3}e^{-xt} \ {\rm d}t \ {\rm d}x\tag1\\
&=\frac{1}{\Gamma(\frac{1}{3})}\int^\infty_0t^{-2/3}\int^\infty_0e^{-xt}\cos(2x+1) \ {\rm d}x \ {\rm d}t\\
&=\frac{\cos(1)}{\Gamma(\frac{1}{3})}\int^\infty_0\frac{t^{1/3}}{t^2+4}{\rm d}t-\frac{2\sin(1)}{\Gamma(\frac{1}{3})}\int^\infty_0\frac{t^{-2/3}}{t^2+4}{\rm d}t\tag2\\
&=\frac{\cos(1)}{2^{2/3}\Gamma(\frac{1}{3})}\int^\infty_0\frac{t^{1/3}}{1+t^2}{\rm d}t-\frac{\sin(1)}{2^{2/3}\Gamma(\frac{1}{3})}\int^\infty_0\frac{t^{-2/3}}{1+t^2}{\rm d}t\tag3\\

&=\frac{\cos(1)}{2^{5/3}\Gamma(\frac{1}{3})}\int^\infty_0\frac{t^{-1/3}}{1+t}{\rm d}t-\frac{\sin(1)}{2^{5/3}\Gamma(\frac{1}{3})}\int^\infty_0\frac{t^{-5/6}}{1+t}{\rm d}t\tag4\\
&=\frac{\pi\left(\cos(1)-\sqrt{3}\sin(1)\right)}{2^{2/3}\Gamma(\frac{1}{3})\sqrt{3}}\tag5\\
&=\frac{2\pi\cos(1+\frac{\pi}{3})}{2^{2/3}\frac{2\pi}{\Gamma(\frac{2}{3})\sqrt{3}}\sqrt{3}}\tag6\\
&=\frac{\Gamma(\frac{2}{3})\cos(1+\frac{\pi}{3})}{2^{2/3}}
\end{align}






Explanation:
$(1)$: $\small{\displaystyle\frac{1}{x^n}=\frac{1}{\Gamma(n)}\int^\infty_0t^{n-1}e^{-xt}{\rm d}t}$
$(2)$: $\displaystyle\cos(a+b)=\cos(a)\cos(b)-\sin(a)\sin(b)$
$(2)$: $\small{\displaystyle\int^\infty_0e^{-ax}\sin(bx){\rm d}x=\frac{b}{a^2+b^2}}$
$(2)$: $\small{\displaystyle\int^\infty_0e^{-ax}\cos(bx){\rm d}x=\frac{a}{a^2+b^2}}$
$(3)$: $\displaystyle t\mapsto 2t$
$(4)$: $\displaystyle t\mapsto \sqrt{t}$
$(5)$: $\small{\displaystyle\int^\infty_0\frac{x^{p-1}}{1+x}{\rm d}x=\pi\csc(p\pi)}$
$(6)$: $\small{\displaystyle \Gamma(z)=\frac{\pi\csc(\pi z)}{\Gamma(1-z)}}$, $\small{\displaystyle a\cos{x}-b\sin{x}=\sqrt{a^2+b^2}\cos(x+\arctan{\frac{b}{a}})}$
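As a numerical cross-check of the boxed value (a sketch; mpmath's quadosc is used for the conditionally convergent oscillatory tail):

    import mpmath as mp

    mp.mp.dps = 15
    closed = mp.gamma(mp.mpf(2) / 3) * mp.cos(1 + mp.pi / 3) / 2**(mp.mpf(2) / 3)
    numeric = mp.quadosc(lambda x: mp.cos(2 * x + 1) / x**(mp.mpf(1) / 3),
                         [0, mp.inf], period=mp.pi)
    print(closed)   # -0.391190966503539...
    print(numeric)  # should agree to working precision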


calculus - Functions cannot be integrated as simple functions











Since I was a college student, I have been told there are many functions that cannot be integrated in terms of simple functions (I'll give the definition of simple functions at the end of the article). As a TA for calculus now, I've been asked to integrate various functions; certainly, most of them are integrable (in the sense of simple functions). However, how can I know that certain functions are not integrable, and not merely that I cannot integrate them? (There was one time that one integration on the question sheet daunted all the TAs I asked.)



Does anyone know THEORETICAL REASONS why certain functions cannot be integrated in terms of simple functions? Or could you refer me to a reference containing such material? Or could you show me by example that a certain "good" function actually doesn't have a "good" integral? I think one famous example could be $\frac{\sin x}{x}$.



Simple functions:
The functions which are the sums (differences), products (quotients), and compositions of the following functions (as well as the functions generated by these operations): $x, \sin x , \cos x, \log x, \sqrt[n]{x}, e^x$.


Answer



A search for "integration in finite terms" will get you many useful results. This paper by Rosenlicht is a very good place to start.




Bibliographic details: Maxwell Rosenlicht, Integration in Finite Terms, American Mathematical Monthly 79 (1972) 963-972.


algebra precalculus - Question about Properties of Exponents

I'm just doing some assignments and I was a little confused by this; could someone explain why the second equation does not hold like the first one? From what I've learned, if you take the inverse you switch the sign of the exponent (positive to negative, for example):
$$\frac{3}{1} = \frac{1}{3^{-1}}$$
$$\frac{1}{3} \neq \frac{3}{1^{-1}}$$

linear algebra - How to find the eigen values of the given matrix




Given the matrix




\begin{bmatrix}
5&1&1&1&1&1\\1&5&1&1&1&1\\1&1&5&1&1&1\\1&1&1&4&1&0\\1&1&1&1&4&0\\1&1&1&0&0&3
\end{bmatrix}



find its eigenvalues (preferably by elementary row/column operations).




Since I don't know any method other than elementary operations for finding eigenvalues, I tried writing down the characteristic matrix $xI-A$, which is as follows:





\begin{bmatrix}
x-5&-1&-1&-1&-1&-1\\-1&x-5&-1&-1&-1&-1\\-1&-1&x-5&-1&-1&-1\\-1&-1&-1&x-4&-1&0\\-1&-1&-1&-1&x-4&0\\-1&-1&-1&0&0&x-3
\end{bmatrix}




Using $R1=R1-(R2+R3+R4+R5+R6)$




\begin{bmatrix}
x&-x+8&-x+8&-x+6&-x+6&-x+4\\-1&x-5&-1&-1&-1&-1\\-1&-1&x-5&-1&-1&-1\\-1&-1&-1&x-4&-1&0\\-1&-1&-1&-1&x-4&0\\-1&-1&-1&0&0&x-3

\end{bmatrix}



Answer



Call your matrix $A$. Let $B=A-3I$.
$$
B=\pmatrix{
2&1&1&1&1&1\\
1&2&1&1&1&1\\
1&1&2&1&1&1\\
1&1&1&1&1&0\\

1&1&1&1&1&0\\
1&1&1&0&0&0}.
$$
$B$ has two identical columns (4 and 5), so we try to remove one of them. Perform the column operation $C5\leftarrow C5-C4$ followed by the inverse row operation $R4\leftarrow R4+R5$:
$$
\pmatrix{
2&1&1&1&0&1\\
1&2&1&1&0&1\\
1&1&2&1&0&1\\
2&2&2&2&0&0\\

1&1&1&1&0&0\\
1&1&1&0&0&0}.
$$
We now get a zero eigenvalue at the $(5,5)$-th position. Remove the fifth row and column:
$$
\pmatrix{
2&1&1&1&1\\
1&2&1&1&1\\
1&1&2&1&1\\
2&2&2&2&0\\

1&1&1&0&0}.
$$
The first two rows of this matrix minus the identity are identical, so we try to remove a duplicate row. Do $R1\leftarrow R1-R2$ and then the inverse column operation $C2\leftarrow C2+C1$:
$$
\pmatrix{
1&0&0&0&0\\
1&3&1&1&1\\
1&2&2&1&1\\
2&4&2&2&0\\
1&2&1&0&0}.

$$
We now get an eigenvalue $1$ at the top-left position. Remove the first row and column:
$$
\pmatrix{
3&1&1&1\\
2&2&1&1\\
4&2&2&0\\
2&1&0&0}.
$$
Do $R1\leftarrow R1-R2$ and $C2\leftarrow C2+C1$ again:

$$
\pmatrix{
1&0&0&0\\
2&4&1&1\\
4&6&2&0\\
2&3&0&0}.
$$
Now we get another eigenvalue $1$ at the top-left position. Remove the first row and column:
$$
\pmatrix{

4&1&1\\
6&2&0\\
3&0&0}.
$$
We may now calculate its characteristic polynomial by hand. It is
\begin{align}
&(x-4)(x-2)x - 6x -3(x-2)\\
=\,&x^3 - 6x^2 + 8x - 6x - 3x + 6\\
=\,&x^3 - 6x^2 - x + 6\\
=\,&(x-6)(x^2-1).

\end{align}
The eigenvalues of this matrix are $6,1,-1$. Therefore the eigenvalues of $B$ are $0,1,1,6,1,-1$ and the eigenvalues of $A=B+3I$ are $3,4,4,9,4,2$.
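A quick numerical cross-check of the final answer (a sketch):

    import numpy as np

    A = np.array([[5, 1, 1, 1, 1, 1],
                  [1, 5, 1, 1, 1, 1],
                  [1, 1, 5, 1, 1, 1],
                  [1, 1, 1, 4, 1, 0],
                  [1, 1, 1, 1, 4, 0],
                  [1, 1, 1, 0, 0, 3]])
    # A is symmetric, so eigvalsh applies; it returns the eigenvalues sorted
    print(np.linalg.eigvalsh(A).round(6))  # [2. 3. 4. 4. 4. 9.]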


sequences and series - How to find the square root of $\sqrt{20+\sqrt{20+\sqrt{20 + \cdots}}}$



Generally, I know how to calculate square roots or cube roots, but I am confused by this question and don't know how to do it:



$$\sqrt{20+\sqrt{20+\sqrt{20 + \cdots}}}$$



Note: the answer given in the key book is $5$.

We are not allowed to use a calculator.


Answer



HINT:



Let $\displaystyle S=\sqrt{20+\sqrt{20+\sqrt{20+\cdots}}}$ which is definitely $>0$



$\displaystyle\implies S^2=20+S\iff S^2-S-20=0$



But we also need to show the convergence of the sequence of nested radicals.
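A quick numerical illustration of that convergence (a sketch, not a proof):

    # Iterate x -> sqrt(20 + x); the iterates increase monotonically toward 5
    x = 0.0
    for i in range(10):
        x = (20 + x) ** 0.5
        print(i, x)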


measure theory - Almost surely convergence to $0$ if and only if convergence to $0$ in probability




I am working on this question:




Prove that $X_{n}\rightarrow 0$ a.s. if and only if for every $\epsilon>0$, there exists $n$ such that the following holds: for every random variable $N:\Omega\rightarrow\{n,n+1,\cdots\}$, we have $$P\Big(\{\omega:|X_{N(\omega)}(\omega)|>\epsilon\}\Big)<\epsilon.$$




Is this question equivalent to asking me to prove "almost sure convergence to $0$ if and only if convergence to $0$ in probability"?



If so, the direction $(\Rightarrow)$ can be proved following this: Convergence in measure and almost everywhere




However, isn't the direction $(\Leftarrow)$ not generally true? I can surely prove that there exists a subsequence $X_{k_{n}}$ of $X_{n}$ that converges to $0$ almost surely...



Could someone tell me what this question is really asking about? I don't really want to spend time proving a wrong thing..



Thank you!


Answer



First, $X_n\to 0$ a.s. iff for any $\epsilon>0$, there exists $n\ge 1$ s.t. $\mathsf{P}(\sup_{m\ge n}|X_m|>\epsilon)<\epsilon$. In the following we fix $\epsilon>0$.



$(\Rightarrow)$ Suppose that $X_n\to 0$ a.s. Then since for any r.v. $N$ on $\{n,n+1,\ldots\}$, $|X_{N}|\le \sup_{m\ge n}|X_m|$, the result follows from the above statement.




$(\Leftarrow)$ Let $N_n':=\inf\{m\ge n:|X_m|>\epsilon\}$. Define $N_n:=N_n'1\{N_n'<\infty\}+n1\{N_n'=\infty\}$. Then $\{\sup_{m\ge n}|X_m|>\epsilon\}=\{|X_{N_n}|>\epsilon\}$. However, there exists $n\ge 1$ s.t. $\mathsf{P}(|X_{N_n}|>\epsilon)<\epsilon$.


Monday 28 April 2014

induction - Find the fundamental error in the proof



The proof provided has a "fundamental error" in it, but I can't figure out what the error is.




Statement: For all non-negative integers $n$, $n$ is even.



Proof: Let $P(n)$ be the open sentence: $n$ is even.




Base Case: We show that $P(0)$ is true. Since $0=2(0)$, then 0 is even and the statement is true for $n=0$



Inductive Hypothesis: We assume that $P(i)$ is true for $0\leq i\leq k$. That is, $i$ is even for all $i$ in the range $0 \leq i \leq k$, for some $k \in \mathbb Z$ with $k \geq 0$.



Inductive Conclusion: We show that $P(k+1)$ is true. Consider $k+1$. Since $k+1 = (k-1)+2$ and since $0 \leq k-1 \leq k$, then by the inductive hypothesis, $k-1$ is even, and $2$ is even. The sum of two even numbers is even, and thus $k+1$ is even, as required.




Obviously this is not true, and I have an idea as to the error, but I'm not 100% sure. I think it has something to do with the fact that they are using the strong induction method in the hypothesis, but have only provided 1 base case. Other than that, I can't see anything wrong with the actual proof method (but the proof is obviously wrong).


Answer




Let us rewrite the inductive step instantiated for $k=0$:




Inductive Conclusion: We show that $P(1)$ is true. Consider $1$. Since $1 = (-1)+2$ and since $0 \leq -1 \leq 0$, then by the inductive hypothesis, $-1$ is even, and $2$ is even. The sum of two even numbers is even, and thus $1$ is even, as required.



elementary set theory - Why do the rationals, integers and naturals all have the same cardinality?



So I answered this question: Are all infinities equal? I believe my answer is correct, however one thing I couldn't explain fully, and which is bugging me, is why the rationals $\mathbb Q$, integers $\mathbb Z$ and naturals $\mathbb N$ all have cardinality $| \mathbb Q |=| \mathbb Z | = |\mathbb N| = \aleph_0$, when $\mathbb N \subsetneq \mathbb Z \subsetneq \mathbb Q$.




The basic proof of the other question, from my answer, is that for any $r$ in $\mathbb Q$ there is a single function $f(r)$ such that $f(r) \in \mathbb Z$, and the same holds between $\mathbb Z$ and $\mathbb N$. Because there is this 1:1 transformation possible, there must be the same number of numbers in the three sets, because otherwise there would be a number in one set for which the bijection could not produce a number of the other set, and this is not so.



Now, Belgi explained why $1<2$ in the other question by defining the value 0 as the cardinality of the empty set $\emptyset$, 1 as the cardinality of a set of sets containing only the empty set, and 2 as the set of sets containing the empty set and a set containing the empty set, then proceeding as follows:




Now that we have defined the natural numbers we can define when one number is smaller than another. The definition is: $x<y$ if and only if $x\subseteq y$ and $x\neq y$.




Clearly $\{\emptyset\}\neq\{\emptyset,\{\emptyset\}\}$ and $\{\emptyset\}\subseteq\{\emptyset,\{\emptyset\}\}$.
So this is a proof, by definition, of why $1<2$.




... however, by the same definition, because $\mathbb N \subsetneq \mathbb Z \subsetneq \mathbb Q$, then $| \mathbb N | < | \mathbb Z | < |\mathbb Q|$, and thus at most one of these quantities can be the cardinal $\aleph_0$.


Answer



The first thing you need to ask yourself, about finite sets, is this: When do two sets have the same cardinality?



The way mathematics works is to take a property that we know very well, and do our best to extract its abstract properties to describe some sort of general construct which applies in as many cases as possible.




So how do we compare the sizes of two finite sets? We write a table, one row for the set $A$ and one row for the set $B$, so that each element from $A$ appears in a unique cell and each element of $B$ appears in a unique cell. If this table has no columns in which there is only one element, then the sets $A$ and $B$ have the same size. For example:
$$\begin{array}{lc}
\text{Two equal sets:} &
\begin{array}{c|c|c}A & a_1& a_2\\\hline B & b_1& b_2\end{array} \\
\text{Non-equal sets:} &
\begin{array}{c|c|c|c}A & a_1 & a_2 & a_3\\\hline
B & b_1 & b_2\end{array}
\end{array}$$



It is clear that this method captures exactly when two sets have the same size. We don't require one set to be a subset of the other; nor do we require that they share the same elements. We only require that such a table can be constructed.




Well, the generalization is simply to say that there exists a function from $A$ to $B$ which is injective and surjective, namely every element of $A$ has a unique element of $B$ attached to it; and every element of $B$ has a unique element of $A$ attached to it.



It turns out, however, that this notion has a quirky little thing about infinite sets: infinite sets can have proper subsets with the same cardinalities.



Why is this happening? Well, infinity is quite the strange beast. It goes on without an end, and it allows us to "move around" and shift things in a very nice way. For example consider the following table:
$$\begin{array}{c|c|c|c|c|c}
\mathbb N&0&1&\cdots&n&\cdots\\\hline
\mathbb N\setminus\{0\}& 1&2&\cdots & n+1&\cdots
\end{array}$$




It is not hard to see that this table has no incomplete columns and that every element of the first set ($\mathbb N$) appears exactly once, and every element of the second set ($\mathbb N\setminus\{0\}$) appears exactly once!



This can get infinitely more complicated, and so on and so on.






One can ask, maybe we are thinking about it the wrong way? Well, the answer is that it is possible. We can define "size" in other ways. Cardinality is just one way. The problem is that there are certain properties we want the notion of "size" to have. We want this notion to be anti-symmetric and transitive, for example.



Namely, if $A$ is smaller or equal in size than $B$ and $B$ is smaller or equal in size than $A$, then $A$ and $B$ have the same size; and if $B$ has the same size as $C$ as well, then $A$ and $C$ are of the same size too. It turns out that the notion described by functions has these properties. Other notions may lack one or both. Some notions of "size" lack anti-symmetry, others may lack transitivity.




So it turns out that cardinality is quite useful and it works pretty fine. However it has a peculiarity... well, who hasn't got one these days?



To overcome this, we need to change the way we think a bit: a proper subset need not have a strictly smaller cardinality; it just should not have a larger cardinality. This is the right generalization from the finite case, rather than the naive "strict subset implies strictly smaller".






To read more:





  1. Is there a way to define the "size" of an infinite set that takes into account "intuitive" differences between sets?


Sunday 27 April 2014

real analysis - Proof regarding finding the two sets $S$ and $T$ are equinumerous

I have a hard time doing this proof. Can anyone help me?




Show that the following pairs of sets S and T are equinumerous by finding a
specific bijection between the sets in each pair.



$S = [0,1]$ and $T = [0,1)$


algebra precalculus - needs solution of the equation $(2+{3}^{1/2})^{x/2} + (2-{3}^{1/2})^{x/2} = 2^x$

$$\left(2+{3}^{1/2}\right)^{x/2} + \left(2-{3}^{1/2}\right)^{x/2} = 2^x.$$
Clearly $x = 2$ is a solution. I need others, if there are any. Please help.

Linear algebra question about definition of basis



From Wikipedia:



"A basis $B$ of a vector space $V$ over a field $K$ is a linearly independent subset of $V$ that spans (or generates) $V$.(1)



$B$ is a minimal generating set of $V$, i.e., it is a generating set and no proper subset of B is also a generating set.(2)



$B$ is a maximal set of linearly independent vectors, i.e., it is a linearly independent set but no other linearly independent set contains it as a proper subset."(3)




I tried to prove (1) => (3) => (2) => (1), to see that these are equivalent definitions. Can you tell me if my proof is correct:



(1) => (3):
Let $B$ be linearly independent and spanning. Then $B$ is maximal: Let $v$ be any vector in $V$. Then since $B$ is spanning, $\exists b_i \in B, k_i \in K: \sum_{i=1}^n b_i k_i = v$. Hence $v - \sum_{i=1}^n b_i k_i = 0$ and hence $B \cup \{v\}$ is linearly dependent. So $B$ is maximal since $v$ was arbitrary.



(3) => (2):
Let $B$ be maximal and linearly independent. Then $B$ is minimal and spanning:



spanning: Let $v \in V$ be arbitrary. $B$ is maximal hence $B \cup \{v\}$ is linearly dependent. i.e. $\exists b_i \in B , k_i \in K : \sum_i b_i k_i = v$, i.e. $B$ is spanning.




minimal: $B$ is linearly independent. Let $b \in B$. Then $b \notin span( B \setminus \{b\})$ hence $B$ is minimal.



(2) => (1):
Let $B$ be minimal and spanning. Then $B$ is linearly independent:
Assume $B$ is not linearly independent; then there is some $b \in B$ with $\exists b_i \in B\setminus\{b\}, k_i \in K: b = \sum_i b_i k_i$. Then $B \setminus \{b\}$ is spanning, which contradicts that $B$ is minimal.


Answer



The proof looks good (apart from the obvious mix-up in the numbering). One thing which is not totally precise:
In your second proof you write
Let $v\in V$ be arbitrary. $B$ is maximal hence $B\cup\{v\}$ is linearly dependent. i.e. $\exists b_i\in B,k_i\in K: \sum_i b_ik_i=v$, i.e. $B$ is spanning.

To be precise you have $k_i\in K$ not all vanishing such that $k_0v+\sum_i k_ib_i=0$. Since $B$ is linearly independent $k_0=0$ implies $k_i=0$ for all $i$, therefore $k_0\neq 0$ and $v$ is in the span of $B$.


sequences and series - How do I derive the formula for the sum of squares?




I was going over the problem of finding the number of squares in a chessboard, and got the idea that it might be the sum of squares from $1$ to $n$. Then I searched on the internet on how to calculate the sum of squares easily and found the below equation:$$\sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6}.$$



And then I searched for an idea on how to come up with this equation and found this link; but what I would like to know is: if (hypothetically) I had to be the first person in the world to come up with this equation, could someone please give some ideas on how to approach a problem like this?


Answer




Actually, the answer in your link confused me a bit. But here is an alternative derivation (somehow similar to/same as? the answer in your link).



Look at the sequence of cubes $s_{1,n,3}=1^3+2^3+\cdots+n^3$ and the shifted sequence $s_{2,n+1,3}=2^3+\cdots+(n+1)^3$. Subtracting the former sequence from the latter obviously yields $(n+1)^3-1$. At the same time $s_{2,n+1,3}-s_{1,n,3}$ is given by



$$ \sum_{i=1}^n \left((i+1)^3-i^3\right) = \sum_{i=1}^n \left(3i^2+3i+1\right)=3s_{1,n,2}+3s_{1,n,1}+n.$$



In other words,



$$(n+1)^3-1=s_{2,n+1,3}-s_{1,n,3}=3s_{1,n,2}+3s_{1,n,1}+n.$$




Assuming that you know that $s_{1,n,1}=\sum_{i=1}^n i= \frac{n(n+1)}{2}$ you can solve for $s_{1,n,2}=\sum_{i=1}^n i^2$, which is what you have been looking for.
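Carrying out that last step explicitly:
$$3s_{1,n,2}=(n+1)^3-1-n-3\cdot\frac{n(n+1)}{2}=(n+1)\left(n^2+2n-\frac{3n}{2}\right)=\frac{n(n+1)(2n+1)}{2},$$
so $s_{1,n,2}=\frac{n(n+1)(2n+1)}{6}=\frac{(n^2+n)(2n+1)}{6}$, exactly the formula in the question.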



EDIT This can be generalized as follows (and this maybe gives a better motivation).



Suppose we wish to compute $\sum_{i=1}^n i^M$ for some positive integer $M$, and we want to find a recursive formula that expresses $\sum_{i=1}^n i^M$ in terms of $\sum_{i=1}^n i^{M-1}$, $\sum_{i=1}^n i^{M-2}$, etc.



The key idea is to look at the sum $S=\sum_{i=1}^n (i+a)^M$, for some positive integer $a$. By the binomial theorem, $(i+a)^M=\sum_{k=0}^{M}\binom{M}{k}i^ka^{M-k}$, so



$$\sum_{i=1}^n (i+a)^M=S=\sum_{k=0}^{M}\Bigl(\binom{M}{k}a^{M-k}\sum_{i=1}^n i^k\Bigr)=\binom{M}{0}a^M\sum_{i=1}^n i^0+\cdots+\binom{M}{M}a^0\sum_{i=1}^n i^{M}.$$




Conversely, notice how $S$ is just the 'right-shifted' (by $a$) analogue of $\sum_{i=1}^n i^{M}$; thus
$$\sum_{i=1}^n (i+a)^M-\sum_{i=1}^n i^M=\underbrace{(n+a)^M+(n+a-1)^M+\cdots+(n+1)^M-a^{M}-(a-1)^{M}-\cdots-1^M}_{=B}.$$



In other words,



$$\sum_{k=0}^{M-1}\Bigl(\binom{M}{k}a^{M-k}\sum_{i=1}^n i^k\Bigr)=S-\sum_{i=1}^n i^M=B$$



and this allows us to write any sum of the form $\sum_{i=1}^n i^k$ in terms of the 'remaining' sums $\sum_{i=1}^n i^q$, for $q\neq k$. In particular, we may obtain an induction formula.


combinatorics - Prize Probability



The question is as follows:



"Each meal at a fast food restaurant comes with a prize. There are three types of prizes, $A$, $B$, and $C$. Each meal comes with prize $A$ with probability $0.5$, prize $B$ with probability $0.4$, and prize $C$ with probability $0.1$, independently of previous meals.
You buy four meals. What’s the probability that you get a prize of each type?"




This is a very basic question, but I can't figure it out. My attempt was to do $4$ choose $3$ multiplied by each of the probabilities $0.1, 0.4, 0.5$, but $0.8$ is incorrect. Not sure how to approach this problem. Intuition behind the solution would be appreciated.


Answer



You need to account for the fact that, if you get all three prizes, one of them comes twice. So you could get $AABC$, $CBCA$, etc. The number of ways to choose a permutation of two $A$'s, one $B$, and a $C$ is $\binom{4}{2} \times 2 \times 1 = 12$ (the number of ways to count the other similar permutations is the same). The probability of any such permutation occurring is $(0.5)^2(0.4)(0.1) = 0.01$. Similarly, the probability of a permutation where $B$ is repeated is $(0.5)(0.4)^2(0.1) = 0.008$ and the probability of a permutation where $C$ is repeated is $(0.5)(0.4)(0.1)^2 = 0.002$. Adding over all such permutations gives a $0.24$ probability of getting all three prizes.
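A brute-force check of this answer (a sketch), enumerating all $3^4$ prize patterns with their probabilities:

    from itertools import product

    p = {"A": 0.5, "B": 0.4, "C": 0.1}
    total = sum(
        p[m1] * p[m2] * p[m3] * p[m4]
        for m1, m2, m3, m4 in product("ABC", repeat=4)
        if {m1, m2, m3, m4} == {"A", "B", "C"}
    )
    print(total)  # 0.24 (up to floating-point rounding)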


Saturday 26 April 2014

probability - Expected number of rolls for fair die to get same number appear twice in a row?



We repeatedly roll a fair die until some number appears twice in a row. I want to find the expected number of rolls until we stop. I am thinking this is a geometric distribution, but how would I apply the distribution formula here? Would the probability of throwing two numbers in a row be $\frac{1}{36}$?


Answer



Let X be the number of rolls.
$P(X=2) = \frac{1}{6}$
$P(X=3) = (\frac{5}{6})^1\frac{1}{6}$
$P(X=4) = (\frac{5}{6})^2\frac{1}{6}$
So, $P(X=k) = (\frac{5}{6})^{k-2}\frac{1}{6}$
Let $q=\frac{5}{6}$ and $p = \frac{1}{6}$




$E(X) = \sum_{k=2}^{\infty}kP(X=k)$
$\quad\quad\;\;= \sum_{k=2}^{\infty}kq^{k-2}p$
$\quad\quad\;\;= \frac{p}{q}\sum_{k=2}^{\infty}kq^{k-1}$
$\quad\quad\;\;= \frac{p}{q}(\sum_{k=0}^{\infty}kq^{k-1} - \sum_{k=0}^{1}kq^{k-1})$
$\quad\quad\;\;= \frac{p}{q}(\sum_{k=0}^{\infty}kq^{k-1}) - \frac{p}{q}$
$\quad\quad\;\;= \frac{p}{q}(\frac{1}{1-q})^2 - \frac{p}{q} = \frac{1/6}{5/6}\cdot 36 - \frac{1/6}{5/6} = \frac{36}{5} - \frac{1}{5} = 7$
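A quick Monte Carlo check of $E(X)=7$ (a sketch; rolls_until_repeat is our own helper):

    import random

    def rolls_until_repeat():
        prev, count = random.randint(1, 6), 1
        while True:
            cur = random.randint(1, 6)
            count += 1
            if cur == prev:
                return count
            prev = cur

    random.seed(0)
    trials = 200_000
    print(sum(rolls_until_repeat() for _ in range(trials)) / trials)  # about 7.0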


sequences and series - Proof for $e^z = lim limits_{x rightarrow infty} left( 1 + frac{z}{x} right)^x$



Q1: Could someone provide a proof for this equation (please focus on this question):



$$e^z = \lim \limits_{x \rightarrow \infty} \left( 1 + \frac{z}{x} \right)^x$$



Q2: Is there any correlation between the above equation and the equation below (this would benefit me a lot because I at least know how to prove this one):




$$e = \lim \limits_{x \rightarrow \infty} \left( 1 + \frac{1}{x} \right)^x$$


Answer



Let $y=x/z$. Then
$$
\lim \limits_{x \rightarrow \infty} \left( 1 + \frac{z}{x} \right)^x
=
\lim \limits_{y \rightarrow \infty} \left( 1 + \frac{1}{y} \right)^{yz}
=
\left(\lim \limits_{y \rightarrow \infty} \left( 1 + \frac{1}{y} \right)^{y}\right)^z

= e^z
$$



But note that this assumes that you have a definition of $a^x$ and know that $a^{xy}=(a^x)^y$ and that $x\mapsto a^x$ is continuous.


Friday 25 April 2014

calculus - Prove there's $x_0$ such that $f'(x_0)=0$



Let $f:\mathbb{R}\rightarrow \mathbb{R}$ differentiable at $\mathbb{R}$ and:
$$\lim_{x\rightarrow \infty}\left( f(x)-f(-x) \right) = 0$$



Show there's $x_0$ such that $f'(x_0) = 0$.




I tried to use the limit definition, but couldn't get much further.
I'll be glad for guidance.



Thanks.


Answer



If $f$ is a constant function, the result is easy.



Here is an outline of one way to prove such an $x_0$ exists if $f$ is not a constant function:




  1. Choose $a, b$ with $f(a)< f(b)$ for some $a,b$.



  2. Show that there is an $N>\max\{|a|,|b|\}$ so that either $\max\{f(N), f(-N)\}<f(b)$ or $\min\{f(N), f(-N)\}>f(a)$ (choose $N$ so that $f(N)$ is within $|f(b)-f(a)|/2$ of $f(-N)$; drawing a picture will help here).


  3. Use the result of 2. and the Intermediate Value Theorem to show $f(c)=f(d)$ for some $c\ne d$.


  4. Wrap things up with Rolle's Theorem.









Alternatively:




If $f'(x)$ is never $0$, then by Darboux's Theorem, $f'$ is either positive everywhere or negative everywhere. In this case $f$ is either strictly increasing or strictly decreasing. But then, for $x>1$, $|f(x)-f(-x)|> |f(1)-f(-1)|>0$; whence $\lim\limits_{x\rightarrow\infty} \bigl(f(x)-f(-x)\bigr)\ne0$.


sequences and series - Repeatedly taking differences on a polynomial yields the factorial of its degree?

Consider a procedure that takes in a polynomial function, creates an array of its outputs, then from that array creates a new array of the absolute differences between consecutive values, and keeps doing this until it reaches an array full of zeros.




This is much easier to show you by example.



For example take $F(x)= x^2$, the first array would be



$1,4,9,16,25,36,49,64,81$ and so on, the second would be



$3,5,7,9,11,13,15,17,19$ (the differences between consecutive values)



but the third one is where it gets interesting as if we continue the pattern we would get an array filled with only $2$'s and after that it would only be zeros.




Lets do another example, $F(x)=x^3$



$1,8,27,64,125,216,343$



$7,19,37,61,91,\dotsc$, but here is the interesting part: if we continue this,



$12,18,24,30,\dotsc$ and once more then we get



$6,6,6,6,6,\dotsc$ after that it would just be an array of zeros




There are $2$ main observations that I made about this.



Firstly, the value that is being repeated indefinitely is equal to the factorial of the function's degree. Meaning that for $F(x)=x^2$ the value being repeated is $2!$. For $F(x)=x^3$, it's $3!$, and this is true for all polynomials (I tried it up to $x^7$; after that it got too messy).



Secondly, the repeated value always occurs on the $n$th iteration of the process. Meaning that for $F(x)=x^2$, we have to go through the process $2$ times before we find the value. For $F(x)=x^3$, we have to go through it $3$ times before getting the value.



Is there any way to prove this and does this mean anything at all?
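For reference, the procedure is easy to reproduce in a few lines of Python (a sketch; diffs is our own helper), and the observed pattern is the standard fact that the $n$-th forward difference of a degree-$n$ polynomial with leading coefficient $a$ is the constant $n!\,a$:

    def diffs(seq):
        # plain consecutive differences; for these increasing sequences
        # they coincide with the absolute differences described above
        return [b - a for a, b in zip(seq, seq[1:])]

    seq = [x**3 for x in range(1, 10)]
    for _ in range(3):
        seq = diffs(seq)
        print(seq)
    # [7, 19, 37, 61, 91, 127, 169, 217]
    # [12, 18, 24, 30, 36, 42, 48]
    # [6, 6, 6, 6, 6, 6]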

calculus - were Irrational numbers discovered at Archimedes's age?



Archimedes axiom states a property of real numbers, while the real numbers include all the rational numbers and all the irrational numbers.




I wonder whether irrational numbers had been discovered by Archimedes's time.



I think the question is equivalent to asking: did Hippasus (who is sometimes credited with the discovery of the existence of irrational numbers) live earlier than Archimedes?




Hippasus of Metapontum (/ˈhɪpəsəs/; Greek: Ἵππασος, Híppasos; fl. 5th
century BC)



Archimedes of Syracuse (/ˌɑːkɪˈmiːdiːz/; Greek: Ἀρχιμήδης; c. 287 BC – c. 212 BC)




P.S. I am Chinese; I don't understand these BCs.


Answer



The OP wrote: "Archimedes axiom states a property of real numbers, while the real numbers include all the rational numbers and all the irrational numbers." It should be clear that the property in question is referred to as the Archimedes axiom only since about 1880 when the term was introduced by Otto Stolz.



The OP further asks: "I wonder were Irrational numbers discovered at Archimedes's age?"



The answer is most likely negative. The Greeks thought mostly in terms of proportions among whole numbers. The idea of systematizing the number system to include roots and other irrationals is mainly due to Simon Stevin who lived almost two thousand years later.



calculus - Purpose Of Adding A Constant After Integrating A Function



I would like to know the whole purpose of adding a constant termed constant of integration everytime we integrate an indefinite integral $\int f(x)dx$. I am aware that this constant "goes away" when evaluating definite integral $\int_{a}^{b}f(x)dx $. What has that constant have to do with anything? Why is it termed as the constant of integration? Where does it come from?



The motivation for asking this question actually comes from solving a differential equation $$x \frac{dy}{dx} = 5x^3 + 4$$ By separation of $dy$ and $dx$ and integrating both sides, $$\int dy = \int\left(5x^2 + \frac{4}{x}\right)dx$$ yields $$y = \frac{5x^3}{3} + 4 \ln(x) + C .$$



I've understood that $\int dy$ represents adding infinitesimal quantities $dy$, yielding $y$, but I'm doubtful about the arbitrary constant $C$.


Answer



Sometimes you need to know all antiderivatives of a function, rather than just one antiderivative. For example, suppose you're solving the differential equation
$$

y'=-4y
$$
and there's an initial condition $y(0)=5$. You get
$$
\frac{dy}{dx}=-4y
$$
$$
\frac{dy}{y} = -4\;dx
$$
$$

\int\frac{dy}{y} = \int -4\;dx
$$
$$
\log_e y = -4x + C.
$$
Here you've added a constant. Then
$$
y = e^{-4x+C} =e^{-4x}e^C
$$
$$

5=y(0)= e^{-4\cdot0}e^C = e^C
$$
So
$$
y=5e^{-4x}.
$$



(And sometimes you need only one antiderivative, not all of them. For example, in integration by parts, you may have $dv=\cos x\;dx$, and conclude that $v=\sin x$.)
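The same example can be reproduced with SymPy (a sketch; dsolve exposes the constant as C1 until initial conditions are supplied through the ics keyword):

    import sympy as sp

    x = sp.symbols("x")
    y = sp.Function("y")
    ode = sp.Eq(y(x).diff(x), -4 * y(x))
    print(sp.dsolve(ode, y(x)))                 # Eq(y(x), C1*exp(-4*x))
    print(sp.dsolve(ode, y(x), ics={y(0): 5}))  # Eq(y(x), 5*exp(-4*x))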


elementary set theory - Cardinality of set of real continuous functions



The set of all continuous functions $\mathbb{R}\to\mathbb{R}$ has cardinality $\mathfrak c$. How can one show that? Is there any bijection between $\mathbb R^n$ and the set of continuous functions?


Answer



The cardinality is at least that of the continuum because every real number corresponds to a constant function. The cardinality is at most that of the continuum because the set of real continuous functions injects into the sequence space $\mathbb{R}^{\mathbb{N}}$ by mapping each continuous function to its values on all the rational points. Since the rational points are dense, this determines the function.



The Schroeder-Bernstein theorem now implies the cardinality is precisely that of the continuum.




Note that then the set of sequences of reals is also of the same cardinality as the reals. This is because if we have a sequence of binary representations $.a_1a_2..., .b_1b_2..., .c_1c_2...$, we can splice them together via $.a_1 b_1 a_2 c_1 b_2 a_3...$ so that a sequence of reals can be encoded by one real number.


Thursday 24 April 2014

Evaluate limit of the series: $\lim_{n\to\infty} \left[\frac{1}{n^{2}} + \frac{2}{n^{2}} + \frac{3}{n^{2}} + \cdots + \frac{n}{n^{2}}\right]$



I am pretty confident that the following limit is $0.5$:



$$\lim_{n\to\infty} \left[\frac{1}{n^{2}} + \frac{2}{n^{2}} + \frac{3}{n^{2}} + \cdots + \frac{n}{n^{2}}\right]=\lim_{n\to\infty} \left[\frac{1+2+3+ \cdots +n}{n^{2}}\right]=\lim_{n\to\infty} \left[\frac{n^2+n}{2n^{2}}\right]=\frac{1}{2}$$




However, one of the students argued that if we write the limit of the sum as the sum of the individual limits, it will be zero. Why can we not write the limit of the sum as the sum of the limits in this case?



I've been taught that if individual limits exist, the limit of sum is equal to the sum of limits. It would be helpful to get an explanation or a reference to similar rules for limits of series.


Answer




However, one of the students argued that if we write the limit of the sum as the sum of the individual limits, it will be zero. Why can we not write the limit of the sum as the sum of the limits in this case?




For the sum of two sequences we have the property "the limit of the sum is the sum of the limits" (if both sequences have a limit), and by repeatedly applying this, we have this property for any finite number of sequences (terms).




As often is the case, you cannot simply extend this to the infinite case; i.e. you cannot assume the same property will hold when the number of sequences (terms) is not finite.



A simpler counterexample would be the sum of $n$ terms, all equal to $\tfrac{1}{n}$; obviously we have:
$$\underbrace{\frac{1}{n}+\frac{1}{n}+\ldots+\frac{1}{n}}_{\mbox{$n$ terms}} = \frac{n}{n}=1$$
but every individual sequence (term) clearly tends to $0$: $\frac{1}{n} \stackrel{n\to \infty}{\longrightarrow} 0$.


elementary set theory - Induction without base case?



I'm doing a bit of research on set theory. So far it's quite interesting. Right now I'm reading about transfinite induction. The book states the following theorem about induction in a well-ordered set:



Let $(X,<)$ be a well-ordered set. Let $P$ be a property which may hold for elements of $X$. Suppose that, for all $x \in X$, if every element $y<x$ satisfies $P(y)$, then $P(x)$ holds. Then $P(x)$ holds for all $x \in X$.

The theorem doesn't require a base case to hold. The book mentions that a base case is not needed here because if $x$ is the smallest element of $X$, then there are no elements $y<x$, so the hypothesis holds vacuously. But now take $X=\mathbb{N}$ and suppose


$P(n)$ is the statement "$n>1000$" ,



(which is obviously false for $n=1$, say)



Then we get something like: Suppose that $n$ is such that $P(m)$ holds whenever $m<n$. Then $n>1000$. Thus $P(n)$ holds $\forall n \in \mathbb{N}$.



There's obviously something wrong with the "proof". I suppose it's because "$P(m)$ holds whenever $m<n$" does not actually give "$n>1000$"? But I'm not too sure...



Later on, the book states the version of induction for ordinals:




Let $P$ be a property of ordinals, assume that




  • $P(0)$ is true,


  • $P(\alpha)$ implies $P(s(\alpha))$ for any ordinal $\alpha$ ($s(\alpha)$ is the successor ordinal of $\alpha$)


  • If $\lambda$ is a limit ordinal and $P(\beta)$ holds for all $\beta < \lambda$, then $P(\lambda)$ holds.




Then $P(\alpha)$ is true for all ordinals $\alpha$.




This time a base case is required. But I read Wikipedia (http://en.wikipedia.org/wiki/Mathematical_induction) near the bottom under Transfinite Induction and it says "strictly speaking, it doesn't..." so I'm pretty confused.


Answer



Strictly speaking, it is true that you don't need to assume the base case because of the following: suppose you have a well-ordered set $X$ with minimal element $x_0$. Then to prove that the property $P(x)$ holds for every element of $X$ you must prove, as you said, that if you assume the property to hold for all elements $y<x$, then it holds for $x$ as well. But for $x=x_0$ there are no elements $y<x_0$, so the assumption is vacuously satisfied, and the induction step in particular requires you to prove $P(x_0)$ outright: the base case is hidden inside the induction step.

elementary set theory - A version of Zorn's lemma



The version of Zorn's lemma that I have found more often is




Zorn's Lemma (1) If every chain belonging to the partially ordered set $S$ has an upper bound in $S$, then $S$ contains a maximal element.




I read the following version of Zorn's lemma, which I find somewhat seemingly different from (1):




Zorn's lemma (2) In any ordered set where every chain has a supremum in the set, any
element $b$ has a maximal element $m$ such that $m\ge b$.





where a chain is a totally ordered subset and the supremum is defined as the least upper bound. As to the ordered set, the book does not define what it means to be ordered, but I know that many authors define an ordered set as a set where a partial ordering is defined, although I am not sure that a total ordering is not what is intended here. My translation is literal. This version of Zorn's lemma is said to be equivalent to "in an ordered set every chain is included in a maximal chain" (with respect to inclusion), and I know that this last lemma, if we mean partially ordered by ordered, is the "usual" Hausdorff maximal principle, which I know to be equivalent to (1).



In particular I am not sure that we can substitute the upper bound with the supremum and the maximal element with $m$. Moreover, I suppose, although the wording of the book is not very clear to me, that any element $b$ has a maximal element $m$ such that $m\ge b$ means that there exists an $m$ in the ordered set such that, for all $b$ in the ordered set, $m\ge b$; but if the ordering is not total, I do not think that we generally can compare the maximal element of the "usual version" of Zorn's lemma with all $b\in S$.



Are such two versions of Zorn's lemma equivalent and, if they are, how can it be seen? I heartily thank you for any answer!



$^1$V. Manca, Logica matematica, an Italian book of introductory mathematical logic and theoretical informatics.


Answer



Zorn's lemma implies the axiom of choice. But we can ask whether or not we can restrict Zorn's lemma to a seemingly smaller class of partial orders and still get the axiom of choice (which would then suggest that the restricted version is equivalent to the full version, since choice implies Zorn's lemma in full).




The answer is positive. For example, it suffices to require that only well-ordered chains have upper bounds. You can also demand more: that above every element lies a maximal element, not just that there exists one.



You ask about an additional requirement, not just that the chain has an upper bound. We want that chains will have a supremum, which is a much stronger requirement indeed.



But analyzing the proofs that start from Zorn's lemma and prove various choice-equivalent principles, we see that in all of them the chains have supremums. For example the usual proof that Zorn's lemma implies the axiom of choice take as a partial order all the partial choice functions ordered by inclusion. Since the union of a chain of choice functions is a choice function, every chain has a supremum. So the axiom of choice holds.



You could also consider the proof of Hausdorff's maximality principle, every partial order has a maximal chain; or every chain can be extended to a maximal chain. Again here we have that given a chain $\mathcal C$ in the partial order considered for Zorn's lemma, $\bigcup\mathcal C$ is an element in the partial order as well, and it is the supremum of the chain.



So the answer is positive. Both statements are equivalent.







Note the difference, and how you changed the order of quantification, by the way: from $\forall b\in S\,\exists m\in S\,(b\leq m\land m\text{ is maximal})$ to $\exists m\in S\,\forall b\in S\,(b\leq m\land m\text{ is maximal})$. The second statement essentially says that $m$ is a maximum, which is a stronger condition.



It is also provably false. Just look at the partial order on $\{a,b\}$, where $a\neq b$ and $x\leq y\iff x=y$. There, every chain is a singleton, so it has a supremum. And there is a maximal element. But there is no maximum.


geometry - Constructing a circle that internally tangents a circle $gamma$ and passes through two internal points.

The full details of this problem is given as follows




Construct a circle $\gamma$ with center $O_\gamma$ , and
place two points $A$ and $B$ inside $\gamma$. That does not lie on the edge of the circle. Explain the construction of a point $C$, such that the circle $ABC =\beta$, is internally tangential to $\gamma$.




Now $ABC$ means a circle that passes through the points $A$, $B$ and $C$. I have made a drawing, but I am unable to construct the point $C$ mathematically.
I already know that for most pairs $A$, $B$ there are two possible choices for $C$, e.g. $C_1$ and $C_2$. See the following figure.




Drawing



Can anyone show me or help me in finding the placement of $C$, given $A$ and $B$?
The figure is only but a sketch, but I know that the centre of the circle obviously has to lie on the perpendicular bisector of A and B, after that I am clueless.

limit of sequence with factorial



How do you show that:
$\lim\limits_{n\to \infty} \frac{\left(\frac{n}{2}\right)^{\frac{n}{2}}}{n!}=0$
using the squeeze theorem? (I'd like to avoid using Stirling's formula, too.) I tried rearranging it a bit into $\lim\limits_{n\to \infty} \frac{\left(\sqrt{n}\right)^{n}}{\left(\sqrt{2}\right)^{n}n!}$, but I can't really figure out what to do next. Thanks!


Answer



The desired limit is equivalent to




$$\lim_{n\to\infty} \frac{n^n}{(2n)!}$$



Since



$$n^n < 2n\cdot(2n-1)\cdots(n+1)$$



we have the majorant



$$\frac{1}{n!}$$




which clearly tends to $0$ as $n$ tends to infinity.


Wednesday 23 April 2014

real analysis - Closed form of $\ln^n \tan x\, dx$




Here is an integral I am really stuck on. I am pretty sure that a general closed form of the integral:



$$\mathcal{J}=\int_0^{\pi/2} \ln^n \tan x\, {\rm d}x, \;\; n \in \mathbb{N}$$



exists. Well, if $n$ is odd, then the integral is obviously zero due to symmetry. On the contrary, if $n$ is even then the closed form I seek must contain the Dirichlet beta function; however, I am unable to reach it. Setting $m=2n$, then:



$$\int_{0}^{\pi/2}\ln^m \tan x\, {\rm d}x=\int_{0}^{\infty}\frac{\ln^m u}{u^2+1}\, {\rm d}u= 2\int_{0}^{1}\frac{\ln^m u}{u^2+1}\, {\rm d}u$$



If we expand the denominator in a Taylor series, namely $1+x^2=\sum \limits_{n=0}^{\infty} (-1)^n x^n$ then the last integral is written as:




$$2\int_{0}^{1}\ln^m x \sum_{n=0}^{\infty}(-1)^n x^n \, {\rm d}x = 2\sum_{n=0}^{\infty}(-1)^n \int_{0}^{1}x^n \ln^m x \, {\rm d}x = 2 \sum_{n=0}^{\infty}\frac{(-1)^n (-1)^m m!}{\left ( n+1 \right )^{m+1}}= 2 (-1)^m m! \sum_{n=0}^{\infty}\frac{(-1)^n}{\left ( n+1 \right )^{m+1}}$$



Apparently there is something wrong here. I used the result



$$\int_{0}^{1}x^m \ln^n x \, {\rm d}x = \frac{(-1)^n n!}{\left ( m+1 \right )^{n+1}}$$



as presented here.



Edit/ Update: A conjecture of mine is that the closed form actually is:




$$\int_0^{\pi/2} \ln^{m} \tan x \, {\rm d}x=2m! \beta(m+1), \;\; m \;\;{\rm even}$$



For $m=2$ this matches the result $\displaystyle \int_0^{\pi/2} \ln^2 \tan x\, {\rm d}x= \frac{\pi^3}{8}$.


Answer



We want to compute:



$$ I_m = \int_{0}^{\pi/2}\left(\log\tan x\right)^{2m}\,dx = \int_{0}^{1}\frac{\left(\log t\right)^{2m}}{1+t^2}\,dt=\sum_{k\geq 0}(-1)^k\int_{0}^{1}t^{2k}(\log t)^{2m}\,dt$$
that is:





$$ I_m = (2m)!\sum_{k\geq 0}\frac{2(-1)^k}{(2k+1)^{2m+1}}=|E_{2m}|\cdot\left(\frac{\pi}{2}\right)^{2m+1}.$$




The last identity was already proved here.
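For $m=1$, i.e. the exponent $2m=2$, we have $|E_2|=1$, so the formula gives $\pi^3/8$; a quick mpmath check (a sketch):

    import mpmath as mp

    mp.mp.dps = 15
    numeric = mp.quad(lambda x: mp.log(mp.tan(x))**2, [0, mp.pi / 4, mp.pi / 2])
    print(numeric)       # 3.87578458...
    print(mp.pi**3 / 8)  # same value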


complex analysis - Entire function real and imaginary part product




Let $f$ be an entire function such that $\Re(f(z))\cdot\Im(f(z))\ge0$ for all $z$; then $f$ is constant. Prove this or give a counterexample.
I know about Liouville's theorem and the Cauchy–Riemann equations, but I don't see how to use them in this situation. I've tried to bound $|f(z)|$, but I could only show that $|f(z)|\le\Re(f(z))+\Im(f(z))$. I would like a hint.


Answer



Write $f(z) = u(z) + iv(z)$, so that $u=\Re(f)$ and $v = \Im(f)$. The trick to many of these problems is to find a suitable auxiliary function: as Daniel Fischer mentioned, a good candidate would be a function whose real or imaginary part is $uv$. How about $g(z) = f(z)^2 = u(z)^2-v(z)^2 + 2i\,u(z)v(z)$? By hypothesis, $\Im(g(z))\geq 0$ for all $z$ . . .


algebra precalculus - How to factor $ax^{2}+bxy+cy^{2},,aneq 0$?




Question: Factor: $3x^{2}-5xy-12y^{2}$



Answer: $(x-3y)(3x+4y)$



What are the exact steps to finding this answer from the original question (factored form from standard form, respectively)?


Answer



To factor $3x^2 - 5xy - 12y^2$, we first split the linear term, then factor by grouping. To split the linear term, we must find two numbers with product $3 \cdot -12 = -36$ and sum $-5$. They are $-9$ and $4$. Hence,
\begin{align*}
3x^2 - 5xy - 12y^2 & = 3x^2 - 9xy + 4xy - 12y^2 && \text{split the linear term}\\
& = 3x(x - 3y) + 4y(x - 3y) && \text{factor by grouping}\\

& = (3x + 4y)(x - 3y) && \text{extract the common factor}
\end{align*}
To check that the answer is correct, we multiply the factors. Observe that doing so amounts to performing the steps of the factorization in reverse order.



In your second example of $5x^2 - 14x + 8$, to split the linear term, we must find two numbers with product $5 \cdot 8 = 40$ and sum $-14$. They are $-10$ and $-4$. Hence,
\begin{align*}
5x^2 - 14x + 8 & = 5x^2 - 10x - 4x + 8 && \text{split the linear term}\\
& = 5x(x - 2) - 4(x - 2) && \text{factor by grouping}\\
& = (5x - 4)(x - 2) && \text{extract the common factor}
\end{align*}




In general, if $ax^2 + bx + c$, $a \neq 0$, is a quadratic polynomial with rational coefficients, then it factors with respect to the rationals if there exist two rational numbers with product $ac$ and sum $b$. In particular, if $r$, $s$, $t$, and $u$ are rational numbers such that
$$ax^2 + bx + c = (rx + s)(tx + u) \tag{1}$$
then $a = rt$, $b = ru + st$, and $c = su$. If you perform the multiplication
\begin{align*}
(rx + s)(tx + u) & = rx(tx + u) + s(tx + u)\\
& = rtx^2 + rux + stx + su\\
& = rtx^2 + (ru + st)x + su\\
& = ax^2 + bx + su
\end{align*}

you will notice that we can obtain $a = rt$, $b = ru + st$, and $c = su$ by matching coefficients, as Dietrich Burde stated.



We can prove this assertion by treating equation 1 as an algebraic identity. Since it is an identity, equation 1 holds for each value of the variable. In particular, it holds for $x = 0$, $x = 1$, and $x = -1$. Setting $x = 0$ in equation 1 yields
$$c = su \tag{2}$$
Setting $x = 1$ in equation 1 yields
\begin{align*}
a + b + c & = (r + s)(t + u)\\
& = r(t + u) + s(t + u)\\
& = rt + ru + st + su \tag{3}
\end{align*}

Since $c = su$, we can cancel $c$ from the left hand side and $su$ from the right hand side of equation 3 to obtain
$$a + b = rt + ru + st \tag{4}$$
Setting $x = -1$ in equation 1 yields
\begin{align*}
a - b + c & = (r - s)(t - u)\\
& = r(t - u) - s(t - u)\\
& = rt - ru - st + su \tag{5}
\end{align*}
Since $c = su$, we can cancel $c$ from the LHS and $su$ from the RHS of equation 5 to obtain
$$a - b = rt - ru - st \tag{6}$$

Adding equations $4$ and $6$ yields
$$2a = 2rt \tag{7}$$
Dividing both sides of equation 7 by $2$ yields
$$a = rt \tag{8}$$
Since $a = rt$, we can cancel an $a$ from the LHS of equation 4 and $rt$ from the RHS of equation 4 to obtain
$$b = ru + st \tag{9}$$
Our derivation of equations 2, 8, and 9 proves the claim.
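Both factorizations above are easy to confirm with SymPy (a sketch):

    import sympy as sp

    x, y = sp.symbols("x y")
    print(sp.factor(3 * x**2 - 5 * x * y - 12 * y**2))  # (x - 3*y)*(3*x + 4*y)
    print(sp.factor(5 * x**2 - 14 * x + 8))             # (x - 2)*(5*x - 4)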


summation - why doesn't the proof that the sum of two rational numbers is rational prove the irreducibility of the fraction $\frac{ad+bc}{bd}$?



When I was comparing the proof for $\sqrt{2}$ and the proof that the sum of two rational numbers is rational, I found that the latter did not mention anything about a common factor in the ratio.




One proof I found for the sum of two rational numbers:




The sum of any two rational numbers is rational.



Proof.



Suppose r and s are rational numbers. [We must show that r + s is
rational.] Then, by the definition of rational numbers, we have




r = a/b for some integers a and b with b ≠ 0,

s = c/d for some integers c and d with d ≠ 0.

So, by substitution, we have

r + s = a/b + c/d = (ad + bc)/(bd)


Now, let p = ad + bc and q = bd. Then, p and q are integers [because
products and sums of integers are integers and because a, b, c and d
are all integers. Also, q ≠ 0 by zero product property.] Hence,



r + s = p/q, where p and q are integers and q ≠ 0.



Therefore, by definition of a rational number, (r + s) is rational.

This is what was to be shown.



And this completes the proof.




Unlike the $\sqrt{2}$ proof, which shows that both $p$ and $q$ in $p/q$ are even, contradicting the premise that the fraction is irreducible, the sum proof did not mention that $\frac{ad+bc}{bd}$ might have a common factor, which seems to make this proof incomplete. Why is that?


Answer



There is nothing incomplete in the proof. The proof shows the following statement:





If there exist such integers $a,b$ that $r=\frac ab$ and there exist such integers $c,d$ that $s=\frac cd$, then there also exist such integers $p, q$ that $r+s = \frac pq$.




Remember that this is all you need. A number $x$ is rational if and only if there exist some integers $m,n$ so that $x=\frac mn$.



The fact that there also exist two (almost) uniquely determined coprime integers $m', n'$ such that $x=\frac{m'}{n'}$ does not mean that every pair of integers that forms $x$ is coprime, only that one such pair exists.
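For instance, $\frac12 + \frac16 = \frac{1\cdot6+2\cdot1}{2\cdot6} = \frac{8}{12}$. The pair $(8,12)$ is not coprime, yet it still witnesses that the sum is rational, since $\frac{8}{12} = \frac{2}{3}$.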



On the other hand, for $x=\sqrt 2$, the proof shows that no such pair exists.


Tuesday 22 April 2014

combinatorics - Find the Number of non decreasing Sequence

We know that the number of non-decreasing digit sequences of length $N$ is $\binom{9+N}{N}$. How can we find the number of non-decreasing sequences with terms in a range $[a,b]$ and length $1$ to $N$?
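A quick brute-force sketch checking the stated count (assuming the alphabet is the digits $0$-$9$; the snippet and its function name are mine, not part of the original post):

    from itertools import product
    from math import comb

    def count_nondecreasing(N, lo=0, hi=9):
        # Count non-decreasing sequences of length N over {lo, ..., hi}.
        return sum(1 for s in product(range(lo, hi + 1), repeat=N)
                   if all(s[i] <= s[i + 1] for i in range(N - 1)))

    for N in range(1, 5):
        assert count_nondecreasing(N) == comb(9 + N, N)

Since a non-decreasing sequence is just a multiset of its entries, over an alphabet of size $m = b - a + 1$ the analogous count is $\binom{m+N-1}{N}$.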

Finding three numbers that are pairwise not relatively prime, but with $gcd(a,b,c)=1$



Find integers $ a,b, $ and $ c $ where $ \gcd(a,b,c) = 1 $, but $ \gcd(a,b) \neq 1 $, $ \gcd(a,c) \neq 1 $, and $ \gcd(b,c) \neq 1 $.



I tried so many combinations but I can't find 3 integers that meet these requirements.




I even thought $(0,0,0)$ might work, since I tried to convince myself about how divisibility behaves at $0$: every integer divides $0$, but you can't divide by $0$. I am not sure if there is a more systematic approach to this.


Answer



Consider $a = 6, b= 10, c = 15$



An easy way to construct such triples is to take three primes, say $2, 3, 5$, and pairwise multiply them: $a = 2 \cdot 3$, $b = 2 \cdot 5$, $c = 3 \cdot 5$.
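Indeed, $\gcd(6,10)=2\neq1$, $\gcd(6,15)=3\neq1$, and $\gcd(10,15)=5\neq1$, while no single prime divides all three numbers, so $\gcd(6,10,15)=1$.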


Rigorous proof of the argument that one cannot square $-1$ in complex number problems

I would like to ask a fundamental question about the imaginary unit. It concerns something many beginners are told is wrong, but I would like a rigorous proof of why it is wrong; it is a question I faced when a student asked me. Take, for example, $(-1)^{1/6}$. We can compute this in two different ways:
$$(-1)^{1/6}=[(-1)^{2}]^{1/12}=1$$

or
$$(-1)^{1/6}=[(-1)^{1/3}]^{1/2}=(-1)^{1/2}=i$$
I understand both methods above are wrong, and the typical response is that you cannot square $-1$ like that. Is there a rigorous proof as to why this method is flawed? Perhaps using abstract algebra or Galois theory?



Thank you!

real analysis - How to calculate $\lim_{n \to \infty} \frac 1{3n} +\frac 1{3n+1}+\cdots+\frac 1{4n}$?



Could you please help me calculate this limit:

$\lim_{n \to \infty} \frac 1{3n} +\frac 1{3n+1}+\cdots+\frac 1{4n}$.



My best try is :



$\lim_{n \to \infty} \frac 1{3n} +\frac 1{3n+1}+\cdots+\frac 1{4n}=\lim_{n \to \infty}\sum_{k=3n}^{4n}\frac 1k$



$\frac 14 \leftarrow \frac{n+1}{4n}\le \sum_{k=3n}^{4n}\frac 1k \le \frac{n+1}{3n} \to \frac 13$.



Thanks.


Answer




Hint: Represent this expression as a Riemann sum:
$$\frac{1}{n}\sum_{k=0}^{n}\frac{1}{3+\frac{k}{n}}\ \xrightarrow[n\to\infty]{}\ \int_0^1\frac{dx}{3+x}=\ln\frac43.$$
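A quick numerical sanity check of this answer (a sketch, not part of the original answer):

    from math import log

    def s(n):
        # Partial sum 1/(3n) + 1/(3n+1) + ... + 1/(4n).
        return sum(1.0 / k for k in range(3 * n, 4 * n + 1))

    print(s(10**6), log(4 / 3))  # both ≈ 0.2876820...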


algebra precalculus - Proving $|z_1z_2|=|z_1||z_2|$ using exponential form of a Complex Number



Problem:





Prove $$|z_1z_2|=|z_1||z_2|$$ where $z_1,z_2$ are Complex Numbers.




I tried to solve this using the exponential form of a Complex Number.



Assuming $z_1=r_1e^{i\theta_1}$ and $z_2=r_2e^{i\theta_2},$
I got $$|z_1z_2|=|r_1e^{i\theta_1}\times r_2e^{i\theta_2}|= |r_1 r_2e^{i(\theta_1+\theta_2)}|$$
Unfortunately I cannot think of how to proceed further. Any help would be greatly appreciated! Many thanks in anticipation!



Answer



$$|re^{i\theta}|=|r|\,|e^{i\theta}|=|r|,$$ since $|e^{i\theta}|=\sqrt{\cos^2\theta+\sin^2\theta}=1$.



So



$$|z_{1}|=|r_{1}| $$



$$|z_{2}|=|r_{2}|$$



$$|z_{1}z_{2}|=|r_{1}r_{2}|$$




$r_{1},r_{2}$ are real numbers and so $|r_{1}r_{2}|=|r_{1}||r_{2}|=|z_{1}||z_{2}|$



$$|z_{1}z_{2}|=|z_{1}||z_{2}|$$


calculus - Determine $\lim_{n\to \infty} n\left(1+(n+1)\ln \frac{n}{n+1}\right)$



Determine $$\lim_{n\to \infty} n\left(1+(n+1)\ln \frac{n}{n+1}\right)$$
I noticed the indeterminate case $\infty \cdot 0$ and I tried to get them all under the $\ln$, but it got more complicated and I reached another indeterminate form. The same happened when I tried to use Stolz-Cesaro.



EDIT: is there an elementary solution, without l'Hospital or Taylor series?



Answer



Let $\frac {n}{n+1}=e^x$. As $n\to \infty$, $x\to 0$ through negative values. Since $n=\frac{e^x}{1-e^x}$ and $n+1=\frac{1}{1-e^x}$, the expression equals $$e^x\cdot \frac {1+x-e^x}{(1-e^x)^2}.$$ Applying l'Hôpital's rule to $\frac {1+x-e^x}{(1-e^x)^2}$ we get a limit of $\frac {-1}{2}$ as $x\to 0$, and the leading factor $e^x$ goes to $1$ as $x\to 0$.
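A quick numerical check (a sketch, not from the original answer; log1p is used to keep the evaluation numerically stable):

    from math import log1p

    def a(n):
        # n * (1 + (n+1) * ln(n/(n+1))), with ln(n/(n+1)) = log1p(-1/(n+1)).
        return n * (1 + (n + 1) * log1p(-1.0 / (n + 1)))

    print(a(10**4), a(10**6))  # both ≈ -0.5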


integration - How to solve this integral $\int \frac 1{\sqrt { \cos x \sin^3 x }}\, \mathrm dx$





Question : $$\int \frac 1{\sqrt { \cos x \sin^3 x }} \mathrm dx $$




I don't know where to start. I have tried many methods, but they didn't work.



Can anyone help me solve this?
Thank you.


Answer



HINT: substitute $u:=\tan(x)$. Then the integrand becomes $u^{-3/2}$.
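Carrying the hint through (a short sketch of the computation, not in the original answer): with $u=\tan x$,
$$\cos x\sin^3 x=\cos^4 x\tan^3 x=\frac{u^3}{(1+u^2)^2},\qquad dx=\frac{du}{1+u^2},$$
so
$$\int \frac{dx}{\sqrt{\cos x\sin^3 x}}=\int \frac{1+u^2}{u^{3/2}}\cdot\frac{du}{1+u^2}=\int u^{-3/2}\,du=-\frac{2}{\sqrt{u}}+C=-\frac{2}{\sqrt{\tan x}}+C.$$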


calculus - Easy Double Sums Question: $\sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \frac{1}{(m+n)!}$



How to calculate $\sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \dfrac{1}{(m+n)!} $ ?



I don't know how to approach it . Please help :)



P.S. I am new to double sums and am not able to find any good sources to study them; can anyone help, please?


Answer



This is the same as

$$
\sum_{m=1}^\infty \sum_{k=m+1}^\infty \frac{1}{k!}
$$
We can rearrange terms, noting that for each value of $k$ there will be terms only with
$k > m$. There are $k-1$ possible values of $m$ that satisfy $k>m$. So
$$
\sum_{m=1}^\infty \sum_{k=m+1}^\infty \frac{1}{k!} = \sum_{k=1}^\infty \frac{k-1}{k!}
$$
The last trick is to note that it will be much easier to sum $\frac{k}{k!}$ so break up the numerator:
$$
\sum_{k=1}^\infty \frac{k-1}{k!} = \sum_{k=1}^\infty \frac{k}{k!} - \sum_{k=1}^\infty \frac{1}{k!} = \sum_{k=1}^\infty \frac{1}{(k-1)!}- \sum_{k=1}^\infty \frac{1}{k!}
$$
And this in turn is
$$
\frac{1}{0!} + \sum_{k=2}^\infty \frac{1}{(k-1)!} - \sum_{k=1}^\infty \frac{1}{k!}
= 1 +\sum_{j=1}^\infty \frac{1}{j!} - \sum_{k=1}^\infty \frac{1}{k!}
$$
So far, only rearrangement of terms has happened. Now we note that $\sum_{k=1}^\infty \frac{1}{k!}$ is absolutely convergent, so the rearrangement of terms is valid; the two remaining sums cancel, so the answer is
$$1.$$
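A quick numerical check of the double sum (a sketch, not part of the original answer):

    from math import factorial

    # Truncate the double sum at 1 <= m, n <= 60; the tail is negligible.
    total = sum(1.0 / factorial(m + n)
                for m in range(1, 61) for n in range(1, 61))
    print(total)  # ≈ 1.0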



Monday 21 April 2014

calculus - Find the value of: $\lim_{x\to\infty}\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x}}}}-\sqrt{x}$



$\lim_{x\to\infty}\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x}}}}-\sqrt{x}$




I tried conjugating and it didn't lead me anywhere please help guys.



Thanks,


Answer



You can get the following:



$$\begin{align}\sqrt{x+\sqrt{x+\sqrt{x+\sqrt x}}}-\sqrt x&=\frac{\sqrt{x+\sqrt{x+\sqrt x}}}{\sqrt{x+\sqrt{x+\sqrt{x+\sqrt x}}}+\sqrt x}\\&=\frac{\sqrt{1+\left(\sqrt{x+\sqrt x}\right)/x}}{\sqrt{1+\left(\sqrt{x+\sqrt{x+\sqrt x}}\right)/x}+1}\end{align}$$



Here the second equality comes from dividing both the numerator and the denominator by $\sqrt x$. Letting $x\to\infty$, the numerator tends to $1$ and the denominator tends to $2$, so the limit is $\frac12$.
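Numerically (a quick sketch, not in the original answer), the expression indeed approaches $\frac12$:

    from math import sqrt

    def f(x):
        # sqrt(x + sqrt(x + sqrt(x + sqrt(x)))) - sqrt(x)
        return sqrt(x + sqrt(x + sqrt(x + sqrt(x)))) - sqrt(x)

    print(f(1e6), f(1e12))  # -> ≈ 0.5001, ≈ 0.5000005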


summation - Prove that for $n \in \mathbb{N}, \sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n$



I'm learning the basics of proof by induction and wanted to see if I took the right steps with the following proof:




Theorem: for $n \in \mathbb{N}, \sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n $



Base Case:



let $n = 1$. Then $2\cdot1+1 = 3 = 1^{2}+2\cdot1$, which proves the base case is true.



Inductive Hypothesis:



Assume $$\sum_{k=1}^{n} (2k+1) = n^{2} + 2n $$




Then $$\sum_{k=1}^{n+1} (2k+1) = (n+1)^{2} + 2(n+1) $$
$$\iff (2(n+1) +1)+ \sum_{k=1}^{n} (2k+1) = (n+1)^{2} + 2(n+1) $$
Using inductive hypothesis on summation term:
$$\iff(2(n+1) +1)+ n^{2} + 2n = (n+1)^{2} + 2(n+1) $$
$$\iff 2(n+1) = 2(n+1) $$



Hence for $n \in \mathbb{N}, \sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n $ Q.E.D.



Does this prove the theorem? Or was my use of the inductive hypothesis circular logic?



Answer



Your proof looks fine, but if you know that
$$1+2+...+n=\frac{n(n+1)}{2}$$
then you can evaluate
$$\sum_{k=1}^n(2k+1)=2\sum_{k=1}^n k+\sum_{k=1}^n1=2\cdot\frac{n(n+1)}{2}+n=n^2+n+n=n^2+2n$$


real analysis - Some questions about Riemann integration

I have just learned some basic definitions about Riemann integration.



Let $f, g:[0,1] \rightarrow \Bbb R$ be




$ f(x) =
\begin{cases}
1, & \text{if $x \in \Bbb Q$} \\
0, & \text{otherwise}
\end{cases}$



$g(x) =
\begin{cases}
1, & \text{if $x=1/n, n=1,2,\ldots$} \\
0, & \text{otherwise}
\end{cases}$



We know $f$ is not Riemann integrable, but $g$ is.



So my first question is: is it true that if the set of discontinuity points of a function is dense, then the function is not Riemann integrable?



My second question: we know that $h:[0,1] \rightarrow \Bbb R$ given by $h(x)=1$ is integrable with integral $1$. So if we have a dense set $D$ in $[0,1]$ such that $D$ and $D^c$ have equal cardinality, and define $ u(x) =
\begin{cases}
1, & \text{if $x \in D$} \\
0, & \text{otherwise}
\end{cases}$



can we define a similar 'integral' so that the value of the 'integral' is $1/2$?



Thank you!

functional analysis - Prove sets of continuous mappings are the same


Let $C([0,T];C(\overline{U}))$ denote the set of all continuous functions $u:[0,T]\rightarrow C(\overline{U})$ with
$$\|u\|_{C([0,T];C(\overline{U}))}:=\max_{0\leqslant t \leqslant T} \|u(t)\|<\infty$$



Prove that $C([0,T];C(\overline{U}))=C([0,T]\times \overline{U})$





I am skeptical that this is even true. I feel like we could apply a theorem from topology regarding the product space, but I am having little success, and I am not really sure how to approach such a problem. Are there any counterexamples that disprove the above? Any help would be much appreciated.

elementary number theory - Base system and divisibility

I have seen the following observation; please give a proof of it.
We know that a number is divisible by $11$ exactly when the difference between the sum of its odd-numbered digits (1st, 3rd, 5th, ...) and the sum of its even-numbered digits (2nd, 4th, ...) is divisible by $11$. I have checked the analogous rule for other numbers in different base systems. For example, suppose we want to know whether $27$ is divisible by $3$. To check divisibility by $3$, take one less than $3$ (i.e., $2$) as the base and divide repeatedly:

    27 = 2 × 13 + 1
    13 = 2 × 6 + 1
    6 = 2 × 3 + 0
    3 = 2 × 1 + 1
    1 = 2 × 0 + 1

Reading the remainders gives the base-$2$ representation $27 = 11011_2$. The difference of the alternating digit sums is $(1 + 0 + 1) - (1 + 1) = 0$, which is divisible by $3$, so $27$ is divisible by $3$.
What I want to say is: to check the divisibility of a number by $K$, we write the number in base $K-1$ and then apply the divisibility rule for $11$. Why does this method work? Please give me the proof. Thanks in advance.
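A sketch of why such a rule works (not part of the original post): in base $b = K-1$ we have $b \equiv -1 \pmod K$, so a number with base-$b$ digits $d_i$ satisfies
$$\sum_{i\ge 0} d_i\,b^{\,i} \equiv \sum_{i\ge 0} (-1)^i d_i \pmod{K},$$
hence it is divisible by $K$ exactly when its alternating digit sum is.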

real analysis - Evaluate $\int \cos^2x\sin^4x\,\mathrm{d}x$





Evaluate integral $$\int \cos^2x\sin^4x\mathrm{d}x.$$




Attempt: setting $\tan x=t$ gives
$$\int \cos^2x\sin^4x\,\mathrm{d}x =\int \frac{1}{1+t^2} \,\left(\frac{t^2}{1+t^2}\right)^2 \frac{\mathrm{d}t}{1+t^2}=\int \frac{t^4}{(1+t^2)^4} \mathrm{d}t,$$
which does not seem easy to evaluate.




Thank in advance for the help.


Answer



Here is a way to integrate economically:



$$\cos^2x\sin^4 x = \frac 18 \sin^2 2x (1-\cos 2x)= \frac {1}{16}-\frac {1}{16}\cos 4x-\frac 18 \sin^22x\cos 2x$$



Thus,



$$\int \cos^2x\sin^4x\mathrm{d}x = \frac {x}{16}-\frac {1}{64}\sin4x-\frac{1}{48}\sin^32x +C$$
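One can verify the antiderivative by differentiating it; a quick sympy sketch (not part of the original answer):

    import sympy as sp

    x = sp.symbols('x')
    F = x/16 - sp.sin(4*x)/64 - sp.sin(2*x)**3/48
    # The difference between F' and the integrand simplifies to 0.
    print(sp.simplify(sp.diff(F, x) - sp.cos(x)**2 * sp.sin(x)**4))  # -> 0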


Sunday 20 April 2014

elementary set theory - Use of Cantor Schroder-Bernstein theorem?


Use the Schröder-Bernstein theorem to show that there is a bijection between two intervals $[0,1]\subseteq \Bbb R$ and $[1,\infty)\subseteq \Bbb R$, thus they have the same cardinality.




What about the sets $(0,5)$ and $(10,20)$? Is there a bijection between them? (Don't need the Schröder-Bernstein theorem here). Similarly, consider $[0,\infty)$ and $[1,\infty)$.




Hello, this is a question from my practice final. Can anyone explain how to answer it? As I understand it, the theorem lets you use two one-to-one functions (one in each direction) between two intervals to show that they have the same cardinality, but I don't know how to apply this. For example, for the first question, is it as simple as a function like $f(x) = 1/x$, or is there something more to it?

real analysis - Discontinuous derivative.

Could someone give an example of a ‘very’ discontinuous derivative? I myself can only come up with examples where the derivative is discontinuous at only one point. I am assuming the function is real-valued and defined on a bounded interval.

Saturday 19 April 2014

Remainder when the polynomial $1+x^2+x^4+\cdots+x^{22}$ is divided by $1+x+x^2+\cdots+x^{11}$





Question : Find the remainder when the polynomial $1+x^2+x^4+\ldots +x^{22}$ is divided by $1+x+x^2+\cdots+ x^{11}$.




I tried using Euclid's division lemma, i.e.



$$P_1(x)=1+x^2+x^4+\cdots+x^{22}$$



$$P_2(x)=1+x+x^2+\cdots+x^{11}$$



Then for some polynomial $Q(x)$ and $R(x)$; we have




$$P_1(x)=Q(x)\cdot P_2(x)+R(x)$$



Now, we plug in values of $x$ and form equations for the coefficients of $R(x)$, but this method is way too long: solving the resulting system of 11 equations in 11 unknowns (since $R(x)$ is a polynomial of degree at most 10) is impossible to do in a competitive exam where the average time for solving a question is 3 minutes.



Another method is to use long division directly and, following the pattern, predict $Q(x)$ and $R(x)$, but that is also very hard and time-consuming.



I have been searching for a simple solution to this problem for the last week, and now I doubt there even is a simple solution to this question.



Can you please give me a hint/solution on how to proceed to solve this problem in time?




Thanks!


Answer



$$P_1(x)=\frac{x^{24}-1}{x^2-1}$$
$$P_2(x)=\frac{x^{12}-1}{x-1}$$
$$\frac{P_1(x)}{P_2(x)}=\frac{x^{24}-1}{x^{12}-1}\cdot\frac{x-1}{x^2-1}=\frac{x^{12}+1}{x+1}$$
Then Ruffini's rule tells us that the remainder of this reduced division is the polynomial $x^{12}+1$ evaluated at $-1$, i.e. 2. When the top and bottom of $\frac2{x+1}$ are multiplied by $\frac{x^{12}-1}{x^2-1}$, the denominator becomes $P_2(x)$ and the numerator gives the final answer of $\frac{2(x^{12}-1)}{x^2-1}=2+2x^2+2x^4+2x^6+2x^8+2x^{10}$.
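A quick check with sympy (a sketch, not part of the original answer):

    import sympy as sp

    x = sp.symbols('x')
    P1 = sum(x**(2*k) for k in range(12))  # 1 + x^2 + ... + x^22
    P2 = sum(x**k for k in range(12))      # 1 + x + ... + x^11
    print(sp.rem(P1, P2, x))
    # -> 2*x**10 + 2*x**8 + 2*x**6 + 2*x**4 + 2*x**2 + 2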


summation - Simplification of a double sum involving partial sums of harmonic series



Could somebody explain the jump in the following equation?
$$\frac{1}{n}\sum\limits_{i=1}^{n}\left[1 + \sum\limits_{j=i+1}^{n}\frac{1}{m}\right] = 1 + \frac{1}{nm}\sum\limits_{i=1}^{n}(n-i) $$


Answer



$$\frac{1}{n}\sum\limits_{i=1}^{n}\left[1 + \sum\limits_{j=i+1}^{n}\frac{1}{m}\right]=\frac{1}{n}\sum_{i=1}^n 1 +\frac{1}{n} \sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\frac{1}{m}$$



Sum of $n$ $1$'s is $n$, and $\frac{1}{m}$ is not related to the index, so we can factor it out. This gives



$$\frac{1}{n}\cdot n+\frac{1}{nm}\sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n} 1$$



Now $\sum\limits_{j=i+1}^{n} 1$ is $(n-i)$ $1$'s. Hence the result.


trigonometry - Differentiating the function $\arcsin(3x-4x^3)$



When I have to differentiate the function $\arcsin(3x-4x^3)$ which of the following methods is more appropriate ?




  1. Putting $x=\sin θ$,simplifying and then differentiating for certain ranges of $x$.


  2. Directly differentiating using chain rule.



Can the results obtained by these two techniques be shown to be the same?
By the way, I really don't understand why most textbooks prefer the first method. Any ideas? Thank you.
P.S.: I know how to differentiate it; my question is the one above.


Answer



$$1-(3x-4x^3)^2=1-9x^2+24x^4-16x^6$$



$$=1-x^2-8x^2(1-x^2)+16x^4(1-x^2)=(1-x^2)(1-8x^2+16x^4)$$




$$=(1-x^2)(1-4x^2)^2$$



Now $3-12x^2=3(1-4x^2)$



$\implies\dfrac{3-12x^2}{\sqrt{1-(3x-4x^3)^2}}=\dfrac{3(1-4x^2)}{\sqrt{1-x^2}\,|1-4x^2|}$



Now $|1-4x^2|=+(1-4x^2)\iff1-4x^2\ge0\iff-\dfrac12\le x\le\dfrac12$



Again, $\arcsin(3x-4x^3)=3\arcsin x\iff-\dfrac\pi2\le3\arcsin x\le\dfrac\pi2$

$\iff-\dfrac\pi6\le\arcsin x\le\dfrac\pi6\iff-\sin\dfrac\pi6\le x\le\sin\dfrac\pi6$ i.e., $-\dfrac12\le x\le\dfrac12$



The rest I want to leave for you as an exercise


Friday 18 April 2014

uniform continuity - Is the composite of an uniformly continuous sequence of functions with a bounded continuous function again uniformly continuous?



Let $\{f_n\}$ be a sequence of functions $f_n: J\to \mathbb{R}$ that converges uniformly to $f:J\to \mathbb{R}$ where $J\subseteq \mathbb{R}$ is an interval.



It is clear that for a uniformly continuous function $g:\mathbb{R}\to\mathbb{R}$, the sequence $\{g\circ f_n\}$ converges uniformly to $g\circ f:J\to \mathbb{R}$. There is a counterexample, if $g$ is only continuous.




If $J$ is compact, there is no such counterexample because then every continuous function $g$ is uniformly continuous. If $J$ is not compact, bounded and continuous for $g$ does not imply uniformly continuous.




Let $g:\mathbb{R}\to\mathbb{R}$ be bounded and continuous and $\{f_n\}$ a sequence of functions $f_n: J\to \mathbb{R}$ that converges uniformly to $f:J\to \mathbb{R}$. Does the sequence $\{g\circ f_n\}$ converge uniformly to $g\circ f:J\to \mathbb{R}$? If not, what is a counterexample?



Answer



Hint. Take $g(x) = \sin(x^2)$ and $f(x)=x$, and find constants $a_n \to 0$ so that $f_n(x) = x+a_n$ does what you want: for each $n$ there is an $x$ (namely $x=\sqrt{2\pi n}$) with $g(x)=\sin(2\pi n) = 0$ while $g(f_n(x))=\sin\left(2\pi n+\frac{\pi}{2}\right) = 1$.


calculus - Simple limit of a sequence





Need to solve this very simple limit $$ \lim_{x\to \infty}\left(\sqrt[3]{3x^2+4x+1}-\sqrt[3]{3x^2+9x+2}\right) $$




I know how to solve these limits: by using
$a−b= \frac{a^3−b^3}{a^2+ab+b^2}$. The problem is that the standard way (not by using L'Hospital's rule) to solve this limit - very tedious, boring and tiring. I hope there is some artful and elegant solution. Thank you!


Answer



You have $$f(x)=\sqrt[3]{3x^2+4x+1}-\sqrt[3]{3x^2+9x+2}=\sqrt[3]{3x^2}\left(\sqrt[3]{1+\frac{4}{3x}+\frac{1}{3x^2}}-\sqrt[3]{1+\frac{3}{x}+\frac{2}{3x^2}}\right)$$ Using Taylor expansion at order one of the cubic roots $\sqrt[3]{1+y}=1+\frac{y}{3}+o(y)$ at the neighborhood of $0$, you get: $$f(x)=\sqrt[3]{3x^2}\left(\frac{4}{9x}-\frac{1}{x}+o\left(\frac{1}{x}\right)\right)$$ hence $$\lim\limits_{x \to \infty} f(x)=0$$
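A quick numerical check of this answer (a sketch, not part of the original answer):

    def f(x):
        return (3*x**2 + 4*x + 1) ** (1/3) - (3*x**2 + 9*x + 2) ** (1/3)

    for x in (1e3, 1e6, 1e9):
        # Decays like -(5/9) * 3**(1/3) * x**(-1/3), consistent with the
        # expansion above, so it tends to 0.
        print(f(x))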


integration - Integral of $\sqrt{-\sin^2 t + \cos^2 t - \tan^2 t}$

$$\int{\sqrt {(-\sin^2 t + \cos^2 t - \tan^2 t)}}~\textrm{d}t$$



I'm aware of a few trig identities, such as ${\cos^2 t - \sin^2 t} = \cos (2t)$ and $\tan^2 t = \frac{\sin^2 t}{\cos^2 t}$ but these don't seem to help simplify the problem.



No simple $u$-substitution seems to present itself, and my attempt to integrate by parts has resulted in an even more difficult integrand.




WolframAlpha and a few different integral calculators cannot seem to solve this.

elementary set theory - Does same cardinality imply a bijection?



This came up today when people showed that there is no linear transformation $\mathbb{R}^4\to \mathbb{R}^3$.



However, we know that these sets have the same cardinality. I was under the impression that if two sets have the same cardinality then there exists a bijection between them. Is this true? Or is it just that any two sets which have a bijection between them have the same cardinality.



Edit: the question I linked to is asking specifically about a linear transformation. My question still holds for arbitrary maps.


Answer



"Same cardinality" is defined as meaning there is a bijection.




In your vector space example, you were requiring the bijection to be linear. If there is a linear bijection, the dimension is the same. There is a bijection between $\mathbb R^4$ and $\mathbb R^3$, but no such bijection is linear, or even continuous. (Space-filling curves, which are continuous functions from a space of lower dimension to a space of higher dimension, are not bijections since they are in no instance one-to-one.) If there is a bijection, then the cardinality is the same. And conversely.


calculus - Integration by Euler's formula




How do you integrate the following by using Euler's formula, without using integration by parts? $$I=\displaystyle\int \dfrac{3+4\cos {\theta}}{(3\cos {\theta}+4)^2}d\theta$$



I did integrate it by parts, by writing the $3$ in the numerator as $3\sin^2 {\theta}+3\cos^2{\theta}$, and then splitting the numerator.



But can it be solved by using complex numbers and the Euler's formula?


Answer



Hint



When you have an expression with a squared denominator, you could think that the solution is of the form $$I=\displaystyle\int \dfrac{3+4\cos {\theta}}{(3\cos {\theta}+4)^2}~d\theta=\frac{a+b\sin \theta+c\cos \theta}{3\cos {\theta}+4}$$ Differentiate the rhs and identify terms. You will get very simple results.
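Carrying the hint through (a sketch; $a = c = 0$, $b = 1$ is one solution of the equations obtained by matching terms):
$$\frac{d}{d\theta}\,\frac{\sin\theta}{3\cos\theta+4}=\frac{\cos\theta\,(3\cos\theta+4)+3\sin^2\theta}{(3\cos\theta+4)^2}=\frac{3+4\cos\theta}{(3\cos\theta+4)^2},$$
so $I=\dfrac{\sin\theta}{3\cos\theta+4}+C$.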


calculus - Computing $\lim_{x\to{0+}}\frac{\tan(x)-x}{x^3}$ without L'Hôpital's rule.





Computing $\lim_{x\to{0+}}\frac{\tan(x)-x}{x^3}$ without L'Hopital





Say $\lim_{x\to{0+}}\frac{\tan(x)-x}{x^3} = L$



For $L$:
$$L=\lim_{x\to0}\frac{\tan x-x}{x^3}\\
L=\lim_{x\to0}\frac{\tan 2x-2x}{8x^3}\\
4L=\lim_{x\to0}\frac{\frac12\tan2x-x}{x^3}\\
3L=\lim_{x\to0}\frac{\frac12\tan{2x}-\tan x}{x^3}\\
=\lim_{x\to0}\frac{\tan x}x\frac{\frac1{1-\tan^2x}-1}{x^2}\\
=\lim_{x\to0}\frac{(\tan x)^3}{x^3}=1\\
\large L=\frac13$$



I found this in another question; can someone tell me why



$$L=\lim_{x\to0}\frac{\tan x-x}{x^3}=\lim_{x\to0}\frac{\tan 2x-2x}{8x^3}$$


Answer



If $x = 2y$ then $y\rightarrow 0$ when $x\rightarrow 0$, so $\lim_{x\rightarrow0} f(x) = \lim_{y\rightarrow 0} f(2y)$.


Thursday 17 April 2014

probability - 6 sided die probabilities

I am currently working on a study guide, and one of its questions I am completely stuck on and have no idea how to do. The question is:
You are interested in the number of rolls of a fair $6$-sided die until a number $2$ shows up.




Let $X =$ The number of times you roll the die until a number $2$ shows up.



(a) What type of random variable is $X$?



(b) How many rolls do you expect it to take? That is, what is the expected value, or mean, of the random variable $X$?



(c) What is the probability you roll a $2$ for the first time on the fourth roll? i.e. What is $P(X = 4)$?
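A quick simulation sketch (not part of the original post; it assumes the standard model in which rolls are independent and each shows a $2$ with probability $1/6$, and the function name is mine):

    import random

    def rolls_until_two(rng):
        # Roll a fair die until a 2 appears; return the number of rolls.
        n = 0
        while True:
            n += 1
            if rng.randint(1, 6) == 2:
                return n

    rng = random.Random(0)
    samples = [rolls_until_two(rng) for _ in range(200_000)]
    print(sum(samples) / len(samples))                       # ≈ 6, the mean
    print(sum(1 for s in samples if s == 4) / len(samples))  # ≈ (5/6)**3 * (1/6) ≈ 0.0965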

fractions - Zero/Zero questions and perhaps faulty logic

So I only have an Algebra II-level understanding of math, seeing as I am still in high school and am missing some fundamentals because I didn't pay attention in math until this year. However, while recently recalling something my algebra teacher taught me during the year, I came up with some questions regarding the logic.



During the school year I was taught that $\frac{2}{2}=1$, $\frac{a}{a}=1$, $\frac{xy}{xy}=1$, and so forth, but that $\frac{0}{0}$ is undefined. While researching this topic I found that the algebraic way to express these fractions is through the equations $2x=2$, $ax=a$, and $0x=0$, and upon researching further I found that the reason $\frac{0}{0}$ is undefined is that the equation $0x=0$ holds for every value of $x$. However, since in the fraction $\frac{a}{a}$ the symbol $a$ is a variable, and variables can represent any given quantity, I was wondering: in the case that $a=0$, would $\frac{a}{a}$ still equal $1$? If not, why? And if, say, $a=0$ and you didn't know it, why is it safe to assume that $a$ would never equal zero? Also, if it happens to be the case that $\frac{a}{a}=1$ even when $a=0$ (which I doubt), shouldn't this mean that $\frac{0}{0}=1$?

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...