Prove that the interval $(0, 1)$ and the Cartesian product $(0, 1) \times (0, 1)$ have the same cardinality
using the Schröder–Bernstein theorem.
Also, how does one find a bijection $f:(0, \infty) \to (0,1)$, showing that these two sets have the same cardinality?
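One standard choice for the second part (a sketch added here, not the only possibility) is
$$f:(0,\infty)\to(0,1), \qquad f(x)=\frac{x}{1+x},$$
which is strictly increasing with inverse $f^{-1}(y)=\frac{y}{1-y}$ on $(0,1)$, hence a bijection.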
Friday, 31 January 2014
elementary set theory - Prove that the interval $(0, 1)$ and the Cartesian product $(0, 1) \times (0, 1)$ have the same cardinality
numerical methods - Approximating the normal CDF for $0 \leq x \leq 7$
In the answer https://stackoverflow.com/a/23119456/2421256, an approximation of the complementary normal CDF (i.e. $\frac{1}{\sqrt{2 \pi}} \int_x^{+\infty} e^{-\frac{t^2}{2}} dt$) was given for $0 \leq x \leq 7$ using a rational function, as follows:
$$ \frac{1}{\sqrt{2 \pi}} e^{\frac{-x^2}{2}} \frac{N_0 + N_1 x + ... + N_6 x^6}{M_0 + M_1 x + ... + M_7 x^7} $$
How can one derive such an approximation, without relying on an already implemented approximation (i.e. no methods based on curve fitting), and obtain a bound for the absolute error?
EDIT: I added a bounty because I believe this question is not at all trivial.
EDIT 2 : I'm adding here what I've been able to do for now. Indeed, as Ian pointed out, for $0 \leq x \leq \sqrt{2}$ a good approximation of the normal CDF can be derived easily, by using a Taylor series expansion of $e^{-\frac{t^2}{2}}$ and integrating it term by term between $0$ and $x$.
For $\sqrt{2} \leq x \leq 7$, I proved, using Mills' inequality and the fact that $\frac{\int_x^{+\infty} e^{-\frac{t^2}{2}} dt}{x e^{-\frac{x^2}{2}}}$ is increasing in $x$, that $$\mathbb{P} \left( Z > x \right) \leq \frac{x e^{-\frac{x^2}{2}}}{98 \pi},$$ which yields an error smaller than $2.39 \times 10^{-3}$, but that is far from the precision yielded by the rational approximation above (around $10^{-16}$, checked numerically since I don't have a rigorous bound for it).
EDIT 3: if it's of help, the target error is something smaller than $10^{-9}$.
Answer
This isn't a definitive answer, but hopefully it points you in the right direction. The short of it is: I don't know how the coefficients were derived, but it is likely related to some Padé, Remez, interpolating, or least-squares approximant. Read on for details.
The function we're talking about is
$$
Q(x) = \frac{1}{\sqrt{2\pi}}\int_{x}^\infty e^{-t^2/2}\;dt = \frac{1}{2}\operatorname{erfc}\left(\frac{x}{\sqrt{2}}\right),
$$
for $x>0$, where $\operatorname{erfc}$ is the complementary error function.
The link you provide gives code for an approximation of the following form, slightly different from how it was stated in the original question (note the lack of normalization by $\sqrt{2\pi}$ and a different form of the polynomials):
$$
\tilde{Q}(x) = e^{\frac{-x^2}{2}} \frac{(((((N_6 z+N_5)z+N_4)z+N_3)z+N_2)z+N_1)z+N_0}{((((((M_7 z+M_6)z+M_5)z+M_4)z+M_3)z+M_2)z+M_1)z+M_0}
$$
Why do we pick this form? Well, asymptotically, the $Q$ function is well approximated by $\frac{e^{-x^2/2}}{x\sqrt{2\pi}}$ as $x\rightarrow\infty$. See here for details on that. This justifies factoring out the asymptotic behavior and looking for a rational approximation of what remains. If we factored out the asymptotic form, we could approximate what remains with something that goes to $1$ for large $x$, i.e., something with equal order in the numerator and denominator. If we leave that extra factor of $x$ in the denominator from the asymptotic piece, we can account for it by absorbing it into the denominator of the rational function, giving one higher degree in the denominator polynomial. The same goes for the factor of $\sqrt{2\pi}$: it gets absorbed into the approximation.
Now, how on earth did Hart come up with those $N_i$ and $M_j$ coefficients? I thought it was by requiring that the Taylor series of $\tilde{Q}(x)e^{x^2/2}$ matches that of $Q(x)e^{x^2/2}$ at some point, up to a high enough order to uniquely determine all the coefficients. This is the Padé approximation technique, and you should be able to find more about the error bounds for approximations resulting from this technique. I used Mathematica to calculate these below.
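Not part of the original answer: a minimal sketch in Python of the Padé computation just described, around $x=0$ with numerator degree $6$ and denominator degree $7$, assuming the mpmath library is available; scaled_Q is a name introduced here for $Q(x)e^{x^2/2}$.

```python
# Sketch: [6/7] Pade approximant of Q(x) * exp(x^2/2) around x = 0,
# assuming mpmath is installed; scaled_Q is a hypothetical helper name.
from mpmath import mp, mpf, sqrt, erfc, exp, taylor, pade, polyval

mp.dps = 30  # extra working precision for the high-order derivatives

def scaled_Q(x):
    # Q(x) * exp(x^2/2), the function being approximated rationally
    return mpf(1) / 2 * erfc(x / sqrt(2)) * exp(x**2 / 2)

# 15 Taylor coefficients at 0 are enough to pin down a [6/7] approximant
coeffs = taylor(scaled_Q, 0, 14)
p, q = pade(coeffs, 6, 7)  # numerator degree 6, denominator degree 7

# polyval expects the highest-degree coefficient first, so reverse the lists
x = mpf("1.5")
print(polyval(p[::-1], x) / polyval(q[::-1], x), scaled_Q(x))
```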
I thought that surely, if you calculate that Padé approximant around $x=0$ and scale the coefficients to match this weird $M_0=440.413735824752$ (Padé approximants are usually normalized so that the constant term in the denominator is $1$), you'd get the formula. Instead, I get:
$$
\begin{array}{ccc}
i & N_i & M_i\\
0 & 220.207 & 440.414\\
1 & 152.909 & 711.438\\
2 & 95.568 & 525.241 \\
3 & 23.168 & 234.066 \\
4 & 4.6336 & 61.9813\\
5 & 0.34752 & 10.2661 \\
6 & 0.154453 & 1.04799 \\
7 & {} & 0.048886
\end{array}
$$
It's a similar pattern to Hart's numbers, and the values are all in the same ballpark as Hart's numbers, but it is not a match. Then I noticed that there is this weird breakpoint in the code that appears to be $\frac{10}{\sqrt{2}}\approx 7.07$. This led me down the path of checking whether the Padé approximant near $\frac{10}{\sqrt{2}}$, or at the halfway point of the interval, $\frac{10}{2\sqrt{2}}$, appropriately scaled, gives Hart's original coefficients. They don't.
So here are some possibilities.
- Hart's coefficients are for a Padé approximant around SOME point on the interval $[0,10/\sqrt{2}]$, and I don't have the patience to find which point.
- Hart used some other method to generate these coefficients, and they don't correspond to a Padé approximant at all. One candidate is the Remez algorithm for rational functions, which should guarantee that the error oscillates equally across the domain of approximation. Another candidate is that the coefficients are derived from an interpolation between some fixed points of the domain with known values. Yet another is a least-squares fit between the rational approximant and a bunch of known values of the function.
This has been a long-winded answer, and it boils down to "I don't know". But I hope the techniques I mentioned can be fruitful avenues for you to continue looking into this. I would finally suggest getting the original "Algorithm 5666 for the error function" and seeing if there is an explanation of the methodology in the book in which it appears.
analysis - Show that the function $f : \mathbb Q \to \mathbb Q$ is continuous
Let $\alpha\in\mathbb R$ be an irrational number. Show that the function $f : \mathbb Q \to\mathbb Q$ is continuous, where $f$ is given by $f(x) = x$ for $x < \alpha$ and $x + 1$ for $x > \alpha$.
I'm not sure how to go about this, I've been trying to use the fact that every rational number has a sequence of irrationals converging to it, but it doesn't seem to go anywhere. Any help would be appreciated. Thanks!
Thursday, 30 January 2014
calculus - Limit with Epsilon - Delta method
Prove using the $\epsilon - \delta$ definition of limits that $\lim_{x\to3} \frac{5}{4x-11} = 5$.
I know how the setup should be: given $\epsilon \gt 0$, there exists a $\delta \gt 0$ such that $0 \lt |x-3| \lt \delta$ implies $|\frac{5}{4x-11} - 5| \lt \epsilon$. But I can't do the computation to help me find $\delta$; can someone guide me in the right direction?
How can I deal with radicals when I need to calculate limits?
I don't know what to do with radicals in limits. For example, is it obvious that $ \lim_{n\to \infty}\left(\sqrt[n]{c}\right)=1$, where $c$ is a constant?
I am now facing something that is extremely hard for me, like this:
$$\lim_{n\to \infty}\left( \frac{\sqrt[n]{n^3}+\sqrt[n]{7}}{3\sqrt[n]{n^2}+\sqrt[n]{3n}} \right)$$
L'Hôpital's rule is prohibited here.
Answer
Hint: Apply the following limit.
Proof that $\boldsymbol{\lim\limits_{n\to\infty}n^{\frac1n}=1}$
For $n\ge e$,
$$
\begin{align}
\frac{(n+1)^{\frac1{n+1}}}{n^{\frac1n}}
&=\frac{\left(1+\frac1n\right)^{\frac1{n+1}}}{n^{\frac1{n(n+1)}}}\tag{1}\\
&\le\frac{1+\frac1{n(n+1)}}{n^{\frac1{n(n+1)}}}\tag{2}\\[9pt]
&\le1\tag{3}
\end{align}
$$
Explanation
$(1)$: divide numerator and denominator by $n^{\frac1{n+1}}$
$(2)$: Bernoulli's Inequality
$(3)$: $e^x\ge1+x$
Thus, for $n\ge3$, $n^{\frac1n}$ is a decreasing function bounded below by $1$. Therefore, $a=\lim\limits_{n\to\infty}n^{\frac1n}$ exists and is not less than $1$.
$$
\begin{align}
a
&=\lim_{n\to\infty}(2n)^{\frac1{2n}}\tag{4}\\
&=\lim_{n\to\infty}2^{\frac1{2n}}\left(\lim_{n\to\infty}n^{\frac1n}\right)^{\frac12}\tag{5}\\[6pt]
&=1\cdot a^{\frac12}\tag{6}\\[12pt]
&=1\tag{7}
\end{align}
$$
Explanation:
$(4)$: limit of every other term is the limit of every term
$(5)$: product of limits is the limit of the product
$(6)$: Bernoulli's Inequality says $2^{\frac1{2n}}=(1+1)^{\frac1{2n}}\le1+\frac1{2n}$
$(7)$: multiply the reciprocal of the left side of $(4)$ by the square of $(6)$
QED
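A quick numerical sanity check (an illustration in Python, not part of the proof) of the hint $n^{1/n}\to 1$ and of the resulting value $\frac{1+1}{3+1}=\frac12$ for the original limit:

```python
# Each nth root in the original expression tends to 1, so the
# quotient should approach (1 + 1) / (3 + 1) = 1/2.
for n in [10, 1000, 100000, 10**7]:
    nth = lambda a: a ** (1.0 / n)  # numerical nth root
    value = (nth(n**3) + nth(7)) / (3 * nth(n**2) + nth(3 * n))
    print(n, nth(n), value)
```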
real analysis - Find a differentiable $f$ such that $f'$ is not continuous.
I'm trying to solve this problem:
Find a differentiable function $f:\mathbb{R} \longrightarrow \mathbb{R}$ such that $f':\mathbb{R} \longrightarrow \mathbb{R}$ is not continuous at any point of $\mathbb{R}$.
Any hints would be appreciated.
Answer
You are looking for a derivative that is discontinuous everywhere on $\Bbb R$. Such a function doesn't exist. Since $f'$ is the pointwise limit of continuous functions, it is a Baire class $1$ function. A theorem of Baire says that the set of discontinuities of $f'$ is a meager subset of $\Bbb R$.
Wednesday, 29 January 2014
calculus - When does L' Hopital's rule fail?
This thought jumped out of me during my calculus teaching seminar.
It is well known that the classical L'Hospital rule claims that for the $\frac{0}{0}$ indeterminate case, we have:
$$
\lim_{x\rightarrow A}\frac{f(x)}{g(x)}=\lim_{x\rightarrow A}\frac{f'(x)}{g'(x)}
$$
where the latter could take any value, including $\infty$. Here we assume that the right-hand side limit exists.
However, to apply it one often has to take the derivative of $f'(x)$ again at $A$, and in principle one assumes that by repeatedly applying this rule we can resolve the problem by plugging the value into the function's derivative at $A$. My question is: what if the student asks whether it is possible for $\lim_{x\rightarrow A} f(x),\lim_{x\rightarrow A} f'(x),\ldots,\lim_{x\rightarrow A}f^{(n)}(x)$ to all be zero for every $n$, so the rule 'fails'? How should we answer the question properly?
For example, consider the well-known non-analytic smooth function:
$$f(x)=
\begin{cases}
e^{-1/x}& x> 0\\
0& x\le 0
\end{cases}
$$
It is a trivial exercise to verify that $f^{(n)}(0)=0$ for every $n\in \mathbb{N}$. Now using L'Hospital's rule we compute (as if we were a low-level student)
$$
1=\lim_{x\rightarrow 0^{+}}\frac{f(x)}{f(x)}=\lim_{x\rightarrow 0^{+}}\frac{f'(x)}{f'(x)}=\lim_{x\rightarrow 0^{+}}\frac{f''(x)}{f''(x)}\cdots =\frac{0}{0}=?
$$
as the chain does not stop if the student applies the rule faithfully and blindly. This is a silly example, but in general for non-analytic functions I think this kind of thing could happen. And there should be more non-analytic functions than analytic functions. Is there a way for us to resolve this at the introductory calculus level, so that the student knows what to do, without introducing 'confusing concepts' like the $\epsilon$-$\delta$ language, the Cauchy mean value theorem, Taylor series, and infinitesimals?
Answer
Even for analytic functions, this kind of thing can happen.
Consider $\displaystyle\lim_{x \to \infty}\dfrac{e^{x}-e^{-x}}{e^{x}+e^{-x}}$.
Blindly applying L'Hopital's Rule repeatedly gives:
$\displaystyle\lim_{x \to \infty}\dfrac{e^{x}-e^{-x}}{e^{x}+e^{-x}} = \lim_{x \to \infty}\dfrac{e^{x}+e^{-x}}{e^{x}-e^{-x}} = \lim_{x \to \infty}\dfrac{e^{x}-e^{-x}}{e^{x}+e^{-x}} = \cdots$.
But if we divide the numerator and denominator by $e^x$ we get:
$\displaystyle\lim_{x \to \infty}\dfrac{e^{x}-e^{-x}}{e^{x}+e^{-x}} = \lim_{x \to \infty}\dfrac{1-e^{-2x}}{1+e^{-2x}} = \dfrac{1+0}{1+0} = 1$.
Also, consider $\displaystyle\lim_{x \to 0^+}x \ln x = \lim_{x \to 0^+}\dfrac{\ln x}{1/x}$.
Blindly applying L'Hopital's Rule repeatedly gives:
$\displaystyle\lim_{x \to 0^+}x \ln x = \lim_{x \to 0^+}\dfrac{\ln x}{1/x} = \lim_{x \to 0^+}\dfrac{1/x}{-1/x^2} = \lim_{x \to 0^+}\dfrac{-1/x^2}{2/x^3} = \lim_{x \to 0^+}\dfrac{2/x^3}{-6/x^4} = \cdots$.
But if we stop after applying L'Hopital's Rule once and simplify, we get:
$\displaystyle\lim_{x \to 0^+}x \ln x = \lim_{x \to 0^+}\dfrac{\ln x}{1/x} = \lim_{x \to 0^+}\dfrac{1/x}{-1/x^2} = \lim_{x \to 0^+} -x = 0$.
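As a side check (an illustration assuming sympy is available, not a replacement for the algebra above), both limits can be confirmed symbolically:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# the hyperbolic-style quotient: limit is 1
print(sp.limit((sp.exp(x) - sp.exp(-x)) / (sp.exp(x) + sp.exp(-x)), x, sp.oo))
# x*log(x) as x -> 0+: limit is 0
print(sp.limit(x * sp.log(x), x, 0, '+'))
```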
In both of these problems, the solution was to use basic algebra instead of just L'Hopital's Rule. The techniques taught in introductory calculus will not solve every limit problem in the world, but they will solve the problems encountered in introductory calculus. The important thing for students is to know many techniques and learn to figure out which ones will work for a given problem. Many students learn L'Hopital's Rule and then forget how to use every other tool. This is why after teaching L'Hopital's Rule, you should throw in a few examples where L'Hopital's Rule fails. This way, they think of L'Hopital's Rule as just another tool instead of magic.
abstract algebra - Why is $n_1 \sqrt{2} +n_2 \sqrt{3} + n_3 \sqrt{5} + n_4 \sqrt{7}$ never zero?
Here $n_i$ are integral numbers, and not all of them are zero.
It is natural to conjecture that similar statement holds for even more prime numbers. Namely,
$$ n_1 \sqrt{2} +n_2 \sqrt{3} + n_3 \sqrt{5} + n_4 \sqrt{7} + n_5 \sqrt{11} +n_6 \sqrt{13} $$ is never zero too.
I am asking because this is used in some numerical algorithm in physics.
integration - Compute $\int\frac{1-x}{(x-2)(x+3)}\,dx$ and $\int\frac{\cos(3x)}{\sin(3x)}\,dx$
Compute $\int\frac{1-x}{(x-2)(x+3)}\,dx$ and $\int\frac{\cos(3x)}{\sin(3x)}\,dx$. I have no idea how to solve these 2 integrals; I've run out of ideas. The first one especially: I can't even start, and I'm not sure how to do it. I've done this for the second one:
I introduced a new variable $t$, $3x = t$, therefore $dx = \frac{dt}{3}$
Inserted into the integral: $\frac13\int\frac{\cos t}{\sin t}\,dt$, which leaves me with $\frac13\int \cot(t)\,dt$, and then it stops for me. I tried per partes (integration by parts), but couldn't come up with anything useful.
How do I solve these 2 integrals? They're driving me nuts.
Answer
Hints : For the first one use partial fractions, noting that
$$
\frac{1-x}{(x-2)(x+3)}=\frac{-1}{5x-10}-\frac{4/5}{x+3}.
$$
For the second one there is an obvious change of variable.
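As a cross-check of these hints (an illustration assuming sympy, not the intended hand computation), both the decomposition and the integrals can be verified symbolically:

```python
import sympy as sp

x = sp.symbols('x')
# partial fractions: -1/(5*(x - 2)) - 4/(5*(x + 3))
print(sp.apart((1 - x) / ((x - 2) * (x + 3))))
print(sp.integrate((1 - x) / ((x - 2) * (x + 3)), x))
# the cotangent integral: log(sin(3*x))/3
print(sp.integrate(sp.cos(3 * x) / sp.sin(3 * x), x))
```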
combinatorics - How does this image prove the identity $1+2+3+4+\cdots + (n-1) = \binom{n}{2}$?
Proof without words:
[image: a triangular array of yellow circles stacked above a bottom row of $n$ blue circles]
How does this image prove the identity $1+2+3+4+\cdots + (n-1) = \binom{n}{2}$?
I found this here; could anybody explain this in a lucid manner?
Answer
This shows that every yellow circle uniquely determines a pair of blue circles and vice versa. The number of yellow ones is the LHS, the number of pairs of blue ones is the RHS. Cute!
What's wrong with this "proof" that Gödel's first incompleteness theorem is wrong?
Edit: I've added an answer myself, based on the other answers and comments.
Here is a very very informal "proof" (sketch) that Gödel's theorem is wrong (or at least that the idea of the proof is wrong) :
Roughly, the proof of Gödel's theorem is as follows: For any decidable and consistent set of axioms $\Phi$ that contain (or imply) the first order Peano's axioms in first order language, we can construct a Gödel sentence $G^\Phi$, such that neither $\Phi\vdash G^\Phi$ nor $\Phi\vdash \neg G^\Phi$, but where we know from an argument in the meta-language that $G^\Phi$ is true. For any such $\Phi$, we will therefore have a counterexample to the completeness of the theory $Th(\Phi)$. Therefore we know that no such $\Phi$ can be complete (where complete means that all first order statements can be proven or disproven from it).
Here is a failed proposal to "circumvent" Gödel's theorem that I have already heard someone make: just define $\Phi_1=\Phi\cup \{G^\Phi\}$, and since we know that $G^\Phi$ is true in arithmetic, we know that $\Phi_1$ is consistent. The problem of course is: we can now formulate a new Gödel sentence $G^{\Phi_1}$, which cannot be proven in $\Phi_1$, even though it is true in standard arithmetic.
Now here is my proposal:
Rather than trying to add individual Gödel sentences to the set of axioms, we simply take the enumeration procedure, such that it enumerates $\phi_i \in \Phi$ for the original set of axioms $\Phi$, and also enumerates all successive Gödel sentences $G^{\Phi}, G^{\Phi_1}, G^{\Phi_2},\ldots$. This is possible, since $\Phi$ is decidable, and decidable sets of finite strings are enumerable, so we can enumerate them successively as $\phi_1, \phi_2, \phi_3,\ldots$, where $\phi_1$ is the first statement of the enumeration of $\Phi$, $\phi_2 = G^\Phi$, $\phi_3$ is the second statement of the enumeration of $\Phi$, etc.
We can then define the set of axioms $\Phi_\infty = \{\phi_1, \phi_2, ...\}$. This will also have a Gödel sentence $G^{\Phi_\infty}$. But what we can simply do, is add this to the enumeration procedure as well. And then the next one, and the next, and so forth. We take this process to infinity, just as we did for $\Phi$, and just keep going.
Every time a Gödel sentence pops up, we simply add it to the enumeration. Now note that: since the set of first order sentences is countable, the set of Gödel sentences is countable as well (since it is a subset of the set of first order sentences). Therefore we can in this procedure described above enumerate all possible Gödel sentences.
The resulting set of sentences forms an enumerable and consistent set of sentences $\Psi$ that contains the original $\Phi$, and additionally contains the Gödel sentences of all possible sets of axioms $\Phi_x$. Therefore the Gödel sentence of $\Psi$ must be in $\Psi$ itself.
Moreover, we can then create a "decidable version" of $\Psi$, by defining $\Psi^*=\{\psi_1, \psi_1 \land \psi_2, \psi_1 \land \psi_2\land \psi_3, ... \}$, for all $\psi_1, \psi_2,... \in \Psi$. We therefore have a consistent and decidable set of first order sentences that are true in standard arithmetic, contain Peano's axioms, and bypass Gödel's proof of incompleteness.
This is obviously a contradiction with Gödel's theorem. So where is my "proof sketch" wrong?
Answer
There are at least two problems here.
First, when you say "we take this process to infinity and just keep going", that is a very informal description, and without spending some work on making it more concrete you have no good reason to expect it can actually be made to work.
Fortunately, such work has in fact been done, and the standard way of making it concrete is to speak about process steps indexed by transfinite ordinal numbers, which I'm going to suppose is what you are proposing.
Then, however, a real problem arises: Gödel's procedure only works when the original theory is recursively axiomatized, that is, it is computable whether a given proposed sentence is one of its axioms or not. In order to do this for one of your intermediate theories, you need to be able to algorithmically describe the process that produced the theory. And there is (vividly speaking, but can be made precise) so far to infinity that there are not enough Turing machines to describe the structure of each of the theories you encounter along the way. So at some point you're going to approach a point where the process that produced the axioms you already have is so complex that the combined theory is not recursively axiomatized anymore, and then the incompleteness theorem stops working.
A second, comparatively minor, problem comes later in your argument:
since the set of first order sentences is countable, the set of Gödel sentences is countable as well (since it is a subset of the set of first order sentences). Therefore we can in this procedure described above enumerate all possible Gödel sentences.
This argument seems to be of the form: "there are a countable infinity of foos in total; here we have a set of countably infinitely many foos; therefore the set contains all of them", which is not valid; consider e.g. the situation where a "foo" is a natural number and the set in question contains all the perfect squares.
(Note also that you don't seem to have defined what a "possible Gödel sentence" means, which you really ought to do before claiming that you have all of them).
Tuesday, 28 January 2014
integration - Integrate $I(t)=\int_0^\infty \left( \frac{\sin tx}{x} \right)^n\,\mathrm d x$
How to integrate the following:
$$
I(t)=\int_0^\infty \left( \frac{\sin tx}{x} \right)^n\,\mathrm d x$$
I tried using the Laplace transform:
\begin{align}
\mathcal{L}\left[I(t)\right]=\int_0^\infty \mathcal L\left[\left( \frac{\sin tx}{x} \right)^n\right]\,\mathrm d x
\end{align}
but I don't know what to do then.
Arithmetic progression (given third term and difference between 5th and 7th term)
Given that the 3rd term of an arithmetic progression (AP) is
16 and the difference between the 5th and the 7th term is 12,
write down the first 7 terms of the AP.
For an AP, the $n^{th}$ term is given by:
$$a_n=a+(n-1)d$$
where $a$ is the first term and $d$ is the common difference
The difference between the $5^{th}$ and the $7^{th}$ terms is $12$, and this difference equals $2d$, so
$$d = 12 / 2 = 6$$
$$a_3=a+2d=16$$
$$a_3=a+2(6)=16$$
$$a_1=(16-2(6))$$
$$a_1=(16-12) = 4$$
Is this the correct method to find the first term?
Answer
Assuming that the 7th term is greater than the 5th term, we have the following:
Note that $T_1, T_3, T_5, T_7$ are also in AP (AP2).
Given that $T_3=16$, and $T_7-T_5=12$ (i.e. common difference for AP2 is $12$), we have $$T_{1,3,5,7}=4,16,28,40$$.
Interpolating (since an AP is linear) we have
$$T_{1,2,3,4,5,6,7}=4,10,16,22,28,34,40$$
radicals - Square root inequality revisited
This is a follow-up question of this one:
Proof of the square root inequality $2\sqrt{n+1}-2\sqrt{n}<\frac{1}{\sqrt{n}}<2\sqrt{n}-2\sqrt{n-1}$
I am interested in the following generalizations of the square root inequality.
Let $\varepsilon,\delta>0.$ Then
$$\sqrt{\varepsilon+\delta}-\sqrt{\varepsilon}<\frac{\delta}{2\sqrt{\varepsilon}}.$$
If additionally $\delta<\varepsilon$, then
$$\frac{\delta}{2\sqrt{\varepsilon}}<\sqrt{\varepsilon}-\sqrt{\varepsilon-\delta}.$$ The proof is similar to the one given as an answer to my previous question.
Moreover, if $\varepsilon,\delta\in\mathbb{C}$ are any complex numbers and $\sqrt{\cdot}$ is a complex root, then
$$\min(|\sqrt{\varepsilon+\delta}-\sqrt{\varepsilon}|,|\sqrt{\varepsilon+\delta}+\sqrt{\varepsilon}|)\leq\min(\frac{|\delta|}{\sqrt{|\varepsilon|}},\sqrt{|\delta|}).$$
I have trouble proving this one, but it should hold (it is not from me). What I have so far is the following: first suppose
$$|\sqrt{\varepsilon+\delta}-\sqrt{\varepsilon}|\leq|\sqrt{\varepsilon+\delta}+\sqrt{\varepsilon}|.$$
Then
$$|\sqrt{\varepsilon+\delta}-\sqrt{\varepsilon}|^2\leq|\delta|,$$
so that
$$|\sqrt{\varepsilon+\delta}-\sqrt{\varepsilon}|\leq\sqrt{|\delta|}.$$
Similarly, when
$$|\sqrt{\varepsilon+\delta}-\sqrt{\varepsilon}|\geq|\sqrt{\varepsilon+\delta}+\sqrt{\varepsilon}|,$$
then
$$|\sqrt{\varepsilon+\delta}+\sqrt{\varepsilon}|\leq\sqrt{|\delta|},$$
so that
$$\min(|\sqrt{\varepsilon+\delta}-\sqrt{\varepsilon}|,|\sqrt{\varepsilon+\delta}+\sqrt{\varepsilon}|)\leq\sqrt{|\delta|}.$$
Moreover,
$$|\sqrt{\varepsilon+\delta}-\sqrt{\varepsilon}|=\frac{|\delta|}{|\sqrt{\varepsilon+\delta}+\sqrt{\varepsilon}|}\leq\ldots$$
and I don't know how to continue. Any help with solving this?
real analysis - Prove that an arbitrary norm is continuous. Is my proof correct?
Let $f: \mathbb{F}^n\rightarrow \mathbb{R}$ be defined by $f(a_1,\cdots, a_n)=\|\sum a_jv_j\|$. Show $f$ is continuous on $\mathbb{F}^n$.
1. $\|\cdot\|$ is an arbitrary norm on $\mathcal{V}$.
Proof:
- Let $u=\sum a_iv_i$ and $v=\sum b_iv_i$.
- Show $\forall \epsilon>0, \forall u\in \mathcal{V}$, there exists $\delta>0$ such that if $\|v-u\|\leq \delta$, then $|f(v)-f(u)|\leq \epsilon$.
- \begin{align}
|f(v)-f(u)|&=|\|v\|-\|u\|| \\
&\leq \|v-u\| \ \ \ \ \text{this is implied by triangle inequality}\\
&=\|\sum (b_j-a_j)v_j\| \\
&\leq \sum \|(b_j-a_j)v_j\| \\
& = \sum|b_j-a_j|\|v_j\|
\end{align} - So, letting $\sum |b_j-a_j|\|v_j\|=\delta$, the above shows that there exists $\delta\geq 0$ such that if $\|v-u\|\leq \delta$, then $|f(v)-f(u)|\leq \epsilon$.
Is my proof of using $\delta-\epsilon$ correct? Where should I say something more?
Answer
This is one of those cases where maybe it's better, and more instructive, to prove the general case first. Then, adapting the proof to your specific function is easy.
So, suppose $v,w\in \mathbb F^{n}$. Then,
$w=v+(w-v)$ so $\Vert w\Vert \leq \Vert v\Vert +\Vert w-v\Vert \Rightarrow\Vert w\Vert-\Vert v\Vert\leq \Vert w-v\Vert$.
Interchanging the roles of $v$ and $w$ gives $\Vert v\Vert-\Vert w\Vert\leq \Vert v-w\Vert=\Vert w-v\Vert$
so in fact
$\left | \Vert w\Vert-\Vert v\Vert\ \right |\leq \Vert w-v\Vert$, which proves that $\Vert \cdot\Vert $ is continuous (with $\delta =\epsilon$).
Monday, 27 January 2014
Basic Probability Die Roll Game
I am not sure if I got the correct answers to these basic probability questions.
You and I play a die rolling game, with a fair die. The die is equally likely to land on any of its $6$ faces. We take turns rolling the die, as follows.
At each round, the player rolling the die wins (and the game stops) if (s)he rolls either "$3$", "$4$","$5$", or "$6$". Otherwise, the next player has a chance to roll.
- Suppose I roll 1st. What is the probability that I win in the 1st round?
- Suppose I roll in the 1st round, and we play the game for at most three rounds. What is the probability that
a. I win?
b. You win?
c. No one wins? (i.e., I roll "$1$" or "$2$" in the 3rd round?)
My answers:
- $4/6$
- a. I think it is $(4/6)^2$. Can someone explain why it isn't $4/6 + 4/6$?
b. I think this is $4/6$.
c. I think it is $(2/6)^3$
Answer
What is the probability that the player who rolls first wins in the first round?
Your answer of $4/6 = 2/3$ is correct.
What is the probability that the player who rolls first wins the game if the game lasts at most three rounds?
The player who rolls first wins the game if he or she wins during the first round, second round, or third round. We know that the player who rolls first has probability $2/3$ of winning in the first round.
For the player who rolls first to win in the second round, both players must roll a 1 or 2 in the first round, then the first player must roll a 3, 4, 5, or 6 in the second round. The probability that this event occurs is
$$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{27}$$
For the first player to win in the third round, both players must roll a 1 or 2 for the first two rounds, then the first player must roll a 3, 4, 5, or 6 in the third round. The probability that this occurs is
$$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{243}$$
Since these three events are mutually exclusive, the probability that the player who rolls first wins the game is
$$\frac{2}{3} + \frac{2}{27} + \frac{2}{243} = \frac{162}{243} + \frac{18}{243} + \frac{2}{243} = \frac{182}{243}$$
The probability that the first player wins cannot be
$$\frac{4}{6} + \frac{4}{6} = \frac{8}{6} = \frac{4}{3} > 1$$
since it is impossible for a probability to exceed $1$.
What is the probability that the player who rolls second wins the game if the game lasts at most three rounds.
The player who rolls second can win in the first round if the first player rolls a 1 or 2, then the second player rolls a 3, 4, 5, or 6. The probability that this event occurs is
$$\frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{9}$$
For the second player to win in the second round, the first and second players must both roll a 1 or 2 in the first round, the first player must roll a 1 or 2 in the second round, then the second player must roll a 3, 4, 5, or 6 in the second round. The probability that this event occurs is
$$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{81}$$
For the second player to win in the third round, both players must roll a 1 or 2 in the first two rounds, the first player must roll a 1 or 2 in the third round, then the second player must roll a 3, 4, 5, or 6 in the third round. The probability that this event occurs is
$$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{729}$$
Since these three events are mutually exclusive, the probability that the second player wins is
$$\frac{2}{9} + \frac{2}{81} + \frac{2}{729} = \frac{162}{729} + \frac{18}{729} + \frac{2}{729} = \frac{182}{729}$$
What is the probability that neither player wins a game that lasts at most three rounds?
For this event to occur, both players must roll a 1 or 2 in all three rounds. The probability that this event occurs is
$$\left(\frac{2}{6}\right)^6 = \left(\frac{1}{3}\right)^6 = \frac{1}{729}$$
Check: There are three possibilities: The first player wins, the second player wins, or neither player wins. Therefore, the probabilities of these three events should add up to $1$. Observe that
$$\frac{182}{243} + \frac{182}{729} + \frac{1}{729} = \frac{546}{729} + \frac{182}{729} + \frac{1}{729} = 1$$
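A Monte Carlo simulation (an illustration only, not part of the derivation) agrees with the three probabilities above:

```python
import random

trials = 1_000_000
wins = {"first": 0, "second": 0, "none": 0}
for _ in range(trials):
    winner = "none"
    for _round in range(3):
        for player in ("first", "second"):
            if random.randint(1, 6) >= 3:  # 3, 4, 5, or 6 wins
                winner = player
                break
        if winner != "none":
            break
    wins[winner] += 1

print({k: v / trials for k, v in wins.items()})
print(182 / 243, 182 / 729, 1 / 729)  # exact values derived above
```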
algebra precalculus - What is the next step in the proof? (Mathematical Induction) $\left(x^{n}+1\right)<\left(x+1\right)^{n}$
I have to prove this preposition by mathematical induction:
$$\left(x^n+1\right)<\left(x+1\right)^n \quad \forall n\geq 2 \quad \text{and}\quad x>0,\,\, n \in \mathbb{N}$$
I started the proof with $n=2$:
$\left(x^{2}+1\right)<\left(x+1\right)^{2}$
$x^{2}+1<x^{2}+2x+1$
We see that
$x^{2}+1-x^{2}-1<2x$
$0<2x$
Then
$x>0$
And this one carries out for $n=2$
Now for $\quad n=k \quad$ (Hypothesis)
$\left(x^{k}+1\right)<\left(x+1\right)^{k}$
We have
$\displaystyle x^{k}<\left(x+1\right)^{k}-1\ldots \quad (1)$
Then, we must prove for $\quad n= k+1 \quad$ (Thesis):
$x^{k+1}+1<\left(x+1\right)^{k+1}$
We develop before expression as:
$x^{k+1}<\left(x+1\right)^{k+1}-1\ldots \quad (2)$
According to the steps of mathematical induction, the next step would be to use the hypothesis $(1)$ to prove the thesis $(2)$. It's here that I hesitate about whether what I am going to write next is correct:
First way:
We multiply hypothesis $(1)$ by $\left(x+1\right)$ and we have:
$x^{k}\left(x+1\right)<\left[\left(x+1\right)^{k}-1\right]\left(x+1\right)$
$x^{k}\left(x+1\right)<\left(x+1\right)^{k+1}-\left(x+1\right)$
Last expression divided by $\left(x+1\right)$ we have again the expression $(1)$:
$\displaystyle \frac{x^{k}\left(x+1\right)<\left(x+1\right)^{k+1}-\left(x+1\right)}{\left(x+1\right)}$
$x^{k}<\left(x+1\right)^{k}-1$
Second way:
If we multiply $(1)$ by $x$ we have:
$x\cdot x^{k}<\left[\left(x+1\right)^{k}-1\right]x$
$x^{k+1}<x\left(x+1\right)^{k}-x$
And if we again divide the last expression by $x$, we arrive at the same result
$\displaystyle \frac{x^{k+1}<x\left(x+1\right)^{k}-x}{x}$
$x^{k}<\left(x+1\right)^{k}-1$
I cannot find another way to carry out this proof. Another way to solve the problem would be to use Newton's binomial theorem, but the point of the exercise lies in the technique of mathematical induction. If someone can help me, I will be very grateful to him/her!
Thanks
-Víctor Hugo-
Answer
Suppose that $(1+x)^n>1+x^n$ for some $n\ge 2$. Then
$$
(1+x)^{n+1}=(1+x)^n(1+x)>(1+x^n)(1+x)=1+x+x^n+x^{n+1}>1+x^{n+1}
$$
since $x>0$, where in the first inequality we used the induction hypothesis.
Probability of outcome from a fair die
A fair die is rolled 5 times:
a) What is probability that an even number is cast at least once?
b) What is probability that a six is cast each time?
Answer
The first probability is $1-(1/2)^5$
The second is $(1/6)^5$
a)
Let A:= "at least one even number is rolled".
Then P(A) = 1 - P(only odd numbers are rolled) = $1- (1/2)^5$
b) P(roll a six in one try) = 1/6
Therefore P(roll a six 5 times in a row) = $(1/6)^5$, because each roll of the die is independent of the others.
why isn't $1/\infty$ indeterminate like the other indeterminate forms?
$1/\infty$ tends to 0.
It doesn't satisfy the inverse process of multiplication and division, i.e.
$\infty \cdot 0$ is undefined or indeterminate.
So why is $1/\infty$ not indeterminate like the other indeterminate forms $0/0$, $\infty/\infty,\ldots$?
Thanks.
number theory - Minimum or maximum ratio for attacking RSA
Let’s have $N_1=p^a$, where $p$ is a prime number and $a$ a natural number. If we want to find all numbers from $1$ to $N_1$ which are not divisible by $p$, we have to subtract the count of all multiples of $p$ from $N_1$. The multiples are $1p,2p,\ldots,p^{a-1}p$. From this we conclude that the count of numbers not divisible by $p$ is $p^a-p^{a-1}$, or $N_1-N_1/p$. Let’s have the product of two prime powers $p^a q^b=N_2$, where $q$ is prime and $b$ a natural number. Now we have to find which numbers from $1$ to $N_2$ are not divisible by $p$ or $q$. In order to do so we have to subtract from $N_2$ the counts of multiples, $N_2/p$ and $N_2/q$. Now we have $(N_2-N_2/p)-N_2/q$, but some numbers are multiples of $pq$ and have been counted twice. We can correct for these multiples in $N_2/p$ or $N_2/q$; let’s correct $N_2/q$, giving $N_2/q-N_2/pq$. The last result has to be subtracted from the first term $(N_2-N_2/p)$. So we have $(N_2-N_2/p)-(N_2/q-N_2/pq)=F(N_2)$, where the number $F(N_2)$ counts the numbers from $1$ to $N_2$ which are not divisible by $p$ or $q$.
We obtain the second term by applying the ABROZ technique. This works as follows: we multiply the first term by $1/q$ and obtain $(N_2-N_2/p)\cdot\frac{1}{q}=N_2/q-N_2/pq$. Let’s have $N_3=p^a q^b f^c$, where $f$ is a prime number and $c$ a natural number. We know how to get the first two terms and now have to obtain the third term, which contains the multiples of $f$.
We again apply the technique, multiplying the two terms by $1/f$.
So we have $\left[(N_3-N_3/p)-(N_3/q-N_3/pq)\right]\cdot\frac{1}{f}=N_3/f-N_3/pf-N_3/qf+N_3/pqf$.
To find $F(N_3)$ we subtract the third term from the first two terms.
So we have: $F(N_3)=(N_3-N_3/p)-(N_3/q-N_3/pq)-(N_3/f-N_3/pf-N_3/qf+N_3/pqf)$.
We can continue this process indefinitely. The last equation can also be written as $F(N_3)=N_3[1-1/p-1/q+1/pq-1/f+1/pf+1/qf-1/pqf]$. The result inside the bracket can be obtained by multiplication of the three primes as follows:
$(1-1/p)(1-1/q)(1-1/f)=[1-1/p-1/q+1/pq-1/f+1/pf+1/qf-1/pqf]$.
So we have $F(N_3)=N_3(1-1/p)(1-1/q)(1-1/f)$.
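The quantity $F(N)$ derived here is Euler's totient. A small illustration (my addition, assuming sympy is available) of the formula $F(N) = N\prod_p (1-1/p)$ over the distinct prime factors $p$ of $N$, and of the ratio $F(N)/N$ for a few shapes of $N$:

```python
from fractions import Fraction
from sympy import totient, primefactors

for N in [2**10, 3 * 5 * 7, 101 * 103]:
    ratio = Fraction(int(totient(N)), N)  # F(N)/N
    print(N, primefactors(N), int(totient(N)), float(ratio))
```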
We can make keys when $F(N)/N$ is minimal or maximal.
My first question is: when is it easier to attack RSA, when $F(N)/N$ is minimum or maximum? Secondly, in what other areas of mathematics can we apply the maximum and minimum ratios?
Answer
$F(N)$ doesn't have a maximum (for $N\gt1$), and it doesn't have a minimum. It can be made arbitrarily close to, but not equal to, zero, and it can be made arbitrarily close to, but not equal to, one. As Yuval suggests, RSA implementations are usually taken with $N=pq$ with $p$ and $q$ large primes, and this will lead to $F(N)$ close to one.
Sunday, 26 January 2014
Convert complex number to polar coordinates
Problem
Compute when $x \in \mathbb{C}$:
$$ x^2-4ix-5-i=0 $$
and express output in polar coordinates
Attempt to solve
Solving this equation with quadratic formula:
$$ x=\frac{4i \pm \sqrt{(-4i)^2-4\cdot (-5-i)}}{2} $$
$$x= \frac{4i \pm \sqrt{4(i+1)}}{2} $$
$$ x = \frac{4i \pm 2\sqrt{i+1}}{2} $$
$$ x = 2i \pm \sqrt{i+1} $$
I can transform Cartesian complex numbers to polar form with Euler's formula:
when $z \in \mathbb{C}$
$$ z=re^{i\theta} $$
then:
$$ r=|z|=\sqrt{(\text{Re(z)})^2+(\text{Im(z)})^2} $$
$$ \text{arg}(z)=\theta = \arctan{\frac{\text{Im}(z)}{\text{Re}(z)}} $$
Plugging in values after this computation would give us our complex number in $(r,\theta)$ polar coordinates from $(\text{Re},\text{Im})$ Cartesian coordinates.
The only problem is how to convert a complex number of the form
$$ z=2i+\sqrt{i+1} $$
to polar form, since I don't know how to separate this into imaginary and real parts. How do you compute $\text{Re}(z)$ and $\text{Im}(z)$?
Answer
Let $a,b\in\mathbb{R}$ so that $$\sqrt{i+1} = a+bi$$
$$ i+1 = a^2 -b^2 +2abi $$
Equating real and imaginary parts, we have
$$2ab = 1$$
$$a^2 -b^2 = 1$$
Now we solve for $(a,b)$.
$$
\begin{align*}
b &= \frac{1}{2a}\\\\
\implies \,\,\, a^2 - \left(\frac{1}{2a}\right)^2 &= 1 \\\\
a^2 &= 1 + \frac{1}{4a^2}\\\\
4a^4 &= 4a^2 + 1\\\\
4a^4 - 4a^2 -1 &= 0 \\\\
\end{align*}
$$
This is a quadratic in $a^2$ (it's also a quadratic in $2a^2$, if you prefer!), so we use the quadratic formula:
$$a^2 = \frac{4 \pm \sqrt{16-4(4)(-1)}}{2(4)}$$
$$a^2 = \frac{1 \pm \sqrt{2}}{2}$$
Here we note that $a$ is real, so $a^2>0$, and we discard the negative case:
$$a^2 = \frac{1 + \sqrt{2}}{2}$$
$$a = \pm \sqrt{\frac{1 + \sqrt{2}}{2}}$$
$$ b = \frac{1}{2a} = \pm \sqrt{\frac{\sqrt{2}-1}{2}}$$
This gives what you can call the principal root:
$$\sqrt{i+1} = \sqrt{\frac{1 + \sqrt{2}}{2}} + i\sqrt{\frac{\sqrt{2}-1}{2}} $$
As well as the negation of it:
$$-\sqrt{i+1} = -\sqrt{\frac{1 + \sqrt{2}}{2}} + i\left(-\sqrt{\frac{\sqrt{2}-1}{2}}\right) $$
Finally, substituting either of these into your expression $$z=2i \pm \sqrt{i+1}$$ will give you $\text{Re}(z)$ and $\text{Im}(z)$.
At that point, as you noted in your question, conversion to polar coordinates is straightforward.
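A numeric cross-check (an illustration with Python's cmath, not part of the derivation) of the two roots and their polar forms:

```python
import cmath

for z in (2j + cmath.sqrt(1 + 1j), 2j - cmath.sqrt(1 + 1j)):
    print(z, z**2 - 4j * z - 5 - 1j)  # residual of the quadratic, ~0
    print(cmath.polar(z))             # (r, theta) = (|z|, arg z)
```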
calculus - Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$
I'm supposed to calculate:
$$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$
By using W|A, I may guess that the limit is $\frac{1}{2}$, which is a pretty interesting and nice result. I wonder in which ways we may approach it.
Answer
Edited. I justified the application of the dominated convergence theorem.
By a simple calculation,
$$ \begin{align*}
e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!}
&= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k (n-k)! \\
(1) \cdots \quad &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k \int_{0}^{\infty} t^{n-k}e^{-t} \, dt\\
&= \frac{e^{-n}}{n!} \int_{0}^{\infty} (n+t)^{n}e^{-t} \, dt \\
(2) \cdots \quad &= \frac{1}{n!} \int_{n}^{\infty} t^{n}e^{-t} \, dt \\
&= 1 - \frac{1}{n!} \int_{0}^{n} t^{n}e^{-t} \, dt \\
(3) \cdots \quad &= 1 - \frac{\sqrt{n} (n/e)^n}{n!} \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du.
\end{align*}$$
We remark that
- In $\text{(1)}$, we utilized the famous formula $ n! = \int_{0}^{\infty} t^n e^{-t} \, dt$.
- In $\text{(2)}$, the substitution $t + n \mapsto t$ is used.
- In $\text{(3)}$, the substitution $t = n - \sqrt{n}u$ is used.
Then in view of the Stirling's formula, it suffices to show that
$$\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du \xrightarrow{n\to\infty} \sqrt{\frac{\pi}{2}}.$$
The idea is to introduce the function
$$ g_n (u) = \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \mathbf{1}_{(0, \sqrt{n})}(u) $$
and apply pointwise limit to the integrand as $n \to \infty$. This is justified once we find a dominating function for the sequence $(g_n)$. But notice that if $0 < u < \sqrt{n}$, then
$$ \log g_n (u)
= n \log \left(1 - \frac{u}{\sqrt{n}} \right) + \sqrt{n} u
= -\frac{u^2}{2} - \frac{u^3}{3\sqrt{n}} - \frac{u^4}{4n} - \cdots \leq -\frac{u^2}{2}. $$
From this we have $g_n (u) \leq e^{-u^2 /2}$ for all $n$ and $g_n (u) \to e^{-u^2 / 2}$ as $n \to \infty$. Therefore by dominated convergence theorem and Gaussian integral,
$$ \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du = \int_{0}^{\infty} g_n (u) \, du \xrightarrow{n\to\infty} \int_{0}^{\infty} e^{-u^2/2} \, du = \sqrt{\frac{\pi}{2}}. $$
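A numerical check of the limit (an illustration assuming mpmath, since $e^{-n}$ underflows ordinary floats for large $n$):

```python
from mpmath import mp, mpf, exp

mp.dps = 30
for n in [10, 100, 1000, 10000]:
    term, s = mpf(1), mpf(0)      # term = n^k / k!, starting at k = 0
    for k in range(1, n + 1):
        s += term
        term = term * n / k
    s += term                     # include the k = n term
    print(n, s * exp(-n))         # approaches 1/2
```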
real analysis - Find generating fraction of number
This was given to me as a practice problem as preparation for the final exam. The only thing is I have no idea how to do the question, as we did not go over generating fractions in class. I googled what a generating fraction was, and I found some information about generating functions, but nothing too helpful.
The hint says to find the sum of the series. But I'm not sure how to find the series in the first place.
Any help would be much appreciated, thank you.
complex analysis - Convergence of Laurent Series.
Let $f(z)$ be analytic for $|z|>r$ and let it be bounded $|f(z)|\leq M, M>0$ wherever it is analytic. Show that the coefficients of the Laurent Series of $f(z)$ are $0$ for $j\geq 1$.
I have found two approaches to solve this. I'm not sure about this one:
The positive part of the Laurent Series:
$$\sum_{j=0}^\infty a_j(z-z_0)^j$$
converges when $|z-z_0|<R$, where $R$ is the radius of convergence.
real analysis - Find the sum $\sum_{j=0}^{n}j$
Find the sum $\sum_{j=0}^{n}j$
for all $n=0,1,2,3,\dots$.
How do I find/evaluate this sum?
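One standard route (a sketch added here): pair the term $j$ with the term $n-j$; each pair sums to $n$, which gives
$$\sum_{j=0}^{n} j = \frac{n(n+1)}{2}.$$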
Saturday, 25 January 2014
inequality - Proving $\sum_{j=1}^n \frac{1}{\sqrt{j}} > \sqrt{n}$ with induction
Problem: Prove with induction that \begin{align*} \sum_{j=1}^n \frac{1}{\sqrt{j}} > \sqrt{n} \end{align*} for every natural number $n \geq 2$.
Attempt at proof: Basic step: For $n = 2$ we have $1 + \frac{1}{\sqrt{2}} > \sqrt{2}$ which is correct.
Induction step: Suppose the assertion holds for some natural number $n$, with $n \geq 2$. Then we now want to prove that it also holds for $n +1$, i.e. that \begin{align*} \sum_{j=1}^{n+1} \frac{1}{\sqrt{j}} > \sqrt{n+1} \end{align*}
Now we have that \begin{align*} \sum_{j=1}^{n+1} \frac{1}{\sqrt{j}} = \sum_{j=1}^n \frac{1}{\sqrt{j}} + \frac{1}{\sqrt{n+1}} > \sqrt{n} + \frac{1}{\sqrt{n+1}} \end{align*} or \begin{align*} \sum_{j=1}^n \frac{1}{\sqrt{j}} + \frac{1}{\sqrt{n+1}} > \frac{\sqrt{n} \sqrt{n+1} + 1}{\sqrt{n+1}} \end{align*}
Now I'm stuck, and I don't know how to get $\sqrt{n+1}$ on the right hand side. Help would be appreciated.
Answer
As Daniel Fischer points out in the comments, since you have
$$
\sum_{j=1}^{n+1} \frac{1}{\sqrt{j}} > \sqrt{n} + \frac{1}{\sqrt{n+1}}
$$
it is enough to show
$$
\sqrt{n} + \frac{1}{\sqrt{n+1}} \geq \sqrt{n+1}
$$
or equivalently $
\frac{1}{\sqrt{n+1}} \geq \sqrt{n+1} - \sqrt{n}
$. A way to show this final inequality is to recall the identity $(a-b)(a+b) = a^2-b^2$ and multiply both sides by $\sqrt{n+1} + \sqrt{n}$, i.e. to use the identity with $a=\sqrt{n+1}$ and $b=\sqrt{n}$: what happens to the LHS? To the RHS?
probability - Roll a die. Are the chances of getting two same consecutive numbers the same as getting any specific random sequence?
Let's imagine a die roll.
You roll the die n times.
Example: I have a $6$ sided die. Assuming the distribution of the die is perfect, the chances of getting any single number are $1/6$. The chances of getting two consecutive same numbers (by rule of product) are $1/36$. The chances of getting any two specific numbers (let's assume a random pair like $(2,3)$) are $1/36$ also.
Let's assume that pairs of type $(n,m)$ are equivalent to pairs of type $(m,n)$ - we count them as equal. So when we roll the die n times, is the chance of some pair $(m,n)$ appearing greater than the chance of a pair $(n,n)$ appearing? That is to say is the chance of getting $(2,3)$ & $(3,2)$ in a row bigger than just the chance of getting $(4,4)$ & $(4,4)$ in a row?
Answer
The probabilities of outcomes of the die do not depend on which outcomes you consider
to be equivalent. The die rolls and stops rolling as it always does.
When we were paying attention to the sequence in which the numbers appeared, there
was just a $\frac{1}{36}$ chance of getting $(4,4),$
a $\frac{1}{36}$ chance of getting $(2,3),$
and a $\frac{1}{36}$ chance of getting $(3,2).$
When we only consider events distinct based on the number of times each face of
the die has occurred, there is still only a $\frac{1}{36}$ chance
that face $4$ occurred twice in two rolls,
and there is still a $\frac{1}{18}$ chance that $2$ and $3$ each occurred once.
Compare this to what happens if we only write down the sum of the numbers that occur,
not the actual numbers themselves. The probability that the total is $5$ is $\frac 19,$
but the probability that the total is $8$ is $\frac{5}{36},$ which is greater.
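An exhaustive enumeration of the $36$ ordered outcomes (an illustration, not part of the answer) confirms the probabilities quoted above:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))
p = lambda event: Fraction(sum(1 for a, b in rolls if event(a, b)), len(rolls))

print(p(lambda a, b: {a, b} == {2, 3}))  # 1/18: a 2 and a 3 in either order
print(p(lambda a, b: (a, b) == (4, 4)))  # 1/36: two fours
print(p(lambda a, b: a + b == 5))        # 1/9
print(p(lambda a, b: a + b == 8))        # 5/36
```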
algebra precalculus - Best way to simplify a polynomial fraction divided by a polynomial fraction as completely as possible
I've been trying for the past few days to complete this question from a review booklet before I start university:
Simplify as completely as possible:
$$\frac{5x^2 -9x -2}{30x^3 + 6x^2} \div \frac{x^4 -3x^2 -4}{2x^8 +6x^7 + 4x^6}$$
However, I've only gotten as far as this answer below:
$$\frac{x -1}{6x^2} \div \frac{(x^2 +1)(x^2 -4)}{(2x^4 +4x^3)(x^4 + x^3)}$$
I can't figure out how to simplify it further. What is the best / a good way to approach such a question that consists of a polynomial fraction divided by a polynomial fraction?
Is it generally a good idea to factor each fraction first then multiply them like I attempted above, or is it better to multiply them without factoring then try to simplify one big fraction?
Answer
\begin{align}
&\;\frac{ 5x^2 -9x -2 }{ 30x^3 + 6x^2 } \div \frac{ x^4 -3x^2 -4}{ 2x^8 +6x^7 + 4x^6 }\\
=&\;\frac{(x-2)(5x+1) }{ 6x^2(5x+1) } \times \frac{ 2x^6(x+1)(x+2)}{(x-2)(x+2)(x^2+1)}\\
=&\; \frac{x^4(x+1)}{3(x^2+1)}
\end{align}
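A quick symbolic confirmation (an illustration assuming sympy) of this simplification:

```python
import sympy as sp

x = sp.symbols('x')
expr = ((5*x**2 - 9*x - 2) / (30*x**3 + 6*x**2)) \
     / ((x**4 - 3*x**2 - 4) / (2*x**8 + 6*x**7 + 4*x**6))
print(sp.factor(sp.cancel(expr)))  # x**4*(x + 1)/(3*(x**2 + 1))
```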
real analysis - Continuity in dense set
Prove that if $f$ is continuous and $f(x)=0$ for all numbers $x$ in a dense set $A$, then $f(x)=0$, for all $x$ .
Proof. Suppose that $f(x) \neq 0$ for some $x$. As $f$ is continuous, there is $\delta>0$ such that $f$ is nonzero on $(x-\delta,x+\delta)$. Given that the set $A$ is dense, there is $\alpha \in (x-\delta,x+\delta)$ with $\alpha \in A$, and then $f(\alpha)=0$; but this is a contradiction, since $\alpha \in (x-\delta,x+\delta)$. Hence $f(x)=0$.
I'm not sure if it's necessary to use the case in which $f(x)<0$ and $f(x)>0$ .
Other Proof:
Given that $A$ is a dense set, there are $x_{n}\in A$ such that $x_{n}\rightarrow x$; then, since $f$ is continuous, $f(x_{n})\rightarrow f(x)$. Given that $x_{n} \in A$, we have $f(x_{n})=0$, thus $f(x)=0$.
elementary number theory - Proving that there does not exist an infinite descending sequence of naturals using minimal counterexample
This might be a long-winded way of proving something so obvious, but I want to have it checked if it holds up.
Claim: There does not exist a sequence $\{a_{n}\}_{n \in \mathbb{N}}$ of natural numbers such that $a_{n+1} < a_{n}$ for every $n$.
Proof: This amounts to showing that the set $M = \{ m \in \mathbb{N} : m \leq a_{0}\}$, where $a_{0}$ is the first member from any sequence in infinite descent, is finite. Suppose for the sake of contradiction that there exists an $a_{0}$ such that the set $M$ is infinite. We suppose further that $a_{0}$ is minimal. Clearly $a_{0} \neq 0$, for if it were, the set $M = \{ m \in \mathbb{N} : m \leq 0\}$ = $\{0\}$ is finite.
Consider the set $N = M - \{n \in \mathbb{N}: a_{1} < n \leq a_{0}\}$, which is just the set $M$ with the finitely many naturals deleted. From elementary set theory, $N$ must be infinite.
But $N = \{ n \in \mathbb{N} : n \leq a_{1}\}$, and we can suppose that $a_{1}$ is the first term in a natural number sequence in infinite descent. Thus we have found another infinite set with the desired property, but with $a_{1} < a_{0}$. This contradicts the minimality of $a_{0}$.
Answer
This amounts to showing that the set $M = \{ m \in \mathbb{N} : m \leq a_{0}\}$, where $a_{0}$ is the first member from any sequence in infinite descent, is finite.
This first line of your 'proof' is invalid. You are using merely your intuition to claim that the non-existence of an infinite descending chain of natural numbers follows from the finiteness of some set that you specified. This is circular in this case; proving the equivalence amounts to proving something of roughly the same strength as the original desired theorem.
Instead what you actually need is:
Let $S = \{ n : n \in \mathbb{N} \land \text{there is a strictly decreasing sequence from $\mathbb{N}$ starting with $n$ } \}$.
If $S$ is non-empty then let $m = \min(S)$ and (using your other ideas) prove that there is a strictly decreasing sequence from $\mathbb{N}$ starting with something less than $m$, which contradicts the definition of $m$.
Therefore $S$ is empty and you are done.
complex numbers - $1/i=i$. I must be wrong but why?
$$\frac{1}{i} = \frac{1}{\sqrt{-1}} = \frac{\sqrt{1}}{\sqrt{-1}} = \sqrt{\frac{1}{-1}} = \sqrt{-1} = i$$
I know this is wrong, but why? I often see people making simplifications such as $\frac{\sqrt{2}}{2} = \frac{1}{\sqrt{2}}$, and I would calculate such a simplification in the manner shown above, namely
$$\frac{\sqrt{2}}{2} = \frac{\sqrt{2}}{\sqrt{4}} = \sqrt{\frac{2}{4}} = \frac{1}{\sqrt{2}}$$
Answer
What you are doing is a version of
$$
-1=i^2=\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)}=\sqrt1=1.
$$
It simply shows that for non-positive numbers, it is not always true that $\sqrt{ab}=\sqrt{a}\sqrt{b}$.
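A one-line numeric check (an illustration in Python) that $1/i$ equals $-i$, not $i$:

```python
print(1 / 1j)  # (-0-1j), i.e. -i
```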
elementary set theory - How to Prove $\mathbb R\times \mathbb R \sim \mathbb R$?
How to prove $\mathbb R\times \mathbb R \sim \mathbb R$?
I know you have to split the problem up into two claims, one for each direction, to prove that there is a bijection, but I don't know how to go much further than that...
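One standard approach (a sketch added here, not a full proof; one must handle the usual caveat about decimal expansions ending in repeating $9$s) is to interleave decimal digits,
$$(0.a_1a_2a_3\ldots,\ 0.b_1b_2b_3\ldots)\mapsto 0.a_1b_1a_2b_2a_3b_3\ldots,$$
giving an injection $(0,1)\times(0,1)\to(0,1)$; with the obvious injection in the other direction, Schröder–Bernstein gives $(0,1)\times(0,1)\sim(0,1)$, and $(0,1)\sim\mathbb R$ via, e.g., $x\mapsto\tan\left(\pi x-\frac{\pi}{2}\right)$.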
Tough inverse Laplace transform
I know what the solution to this inverse Laplace transform is, I just have NO idea how to get there.
$$\mathcal{L}^{-1}\left(\frac{16s}{\left(s^2+4\right)^2}\right)$$
Basically, my question is what modification do I have to do to the equation above?
Answer
Using a table, note the form:
$$f(t) = t\sin(at)$$
$$F(s) = \frac{2as}{(s^2+a^2)^2}$$
Using this:
$$f(t) = \mathcal{L}^{-1}\left(\frac{16s}{\left(s^2+4\right)^2}\right) = \mathcal{L}^{-1}\left(4\cdot\frac{2\cdot 2\cdot s}{\left(s^2+2^2\right)^2}\right) =4t\sin(2t)$$
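As a cross-check (an illustration assuming sympy, not the table-lookup method itself):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = 16 * s / (s**2 + 4)**2
# expect 4*t*sin(2*t) (sympy may attach a Heaviside(t) factor)
print(sp.inverse_laplace_transform(F, s, t))
```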
elementary number theory - How do I compute $a^b \bmod c$ by hand?
How do I efficiently compute $a^b\,\bmod c$:
- When $b$ is huge, for instance $5^{844325}\,\bmod 21$?
- When $b$ is less than $c$ but it would still be a lot of work to multiply $a$ by itself $b$ times, for instance $5^{69}\,\bmod 101$?
- When $(a,c) \neq 1$, for instance $6^{103}\,\bmod 14$?
Are there any other tricks for evaluating exponents in modular arithmetic?
This is being asked in an effort to cut down on duplicates, see here: Coping with *abstract* duplicate questions.
and here: List of Generalizations of Common Questions
Answer
Wikipage on modular arithmetic is not bad.
When $b$ is huge, and $a$ and $c$ are coprime, Euler's theorem applies:
$$
a^b \equiv a^{b \, \bmod \, \phi(c)} \, \bmod c
$$
For the example at hand, $\phi(21) = \phi(3) \times \phi(7) = 2 \times 6 = 12$, and $844325 \bmod 12 = 5$, so $5^5 = 5 \times 25^2 \equiv 5 \times 4^2 = 80 \equiv 17 \bmod 21$.

When $a$ and $c$ are coprime, but $0 < b < \phi(c)$, repeated squaring works well:
$$
\begin{eqnarray}
5^4 \equiv 5 \times 5^3 \equiv 5 \times 24 \equiv 19 &\pmod{101}\\
19^4 \equiv (19^2)^2 \equiv 58^2 \equiv (-43)^2 \equiv 1849 \equiv 31 &\pmod{101} \\
31^4 \equiv (31^2)^2 \equiv (961)^2 \equiv 52^2 \equiv 2704 \equiv 78 &\pmod{101} \\
5^{69} \equiv 5 \times 5^4 \times ((5^4)^4)^4 \equiv 5 \times 19 \times 78 \equiv 5 \times 19 \times (-23)\\
\equiv 19 \times (-14) \equiv -266 \equiv 37 & \pmod{101}
\end{eqnarray}
$$

When $a$ and $c$ are not coprime, let $g = \gcd(a,c)$. Let $a = g \times d$ and $c = g \times f$; then, assuming $b > 1$:
$$
a^b \bmod c = g^b \times d^b \bmod (g \times f) = ( g \times (g^{b-1} d^b \bmod f) ) \bmod c
$$
In the example given, $\gcd(6,14) = 2$, so we need $2^{102} \times 3^{103} \bmod 7$. Using Euler's theorem with $\phi(7) = 6$: since $102 \equiv 0 \bmod 6$ and $103 \equiv 1 \bmod 6$, we get $2^{102} \times 3^{103} \equiv 3 \bmod 7$, so $6^{103} \equiv (2 \times 3) \equiv 6 \bmod 14$.
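All three examples can be confirmed with Python's built-in three-argument pow, which performs modular exponentiation (a check, not the hand method this question is about):

```python
print(pow(5, 844325, 21))  # 17
print(pow(5, 69, 101))     # 37
print(pow(6, 103, 14))     # 6
```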
calculus - Is this an ordinary differential equation?
If a differential equation contains only ordinary derivatives of one or more functions with respect to a single independent variable it is said to be an ordinary differential equation (ODE).
If different functions are differentiated with respect to different independent variables, but each function is only differentiated with respect to only one independent variable, is the equation ordinary?
That was a mouthful. For example, for this equation: $$\frac {dy}{dx} + \frac{dz}{dw} - 12y = 0$$
There are multiple independent variables ($x$ and $w$) but each dependent variable is only differentiated with respect to one variable (as opposed to say $y$ being differentiated with respect to $x$ and $w$). So would this be considered ordinary?
I know equations like this:
$$\frac {dy}{dx} + \frac{dz}{dw} - \frac{dy}{dw} +12y = 0$$
aren't ordinary, because the one dependent variable ($y$ in this case) is being differentiated with respect to multiple independent variables ($x$ and $w$).
Answer
A partial differential equation is one for which there are several independent variables. In contrast, an ordinary differential equation has just one independent variable. For example, see this discussion of PDEs or see this discussion of ODEs.
Often, in the standard examples of PDEs we separate solutions of several variables into products of functions of one variable. The one-variable functions are solutions to an ODE which is derived from the given PDE. This technique is known as separation of variables.
Friday, 24 January 2014
calculus - Showing a cubic equation has at most one real root or three real roots
Suppose we have $x^3 + \alpha x + \beta = 0$
Problem:
the cubic equation has one real root if $\alpha > 0$ and three real roots if $4 \alpha^3 + 27 \beta^2 < 0 $.
Try:
Let $f(x) = x^3 + \alpha x + \beta$. One has that $f'(x) = 3 x^2 + \alpha $. We have critical points when
$$ 3x^2 + \alpha = 0 \iff x^2 = - \frac{ \alpha }{3} $$
Clearly, if $\alpha > 0$, then we don't have critical points and $f'(x) > 0$, so $f$ is always increasing. Since $\lim_{x \to \infty} f(x) = \infty$ and $\lim_{x \to -\infty} f(x) = - \infty$, there must be some $c$ such that $f(c) = 0$, so we have exactly one real root.
Now, if $\alpha < 0$, then we have critical points $x = \pm \sqrt{ -\alpha/3 }$, but here I am stuck. Any help would be greatly appreciated.
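As a side check (an illustration assuming sympy, not a solution to the remaining case), the criterion quoted in the problem is exactly the sign of the cubic's discriminant:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
# discriminant of x^3 + a*x + b; it is positive exactly when
# 4*a**3 + 27*b**2 < 0, the three-real-roots condition above
print(sp.discriminant(x**3 + a*x + b, x))  # -4*a**3 - 27*b**2
```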
convergence divergence - Computing terms of a sequence and proving it's convergent
Let $c_n$ be the sequence defined by
$$c_n =1 + \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} + \cdots + \frac{1}{\sqrt{n}} -2\sqrt n$$
a) Compute $c_1$, $c_2$, and $c_3$.
b)Prove that $c_n$ is a convergent sequence.
My attempt was:
a) $c_1$ = $1 + \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} + \cdots + \frac{1}{\sqrt{1}} -2\sqrt 1$ = $1 + \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} + \cdots + 1 - 2$ = $\frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} + \cdots$
and same thing for $c_2$ and $c_3$ by replacing n by 2 and 3 respectively. Concerning part b, I don't know how it's done... any help?
Answer
We may write
$$
c_n =\sum_{k=1}^n f(k) -\int_0^n f(x)\mathrm dx
$$ where $f(x) = \frac{1}{\sqrt{x}},\ x> 0$. Note that since $f$ is decreasing, we have for every $k\ge 2$,
$$
f(k-1)\ge \int_{k-1}^k f(x)\mathrm dx \ge f(k),
$$ which implies
$$
\sum_{k=1}^{n-1} f(k)\ge \int_1^n f(x)\mathrm dx \ge \sum_{k=2}^{n} f(k)
$$ by summing over $2\le k\le n$. This in turn implies that
$$
c_n =\sum_{k=1}^{n-1} f(k)-\int_1^n f(x)\mathrm dx +\frac{1}{\sqrt{n}}-2\ge -2
$$ and $$
c_n-c_{n+1} = \int_n^{n+1} f(x)\mathrm dx -f(n+1)\ge 0
$$ for every $n\ge 1$. As a result, the bounded, decreasing sequence $(c_n)$ has a finite limit $\lim\limits_{n\to\infty} c_n$ by the monotone convergence theorem.
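A numerical illustration (not part of the proof) that $(c_n)$ decreases toward a finite limit, approaching roughly $-1.46$:

```python
from math import sqrt

s = 0.0
for n in range(1, 10**6 + 1):
    s += 1 / sqrt(n)
    if n in (10, 1000, 10**6):
        print(n, s - 2 * sqrt(n))  # c_n
```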
elementary number theory - Proof of irrationality of square roots without the fundamental theorem of arithmetic
Here is an elementary proof (adapted from Hardy's A Course of Pure Mathematics) that for any integer $k$, $\sqrt{k}$ is either irrational or integral.
- Suppose $\sqrt{k}$ is rational, $\sqrt{k} = \frac{m}{n}$, $m$ and $n$ have no common factor.
- Then $m^2=kn^2$
- Every factor of $m^2$ must divide $kn^2$
- As $m$ and $n$ have no common factor, every factor of $m^2$ must divide $k$
- Hence $k = \lambda m^2,$ $\lambda \in \mathbb{Z}$
- Hence $1 = \lambda n^2 \rightarrow \lambda = n = 1$
- Hence $k=m^2$
So $\sqrt{k}$ is either irrational or integral.
Q.E.D.
My question regards one step in this proof, present here and also in Hardy's proof: step 4. We conclude that, because $m$ and $n$ have no common factors, all of $m^2$'s factors must be factors of $k$, because none of them could be factors of $n^2$. We've subtly used the 'fact' here that:
The relative primality of $m$ and $n$ implies the relative primality of $m^2$ and $n^2$
And this is where I am concerned, because I can't quite pinpoint why this must be true. Further, this statement we have assumed as 'obvious' is as strong as the whole proof. Indeed, a weak form of the contrapositive is:
If $m^2 = kn^2, k\in\mathbb{N}$ then $m$ and $n$ have a common factor.
And straight from this, if $k$ is not a perfect square then $\sqrt{k}$ is irrational.
This is my problem - I cannot see why the first statement highlighted above must be true. Of course, it is obvious from the fundamental theorem of arithmetic, but the whole proof is obvious from the fundamental theorem of arithmetic!
How could you prove the first highlighted statement above without FTOA?
Thank you very much :)
Answer
Bezout's identity says that two integers $m, n$ are relatively prime if and only if we can find integers $a, b$ such that
$$am + bn = 1.$$
Squaring this identity gives
$$a^2 m^2 + 2abmn + b^2 n^2 = (a^2 m + 2abn)m + b^2 n^2 = 1$$
from which we conclude that $\gcd(m, n^2) = 1$. Squaring again gives $\gcd(m^2, n^2) = 1$.
Note that Bezout's identity is often used as a crucial step in the proof of unique factorization. It is very close to the claim that $\mathbb{Z}$ is a principal ideal domain, and it's straightforward to prove that any principal ideal domain has unique factorization.
More generally, in situations where greatest common divisors don't always exist, Bezout's identity is taken to be the definition of being relatively prime.
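To make the squaring trick concrete, here is a minimal Python sketch (standard library only; the pair $m=9$, $n=14$ is an arbitrary coprime choice):

```python
from math import gcd

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

m, n = 9, 14  # an arbitrary coprime pair
g, a, b = extended_gcd(m, n)
assert g == 1 and a * m + b * n == 1

# The squared identity (a^2 m + 2abn) m + b^2 n^2 = 1 witnesses gcd(m, n^2) = 1.
assert (a * a * m + 2 * a * b * n) * m + b * b * n * n == 1
assert gcd(m, n * n) == 1 and gcd(m * m, n * n) == 1
print("Bezout coefficients:", a, b)
```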
abstract algebra - Prove that $R(+,.)$ is a division ring but I disproved it
QUESTION:
Let $R=\left[\begin{matrix}\alpha & \beta \\ \bar\beta &
\bar\alpha\end{matrix}\right]\in \mathbf{M_2(\mathbb{C})} $ where
$\bar\alpha,\bar\beta$ denote the conjugates of $\alpha, \beta$
respectively. Prove that $R$ is a division ring but not a field under
the usual matrix addition and multiplication.
MY ATTEMPT:
I am comfortable with what I have to do and what I have to prove. I have successfully proved that it is not a field, since the matrix multiplication is not commutative. But instead of proving that it is a division ring, I have disproved it.
We know that,
A division ring, also called a skew field, is a ring in which division
is possible. Specifically, it is a nonzero ring in which every
nonzero element a has a multiplicative inverse, i.e., an element $x$
with $a·x = x·a = 1$. Stated differently, a ring is a division ring if
and only if the group of units equals the set of all nonzero elements.
Now the condition that every nonzero element has a multiplicative inverse is what I have shown not to hold. Indeed, the inverse of a matrix exists iff the matrix is not singular.
But $\left|\begin{matrix}\alpha & \beta \\ \bar\beta &
\bar\alpha\end{matrix}\right|=|\alpha|^2-|\beta|^2$ which can be $0$ if $|\alpha|=|\beta|$ which holds for infinitely many $\alpha,\beta \in \mathbb{C}$.
So $R(+,.)$ is not a division ring since it contains an infinite number of non-invertible matrices.
Am I right or wrong? Please help.
Answer
You are right about $$R=\left\{\begin{bmatrix}\alpha & \beta \\ \bar\beta &
\bar\alpha\end{bmatrix}\mid \alpha,\beta\in\mathbb{C}\right\}$$, especially since $e=\frac12\begin{bmatrix}1&1\\ 1&1\end{bmatrix}$ would create zero divisors: $e(1-e)=0$.
But presumably what you're reading is about
$$R=\left\{\begin{bmatrix}\alpha & \beta \\ -\bar\beta &
\bar\alpha\end{bmatrix}\mid \alpha,\beta\in\mathbb{C}\right\} $$
which is isomorphic to Hamilton's quaternions, and is a division ring, of course.
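As a quick sanity check of the latter claim, here is a small Python sketch (built-in complex arithmetic; the random samples are purely illustrative) confirming that the determinant of the corrected form is $|\alpha|^2+|\beta|^2$, which vanishes only at the zero matrix:

```python
import random

def det2(M):
    # determinant of a 2x2 matrix given as nested lists
    (a, b), (c, d) = M
    return a * d - b * c

for _ in range(5):
    alpha = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    beta = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    M = [[alpha, beta], [-beta.conjugate(), alpha.conjugate()]]
    # det = alpha*conj(alpha) + beta*conj(beta) = |alpha|^2 + |beta|^2 >= 0
    assert abs(det2(M) - (abs(alpha) ** 2 + abs(beta) ** 2)) < 1e-9
print("det = |alpha|^2 + |beta|^2 on all samples")
```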
abstract algebra - Elements of $(E^{\times},\cdot)$ of the quotient ring $E := \frac{\mathbb{Z}_3[X]}{\langle x^2 + x + 2\rangle}$
Consider the field $E:= \frac{\mathbb{Z}_3[X]}{\langle x^2 + x + 2\rangle}$.
If I'm right the elements of the quotient ring can be found as:
$$a_0 + a_1x + \langle x^2 + x + 2\rangle.$$
So, taking coefficients $a_0, a_1 \in \mathbb{Z}_3$, we get the nine possibilities:
$$\{0,1,2,\beta, 1+\beta , 2+\beta, 2\beta, 1+2\beta ,2+2\beta \}.$$
Here $\beta = \overline{x} = x + \langle x^2 + x + 2\rangle$ is a root of $x^2 + x+2$.
(Correct me if my notation is wrong.)
So how do we find the elements of the unit group $(E^{\times},\cdot)$? I assume $1$ is in it, but I don't know how to calculate the other elements. With the elements, what would be the Cayley table of $(E^{\times},\cdot)$?
Other little question: we know that $\beta$ is a solution of $x^2 + x+2$, what is the other root?
Answer
After I figured out how to multiply properly in a quotient ring via: Constructing a multiplication table for a finite field, I managed to find the unit elements by calculating every possible combination. I found, for instance:
\begin{split}
\beta(1+\beta)& = x^2 + x + \langle x^2 + x + 2\rangle \\
& =x^2 + x + \langle x^2 + x + 2\rangle + (0 + \langle x^2 + x + 2\rangle)\\
&= x^2 + x + \langle x^2 + x + 2\rangle + 2x^2+2x+4+ \langle x^2 + x + 2\rangle\\
&= 3x^2+ 3x +4 +\langle x^2 + x + 2\rangle\\
&=0+0+1+\langle x^2 + x + 2\rangle\\
&=1
\end{split}
If I do this for the other elements, I find that
$(2+\beta)(1+2\beta)=1$ and $(2\beta)(2+2\beta)=1$.
So the unit group is $E^{\times} = \{1, 2, \beta, 1+\beta, 2+\beta, 1+2\beta, 2\beta, 2+2\beta\}$; note that $2$ is a unit as well, since $2 \cdot 2 = 4 \equiv 1$, so every nonzero element of the field $E$ is invertible. The Cayley table is found by multiplying all the elements with each other; the products are calculated as above, and a mechanical version is sketched below.
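The search can be mechanized. Here is a minimal Python sketch (standard library only) encoding elements as pairs $(a_0, a_1)$ for $a_0 + a_1\beta$ and using $\beta^2 = -\beta - 2 = 2\beta + 1$ over $\mathbb{Z}_3$; it recovers the inverse pairs and the full Cayley table:

```python
def mul(p, q):
    # (a0 + a1*b)(c0 + c1*b) with b^2 = 2b + 1, coefficients mod 3
    a0, a1 = p
    c0, c1 = q
    return ((a0 * c0 + a1 * c1) % 3, (a0 * c1 + a1 * c0 + 2 * a1 * c1) % 3)

nonzero = [(a0, a1) for a0 in range(3) for a1 in range(3) if (a0, a1) != (0, 0)]

# Every nonzero element has an inverse, so there are 8 units.
for e in nonzero:
    inv = next(u for u in nonzero if mul(e, u) == (1, 0))
    print(e, "has inverse", inv)

# Cayley table of (E^x, .): rows and columns indexed by the nonzero elements.
table = [[mul(e, u) for u in nonzero] for e in nonzero]
```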
discrete mathematics - Solve the congruence $9x \equiv -3 \pmod{24}$. Give your answer as a congruence to the smallest possible modulus, and as a congruence modulo 24.
I've been able to find the answer as a congruence to the smallest possible modulus (i.e. mod 8) but unsure how to find answer as congruence to mod 24. Also, is everything I've done below correct?:
gcd(9,24) = 3
Therefore, our congruence becomes 3x ≡ -1 (mod 8)
So, 3x ≡ 7 (mod 8)
We must find inverse 'c' of 3 (mod 8), i.e. 3c ≡ 1(mod 8)
gcd(3,8) = 1
let 3c + 8y = 1
Using extended Euclidean Algorithm, we get c = 1
Therefore, solution of 3x ≡ 7 (mod 8) (i.e. smallest possible modulus) is:
x ≡ 7 (mod 8)
Now, how to find solution as a congruence to modulus 24? Assuming everything I've done above is correct.
Answer
How do you get $c=1$? The inverse of $3$ modulo $8$ is $c=3$ (since $3\times 3=9 \equiv 1 \pmod 8$). In this way, you obtain $x \equiv 3 \times 7 \equiv 21 \equiv 5 \pmod{8}$.
Observe that $9 \times 5 = 45 \equiv -3 \pmod{24}$, so $x \equiv 5 \pmod 8$ indeed solves the original congruence. Modulo $24$, however, there are $\gcd(9,24) = 3$ solutions, namely $x \equiv 5, 13, 21 \pmod{24}$.
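A brute-force enumeration (a short Python sketch) confirms both solution sets:

```python
# Solutions of 9x = -3 (mod 24) and of the reduced congruence 3x = 7 (mod 8)
print([x for x in range(24) if (9 * x + 3) % 24 == 0])  # [5, 13, 21]
print([x for x in range(8) if (3 * x - 7) % 8 == 0])    # [5]
```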
Thursday, 23 January 2014
elementary number theory - highest power of prime $p$ dividing $\binom{m+n}{n}$
How to prove the theorem stated here.
Theorem. (Kummer, 1854) Let $p$ be a prime. The highest power of $p$ that divides the binomial coefficient $\binom{m+n}{n}$ is equal to the number of "carries" when adding $m$ and $n$ in base $p$.
So far, I know that if $m+n$ is expanded in base $p$ as
$$m+n= a_0 + a_1 p + \dots +a_k p^k$$
and $m$ has coefficients $\{ b_0 , b_1 , \dots, b_i\}$ and $n$ has coefficients $\{c_0, c_1 ,\dots , c_j\}$ in base $p$, then the highest power of $p$ that divides $\binom{m+n}{n}$ can be expressed as
$$e = \frac{(b_0 + b_1 + \dots b_i )+ (c_0 + c_1 + \dots c_j )-(a_0 + a_1 + \dots a_k )}{p-1}$$
This follows from here, page $4$. But how does this formula relate to the number of carries? I am unable to make the connection. Perhaps I am not understanding something very fundamental about addition.
Answer
If $b_{0} + c_{0} < p$, then $a_{0} = b_{0} + c_{0}$, there are no carries, and the term
$$
b_{0} + c_{0} - a_{0} = 0
$$
does not contribute to your $e$.
If $b_{0} + c_{0} \ge p$, then $a_{0} = b_{0} + c_{0} - p$, and this time $b_{0} + c_{0} - a_{0}$ gives a contribution of $p$ to the numerator of $e$. Plus, there is a contribution of $1$ to $a_{1}$, so the net contribution to the numerator of $e$ is $p -1$, and that to $e$ is $1$. Repeat.
As mentioned by Jyrki Lahtonen in his comment (which appeared while I was typesetting this answer), you may have propagating carries, but this is the basic argument.
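To see the theorem in action, here is a hedged Python sketch (standard library only; the random sampling is purely illustrative) checking that the $p$-adic valuation of $\binom{m+n}{n}$ equals the carry count:

```python
from math import comb
import random

def carries(m, n, p):
    # count the carries when adding m and n in base p
    count = carry = 0
    while m or n or carry:
        carry = 1 if (m % p + n % p + carry) >= p else 0
        count += carry
        m //= p
        n //= p
    return count

def vp(x, p):
    # the exponent of p in the factorization of x
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

for _ in range(1000):
    p = random.choice((2, 3, 5, 7))
    m, n = random.randrange(1, 200), random.randrange(1, 200)
    assert vp(comb(m + n, n), p) == carries(m, n, p)
print("valuation equals carry count on all samples")
```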
Matrices: left inverse is also right inverse?
If $A$ and $B$ are square matrices, and $AB=I$, then I think it is also true that $BA=I$. In fact, this Wikipedia page says that this "follows from the theory of matrices". I assume there's a nice simple one-line proof, but can't seem to find it.
Nothing exotic, here -- assume that the matrices have finite size and their elements are real numbers.
This isn't homework (if that matters to you). My last homework assignment was about 45 years ago.
algorithms - Find gcd($a,c$) when gcd($a,b$) and gcd($b,c$) are given?
Suppose gcd($a,b$) and gcd($b,c$) are given. How can we find gcd($a,c$)? (gcd($x,y$) is the greatest common divisor of $x$ and $y$). Any help is appreciated.
Answer
GCD as such has no transitive properties at all.
For example, you can take an extreme case: let $a = p$, $b=1$, and $c = p$ for a prime $p$. Then, while $\gcd(a,b) = \gcd(b,c) = 1$, it so happens that $\gcd(a,c) = p$. Taking $p$ as large as you want, you can see that there is no relationship at all between the suggested quantities.
sequences and series - $A_n = \frac{a_1+a_2+\cdots+a_n}{n}$ is monotonic if $a_n$ is monotonic
If $a_n$ is monotonically increasing/decreasing, show that the sequence
$A_n = \frac{a_1+a_2+\cdots+a_n}{n}$ is also monotonically increasing/decreasing.
my attempt:
I initially thought of using induction, since $A_2>A_1$ when $a_2>a_1$, so the base case is available. But proving $A_{n+1}>A_n$ doesn't come easily. Is there another way?
Answer
Proving the general case is very similar to the base case, because you can write $A_{n+1} = \frac{n}{n+1}A_n + \frac{1}{n+1} a_{n+1}$, which looks very similar to $A_2 = \frac{1}{2}A_1 + \frac{1}{2} a_{2}$.
Basically, it amounts to stating why the relative contribution from $a_{n+1}$ is at least as large as the relative contribution from any of the previous elements $a_j$, $j \le n$; the short computation below makes this explicit.
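Spelling out the step for the increasing case (a direct rearrangement of the identity above):
$$
A_{n+1} - A_n = \frac{n A_n + a_{n+1}}{n+1} - A_n = \frac{a_{n+1} - A_n}{n+1} \ge 0,
$$
since monotonicity gives $a_{n+1} \ge a_k$ for all $k \le n$, and averaging these inequalities gives $a_{n+1} \ge A_n$. The decreasing case is identical with the inequalities reversed.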
sequences and series - How was $\sum\limits_{i=1}^{n} i = \frac{n(n+1)}{2}$ calculated?
Possible Duplicates:
Why is sum of a sequence $\displaystyle s_n = \frac{n}{2}(a_1+a_n)$?
Sum of n consecutive numbers
I know how to prove $\sum\limits_{i=1}^{n} i = \frac{n(n+1)}{2}$ works by mathematical induction, but how was the algorithm created?
Was it just trial and error?
How is any generic equation like this created with summation? At what level of math would I actually start learning how to come up with these equations myself?
Answer
A standard way of "discovering" this formula is to take the sum twice, in opposite order:
$$\begin{array}{ccccccccc}
1 & + & 2 & + & 3 & + & \cdots & + & n\\
n & + & n-1 & + & n-2 & + & \cdots & + & 1\\
\hline
(n+1) & + & (n+1) & + & (n+1) & + & \cdots & + & (n+1)
\end{array}$$
which readily yields that twice the sum equals $(n+1)n$, from which the result follows.
This kind of insight might strike "as a bolt from the blue" easily enough.
More interesting, perhaps, is to ask about more general formulas for sums of powers,
$$\sum_{i=1}^n i^k$$
with $k$ a positive integer. There are many systematic ways for finding such formulas; see for example this paper, or this Monthly paper by Beardon to get started.
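If one only wants such formulas rather than the insight behind them, a computer algebra system can generate them; here is a small sketch assuming sympy is available:

```python
# Closed forms for sum_{i=1}^n i^k, derived symbolically (sympy assumed).
from sympy import symbols, summation, factor

i, n = symbols('i n', positive=True, integer=True)
for k in range(1, 5):
    print(k, factor(summation(i**k, (i, 1, n))))
# 1 n*(n + 1)/2
# 2 n*(n + 1)*(2*n + 1)/6
# 3 n**2*(n + 1)**2/4
# 4 n*(n + 1)*(2*n + 1)*(3*n**2 + 3*n - 1)/30
```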
real analysis - Computing $\lim_{\epsilon \rightarrow 0} \int_0^\infty \frac{\sin x}{x} \arctan{\frac{x}{\epsilon}}\,dx$
I'm not exactly sure how to get started computing the limit of the improper Riemann integral
$$\lim_{\epsilon \rightarrow 0} \int_0^\infty \frac{\sin x}{x} \arctan\left(\frac{x}{\epsilon}\right)dx.$$
Using the result that $\int_0^\infty \frac{\sin x}{x} dx = \pi/2$, is there a way to interchange the limit and the integral to get $\pi^2/4$?
Answer
By the dominated convergence theorem
$$\lim_{\epsilon \to 0} \int_0^\pi \frac{\sin x}{x} \arctan\frac{x}{\epsilon}\,dx=\frac{\pi}{2}\int_0^\pi \frac{\sin x}{x}\,dx.
$$
Now
$$
\int_\pi^\infty \frac{\sin x}{x}\,\arctan\frac{x}{\epsilon}\,dx=\frac{\pi}{2}\int_\pi^\infty \frac{\sin x}{x}\,dx+\int_\pi^\infty \frac{\sin x}{x}\Bigl(\arctan\frac{x}{\epsilon}-\frac{\pi}{2}\Bigr)\,dx.
$$
Let's estimate the second integral:
$$
\Bigl|\arctan\frac{x}{\epsilon}-\frac{\pi}{2}\Bigr|=\int_{x/\epsilon}^\infty\frac{dt}{1+t^2}\le\frac{\epsilon}{x},
$$
and
$$
\int_\pi^\infty \Bigl|\frac{\sin x}{x}\Bigl(\arctan\frac{x}{\epsilon}-\frac{\pi}{2}\Bigr)\Bigr|\,dx\le\epsilon\int_\pi^\infty\frac{|\sin x|}{x^2}\,dx.
$$
which tends to $0$ as $\epsilon \to 0$. Combining the two pieces, $\lim\limits_{\epsilon \to 0} \int_0^\infty \frac{\sin x}{x}\,\arctan\frac{x}{\epsilon}\,dx = \frac{\pi}{2}\int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi^2}{4}$, as conjectured in the question.
trigonometry - How do you find the sine and cosine from the tangent?
I'm given the problem:
If $\cot(\theta) = 1.5$ and $\theta$ is in quadrant 3, what is the value of $\sin(\theta)$?
I looked at all the related answers I could find on here, but I haven't been able to piece together the answer I need from them.
I know that $\sin^2\theta + \cos^2\theta = 1$, $ \cot^2\theta + 1 = \csc^2\theta $, and $\csc^2\theta = \frac{1}{\sin^2\theta}$
Substituting 3.25 for $\cot^2\theta + 1$ and $\frac{1}{\sin^2\theta}$ for $\csc^2\theta$ I get:
$3.25 = \frac{1}{\sin^2\theta}$
then
$\sin\theta = -\sqrt{\frac{1}{3.25}}$
This doesn't seem correct though. Can anyone help please?
edit: Sorry, meant to make that answer negative.
Answer
If $\cot\theta=1.5$, then $\tan\theta=\frac23$. This means that if $\theta$ were in the first quadrant, it would be one of the angles of a right triangle whose legs measure $2$ and $3$ and whose hypotenuse measures $\sqrt{2^2+3^2}=\sqrt{13}$. Specifically, it would be the angle opposite the side of length $2$. Sketch the triangle, and you’ll see that in that case we’d have $$\sin\theta=\frac2{\sqrt{13}}\;.$$
But $\theta$ is in the third quadrant, not the first; what effect does this have on its sine?
elementary number theory - Solving a congruence, tricky implication
We want to prove that
$$243x \equiv 1 \mod 2018 \implies x^{403} \equiv 3 \mod 2018$$
My try :
Assume that $243x \equiv 1 \mod 2018$
We have $x^{2016} \equiv 1 \mod 2018$ (by Fermat ($1009$ is prime) and oddness of $x$) so $x^{2015} \equiv 243 \mod 2018$
but $403 \times 5 = 2015 $
hence $(x^{403})^5 \equiv 243 \mod 2018$, or equivalently $(x^{403})^5 \equiv 3^5 \mod 2018$. I wonder how to get from this last congruence to the desired $x^{403} \equiv 3 \mod 2018$?
Thanks for any suggestions.
Answer
From the last step of your attempt i.e. $(x^{403})^5 \equiv 3^5 \mod 2018$, you can go to the conclusion if you can show that $y^5 \equiv 243 \mod 2018$ has only one solution mod $2018$ (because this would force $x^{403} \equiv 3 \mod 2018$ as they are both solutions).
While that statement is true, its proof is not going to be easy because it boils down to asking why $x^5 \equiv 1 \mod 2018$ has a unique solution, and this is not clear at all.
Therefore, you have not done anything wrong, but got yourself into a higher power than required.
However, you did do some groundwork: note that $243x \equiv 1 \mod 2018$ means $3^5 x \equiv 1 \mod 2018$, so we may raise both sides to the power $403$:
$$
3^5x \equiv 1 \mod 2018 \implies 3^{2015}x^{403} \equiv 1 \mod 2018
$$
Now, multiply by a further $3$, and eliminate the $3^{2016}$ using an argument made in your attempt.
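For a direct computational confirmation, here is a short Python sketch (the modular-inverse form `pow(a, -1, m)` needs Python 3.8+):

```python
# Check the implication for the unique x with 243*x = 1 (mod 2018).
x = pow(243, -1, 2018)           # modular inverse of 243 = 3^5
assert (243 * x) % 2018 == 1
assert pow(x, 403, 2018) == 3    # the desired conclusion
assert pow(3, 2016, 2018) == 1   # the Fermat/CRT fact used above
print("x =", x, "and x^403 = 3 (mod 2018)")
```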
abstract algebra - Showing the field extension $\mathbb{Q}(\sqrt{2}, \sqrt{3}, \sqrt{5})/\mathbb{Q}$ has degree 8
I am trying to classify the Galois group of the field extension $\mathbb{Q}(\sqrt{2}, \sqrt{3}, \sqrt{5})/\mathbb{Q}$ and am getting stuck on trying to show that the extension is degree 8. I understand that you can look at intermediate fields in the tower to get
$[\mathbb{Q}(\sqrt{2}, \sqrt{3}, \sqrt{5}) : \mathbb{Q}] = [\mathbb{Q}(\sqrt{2}, \sqrt{3}, \sqrt{5}) : \mathbb{Q}(\sqrt{2}, \sqrt{3})][\mathbb{Q}(\sqrt{2}, \sqrt{3}):\mathbb{Q}(\sqrt{2})][\mathbb{Q}(\sqrt{2}) : \mathbb{Q}]$
and each of these has degree 2. But is there an easy way to show $[\mathbb{Q}(\sqrt{2}, \sqrt{3}, \sqrt{5}) : \mathbb{Q}(\sqrt{2}, \sqrt{3})] = 2$? I tried showing that $\sqrt{5}$ cannot be written as a linear combination of $\{1, \sqrt{2}, \sqrt{3}, \sqrt{6}\}$ but it's a long mess, and I'm wondering if there's a more clever way to do this.
More generally, is it true that $\mathbb{Q}(\sqrt{p_1}, ..., \sqrt{p_n})/\mathbb{Q}$ where $p_i$ are distinct primes has degree $2^n$?
Answer
An easy way to show that $\sqrt{5}$ is not in $\mathbb{Q}(\sqrt{2},\sqrt{3})$ is to note that $[\mathbb{Q}(\sqrt{2},\sqrt{3})\colon\mathbb{Q}]=4$, so by the Fundamental Theorem of Galois Theory it has either one or three intermediate fields of degree $2$ over $\mathbb{Q}$ (since a group of order $4$ has either a single subgroup of index $2$, when the group is cyclic, or exactly three, when it is the Klein $4$-group). Since this field contains the three distinct intermediate fields $\mathbb{Q}(\sqrt{2})$, $\mathbb{Q}(\sqrt{3})$, and $\mathbb{Q}(\sqrt{6})$, it cannot also contain the field $\mathbb{Q}(\sqrt{5})$, which is distinct from those three.
For your second question, the answer is likewise "yes". The only intermediate fields of degree $2$ are $\mathbb{Q}(\sqrt{d})$, where $d$ is the product of some (but at least one) of the $p_i$. You can prove it by induction on $n$.
Wednesday, 22 January 2014
functions - Is there a bijection between $(0,1)$ and $\mathbb{R}$ that preserves rationality?
While reading about cardinality, I've seen a few examples of bijections from the open unit interval $(0,1)$ to $\mathbb{R}$, one example being the function defined by $f(x)=\tan\pi(2x-1)/2$. Another geometric example is found by bending the unit interval into a semicircle with center $P$, and mapping a point to its projection from $P$ onto the real line.
My question is, is there a bijection between the open unit interval $(0,1)$ and $\mathbb{R}$ such that rationals are mapped to rationals and irrationals are mapped to irrationals?
I played around with mappings similar to $x\mapsto 1/x$, but found that this never really had the right range, and using google didn't yield any examples, at least none which I could find. Any examples would be most appreciated, thanks!
Answer
$(1/x)-2$ on $(0,1/2]$ and $2-(1/(x-1/2))$ on $(1/2,1)$.
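One can test this map with exact rational arithmetic, e.g. with the following Python sketch (standard library `fractions`; the sample points are arbitrary):

```python
# The two pieces are Mobius maps with rational coefficients, so exact
# Fraction inputs stay Fraction (i.e. rational) outputs.
from fractions import Fraction

def f(x):
    # x in (0, 1/2] -> [0, infinity);  x in (1/2, 1) -> (-infinity, 0)
    if x <= Fraction(1, 2):
        return 1 / x - 2
    return 2 - 1 / (x - Fraction(1, 2))

for x in (Fraction(1, 7), Fraction(1, 2), Fraction(5, 8), Fraction(99, 100)):
    y = f(x)
    print(x, "->", y, type(y).__name__)  # outputs remain Fraction
```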
summation - Power Sum of Integers and Relationship with Sum of Squares and Sum of Cubes
Let $\displaystyle\sigma_m=\sum_{r=1}^n r^m$.
Refer to the tabulation of the power sum of integers here.
It is interesting to note that
$$\begin{align}
\color{green}{\sigma_1}\ &=\frac 12 n(n+1)\\
\color{blue}{\sigma_2}\ &=\frac 16 n(n+1)(2n+1)\\
\color{red}{\sigma_3}\ &=\frac 14 n^2(n+1)^2&&=\color{green}{\sigma_1}^2\\
\sigma_4\ &=\frac 1{30}n(n+1)(2n+1)(3n^2+3n-1)&&=\frac 15\; \color{blue}{\sigma_2} \ (3n^2+3n-1)\\
\sigma_5\ &=\frac 1{12}n^2(n+1)^2(2n^2+2n-1)&&=\frac 13\; \color{red}{\sigma_3}\ (2n^2+2n-1)\\
\sigma_6\ &=\frac 1{42}n(n+1)(2n+1)(3n^4+6n^3-3n+1)&&=\frac 17\;\color{blue}{\sigma_2}\ (3n^4+6n^3-3n+1)\\
\sigma_7\ &=\frac 1{24}n^2(n+1)^2 (\cdots)&&=\frac 16\; \color{red}{\sigma_3}\ (\cdots)\\
\sigma_8\ &=\frac 1{90}n(n+1)(2n+1)(\cdots)&&=\frac 1{15}\color{blue}{\sigma_2}\ (\cdots)\\
\sigma_9\ &=\frac 1{20}n^2(n+1)^2(n^2+n-1)(\cdots)&&=\frac 15\; \color{red}{\sigma_3}\ (n^2+n-1)(\cdots)\\
\sigma_{10}&=\frac 1{66}n(n+1)(2n+1)(n^2+n-1)(\cdots)&&=\frac 1{11}\color{blue}{\sigma_2}\ (n^2+n-1)(\cdots)
\end{align}$$
i.e.
- the sum of squares, $\sigma_2$, is a factor of the sums of even powers greater than $2$, and
- the sum of cubes, $\sigma_3$, is a factor of the sums of odd powers greater than $3$.
Is there a simple explanation for this, if possible without using Faulhaber's formula and Bernoulli numbers, etc?
and also,
Why does this occur only for $\sigma_2, \sigma_3$ but not for $\sigma_4, \sigma_5$, etc?
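While the "why" is the real question here, the observed pattern itself is easy to confirm symbolically; here is a small sketch assuming sympy is available:

```python
# Polynomial-division check: sigma_2 divides the even-power sums and
# sigma_3 divides the odd-power sums, for the first several cases.
from sympy import rem, summation, symbols

i, n = symbols('i n', integer=True, positive=True)

def sigma(m):
    return summation(i**m, (i, 1, n)).expand()

s2, s3 = sigma(2), sigma(3)
for m in (4, 6, 8, 10):
    assert rem(sigma(m), s2, n) == 0  # sigma_2 | sigma_m
for m in (5, 7, 9):
    assert rem(sigma(m), s3, n) == 0  # sigma_3 | sigma_m
print("divisibility confirmed for the cases checked")
```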
real analysis - What is a natural number?
According to page 25 of the book A First Course in Real Analysis, an inductive set is a set of real numbers such that $0$ is in the set and, for every real number $x$ in the set, $x + 1$ is also in the set; a natural number is then a real number that every inductive set contains. The problem with that definition is that it is circular, because the real numbers are constructed from the natural numbers.
Answer
Suppose we did start with some notion of "natural number" which we used to construct a model of the real numbers.
Then even in this setting, the quoted definition is still not circular, because it's defining a new notion of "natural number" that will henceforth be used instead of the previous notion of "natural number".
We could give the new notion a different name, but there isn't really any point; the new version of "natural numbers" has an obvious isomorphism with the old version so it's not really any different from the old one in any essential way.
There are a number of reasons why an exposition of real analysis might construct the natural numbers from the real numbers; the two most prominent are:
- It is technically convenient to have the natural numbers be a subset of the real numbers
- It makes the exposition somewhat more agnostic about foundations; it simply needs the real numbers as a starting point
analysis - Injection, making bijection
I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$?
I know that every function is surjective when its codomain is restricted to its image, but I am not sure whether I can do what I did.
fractals - Koch curve from Cantor sets (paradox)
The Koch curve is normally constructed by taking a line segment, replacing the middle third with two copies of itself forming legs in an equilateral triangle, and then repeating this recursively for every subsegment. See image below. At every step, the length of the curve is multiplied by $4/3$ so the final length is infinite.
Notice that every line segment undergoes the construction of the Cantor set:
Therefore one could consider replacing every line segment with a Cantor set from the very beginning. Thus, start with the Cantor set, take two smaller copies of the Cantor set and make a $\wedge$ in the middle opening, and then repeat recursively. In this case the final length will be zero, since the Cantor set has length zero.
Question: How is this paradox resolved? Will the limit set of my construction be different than the ordinary Koch curve? If so, what points are missing?
Answer
Convergence in fractal geometry is typically defined in terms of the Hausdorff metric. Roughly, two sets are "close" with respect to the Hausdorff metric, if every point in one is close to some point of the other.
Your collection of Cantor sets is indeed dense in the Koch curve with respect to the Hausdorff metric. The Hausdorff metric, however, doesn't respect length. That is, two sets can be close in the Hausdorff metric, yet their lengths can be very far apart. You've found one example illustrating this but there are others.
For example, if
$$
Q_n = \{k/n:0\leq k \leq n\},
$$
then the Hausdorff distance between $Q_n$ and the unit interval is less than $1/n$. $Q_n$ is finite, yet the sequence of $Q_n$s converges to a set of positive length. Similarly, you could strengthen your example by using the set of endpoints of the intervals that approximate the Koch curve.
Here is a strategy to find points in the Koch curve that do not lie on any of your Cantor sets. First, note that the Koch curve is the invariant set of the iterated function system:
$$\begin{align}
T_1(x,y) &= \left(\frac{x}{3},\frac{y}{3}\right) \\
T_2(x,y) &= \left(\frac{1}{6} \left(x-\sqrt{3}
y+2\right),\frac{1}{6} \left(\sqrt{3}
x+y\right)\right) \\
T_3(x,y) &= \left(\frac{1}{6} \left(x+\sqrt{3}
y+3\right),\frac{1}{6} \left(-\sqrt{3}
x+y+\sqrt{3}\right)\right) \\
T_4(x,y) &= \left( \frac{x}{3},\frac{y+2}{3} \right).
\end{align}$$
These functions map the Koch curve onto the four sub-parts below
Now, any point on the Koch curve can be realized as the limit of a sequence
$$\begin{align}
& T_{i_1}(0,0) \\
& T_{i_1} \circ T_{i_2}(0,0)\\
& \vdots \\
& T_{i_1} \circ T_{i_2} \circ \cdots \circ T_{i_n}(0,0) \\
& \vdots
\end{align}$$
where $(i_1,i_2,i_3,\ldots)$ is a sequence in $\{1,2,3,4\}$. The point lies on one of your Cantor sets precisely when the sequence contains only finitely many 2s and 3s, so that it ends in a string of only 1s and 4s.
If, for example, the sequence has only 1s and 4s, and no 2s or 3s, then we get a point in Cantor's ternary set lying on the unit interval. If the sequence starts with a 2 and then contains only 1s and 4s, we generate a point in the red Cantor set shown below; this is exactly the image of the ternary Cantor set under the function $T_2$. If the sequence starts with 3, then 2, then contains only 1s and 4s, we generate a point in the blue Cantor set below; this is exactly the image of the ternary Cantor set under the function $T_3 \circ T_2$.
Finally, if we have any other sequence of 1s, 2s, 3s, and 4s, then we generate some other point on the Koch curve that is not in any of your Cantor sets. There are uncountably many such points. I suppose the simplest one to find explicitly corresponds to the sequence containing only 2s, which is exactly the fixed point of $T_2$. To find it, we need only solve
$$T_2(x,y) = (x,y),$$
which yields $(5/14,\sqrt{3}/14)$. That point is shown in red in the sequence of approximations below.
If we zoom into the last picture on the red point so that it's centered in a square of side length 0.04, we get the following:
Thus, the edges keep jutting out closer to the point but never actually hit it. It's in the limit but not on any of the edges.
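Since $T_2$ is a contraction with ratio $1/3$, its fixed point can also be located by straightforward iteration; here is a minimal Python sketch (standard library only):

```python
# Iterating T_2 from any starting point converges to its fixed point,
# which should match the algebraic answer (5/14, sqrt(3)/14).
import math

def T2(x, y):
    return ((x - math.sqrt(3) * y + 2) / 6, (math.sqrt(3) * x + y) / 6)

x, y = 0.0, 0.0
for _ in range(60):
    x, y = T2(x, y)
print(x, y)                       # ~ (0.3571428..., 0.1237179...)
print(5 / 14, math.sqrt(3) / 14)  # the same values
```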