Wednesday 28 February 2018

Find the two limits without the use of l'Hospital's rule or series expansion.



I was asked to evaluate these two limits:



$$\lim_{x\rightarrow0}\frac{x^3}{x-\sin x}$$

$$\lim_{x\rightarrow0}\frac{e^{-x^2}+x^2-1}{\sin(3x^4)}$$



For the first one I tried to divide the numerator and denominator by $x^3$, but I can't get the answer unless I apply l'Hospital's rule or use a series expansion.



I also tried to use a substitution $u=x^2$ for the second limit, but I can't seem to relate anything between the exponential function and sine function.


Answer



As shown here we have that




  • $\lim_{x\to0}\frac{\sin x-x}{x^3}=-\frac16$


  • $\lim_{x\to0}\frac{e^x-x-1}{x^2}=\frac12$



For the second one we can use that



$$\frac{e^{-x^2}+x^2-1}{\sin(3x^4)}=\frac{e^{-x^2}+x^2-1}{-x^4}\cdot \frac{-3x^4}{\sin(3x^4)}\cdot\frac13$$
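As a numerical sanity check of both results (a minimal Python sketch, not part of the original answer; the first limit is $6$, the second $\frac16$):

    import math

    for x in [0.5, 0.1, 0.05]:
        first = x**3 / (x - math.sin(x))                            # -> 6
        second = (math.exp(-x**2) + x**2 - 1) / math.sin(3 * x**4)  # -> 1/6
        print(x, first, second)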


analysis - An inequality about a continuous function.



Let $\Omega \subset \Bbb R ^2$ be bounded and closed, and let $g : \Omega \to [0, \infty)$ be continuous. Let $g(x_0,y_0)=\max \limits_{\Omega} g(x,y)$. Show that:



there exist $\rho_{x_0},\rho_{y_0}>0$ such that for all $(x,y)\in \Omega$ with $|x-x_0|<\rho_{x_0}$ and $|y-y_0|<\rho_{y_0}$, we have $g(x,y_0)\geq g(x,y)$.



In fact, I think the assumption that $\Omega$ is bounded and closed is not necessary. All of the above is my guess; I really don't know whether it is right. Thanks for any answer or advice.



Answer



It is not true. The strict inequality is trivially false by taking $x=x_0$, $y=y_0$. The non-strict one is not true either. Consider $g(x,y)=-(x-y)^2$ and $(x_0,y_0)=(0,0)$. Then
$$
g(x,y_0)=-x^2<0=g(x,x)
$$
for all $x\ne x_0$, so the claimed inequality fails.


trigonometry - Prove the following trigonometric identity without a calculator involved

I have to prove the following statement.





$$1+\cos{2\pi\over5}+\cos{4\pi\over5}+\cos{6\pi\over5}+\cos{8\pi\over5}=0$$




I have tried to use the sum of angles formula for cosine, but didn't get to a point where I'd be able to show that it is equal to $0$.

algebra precalculus - Proof that the square root of the mean of the squares is always greater than or equal to the weighted mean of the values



I couldn't find a better title, but basically you are given some values $x_1,\ldots,x_n$ and some weights $p_1,\ldots,p_n$ ($x_k\in\mathbb{R}$ and $p_k\in[0,1]$, also $p_1+\ldots+p_n=1$).



You now calculate the weighted arithmetic mean of the squares of these values:

$$W_1=\sum^n_{k=1}x_k^2p_k$$
And also the square of the weighted mean of the values:
$$W_2=\left(\sum^n_{k=1}x_kp_k\right)^2$$



Now I know that $W_1\geq W_2$, but I am not able to prove this. I was only able to transform the inequality a bit so I arrived at:
$$\sum^n_{k=1}x_k^2(p_k-p_k^2) \geq 2\sum^n_{k=1}\sum_{m=k+1}^nx_kx_mp_kp_m$$
I'm really stuck here and don't know how to proceed (or even if this inequality helps me or not).
Background of this question is this well-known formula of the variance for a discrete random variable $X$ (which boils down to the problem I described):
$$V(X)=E(X^2)-(E(X))^2$$
And because $V(X)\geq 0$, it follows $E(X^2)\geq (E(X))^2$. But I tried to find a convincing proof for this statement without using the definition of variance and this formula. Yes, I used Google and Wikipedia but neither could help me.




I hope someone can give me some hints on how to solve this or maybe even give me a complete proof or a reference to one, I would really much appreciate it. :)


Answer



A direct proof would be to observe that for any real number $a$,
$$
0 \le \sum^n_{k=1} (x_k - a)^2 p_k = \sum^n_{k=1} x_k^2 p_k - 2 a \sum^n_{k=1} x_k p_k + a^2 \, .
$$
Then set $a = \sum^n_{k=1} x_k p_k$ to obtain $0 \le W_1 - W_2$.



This is actually the same as the Cauchy–Schwarz inequality,

applied to the vectors
$$
(x_1 \sqrt{p_1}, \ldots, x_n \sqrt{p_n})
$$
and
$$
(\sqrt{p_1}, \ldots, \sqrt{p_n})
$$
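For readers who like to see the inequality in action, here is a small randomized check of $W_1\geq W_2$ (a Python sketch; the raw weights are normalized so they sum to $1$):

    import random

    random.seed(0)
    x = [random.uniform(-5, 5) for _ in range(10)]
    p = [random.random() for _ in range(10)]
    s = sum(p)
    p = [w / s for w in p]                          # normalize the weights
    W1 = sum(xi**2 * pi for xi, pi in zip(x, p))    # weighted mean of squares
    W2 = sum(xi * pi for xi, pi in zip(x, p)) ** 2  # square of weighted mean
    print(W1, W2, W1 >= W2)                         # the last value is always True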


real analysis - Does $f(n\theta) \to 0$ for all $\theta>0$ and $f$ Darboux imply $f(x) \to 0$ as $x \to \infty$?



Recall that a Darboux function $f:\mathbb{R} \to \mathbb{R}$ is one which satisfies the conclusion of the intermediate value theorem (i.e., connected sets are mapped to connected sets). Being Darboux is a weaker condition than continuity. If a theorem about continuous functions only uses the intermediate value theorem, then chances are it also holds for the entire class of Darboux functions. I find it interesting to study which theorems about continuous functions also hold for Darboux functions.



We have the following theorem, which is fairly well known and hinges on the Baire Category Theorem.





If $f:\mathbb{R} \to \mathbb{R}$ is continuous and $f(n\theta) \xrightarrow[n \in \mathbb{N}, \ n\to\infty]{} 0$ for every $\theta \in (0, \infty)$, then $f(x) \xrightarrow[x \in \mathbb{R}, \ \ x\to\infty]{} 0$.




A counterexample if we drop continuity is $f(x) = \mathbf{1}_{\{ \exp(n) : n \in \mathbb{N}\}}$. However, this counterexample isn't Darboux, and I haven't been able to come up with any counterexample which is Darboux. Thus, this leads me to my question.




Can the continuity condition in the theorem stated above be relaxed to Darboux?





In searching for counterexamples of this sort, one approach is playing around with $\sin \frac{1}{x}$. An alternative approach is considering highly pathological functions with the property that every nonempty open set is mapped to $\mathbb{R}$ (for instance, Conway Base-13, or Brian's example here) and modifying these in such a way that they satisfy the hypotheses of the problem.


Answer



Non-measurable example



By the axiom of choice there is a $\mathbb Q$-linear basis of $\mathbb R.$ This basis has the same cardinality as $\mathbb R$ so can be indexed as $a_r$ for $r\in\mathbb R.$ Define $f$ by setting $f(x)=r$ if $x$ is of the form $a_0+qa_r$ for some rational $q$ and real $r,$ and set $f(x)=0$ for $x$ not of this form. Then $f$ is Darboux because the set $\{a_0+qa_r\mid q\in\mathbb Q\}$ is dense for each $r.$ But for each $\theta>0,$ we can only have $f(q\theta)\neq 0$ for at most one rational $q$ - the reciprocal of the $a_0$ coefficient of $\theta.$ In particular $f(n\theta)\to 0$ as $n\to\infty$ with $n\in\mathbb N.$



Measurable example



For $n\geq 2$ let $b_n=n!(n-1)!\dots 2!.$ Each real has a unique "mixed radix" expression as

$x=\lfloor x\rfloor + \sum_{n\geq 2}\frac{x_n}{b_n}$ where $x_n$ is the unique representative of $\lfloor b_n x\rfloor$ modulo $n!$ lying in $\{0,1,\dots,n!-1\}.$ For non-negative $x$ define $f(x)=\lim_{n\to\infty} \tfrac{1}{n}\sum_{m=2}^n x_m$ if this limit exists and $x_n\leq 1$ for all sufficiently large $n,$ and take $f(x)=0$ otherwise. For negative $x$ define $f(x)=f(-x).$ Note $f(x)\in[0,1].$ It is straightforward to see that $f$ takes all values in $[0,1]$ in every interval and is hence Darboux.



Now consider a real $x>0$ with $f(x)\neq 0$ and let $q<1$ be rational. We will show that $f(qx)=0.$ We know there exists $N$ such that $x_n\leq 1$ for all $n>N.$ Increasing $N$ if necessary we can assume that $qN!$ is an integer. We also know that $x_n=1$ for infinitely many $n>N$ - otherwise we would have $\lim_{n\to\infty} \tfrac{1}{n}\sum_{m=2}^n x_m=0.$
Write $x=x'/b_{n-1}+1/b_n+\epsilon/b_{n+1}$ where $x'$ is an integer and $0\leq\epsilon< 2.$ So $qx b_{n+1}=qx'n!(n+1)!+q(n+1)!+q\epsilon.$ The first term is a multiple of $(n+1)!$ because $qn!$ is an integer, and the second term $q(n+1)!$ is an integer, and $q\epsilon<2.$ So $(qx)_{n+1}$ is either $q(n+1)!$ or $q(n+1)!+1$ (note this is less than $(n+1)!$). Since $q(n+1)!>1$ and there are infinitely many such $n,$ we get $f(qx)=0.$



This shows that for each $\theta>0,$ the sequence $f(n\theta)$ takes at most one non-zero value, and in particular $f(n\theta)\to 0.$



Remark: this $f$ appears to be a counterexample to Theorem 4.1 of https://arxiv.org/abs/1003.4673.


Tuesday 27 February 2018

probability - Example of non continuous random variable with continuous CDF




Can someone provide an example of $X$ being a non-continuous random variable with continuous cumulative distribution function?



For instance:



$X$ is discrete if it takes (at most) a countable number of values.



$X$ is continuous (or absolutely continuous) if its law $P^X$ admits a density $f(x)$.



Note: A random variable doesn't have to be discrete or continuous; just take a cumulative distribution function that is non-constant and continuous except at $0$.
Then $X$ is neither continuous nor discrete.




I know that to ensure that $X$ is continuous, we need to ask $F_X \in C^1$, as $F_X \in C^0$ does not suffice.



I would then like to see a non-continuous random variable with a continuous CDF.


Answer



A simple example is $X$ uniformly distributed on the usual Cantor set, in other words, $$X=\sum_{n\geqslant1}\frac{Y_n}{3^n},$$ for some i.i.d. sequence $(Y_n)$ with uniform distribution on $\{0,2\}$.



Other examples are based on binary expansions, say, $$X=\sum_{n\geqslant1}\frac{Z_n}{2^n},$$ for some i.i.d. sequence $(Z_n)$ with any nondegenerate distribution on $\{0,1\}$ except the uniform one.



These distributions have no density with respect to Lebesgue measure, even partly, since $P(X\in C)=1$ for some Borel set $C$ with zero Lebesgue measure. They have no atom either, in the sense that $P(X=x)=0$ for every $x$.
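To experiment with the first example, one can sample from the Cantor distribution straight from its series definition (a Python sketch; the truncation depth of $40$ ternary digits is an arbitrary choice):

    import random

    def cantor_sample(depth=40):
        # X = sum of Y_n / 3^n with the Y_n i.i.d. uniform on {0, 2}
        return sum(random.choice((0, 2)) / 3**n for n in range(1, depth + 1))

    print(sorted(cantor_sample() for _ in range(5)))
    # every sample lies in the Cantor set; a repeated value has probability ~0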



Monday 26 February 2018

The functional equation and differentiability




Find all functions $f: \mathbb R\rightarrow \mathbb R$, at the same time satisfying the following two conditions:



a) $f (x + yf (x)) = f (x) f (y)$




b) the function $f$ can be represented in the form $f (x) = (\varphi (x)) ^ 2, x \in \mathbb R,$ where the function $\varphi$ has a finite derivative at $x = 0$ (not infinite).




I have no clue how to start. Any kind of help will be appreciated.


Answer



Here is a possible approach.



Plug in $y=0$ to get
$$
f(x) = f(x) f(0),
$$
so either $f(x) \equiv 0$ or $f(0)=1$.



Now note that
$$
f(x+yf(x)) = f(y+xf(y))
$$
and assuming $f$ is 1-to-1, we have
$$
x + yf(x) = y + xf(y)\\
x(1-f(y)) = y(1-f(x))\\
\frac{x}{1-f(x)} = \frac{y}{1-f(y)}
$$
for arbitrary $x,y$, and that means both LHS and RHS are constant, say $c$. Then you have
$$
c = \frac{x}{1-f(x)} \\
f(x) = 1 - x/c
$$




UPDATE
If $f$ is not 1-1, $f(x) \equiv 1$ is a solution, but not sure if there are others...
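One can verify numerically that $f(x)=1-x/c$ satisfies condition a); both sides expand to $1-x/c-y/c+xy/c^2$. A Python sketch with an arbitrarily chosen $c$:

    c = 3.0
    f = lambda t: 1 - t / c

    for x, y in [(0.7, -1.2), (2.5, 4.0)]:
        lhs = f(x + y * f(x))              # f(x + y f(x))
        rhs = f(x) * f(y)                  # f(x) f(y)
        print(abs(lhs - rhs) < 1e-12)      # True for every pair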


elementary number theory - Solving Non-Linear Congruences



I'm trying to solve an exercise from Niven I.M "An Introduction to Theory of Numbers" on page 106, problem 10. The problem wants you to find all the solutions to the congruence \begin{equation*}

x^{12} \equiv 16 \quad (\text{mod }17)
\end{equation*}

Here is my attempt;



First I found that $3$ is a primitive root $\pmod{17}$, i.e. $3$ has order $16$, so $3^{16} \equiv 1 \quad (\text{mod }17)$.



This means that we can write $16 \equiv 3^{8} \quad (\text{mod }17)$. So we have \begin{equation*}
x^{12} \equiv 3^{8} \quad (\text{mod }17)
\end{equation*}

Then multiplying the congruence by $3^{16}$ we see that

\begin{equation*}
x^{12} \equiv 3^{24} \quad (\text{mod } 17)
\end{equation*}

We see that $x=9$ is a solution because $9=3^2$.



To find the remaining solution I think we need to have \begin{equation*}
x^{12} \equiv 3^{8+16k} \quad (\text{mod }17)
\end{equation*}

for $k \in \mathbb{Z}/17\mathbb{Z}$.




So we need $12|(8+16k)$.
However, I'm not sure about my last argument that $12|(8+16k)$. Is it right or wrong?
Any help is appreciated.


Answer



$k$ would be in $\mathbb{Z}$. Also note that both $12$ and $16k+8$ are divisible by $4$, so $3$ would need to divide $4k+2$. Working mod $3$, we get $k$ congruent to $1$ mod $3$. $k=1$ recovers your solution $x=9$; $k=4$ gives $15$, $k=7$ gives $8$, and $k=10$ gives $2$. Your intuition works, but your reduction could have gone further.
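A brute-force confirmation of these four solutions (a minimal Python check):

    # all x in {0, ..., 16} with x^12 = 16 (mod 17)
    print([x for x in range(17) if pow(x, 12, 17) == 16])   # [2, 8, 9, 15]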


Sunday 25 February 2018

real analysis - Evaluate $\lim_{n\rightarrow\infty} \frac{a_n}{b_n}$



Let $a_n$ and $b_n$ be recursive sequences with seed values $a_0=0,a_1=1$, $b_0=1$ and $b_1=2$ such that




$$\begin{align} \\ &a_{n+1}=(4n+2)a_n+a_{n-1}\\\\&b_{n+1}=(4n+2)b_n + b_{n-1} \end{align}$$



Find $\displaystyle\lim_{n\rightarrow\infty} \frac{a_n}{b_n}$. (Ans. $\frac{e-1}{e+1}$)



I don't know how to start. Any help would be appreciated.


Answer



Hint: the continued fraction of $\tanh\left(\frac{1}{2}\right)$ is given by:
$$ \tanh\left(\frac{1}{2}\right)=[0;2,6,10,14,18,22, 26, 30, 34, 38, 42,\ldots]\tag{1}$$
due to Gauss' continued fraction, and your sequence $\left\{\frac{a_n}{b_n}\right\}_{n\geq 1}$ is just the sequence of convergents of the RHS of $(1)$.







If you change the initial values $a_0,a_1,b_0,b_1$, the limit takes the form $\frac{a+bz}{c+dz}$ with $z=\tanh\left(\frac{1}{2}\right)$ by the general theory of continued fractions.
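A short numerical check of the hint (a Python sketch iterating both recurrences):

    import math

    a_prev, a_cur = 0, 1          # a_0, a_1
    b_prev, b_cur = 1, 2          # b_0, b_1
    for n in range(1, 25):
        a_prev, a_cur = a_cur, (4 * n + 2) * a_cur + a_prev
        b_prev, b_cur = b_cur, (4 * n + 2) * b_cur + b_prev

    print(a_cur / b_cur)                                  # ~0.46211715726
    print(math.tanh(0.5), (math.e - 1) / (math.e + 1))    # the same value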


functions - Show that $\frac{1}{1+k}=\frac{\frac{1}{k}}{1+\frac{1}{k}}\leq \ln(1+\frac{1}{k})\leq\frac{1}{k}$





Prove the following:
$$\frac{1}{1+k}=\frac{\frac{1}{k}}{1+\frac{1}{k}}\leq \ln(1+\frac{1}{k})\leq\frac{1}{k}$$



I know I can prove it with induction if the values were naturals. However, the "problem" for me is that they're real.



Answer



For all $x \in \mathbb{R}$,
$$e^x \geq 1 + x$$
Taking log on both sides we get,
$$\ln (1 + x) \leq x, \forall x > -1$$
Substituting $x = \frac{1}{k}, k \notin [-1, 0]$, we get,
$$\displaystyle{\ln \left(1 + \frac{1}{k}\right) \leq \frac{1}{k}}$$
Substituting $x = \frac{-1}{k + 1}, k \notin [-1, 0]$, we get,
$$\ln \left(\frac{k}{k + 1}\right) \leq \frac{-1}{k + 1}\Rightarrow \ln \left(1 + \frac{1}{k}\right) \geq \frac{1}{k + 1} $$


linear algebra - Prove that a square matrix commutes with its inverse




The Question:



This is a very fundamental and commonly used result in linear algebra, but I haven't been able to find a proof or prove it myself. The statement is as follows:




let $A$ be an $n\times n$ square matrix, and suppose that $B=\operatorname{LeftInv}(A)$ is a matrix such that $BA=I$. Prove that $AB=I$. That is, prove that a matrix commutes with its inverse, that the left-inverse is also the right-inverse





My thoughts so far:



This is particularly annoying to me because it seems like it should be easy.



We have a similar statement for group multiplication, but the commutativity of inverses is often presented as part of the definition. Does this property necessarily follow from the associativity of multiplication? I've noticed that from associativity, we have
$$
\left(A\operatorname{LeftInv}(A)\right)A=A\left(\operatorname{LeftInv}(A)A\right)
$$
But is that enough?




It might help to talk about generalized inverses.


Answer



Your notation $A^{-1}$ is confusing because it makes you think of it as a two-sided inverse, but we only know it's a left-inverse.



Let's call $B$ the matrix so that $BA=I$. You want to prove $AB=I$.



First, you need to prove that there is a $C$ so that $AC=I$. To do that, you can use the determinant, but there must be another way. [EDIT] There are several methods here. The simplest (imo) is the one using the fact that the matrix has full rank. [/EDIT]



Then you have that $B=BI=B(AC)=(BA)C=IC=C$ so you get $B=C$ and therefore $AB=I$.


Saturday 24 February 2018

Prove $\frac{n(n+1)}{2}$ by induction, triangular numbers





Prove that the $n$-th triangular number is:




$$T_n = \dfrac{n(n+1)}{2}$$




I did this:



Base case: $\frac{1(1+1)}{2}=1$, which is true.



Then I assumed that $T_k=\frac{k(k+1)}{2}$ is true.




$$T_{k+1} = \frac{(k+1)(k+1+1)}{2}$$



I'm not sure what to do next. What is this supposed to be equal to?


Answer



You have to think about the nature of the triangular numbers: the $n$-th triangular number is the number of dots created by $n$ layers of dots stacked upon each other: the first (top) layer has $1$ dot, the next (below it) has $2$ dots, etc. The $n$-th and last layer of the $n$-th triangular number has $n$ dots:



(figure: the first few triangular numbers drawn as stacked rows of dots)



Now, your inductive hypothesis is that the $k$-th triangular number consists of $\frac{k(k+1)}{2}$ dots, i.e. that




$$T_k = \frac{k(k+1)}{2}$$



Using that hypothesis, you have to show that the $k+1$-th triangular number has $\frac{(k+1)(k+2)}{2}$ dots. But note: the $k+1$-th triangular number adds a layer of $k+1$ dots to the $k$-th triangular number. That is, we know that:



$$T_{k+1}=T_k +(k+1)$$



So, use that fact, together with the inductive hypothesis, to show what you need to show, i.e. that



$$T_{k+1}=\frac{(k+1)(k+2)}{2}$$
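For completeness, the computation the answer is pointing at is a single line:

$$T_{k+1}=T_k+(k+1)=\frac{k(k+1)}{2}+(k+1)=\frac{k(k+1)+2(k+1)}{2}=\frac{(k+1)(k+2)}{2}$$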


linear algebra - Are there matrices such that $AB=I$ and $BA \neq I$




Are there matrices such that $AB=I$ and $BA \neq I$ ?




$A$ and $B$ are square matrices


Answer



With square matrices, no. With non-square matrices, it's perfectly possible. For example,
\begin{align*}
A &= \left(\begin{array}{ccc}1 & 0 & 0 \\ 0 & 1 & 0\end{array}\right) \\
B &= \left(\begin{array}{cc}1 & 0 \\ 0 & 1 \\ 0 & 0\end{array}\right)
\end{align*}
Note that, indeed, we cannot have non-square matrices where $AB$ and $BA$ are identities of appropriate dimensions, because that would imply the existence of an isomorphism between spaces of different dimensions!
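Here is the same example checked numerically (a sketch assuming NumPy):

    import numpy as np

    A = np.array([[1, 0, 0],
                  [0, 1, 0]])
    B = np.array([[1, 0],
                  [0, 1],
                  [0, 0]])
    print(np.array_equal(A @ B, np.eye(2)))   # True:  AB is the 2x2 identity
    print(np.array_equal(B @ A, np.eye(3)))   # False: BA is a rank-2 projection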


field theory - Prove that $[\mathbb{Q}(\sqrt[r]{p_1},\cdots ,\sqrt[r]{p_n}):\mathbb{Q}]=r^n$

We have $n$ distinct prime numbers $p_1,\cdots ,p_n$ and I am asked to show that $[\mathbb{Q}(\sqrt[r]{p_1},\cdots ,\sqrt[r]{p_n}):\mathbb{Q}]=r^n$ where $r\in \mathbb{N}$.



I tried to solve it by induction. The case $n=1$ is trivial. If we let



$F=\mathbb{Q}(\sqrt[r]{p_1},\cdots ,\sqrt[r]{p_n})$



$E=\mathbb{Q}(\sqrt[r]{p_1},\cdots ,\sqrt[r]{p_n},\sqrt[r]{p_{n+1}})$



$L=\mathbb{Q}(\sqrt[r]{p_2},\cdots ,\sqrt[r]{p_n})$




then by the inductive hypothesis $[F:\mathbb{Q}]=r^n$, $[L:\mathbb{Q}]=r^{n-1}$, $[F:L]=r$. We want to show that $[E:F]=r$. We know that $[E:F]\leq r$. If $[E:F]<r$, then the minimal polynomial of $\sqrt[r]{p_{n+1}}$ over $F$ is a proper factor of $X^r-p_{n+1}$, i.e. a product

$\prod_{i\in I\subset \left \{0,\cdots ,r-1\right \}}\left(X-\sqrt[r]{p_{n+1}}\zeta_r ^i\right)$ where $\left |I\right |=m<r$, and its constant term shows (up to a root of unity) that $\sqrt[r]{p_{n+1}^m}\in F$.

Therefore $\sqrt[r]{p_{n+1}^m}=\sum_{i=0}^{r-1}a_i\left (\sqrt[r]{p_1}\right )^i$ where $a_i\in L$.



I don't know how to continue, I don't even know if I am doing the right thing.



Is it true that the trace over $F/L$ of $\sqrt[r]{p_{1}^i}$ equals $0$ for every $i\neq 0$? Because in that case taking trace over $F/L$ we would obtain $0=ra_0$, thereby $a_0=0$ and I think that could be helpful.

Friday 23 February 2018

calculus - The limit as $n$ approaches infinity of $n(a^{1/n}-1)$



I need to know how to calculate this without using l'Hôpital's rule:



limit as $x$ approaches infinity of: $$x(a^{1/x}-1)$$



I saw that the answer is $\log(a)$, but I want to know how they got it.

The book implies that I should be able to find it by just using algebraic manipulation and substitution.


Answer



METHOD 1:



$$\begin{align}
\lim_{x\to \infty}x(a^{1/x}-1)&=\lim_{x\to 0^{+}}\frac{a^{x}-1}{x}\tag 1\\\\
&=\lim_{x\to 0^{+}}\frac{e^{x\log a}-1}{x}\\\\
&=\lim_{x\to 0^{+}}\frac{\left(1+(\log a)x+O( x^2)\right)-1}{x}\\\\
&=\lim_{x\to 0^{+}}\left(\log a+O(x)\right)\\\\
&=\log a
\end{align}$$






METHOD 2:



Another way to do this is to substitute $y=a^x$ in $(1)$. Then



$$\begin{align}
\lim_{x\to \infty}x(a^{1/x}-1)&=\lim_{x\to 0^{+}}\frac{a^{x}-1}{x}\\\\
&=\lim_{y\to 1^{+}}\frac{y-1}{\log y/\log a}\\\\
&=\log a\,\lim_{y\to 1^{+}}\frac{y-1}{\log y}
\end{align}$$



Note that for $y>1$, $\frac{y-1}{y}\le\log y\le y-1$. Then,



$$1\le\frac{y-1}{\log y}\le y$$



and the squeeze theorem does the rest!
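A quick numeric check of the limit (a Python sketch; the value of $a$ is arbitrary):

    import math

    a = 5.0
    for x in (10.0, 1e3, 1e6):
        print(x * (a ** (1 / x) - 1))   # approaches log(a)
    print(math.log(a))                  # ~1.6094379124341003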


algebra precalculus - How to find the magnitude squared of square root of a complex number



I'm trying to simplify the expression




$$\left|\sqrt{a^2+ibt}\right|^2$$



where $a,b,t \in \Bbb R$.



I know that by definition



$$\left|\sqrt{a^2+ibt}\right|^2 = \sqrt{a^2+ibt}\left(\sqrt{a^2+ibt}\right)^*$$



But how do you find the complex conjugate of the square root of a complex number? And what is the square root of a complex number (with arbitrary parameters) for that matter?



Answer



For any complex number $z$, and any square root $\sqrt{z}$ of $z$ (there are two), we have
$$\bigl|\sqrt{z}\bigr|=\sqrt{|z|}$$
Therefore
$$\bigl|\sqrt{a^2+ibt}\bigr|^2=\sqrt{|a^2+ibt|^2}=|a^2+ibt| = \sqrt{a^4+b^2t^2}$$


number theory - last digit is a square and.....



I've found some solutions for this question but they were not impressive.



Question:



How many natural numbers are there in base $10$ whose last digit is a perfect square, whose last two digits form a perfect square, whose last three digits form a perfect square, $\ldots$, and whose last $n$ digits form a perfect square?




For example $64$ is a number whose last digit is a perfect square and combination of last two digits is also a perfect square.



Kindly tell me how to approach this question.


Answer



Answers are of the forms:



(i)$4\times10^n$



(ii)$9\times10^n$




(iii)$10^n$



(iv)$49\times10^n$



(v)$64\times10^n$



(vi)$81\times10^n$



where $n \in \{0,2,4,6,\ldots\}$


Thursday 22 February 2018

probability - Expected number of rolls when repeatedly rolling an $n$-sided die




Suppose I roll an $n$-sided die once. Now you repeatedly roll the die until you roll a number at least as large as I rolled. What is the expected number of rolls
you have to make?



I know the answer to this problem, but I'm curious about possible solutions people might post.


Answer



If you roll a $k$, then there are $n-k+1$ possible numbers out of $n$ that will be greater than or equal to $k$. This gives rise to a geometric distribution, and so the expected number of rolls required after rolling a $k$ is $\frac{n}{n-k+1}$. Averaging over all $k$, the expected number of rolls will be $$\mathbb{E}=\frac{1}{n}\sum_{k=1}^n \frac{n}{n-k+1}=H_{n}$$ where $H_n$ is the $n^{th}$ harmonic number.
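A Monte Carlo check of this answer (a Python sketch; for $n=6$ the exact value is $H_6=2.45$):

    import random

    def rolls_needed(n):
        target = random.randint(1, n)            # my single roll
        count = 1
        while random.randint(1, n) < target:     # your rolls, until >= target
            count += 1
        return count

    n, trials = 6, 200_000
    estimate = sum(rolls_needed(n) for _ in range(trials)) / trials
    print(estimate, sum(1 / k for k in range(1, n + 1)))   # ~2.45 vs H_6 = 2.45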


Wednesday 21 February 2018

real analysis - Does $\lim_{x\rightarrow 0}\frac{\ln(x)}{\cot(x)}$ exist or not?



I stumbled on the following limit in a calculus textbook today:




\begin{equation*}
\lim_{x\rightarrow 0}\frac{\ln(x)}{\cot(x)}
\end{equation*}



According to the book's solutions and Mathematica, this limit exists and is equal to 0. I can see why $0$ is obtained using l'Hôpital's rule twice:



\begin{equation*}
...=\lim_{x\rightarrow0}\frac{\left(\frac{1}{x}\right)}{-\csc^2(x)}=-\lim_{x\rightarrow0}\frac{\sin^2(x)}{x}=-\lim_{x\rightarrow0}\frac{2\sin(x)\cos(x)}{1}=-2\sin(0)\cos(0)=0
\end{equation*}




If I recall correctly, l'Hôpital's rule is applicable when we have:
\begin{equation*}
\lim_{x\rightarrow a}\frac{f(x)}{g(x)}
\end{equation*}
even if $f$ and $g$ are not differentiable at precisely $a$, so there should be no issue in using it on the above limit.



However, I can't reconcile the fact that $\ln(x)$ is defined over $]0,+\infty[$ (and usually, only $\lim_{x\rightarrow0^+}\ln(x)$ exists) with the fact that the above limit exists (both as $x\rightarrow0^+$ and as $x\rightarrow0^{-}$).



It seems to me that only




\begin{equation*}
\lim_{x\rightarrow 0^+}\frac{\ln(x)}{\cot(x)}
\end{equation*}



should exist and thus the "bilateral limit" (with $x\rightarrow 0$) does not exist since the limit with $x\rightarrow0^-$ doesn't.



Is there something I am missing?


Answer



Yes, you are right: since $\ln x$ is defined for $x>0$, the limit for $x\to 0^-$ is meaningless and only




$$\begin{equation*}
\lim_{x\rightarrow 0^+}\frac{\ln x }{\cot x }
\end{equation*}$$



can be considered.



Note that this doesn't mean in general that the limit considered exists.



For example




\begin{equation*}
\lim_{x\rightarrow 0^+}\frac{\ln x }{\sin \frac1x}
\end{equation*}



can be considered but does not exist.


integration - $\int e^{-x^2}\,dx$








How does one integrate $\int e^{-x^2}\,dx$? I read somewhere to use polar coordinates.



How is this done? What is the easiest way?

calculus - Why does $\lim_{x\rightarrow 0}\frac{\sin(x)}x=1$?





I am learning about the derivative function of $\frac{d}{dx}[\sin(x)] = \cos(x)$.



The proof stated: From $\lim_{x \to 0} \frac{\sin(x)}{x} = 1$...



I realized I don't know why, so I wanted to learn why part is true first before moving on. But unfortunately I don't have the complete note for this proof.





  1. It started with a unit circle, and then drew a triangle at $(1, \tan(\theta))$

  2. It show the area of the big triangle is $\frac{\tan\theta}{2}$

  3. It show the area is greater than the sector, which is $\frac{\theta}{2}$
    Here is my question, how does this "section" of the circle equal to $\frac{\theta}{2}$? (It looks like a pizza slice).

  4. From there, it stated the area of the smaller triangle is $\frac{\sin(\theta)}{2}$. I understand this part. Since the area of the triangle is $\frac{1}{2}(\text{base} \times \text{height})$.


  5. Then they multiply each expression by $\frac{2}{\sin(\theta)}$ to get
    $\frac{1}{\cos(\theta)} \ge \frac{\theta}{\sin(\theta)} \ge 1$





And the incomplete notes ended here, I am not sure how the teacher go the conclusion $\lim_{x \to 0} \frac{\sin(x)}{x} = 1$. I thought it might be something to do with reversing the inequality... Is the answer obvious from this point? And how does step #3 calculation works?


Answer



Draw the circle of radius $1$ centered at $(0,0)$ in the Cartesian plane.



Let $\theta$ be the length of the arc from $(1,0)$ to a point on the circle. The radian measure of the corresponding angle is $\theta$ and the height of the endpoint of the arc above the coordinate axis is $\sin\theta$.



Now look at what happens when $\theta$ is infinitesimally small. The length of the arc is $\theta$ and the height is also $\theta$, since that infinitely small part of the circle looks like a vertical line (you're looking at the neighborhood of $(1,0)$ under a microscope).



Since $\theta$ and $\sin\theta$ are the same when $\theta$ is infinitesimally small, it follows that $\dfrac{\sin\theta}\theta=1$ when $\theta$ is infinitesimally small.




That is how Leonhard Euler viewed the matter in the 18th century.



Why does the sector of the circle have area $\theta/2$?



The whole circle has area $\pi r^2=\pi 1^2 = \pi$. The fraction of the circle in the sector is
$$
\frac{\text{arc}}{\text{circumference}} = \frac{\theta}{2\pi}.
$$
So the area is
$$
\frac \theta {2\pi}\cdot \pi = \frac\theta2.
$$


elementary number theory - Does the Extended Euclidean Algorithm always return the smallest coefficients of Bézout's identity?

Bezout's identity says that there are integers $x$ and $y$ such that $ax + by = \gcd(a, b)$ and the Extended Euclidean Algorithm finds a particular solution. For instance,



$333\cdot-83 + 1728\cdot16 = \gcd(333, 1728) = 9$.



Will the Extended Euclidean Algorithm always return the smallest $x$ and $y$ that satisfy the identity? By small, I mean that $|x|, |y|$ are minimized.



Because $333\cdot(-83 + 1728t) + 1728\cdot(16 - 333t) = \gcd(333, 1728)$ is also a solution for all $t \in \mathbf{Z}$.
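For reference, here is a minimal extended-Euclid sketch (plain Python; ext_gcd is just an illustrative name) that reproduces the particular coefficients quoted above. Whether such coefficients are always the minimal ones is exactly what is being asked:

    def ext_gcd(a, b):
        # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
        if b == 0:
            return a, 1, 0
        g, x, y = ext_gcd(b, a % b)
        return g, y, x - (a // b) * y

    print(ext_gcd(333, 1728))   # (9, -83, 16): 333*(-83) + 1728*16 = 9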

Tuesday 20 February 2018

calculus - Spivak Continuity problem



Suppose that $f$ is continuous from the right at $a$ and there exists $\delta>0$ s.t. if $a<x<a+\delta$ then $f(x)>0$. Prove that $f(a)\geqslant0$.



In Spivak he proves this theorem in ch.7 and so I used it in this proof:




If $f$ is continuous on $[a,b]$ and $f(a)<0<f(b)$, then there is some $x$ in $[a,b]$ such that $f(x)=0$.

My attempt at a proof using this theorem and contradiction.



Suppose $f(a)<0$. If $f$ is continuous from the right, it is continuous on the interval $[a,a+\delta)$, on which $f(x)>0$ for $x>a$. By the theorem there would then be some point where $f$ vanishes, contradicting $f(x)>0$. Thus $f(a)\geqslant0$.



My questions are: can I extend the theorem Spivak gives to open intervals? And since I'm only given that the function $f$ is continuous from the right, does my contradiction using $f(a)<0$ work, since I don't necessarily know what happens to the function past $a$? Thanks.


Answer



Your proof uses the fact that $f$ is continuous on an interval to the right of $a$, but that is not given: you only know that $f$ is continuous from the right at the single point $a$.


Do not use non-trivial theorems to prove trivial results. Here is how one can prove the result in question. Let $f(a) <0$. Since $f$ is continuous at $a$ from the right there is a $\delta_{1}>0$ such that $$|f(x) - f(a)|<-\frac{f(a)} {2}$$ whenever $a\leq x<a+\delta_{1}$. For such $x$ we then have $f(x)<f(a)-\frac{f(a)}{2}=\frac{f(a)}{2}<0$, contradicting the hypothesis that $f(x)>0$ for $a<x<a+\delta$. Hence $f(a)\geqslant 0$.




I add an easier, informal but rigorous version. Since $f$ is continuous from right at $a$, values of $f$ to the right of and near $a$ are near to $f(a) $ and if $f(a) $ is negative then these values of $f$ are also negative (numbers close to a negative number are negative). This contradicts that values of $f$ to the right of and near $a$ are positive.



Ideally one should be able to think informally as above and if needed produce the formal version instantly. Most of the proofs in analysis deal with such informal thinking and their translation into their boring formal counterparts.


number theory - Summation of logs



Are there any useful identities for quickly calculating the sum of consecutive logs? For example $\sum_{k=1}^{N} \log(k)$ or something to this effect. I should add that I am writing code to do this (as opposed to doing this on a calculator) so $N$ can be very large.


Answer



For large $N$, we have $N!\approx N^Ne^{-N}\sqrt {2\pi N}$ (Stirling formula) and hence
$$\sum_{k=1}^N\ln k\approx\left( N+\frac12\right)\ln N-N+\frac12\ln(2\pi).$$
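Since the question mentions code: the exact value is available through the log-gamma function, because $\sum_{k=1}^N\ln k=\ln N!$. A Python sketch comparing it with the Stirling approximation above:

    import math

    def stirling(N):
        return (N + 0.5) * math.log(N) - N + 0.5 * math.log(2 * math.pi)

    for N in (10, 10**6):
        exact = math.lgamma(N + 1)     # ln(N!) = sum of ln k, with no overflow
        print(N, exact, stirling(N))   # the two values agree closely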



Monday 19 February 2018

limits - Proving that $\lim\limits_{x \to 0}\frac{e^x-1}{x} = 1$




I was messing around with the definition of the derivative, trying to work out the formulas for the common functions using limits. I hit a roadblock, however, while trying to find the derivative of $e^x$. The process went something like this:



$$\begin{align}
(e^x)' &= \lim_{h \to 0} \frac{e^{x+h}-e^x}{h} \\
&= \lim_{h \to 0} \frac{e^xe^h-e^x}{h} \\
&= \lim_{h \to 0} e^x\frac{e^{h}-1}{h} \\
&= e^x \lim_{h \to 0}\frac{e^h-1}{h}
\end{align}
$$




I can show that $\lim_{h\to 0} \frac{e^h-1}{h} = 1$ using L'Hôpital's, but it kind of defeats the purpose of working out the derivative, so I want to prove it in some other way. I've been trying, but I can't work anything out. Could someone give a hint?


Answer



As to your comment:



Consider the differential equation



$$y - \left( {1 + \frac{x}{n}} \right)y' = 0$$



Its solution is clearly $$y_n={\left( {1 + \frac{x}{n}} \right)^n}$$




If we let $n \to \infty$ "in the equation" one gets



$$y - y' = 0$$



One should expect that the solution to this is precisely



$$\lim_{n \to \infty} y_n =y=\lim_{n \to \infty} \left(1+\frac x n \right)^n := e^x$$



Also note
$$\mathop {\lim }\limits_{n \to \infty } {\left( {1 + \frac{x}{n}} \right)^n} = \mathop {\lim }\limits_{n \to \infty } {\left( {1 + \frac{x}{{xn}}} \right)^{xn}} = \mathop {\lim }\limits_{n \to \infty } {\left[ {{{\left( {1 + \frac{1}{n}} \right)}^n}} \right]^x}$$




My approach is the following:



I have as a definition of $\log x$ the following:



$$\log x :=\lim_{k \to 0} \frac{x^k-1}{k}$$



Another one would be



$$\log x = \int_1^x \frac{dt}t$$

Any ways, the importance here is that one can define $e$ to be the unique number such that



$$\log e =1$$



so that by definition



$$\log e =\lim_{k \to 0} \frac{e^k-1}{k}=1$$



From another path, we can define $e^x$ as the inverse of the logarithm. Since




$$(\log x)'=\frac 1 x$$



the inverse derivative theorem tells us



$$(e^x)'=\frac{1}{(\log y)'}$$



where $y=e^x$



$$(e^x)'=\frac{1}{(1/y)}$$




$$(e^x)'=y=e^x$$



Then, looking at the difference quotient, one sees that by definition one needs



$$\mathop {\lim }\limits_{h \to 0} \frac{{{e^{x + h}} - {e^x}}}{h} = {e^x}\mathop {\lim }\limits_{h \to 0} \frac{{{e^h} - 1}}{h} = {e^x}$$



so that the limit of the expression is $1$. One can also retrieve from the definition of the logarithm that



$$\eqalign{
& \frac{x}{{x + 1}} <\log \left( {1 + x} \right) < x \cr
& \frac{1}{{x + 1}} < \frac{{\log \left( {1 + x} \right)}}{x} <1 \cr} $$



Thus



$$\mathop {\lim }\limits_{x \to 0} \frac{{\log \left( {1 + x} \right)}}{x} = 1$$



a change of variables $e^h-1=x$ gives the result you state. In general, we have to go back to the definition of $e^x$. If one defines
$${e^x} = 1 + x + \frac{{{x^2}}}{2} + \cdots $$



Then




$$\frac{{{e^x} - 1}}{x} = 1 + \frac{x}{2} + \cdots $$



$$\mathop {\lim }\limits_{x \to 0} \frac{{{e^x} - 1}}{x} = \mathop {\lim }\limits_{x \to 0} \left( {1 + \frac{x}{2} + \cdots } \right) = 1$$



from the defintion we just chose.


elementary number theory - Prove that $\{ ax+by\mid x,y\in\mathbb Z\} = \{ n(a,b) \mid n\in\mathbb Z\}$

Prove the following proposition:



Suppose $a,b$ are fixed integers. Then $\{ ax+by\mid x,y\in\mathbb Z\} = \{ n(a,b) \mid n\in\mathbb Z\}$.

real analysis - Find a differentiable $f$ such that $f'$ is not continuous.





I'm trying to solve this problem:




Find a differentiable function $f:\mathbb{R} \longrightarrow \mathbb{R}$ such that $f':\mathbb{R} \longrightarrow \mathbb{R}$ is not continuous at any point of $\mathbb{R}$.





Any hints would be appreciated.


Answer



You are looking for a derivative that is discontinuous everywhere on $\Bbb R$. Such a function doesn't exist. Since $f'$ is the pointwise limit of continuous functions, it is a Baire class $1$ function. A theorem of Baire says that the set of discontinuities of $f'$ is a meager subset of $\Bbb R$.


Sunday 18 February 2018

Solving the exponential equation $x^2 = e^{-mx}\cdot k$




I just had this problem come up at work, as part of a simulation where I had to solve the equation mentioned above (where $m$ and $k$ are constants). I googled solving exponential equations and I got so far as realizing that I need to log both sides of the equation resulting in:



$$2\ln x = -mx + \ln k$$



The above form of the equation seems more intractable than the first and am at a loss regarding how to proceed. Can someone please give me a hint as to the way forward?


Answer



When you have an exponential and a linear or quadratic function in the same equation, you generally need the Lambert $\operatorname{W}$ function, which is defined as the inverse function of $f(x)=xe^{x}$.
$$2\ln x=-mx+\ln k$$
$$x^2=e^{-mx}k$$

$$e^{-mx}k=x^2$$
$$e^{-mx}x^{-2}=\dfrac1k$$
$$e^{\dfrac{mx}2}x=\sqrt k$$
$$e^{\dfrac{mx}2}\dfrac{mx}2=\dfrac{m\sqrt k}2$$
$$\dfrac{mx}2=\operatorname{W}_j\left(\dfrac{m\sqrt k}2\right),\qquad j\in\mathbb{Z}$$
$$x=\dfrac{2\operatorname{W}_j\left(\dfrac{m\sqrt k}2\right)}{m}$$
(here $j$ indexes the branch of $\operatorname{W}$ and is unrelated to the constant $k$; real solutions come from the branches $j=0$ and $j=-1$)
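A numeric sanity check (a sketch assuming SciPy; the constants $m$ and $k$ are chosen arbitrarily, and the principal branch $W_0$ gives one real solution):

    import math
    from scipy.special import lambertw

    m, k = 2.0, 5.0
    x = 2 * lambertw(m * math.sqrt(k) / 2, 0).real / m   # branch W_0
    print(x**2, k * math.exp(-m * x))                    # both sides of x^2 = k e^{-mx} agree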


calculus - change of variables for definite integrals




First of all I would like to start off by asking why they have different change-of-variable formulas for definite integrals than for indefinite ones... why can't we just integrate using u-substitution as we normally do for an indefinite integral, then substitute the original value of $u$ back, and use that antiderivative for the definite integral?




I was at one point understanding integration but not when they started coming up with different formulas for definite integrals in U-substitution I got lost and resulted to just forcibly memorizing the formulas...



I don't get why, for u-substitution, they substitute the upper and lower bounds into $u$ from the original function to find the new upper and lower bounds for the function $u$.



I know that because if you dont want to sub the original value of U in and want to instead stick to U as your function you must use those new upper and lower bound but if you sub in the original value for U then you can use your old upper and lower bound values.



My question is: how does plugging your old lower and upper bound values into $u$ give you the new bounds for your new function that's expressed as $u$?



Why do they make such a big deal out of it and complicate it, when all they have to do is the same u-substitution as for an indefinite integral and then plug the original value of $u$ in and go from there? Are these math people just making excuses to come up with more work, or is there more logic behind it?

real analysis - Infinity norm related to $L_p(X)$ norm on finite measure space $X$.

Let $(X, \cal M, \mu)$ be a measure space with $\mu(X)<\infty$. Why
\begin{align}
\lim_{p\to\infty}\|f\|_p=\|f\|_\infty
\end{align}
for all $f\in L_\infty$?

Saturday 17 February 2018

modular arithmetic - What is $26^{32}\bmod 12$?




What is the correct answer to this expression:



$26^{32} \pmod {12}$



When I tried in Wolfram Alpha the answer is $4$, this is also my answer using Fermat's little theorem, but in a calculator the answer is different, $0.$


Answer



First, note that $26 \equiv 2 \pmod {12}$, so $26^{32} \equiv 2^{32} \pmod {12}$.




Next, note that $2^4 \equiv 16 \equiv 4 \pmod {12}$, so $2^{32} \equiv \left(2^4\right)^8 \equiv 4 ^8 \pmod {12}$, and $4^2 \equiv 4 \pmod {12}$.



Finally, $4^8 \equiv \left(4^2\right)^4 \equiv 4^4 \equiv \left(4^2\right)^2 \equiv 4^2 \equiv 4 \pmod {12}$.



Then we get the result.



There are slicker solutions with just a few results from elementary number theory, but this is a very basic method which should be easy enough to follow.
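The calculator's $0$ is a floating-point artifact: $26^{32}\approx 1.9\times 10^{45}$ has far more digits than a calculator keeps. Exact integer arithmetic confirms the result (a Python sketch):

    print(pow(26, 32, 12))   # 4 (modular exponentiation never forms the huge number)
    print(26**32 % 12)       # 4 (exact big-integer arithmetic agrees)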


combinatorics - $n_{p,k} = \frac{1}{p}\binom{p}{k}$ for counting $k$-element subsets of $\left\{1,2,\ldots,p\right\}$ with sum divisible by $p$




Let $p$ be prime and $k$ be an integer where $0<k<p$.



Let $n_{p,k}$ denote the number of subsets $S$ of $\{1, 2, ..., p\}$ such that $\left|S\right| = k$ and such that the sum of all the elements in $S$ is divisible by $p$.



Show that $n_{p,k} = \dfrac{1}{p}\dbinom{p}{k}$.





Attempted work :



Let $l$ be positive integer such that $kl \equiv 1\; (\bmod p)$



and $\{a_1, a_2, ..., a_k\} \in \mathcal F_a(p,k)$, where $\mathcal F_a(p,k)$ denotes the family of $k$-element subsets of $\{1,2,\ldots,p\}$ whose elements sum to $a$ modulo $p$.



Let $f_{a,b} : \mathcal F_a(p,k) \to \mathcal F_b(p,k)$ defined by



$f(\{a_1, a_2, ..., a_k\}) = \{b_1, b_2, ..., b_k\}$ , where




$a_1 + l(a-b) \equiv b_1\; (\bmod p)\;\;, \; 0\leq b_1 < p$



$a_2 + l(a-b) \equiv b_2\; (\bmod p)\;\;, \; 0\leq b_2 < p$



.



.



$a_k + l(a-b) \equiv b_k\; (\bmod p)\;\;, \; 0\leq b_k < p$




1) To show that $f$ is injective



Let $f(\{a_1, a_2, ..., a_k\}) = f(\{a'_1, a'_2, ..., a'_k\})$



so $\{b_1, b_2, ..., b_k\} = f(\{a'_1, a'_2, ..., a'_k\})$



we obtain $a'_i + l(a-b) \equiv b_i\; (\bmod p)$



and $a'_i \equiv a_i\; (\bmod p)$




so $a'_i = a_i , \;\forall i = 1, 2, ..., k$, thus $f$ is injective.



2) To prove that $f$ is surjective



Let $\{b_1, b_2, ..., b_k\} \in \mathcal F_b(p,k)$



Choose



$a_1 \equiv b_1- l(a-b) \; (\bmod p)$




$a_2 \equiv b_2- l(a-b) \; (\bmod p)$



.



.



$a_k \equiv b_k- l(a-b) \; (\bmod p)$, where $\gcd(l,p) = 1$



so, $f(\{a_1,\dots,a_k\}) = \{b_1, \dots, b_k\}$, thus $f$ is surjective.




Therefore $f$ is bijective, $|\mathcal F_a(p,k)|= |\mathcal F_b(p,k)|\;\forall a, b$ so $|\mathcal F_0(p,k)|= \frac{1}{p}\binom{p}{k}$.


Answer



Let $X_k$ be the set of $k$-element subsets of $\{1,2,\ldots,p\}$. Let the cyclic group $G=\mathbf{Z}/p\mathbf{Z}$ act on $X_k$ by translation: that is, for $a\in G$, we define
$$\sigma_a(\{x_1,\dots, x_k\})
= \{x_1+a,\ldots,x_k+a\}$$

where addition is mod $p$. This makes sense since translation preserves distinctness.



So far this makes sense for any $p$. But when $p$ is prime, this action is free: that is, if $a\ne0$, there is no $k$-tuple mapped to itself by $\sigma_a$. A cute way to see this is to look at the (mod $p$) sum of the elements of a subset. If we apply $\sigma_a$ to a subset whose sum is $s$, the sum of the resulting elements is $s+ak$ (mod $p$). If the subset is fixed by $\sigma_a$, then $s+ak=s$ (mod $p$), so $ak=0$ (mod $p$). But $k$ is not divisible by $p$, hence (since $p$ is prime) $a=0$ (mod $p$), and we see the action is free.




So each orbit of the action has size $p$ (hence $\binom pk$ is divisible by $p$) and furthermore each possible sum appears exactly once in each orbit, for a total of $\frac1p\binom pk$ times each.



(Added in response to OP comment:)



At a high school level, and expressed without group theory, we are arguing that when $p$ is prime, the following group of $p$ subsets
$$
\begin{array}{lcl}
\{x_1,&\dots,&x_k\}\\
\{x_1+1,&\ldots,&x_k+1\} \mod p\\
\vdots&\vdots&\vdots\\
\{x_1+p-1,&\ldots,&x_k+p-1\} \mod p
\end{array}
$$

all have different sums mod $p$, so each possible sum appears exactly once. (This also means the subsets in the group are actually different.) As $X_k$ is the disjoint union of groups of this kind, that means the possible sums occur equally often as you look across $X_k$.



If you're in high school, there are a few details to fill in, so you should do that. But the other thing to do would be to learn some basic group theory, which is accessible to any high school student (some experience with proofs helps, but group theory is as good as geometry for studying mathematical argument anyway). Once you check that what you have is a group acting freely on a set, a lot of those details come for free, and the structure of the argument becomes much more transparent.
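A brute-force check of $n_{p,k}=\frac1p\binom pk$ for a small case (a Python sketch; math.comb needs Python 3.8+):

    from itertools import combinations
    from math import comb

    p, k = 7, 3
    count = sum(1 for s in combinations(range(1, p + 1), k) if sum(s) % p == 0)
    print(count, comb(p, k) // p)   # 5 5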


Book for studying Calculus II

What book should I choose for self-studying Calculus II? I'm looking for a book that has a good explanation of the content and also solved exercises (which is a very important thing that I'm usually missing).




I'm already studying Calculus I with A First Course in Calculus by Lang and Calculus and Analytical Geometry by Thomas and Finney. Do you think these are good books for Calculus II? I have a feeling Lang doesn't cover all the contents in my course (not sure about Thomas and Finney, though). What is, in your opinion, the best book for self-study? If there is a better book than the ones on this list please tell me. Thanks!!



NOTE:
Here are the contents of my course:
The algebraic and topological structure of $\mathbb{R}^n$.



Functions from $\mathbb{R}^n$ to $\mathbb{R}^m$: continuity and the notion of limit. Differential calculus. Partial derivatives. Chain rule. Taylor's theorem in $\mathbb{R}^n$ and applications to the study of extreme values. Inverse and implicit function theorems. Extreme values of functions with constrained variables.



Multiple integrals: Fubini's theorem, change of variables theorem, applications to the computation of physical quantities.



Line integrals: Integrals of scalar fields and vector fields. Fundamental theorem of calculus for line integrals, conservative fields, and scalar potentials. Green's theorem.




Surface integrals: surface integrals of a scalar field, flux of a vector field, divergence theorem and Stokes' theorem.

Friday 16 February 2018

calculus - $dx=\frac {dx}{dt}dt $. Why is this equality true and what does it mean?

$dx=\frac {dx}{dt}dt $. I know that this deduction is obvious from the chain rule, given that we treat our dx and dt as just numbers. But I find it quite unsatisfactory to think of it in that sense. Is there a better / more "calculus-inclined" way of thinking about this equality. Can you please explain both the LHS and RHS individually.

elementary number theory - If $p^2$ is divisible by $3$, why is $p$ also divisible by $3$?

I came across this in proving that $\sqrt{3}$ is irrational.

discrete mathematics - For odd $n$, there is an $m$ such that $n \mid 2^m-1$

I am really stuck with this question:




Suppose $n$ is an odd positive integer. Prove that there exists a positive integer $m$ such that $n \mid 2^m - 1$.
(Here, "divides" means that the remainder is zero when $2^m - 1$ is divided by $n$.)


Proving that modular inverse only exists when $gcd(n,x)=1$



I'm having trouble understanding why for finding the inverse for $x\bmod n$, $\gcd(x, n)=1$ is a precondition. Obviously I've tried examples where the gcd is greater than one and I can't find $a$ for $ax \equiv _n 1$. I'm trying to prove to myself why this is the case.



I can mechanically say the following:



Find the modular inverse $a$ of $x\pmod n$




$$ax \equiv _n 1 \Leftrightarrow n \mid (ax-1)$$



And $n \mid (ax-1)$ implies that $(ax-1)=nk$ for some $k \in \mathbb Z $



After that I am stuck and I'm not sure if I'm going in the right direction.


Answer



If there is an inverse of $x \bmod n$, that gives us a number $y$ so that $xy \equiv 1 \bmod n$. That means that $xy=kn+1$, or (rearranging) that $xy-kn=1$.



Now for any common divisor, $c$, of $x$ and $n$ we will have that $c \mid (xy-kn)$ which gives $c\mid 1$, that is, $c=1$. So that is an outcome - and therefore a requirement - of finding the inverse of $x \bmod n$
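As an illustration of exactly this point: Python's built-in modular inverse (available since Python 3.8) refuses to work when the gcd condition fails.

    print(pow(3, -1, 7))    # 5, since 3*5 = 15 = 2*7 + 1

    try:
        pow(4, -1, 8)       # gcd(4, 8) = 2 != 1, so no inverse can exist
    except ValueError as err:
        print(err)          # explains that the base is not invertible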


Thursday 15 February 2018

a question on prime numbers and infinite series



Prove that the infinite sum $\sum (1/2)^p$, where $p$ runs over all the prime numbers, is irrational.




One idea that may work is to use the lemma below.





Lemma: $\alpha$ is an irrational number iff there exist two integer sequences $\{a_n\}$ and $\{b_n\}$ such that $(a_n-\alpha b_n)\neq 0$ for all $n$, but $\lim(a_n-\alpha b_n)=0$.




The proof is just by contradiction, assuming $\alpha$ to be rational. I have tried to work out this lemma but failed. Please try this.





Answer




Hint: look at the similar number $$\sum_{i = 1}^\infty \left(\frac{1}{10}\right)^{p_i} \approx 0.0110101000101000101$$ (be sure to notice that's "approximately," not "equals").



Clearly the $1$s will occur in the prime positions. You should already know that there are infinitely many primes. You should also know that their distribution feels kind of random. With one exception, any two $1$s are separated by an odd number of $0$s. But other than that... kind of random.



If this number was rational, we could find integers $m$ and $n$ such that one divided by the other gives this number. Try truncating the number at the prime positions. That does give rational numbers, but... $$\frac{1}{100}, \frac{11}{1000}, \frac{1101}{100000}, \frac{110101}{10000000}, \ldots$$


sequences and series - How do I evaluate this sum: $\sum_{n=1}^{\infty}\frac{{(-1)}^{n^2}}{{(i\pi)}^{n}}$?

I'm interested to know how to evaluate the sum $$\sum_{n=1}^{\infty}\frac{{(-1)}^{n^2}}{{(i\pi)}^{n}}.$$ I have tried to evaluate it using two partial sums, over odd $n$ and even $n$, but I can't since it's an alternating series. I would also like to know whether it's a well-known series, and whether its value is real or complex.



Note : wolfram alpha showed that is a convergent series by the root test




Thank you for any help
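One observation that tames the sum: $n^2$ and $n$ always have the same parity, so $(-1)^{n^2}=(-1)^n$ and the series is geometric with ratio $r=-1/(i\pi)$, where $|r|=1/\pi<1$. A numeric sketch (Python):

    import cmath

    r = -1 / (1j * cmath.pi)                 # common ratio, since (-1)**(n*n) == (-1)**n
    partial = sum(r**n for n in range(1, 60))
    closed = r / (1 - r)                     # geometric series starting at n = 1
    print(partial)                           # ~(-0.0920 + 0.2890j)
    print(closed)                            # the same complex value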

Wednesday 14 February 2018

projective geometry - Visualizing tuples $(a,b,x,y)$ of the extended Euclidean algorithm in a four-dimensional tesseract. Are there hidden symmetries?





I am trying to visualize the possible symmetries in the Euclidean four-dimensional space of the $4$-tuples of points $(a,b,x,y)$ generated by the extended Euclidean algorithm, where $ax+by=\gcd(a,b)$. There is more than one solution of $(x,y)$ pairs, so I am focusing on visualizing one of the two minimal pairs generated by the algorithm, as explained in the Wiki page on Bézout's identity. The questions are at the end of the explanation.





  1. In the example below, basically what I am doing is generating a single $4$-tuple $(a,b,x,y)$, where $(x,y)$ is one of the two possible minimal pairs that the extended Euclidean algorithm generates for every possible combination of integer pairs $(a,b)$ where in this case $a,b \in [0,50]$.


  2. For this reason, $50 \cdot 50=2500$ $4$-tuples are generated. Then the rest of the combinations of positive and negative values of $a$ and $b$ were also included, $(-a,b)$, $(a,-b)$, $(-a,-b)$, and the rest of the $4$-tuples are calculated in the same way, obtaining the associated $(x,y)$ values as before.


  3. Finally $4 \cdot 2500 = 10^4$ $4$-tuples have been generated. This set of $10^4$ four-dimensional tuples then is visualized inside a tesseract, following the methodology of this previous question, having the reference axes at $(0,0,0,0)$, the center of the tesseract, and representing them as points of the Euclidean four-dimensional space.


  4. The result is shown as a $3$D projection of the tesseract (a Schlegel diagram). So basically we are able to view the four-dimensional set of $4$-tuples $(a,b,x,y)$ representing each possible combination of $(a,b)$ as the first two dimensions, and their associated pair $(x,y)$, one of the minimal pairs generated by the Euclidean algorithm, as the last two dimensions. In other words, one single four-dimensional point $(a,b,x,y)$ represents a given pair $(a,b)$ and one of its minimal results of the extended Euclidean algorithm, $(x,y)$ .


  5. So here are the results:





(figure: Schlegel-diagram projection of the full set of $4$-tuples)



This is a zoom showing only the $4$-tuples associated with the positive combinations of $a$ and $b$:



(figure: zoom on the tuples with positive $a$ and $b$)



And finally, this is the same zoom showing the whole set of results, for any combination of positive and negative values of $a$ and $b$:




(figure: the same zoom including all sign combinations of $a$ and $b$)



Well, apart from being kind of "mesmerizing", it is possible to distinguish some basic already known symmetries, due to the inclusion of the tuples $(a,b,x_0,y_0)$ , $(-a,b,x_1,y_1)$ , $(a,-b,x_2,y_2)$ and $(-a,-b,x_3,y_3)$. But, in other hand, as we are visualizing a rotating set of four-dimensional points, it seems that other possible symmetries are there.



I would like to ask the following questions:





  1. Are there some other known expected symmetries in four-dimensional representation of the extended Euclidean algorithm?


  2. I am using on of the two possible minimal pairs to make this visualization. Would it be better to use another type of tuples in order to see other type of symmetries? Thank you!





Answer



I will answer partially to my second point and keep open the question regarding the expected symmetries and the reasons of the symmetries, which still I do not understand.



The following combination of $4$-tuples provides a better insight of the symmetries regarding the gcd:




$$\left(a,\;b,\;\gcd(a,b),\;\operatorname{sgn}(\gcd(a,b)) \cdot \left\lfloor \sqrt{\frac{a \cdot b}{|\gcd(a,b)|}} \right\rfloor\right)$$





where $gcd(a,b) \in \Bbb Z$ (instead of using the definition of the gcd as the absolute value, we are using the alternative definition that lets the gcd to be negative when the sign of the elements $(a,b)$ is not positive) and




$$\operatorname{sgn}(\gcd(a,b)) \cdot \left\lfloor \sqrt{\frac{a \cdot b}{|\gcd(a,b)|}} \right\rfloor$$




is the floor of the square root of the least common multiple, taken to be negative (sgn is the sign function) if the gcd is negative and positive if the sign of the gcd is positive. I take the square root of the LCM because it approximates the expected values of its biggest possible pair of divisors $x$ and $y$:





$$\{x,y\ /\ x \cdot y = LCM \} \land \{\not\exists\ x',y',\ x' \cdot y' = LCM , x'+y' \gt x+y \}$$




$x$ and $y$ will be located around the square root of the LCM, so it is a good approximation, and they are located inside the range of the tesseract, so they are very useful in terms of representing the relationship of $(a,b)$ and the gcd using $4$-tuples.



It leads to the following results, first an overview of the projection:



(figure: overview of the projection for the gcd/LCM-based tuples)



And zoomed:




(figure: zoomed view of the gcd/LCM-based projection)



So it seems better to use directly the gcd and a manipulation of the LCM to observe the symmetries of the tuples.


Tuesday 13 February 2018

Alternative "Fibonacci" sequences and ratio convergence



So the well known Fibonacci sequence is
$$
F=\{1,1,2,3,5,8,13,21,\ldots\}
$$
where $f_1=f_2=1$ and $f_k=f_{k-1}+f_{k-2}$ for $k>2$. The ratio of $f_k:f_{k-1}$ approaches the Golden Ratio the further you go:
$$\lim_{k \rightarrow \infty} \frac{f_k}{f_{k-1}} =\phi \approx 1.618$$



Let's define a class of similar sequences $F_n$ where each $f_k$ is the sum of the previous $n$ numbers, $f_k=f_{k-1} + f_{k-2} + \dots + f_{k-n}$ so that the traditional Fibonacci sequence would be $F_2$ but we can talk about alternatives such as
$$F_3 = \{1,1,1,3,5,9,17,\dots \}$$
where we initialized the values $f_1$ through $f_3$ to be $1$ and we can show that in this case
$$
\lim_{k \rightarrow \infty} \frac{f_k}{f_{k-1}} \approx 1.839286755
$$

The following table gives some convergences for various values of $n$:
$$
\begin{matrix}
F_n & \text{Converges to} \\ \hline
F_2 & \phi \\
F_3 & 1.839286755 \\
F_4 & 1.927561975 \\
F_5 & 1.965948237 \\
F_{6} & 1.983582843 \\
F_{10} & 1.999018626
\end{matrix}
$$
Just by inspection, it seems that the convergence values are converging toward $2$ as $n \rightarrow \infty$.



So my primary question is:
What is the proof that these limiting ratios converge to $2$ (assuming they do)?


Answer



From the standard theory of linear recurrences, the limiting ratio (call it $F_n$ as well, by abuse of notation) is the positive real root of the equation $z^n = z^{n-1} + z^{n-2} + \cdots + z + 1$. Multiplying both sides by $1-z$ and rearranging, you get that $F_n$ is the positive real root of $f_n(z) = z^{n+1} - 2z^n + 1 = 0$ that is not $z = 1$. (By Descartes' rule of signs (https://en.wikipedia.org/wiki/Descartes%27_rule_of_signs), there are either 0 or 2 positive real roots of this polynomial; we know there is at least 1, so there must be 2.)



Now, we have

$$ f_n(2)= 2^{n+1} - 2 \cdot 2^{n} + 1 = 1 $$
and
$$f_n(2 - 1/2^{n-1}) = (2 - 1/2^{n-1})^{n+1} - 2 \cdot (2-1/2^{n-1})^n + 1. $$
We aim to show that $f_n(2-1/2^{n-1}) < 0$; then $f_n$ must have a root between $2 - 1/2^{n-1}$ and $2$.



Factoring out powers of 2, we get
$$f_n(2 - 1/2^{n-1}) = 2^{n+1} (1-1/2^n)^{n+1} - 2^{n+1} (1-1/2^n)^n + 1.$$
Factoring out $2^{n+1} (1-1/2^n)^n$ from the first two terms gives
$$f_n(2 - 1/2^{n-1}) = 2^{n+1} (1-1/2^n)^n (-1/2^n) + 1$$
or, finally,

$$f_n(2 - 1/2^{n-1}) = 1-2(1-1/2^n)^n. $$
So the result that $f_n$ has a root in the interval $[2-1/2^{n-1}, 2]$, for $n$ sufficiently large, follows from the fact that $(1-1/2^n)^n > 1/2$ for $n$ sufficiently large. Since $\lim_{n \to \infty} (1-1/2^n)^n = 1$ (use for example L'Hopital's rule) this follows.



Thus $F_n \in [2 - 1/2^{n-1}, 2]$ and the desired result follows, for example, from the squeeze theorem.
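The convergence toward $2$ is easy to observe numerically (a Python sketch computing the ratio of consecutive terms):

    def limit_ratio(n, steps=200):
        seq = [1] * n                    # seed values f_1 = ... = f_n = 1
        for _ in range(steps):
            seq.append(sum(seq[-n:]))    # each term is the sum of the previous n
        return seq[-1] / seq[-2]

    for n in (2, 3, 4, 5, 6, 10):
        print(n, limit_ratio(n))         # 1.618..., 1.839..., ..., 1.999...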


matrices - Finding the values of a vector if the vector-matrix product and the matrix are known (only using left multiplication operations)

Given an unknown input vector $V= (v_1, v_2, v_3, v_4)$, a known $4\times 4$ matrix $A$ and the known vector-matrix product $M=VA=[m_1,...,m_4]$: can you discover $V$?




Normally you would just take the inverse of $A$, and right multiply it with $M$ to get $V$. However, here's the trick - in this environment, you are not allowed to right multiply, only left multiply with $4 \times 4$ matrices.



Is there a sequence of left multiplication operations that will produce the original vector? (I don't think so, but I had to ask)



(Note - no transpose operations are permitted: only $4\times 4$ multiplication)

Monday 12 February 2018

calculus - $\frac {1}{6} + \frac {5}{6\cdot12} + \frac {5\cdot 8}{6\cdot12\cdot18} + \frac {5\cdot8\cdot11}{6\cdot12\cdot18\cdot24} + \cdots$



How to find the sum




$$\frac {1}{6} + \frac {5}{6\cdot12} + \frac {5\cdot 8}{6\cdot12\cdot18} + \frac {5\cdot8\cdot11}{6\cdot12\cdot18\cdot24} + \cdots$$





I have tried to put this series in the form $1 + n x + \frac {n (n + 1) x^2}{2!} + \frac {n (n+1) (n+2)x^3}{3!} + \cdots$, but I failed to do so. Can anyone please help me?


Answer



Like Calculating $1+\frac13+\frac{1\cdot3}{3\cdot6}+\frac{1\cdot3\cdot5}{3\cdot6\cdot9}+\frac{1\cdot3\cdot5\cdot7}{3\cdot6\cdot9\cdot12}+\dots? $,



If $\displaystyle S=\frac {1}{6} + \frac {5}{6\cdot12} + \frac {5\cdot 8}{6\cdot12\cdot18} + \frac {5\cdot8\cdot11}{6\cdot12\cdot18\cdot24} + \cdots,$ then



$$2S=\frac {2}{6} + \frac {2\cdot5}{6\cdot12} + \frac {2\cdot5\cdot 8}{6\cdot12\cdot18} + \frac {2\cdot5\cdot8\cdot11}{6\cdot12\cdot18\cdot24} + \cdots$$




$$2S+1$$



$$=1+\dfrac{-\dfrac23}{1!}\left(-\dfrac36\right)+\dfrac{-\dfrac23\left(-\dfrac23-1\right)}{2!}\left(-\dfrac36\right)^2+\dfrac{-\dfrac23\left(-\dfrac23-1\right)\left(-\dfrac23-2\right)}{3!}\left(-\dfrac36\right)^3+ \cdots$$



$$=\left(1-\dfrac12\right)^{-2/3}=2^{2/3},$$ so that $S=\dfrac{2^{2/3}-1}{2}$.
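A numeric check of this closed form (a Python sketch; consecutive terms of $S$ differ by the factor $\frac{3k-1}{6k}$):

    term = total = 1 / 6
    for k in range(2, 60):
        term *= (3 * k - 1) / (6 * k)      # numerators 5, 8, 11, ...; denominator factors 12, 18, 24, ...
        total += term
    print(total, (2 ** (2 / 3) - 1) / 2)   # both ~0.2937005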


calculus - $f$ is an unbounded monotonic increasing function: $\lim \limits_{x \to \infty} \frac{1}{x}\int_{0}^{x}f(t)dt=\infty$



This is not homework.



I'd love your help proving that if $f$ is an unbounded monotonic increasing function, then $$\lim_{x \to \infty} \frac{1}{x}\int_{0}^{x} \ f(t)\, dt=\infty.$$ I want to use it in a couple of proofs, but I can't prove it by myself.



Thanks a lot.



Answer



I think l'Hôpital's rule may be helpful here if your $f$ satisfies the requirements of the rule. Say the antiderivative of $f$ is $F(x)$; then



$$ \lim_{x\to\infty}\frac{1}{x}\int_{0}^{x}f(t)dt = \lim_{x\to\infty}\frac{F(x) - F(0)}{x} = \lim_{x\to\infty}f(x) .$$



Since your tag is calculus, not something like real analysis, I assume your function is a relatively normal function involving the Riemann integral.


normed spaces - Convergence of a sequence of functions in norm.


Let $1\leq p<\infty$. Suppose that $\{f_k\}$ is a sequence in $L^p(X,\mathcal{M},\mu)$ such that the limit




$f(x)=\lim_{k \to \infty}f_k(x)$



exists for $\mu$-a.e. $x\in X$. Assume that



$\liminf_{k \to \infty} \|f_k \|_p=a$ is finite. Then prove that



a) $f \in L^p$



b) $\|f \|_p\leq a$




c) Assuming that $\|f \|_p=\lim_{k\to \infty}\|f_k \|_p$,



$\lim_{k\to \infty} \|f-f_k\|_p=0$




By Fatou's Lemma, I proved b). Does the fact that $f(x)=\lim_{k \to \infty}f_k(x)$ exists $\mu$-a.e. guarantee a)?



For c), using inequality $(a+b)^p \leq 2^{p-1}(a^p+b^p)$, I hope



$|f-f_k|^p \leq 2^{p-1}(|f|^p+|f_k|^p)$ holds. Then I can use the dominated convergence theorem. But with $f\in L^p$, I don't know how to proceed.




Any help will be thankful.

calculus - Determining whether the series $\sum_{n=1}^{\infty} \tan\left(\frac{1}{n}\right)$ converges



I was tasked with determining whether the following series:



$$\sum_{n=1}^{\infty} \tan\left(\frac{1}{n}\right) $$



converges.




I tried employing the integral test, which failed and produced incalculable integrals. Other methods didn't prove effective either. It was suggested that the Maclaurin series might be of use here, but I'm not sure how to employ it.


Answer



Or use the limit comparison test with $\sum\frac1n$: by the standard limit, $\frac{\tan x}{x}\to 1$ as $x\to 0$, and then



$$\frac{\tan\left(\frac{1}{n}\right)}{\frac1n}\to1.$$

Since the harmonic series $\sum\frac1n$ diverges, the given series diverges as well.
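A quick numerical illustration of the divergence (a minimal Python sketch): the partial sums track $\log N$ plus a constant, just as for the harmonic series.

    import math

    # Partial sums of sum tan(1/n) grow like log N, mirroring the harmonic series.
    for N in (10**2, 10**4, 10**6):
        s = sum(math.tan(1 / n) for n in range(1, N + 1))
        print(N, round(s, 4), round(math.log(N), 4))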


Sunday 11 February 2018

radicals - Is $n^{th}$ root of $2$ an irrational number?








Will every $n^{th}$ root of $2$ (for $n \ge 2$) be an irrational number? If yes, how can I prove that?

calculus - How to show that $\frac{\sin(n)}{n}$ is $1$ as $n \rightarrow 0$?










How to show that $\frac{\sin(n)}{n}$ tends to $1$ as $n \rightarrow 0$? Just a hint, please.


Answer



The Maclaurin series expansion of $\sin(n)$ is



$$\sin(n) = n - \frac{n^3}{3!} +\frac{n^5}{5!}-\cdots $$



Hence,



$$\frac{\sin(n)}{n} = 1-\frac{n^2}{3!} + \frac{n^4}{5!}-\cdots$$




$$\lim_{n\to 0}\frac{\sin(n)}{n} = 1$$


algebra precalculus - An 11-gon with complex numbers

Let $A_1 A_2 \dotsb A_{11}$ be a regular $11$-gon inscribed in a circle of radius $2$.




Let $P$ be a point, such that the distance from $P$ to the center of the circle is $3$.



Find
$[PA_1^2 + PA_2^2 + \dots + PA_{11}^2]$



Note: This can't be made a duplicate. There are no other answers for the other two posts. Please answer.
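One way to experiment with the problem numerically, a minimal Python sketch using complex numbers (center at the origin; by symmetry the angle of $P$ should not matter, which the experiment confirms):

    import cmath, math, random

    # Vertices of the regular 11-gon on a circle of radius 2 centered at the origin.
    A = [2 * cmath.exp(2j * math.pi * k / 11) for k in range(11)]

    # Several positions of P at distance 3 from the center give the same sum,
    # since the vertices sum to 0 and the cross terms cancel.
    for _ in range(3):
        P = 3 * cmath.exp(2j * math.pi * random.random())
        print(sum(abs(P - a) ** 2 for a in A))  # 143.0 each time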

Saturday 10 February 2018

discrete mathematics - Solving $3xequiv 4pmod 7$



I'm trying to learn about linear congruences of the form $ax \equiv b \pmod{m}$. In my book, it's written that if $\gcd(a, m) = 1$ then there must exist an integer $a'$ which is an inverse of $a \pmod{m}$. I'm trying to solve this example:




$$3x \equiv 4 \pmod 7$$





First I noticed $\gcd(3, 7) = 1$.



Therefore, there must exist an integer which is the multiplicative inverse of $3 \pmod 7$.



According to Bezout's Theorem, if $\gcd(a, m) = 1$ then there are integers $s$ and $t$ such that $sa+tm=1$



where $s$ is the multiplicative inverse of $a\pmod{m}$.



Using that theorem:





$\begin{align}7 = 3\cdot2 +1\\7 - 3\cdot2 = 1 \\-2\cdot3 + 7 = 1\end{align}$




$s=-2$ in the above equation so $-2$ is the inverse of $3 \pmod{7}$.



The book says that the next step to solve $3x \equiv 4 \pmod{7}$ is to multiply $-2$ on both sides.



By doing that I get:





$\begin{align}-2\cdot3x \equiv -2\cdot4 \pmod 7\\-6x\equiv -8 \pmod 7\end{align}$




What should I do after that?



I have been working on this problem for hours.



Thanks :)



Answer



$$\begin{align}
3x&\equiv4\pmod{7} &&\text{(original equation)}\\
3x&\equiv -3\pmod{7} &&\text{(replaced $4$ with $-3$ by subtracting $7$)}\\
x&\equiv-1\pmod{7} &&\text{(divided each side by $3$, valid since $\gcd(3,7)=1$)}\\
x&\equiv6\pmod{7} &&\text{(replaced $-1$ with $6$ by adding $7$)}
\end{align}$$



P.S.: The reason you can add or subtract $7$ is one of the properties of $\pmod{7}$: you can add or subtract multiples of $7$ on either side without affecting the congruence. The same idea finishes your own approach: from $-6x\equiv -8\pmod 7$, note that $-6\equiv 1$ and $-8\equiv 6\pmod 7$, so your congruence reduces directly to $x\equiv 6\pmod 7$.
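If you want to check such computations programmatically, here is a minimal Python sketch (Python 3.8+, where pow accepts exponent $-1$ for modular inverses; the function name is just illustrative):

    def solve_congruence(a, b, m):
        # Solve a*x = b (mod m), assuming gcd(a, m) = 1.
        inv = pow(a, -1, m)  # multiplicative inverse of a mod m (Python 3.8+)
        return (inv * b) % m

    print(solve_congruence(3, 4, 7))  # 6, i.e. x = 6 (mod 7)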


calculus - Integrals involving exponential functions and the gamma function




I'm having trouble evaluating this integral




$$\int_0^\infty {e^{-ax^2}} \,dx $$



My guess is that it would evaluate into something like



$$\int_0^\infty \frac 12e^{-s}s^{\frac 12} \ldots \,dx = \frac {\Gamma\left(\frac 12\right)}{\frac{a^{\frac 12}}{2}}$$



When you do a substitution $ \sqrt{s}= \sqrt{a}x $ so that $ s = ax^2 $. I'm having trouble convincing myself though that $ \frac {d}{ds}\sqrt{s} = \left(\ldots a^\frac 12\right) $ which would satisfy the answer that I provided.



Am I doing something wrong or is my guess wrong?



Answer



If you want to use the Gamma function, the substitution is $a x^2 = t$, so $dx = \frac{1}{2\sqrt{a}}t^{-1/2}\,dt$.
Then the integral becomes
$$\frac{1}{2\sqrt{a}} \int_0^\infty t^{-1/2}e^{-t}\,dt = \frac{1}{2\sqrt{a}}\Gamma\left(\tfrac12\right)=\frac{1}{2}\sqrt{\frac{\pi}{a}}\ .$$



That's all.
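A quick numerical sanity check of the result, a minimal sketch using scipy.integrate.quad with $a=2$ chosen arbitrarily:

    import math
    from scipy.integrate import quad

    a = 2.0
    numeric, _ = quad(lambda x: math.exp(-a * x * x), 0, math.inf)
    closed_form = math.gamma(0.5) / (2 * math.sqrt(a))  # = sqrt(pi/a)/2
    print(numeric, closed_form)  # both ~0.6267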


Friday 9 February 2018

algebra precalculus - Decompose into Partial Fraction. Image Added.

Decompose into Partial Fraction



I really had no idea how to write these questions out without copying and pasting them onto here, so I am sorry for that. I hope adding a picture is fine. I would appreciate any kind of help, and if you don't mind, suggestions for a useful website or video that can explain these problems to me. Thank you.

Proving a Vector Norm Inequality Holds




Suppose we have a vector norm defined as $v(x)$ for $x \in\Bbb{ R}^n$, show that for vectors $x,y \in \Bbb{R}^n$ the following inequality holds:



$|v(x) - v(y)| \leq v(x - y)$.



I know that if $v(x)$ is a vector norm, then one of the conditions it necessarily satisfies is:



$||x + y|| \leq ||x|| + ||y||$ for all $x,y \in \Bbb R^n$.



I believe the proof of the above inequality will most likely make use of this particular property but want to be careful when specifically using it and am having trouble establishing it. So far the attempt at showing the inequality I have is:




$|v(x) - v(y)| = ||x| - |y|| \leq |x| - |y|$



I am confused about whether I have properly used the inequality property of vector norms, and I am having trouble showing the original inequality, specifically how to get to the RHS. Any insight or help would be much appreciated; thanks.


Answer



You can use the triangle inequality as follows:
$$
\begin{align}
v(x) = v(y + (x-y)) \leq v(y) + v(x-y) \implies v(x)-v(y)\leq v(x-y)\tag1\\
v(y) = v(x + (y-x)) \leq v(x) + v(y-x) \implies v(y)-v(x)\leq v(y-x)\tag2

\end{align}
$$

thus
$$
v(x)-v(y)\leq v(x-y) \quad\text{and}\quad -(v(x)-v(y))\leq v(x-y),
$$

and combining (1) and (2) gives $|v(x)-v(y)|\leq v(x-y)$.
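A quick numerical illustration (a minimal sketch with the Euclidean norm standing in for $v$; any norm satisfying the triangle inequality behaves the same way):

    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.normal(size=4), rng.normal(size=4)
    lhs = abs(np.linalg.norm(x) - np.linalg.norm(y))
    rhs = np.linalg.norm(x - y)
    print(lhs <= rhs)  # True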


calculus - Showing if $\lim_{n\to\infty} a_n=L$ then $\lim_{n\to\infty} -2a_n=-2L$ using the definition




If $\displaystyle \lim_{n\to\infty} a_n=L$ then prove using the limit definition that: $\displaystyle \lim_{n\to\infty} -2a_n=-2L$.





From the given and the definition we know that: $L-\epsilon<a_n<L+\epsilon\Rightarrow -2L+2\epsilon>-2a_n>-2L-2\epsilon\Rightarrow |-2a_n+2L|<2\epsilon$, which concludes that: $\displaystyle \lim_{n\to\infty} -2a_n=-2L$.



I feel like I was cheating by doing that multiplication by $-2$, is it alright? it's still true for $2\epsilon$ right?


Answer



The general idea of your solution is very much correct. The only thing that is missing is that we have $L-\epsilon<a_n<L+\epsilon$ only for sufficiently large $n$.


linear algebra - Let $A,B$ be $m times n$ and $n times m$ matrices, respectively. Prove that if $m > n$, $AB$ is not invertible



We haven't done anything about rank, dimensions, linear dependence/basis, or determinants. Possible related facts:




  1. A matrix is invertible iff it is bijective as a linear transformation.



  2. An invertible matrix is row-equivalent to the identity matrix.


  3. A matrix has a right inverse iff it has a left inverse.




Also, invertibility is only defined for square matrices.


Answer



Since $A$ is an $m\times n$ matrix and $B$ is an $n\times m$ matrix, the product $AB$ is an $m\times m$ matrix. Suppose $m>n$; then the operator associated with left multiplication by $B$ is not injective, because $B$ has more columns than rows, so its kernel is nontrivial (i.e. there are more column vectors than there are entries in those column vectors, so they must be linearly dependent). Said another way: the linear operator $T:\Bbb R^m\to\Bbb R^n$ with $T(v)=Bv$ is not injective because $m>n$, so the domain has higher dimension than the codomain. So there exist vectors $v,w\in\Bbb R^m$ such that $Bv=Bw$ but $v\neq w$.



Spoiler:





Thus, $$Bv=Bw\implies A(Bv)=A(Bw)\implies (AB)v=(AB)w$$ but $v\neq w$. Hence, the operator associated with left multiplication by $AB$ is not injective.
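A quick numerical illustration of the argument (a minimal numpy sketch with $m=4$, $n=2$ chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 4, 2
    A = rng.normal(size=(m, n))
    B = rng.normal(size=(n, m))
    AB = A @ B  # an m-by-m matrix
    print(np.linalg.matrix_rank(AB))       # at most n = 2 < m = 4
    print(abs(np.linalg.det(AB)) < 1e-12)  # True: AB is singular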



trigonometry - How to calculate a "rational" sine/cosine to specific digits of precision?

I have a toy calculator I've been tasked to create for a class project. That's great - I have most of the normal calculator stuff done (order of operations, variables, precision settings, predefined functions). The issue is that, since I'm using rational arithmetic, there's no sin/cos function built-in, so I'll have to make my own.




And the issue with that is that sines are usually irrational, so I'll have to limit it to some precision or it'll run forever. I've successfully implemented the sine Maclaurin series in code. For reference, the relevant equation from the Wikipedia article is this:



$$\sin x = \sum_{n=0}^{\infty}{\frac{(-1)^n}{(2n + 1)!}x^{2n+1}} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \cdots$$



My program runs only until this condition is true:



$$\frac{x^i}{i!}<\frac{1}{10^p}$$



where $i = 2n + 1$ and $p$ is the desired precision in digits, but it gets very slow quite quickly (at least using a C++ interpreter, but I want it to run quickly there before I make it faster by compiling). There are some optimizations such as keeping the last numerator/denominator, but it's still not that fast.
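For reference, a minimal sketch of that incremental-term scheme in exact rational arithmetic (Python's Fraction standing in for the calculator's rationals; the names are illustrative, not the actual project code):

    from fractions import Fraction

    def sin_maclaurin(x: Fraction, p: int) -> Fraction:
        # Update term = (-1)^n x^(2n+1) / (2n+1)! incrementally instead of
        # recomputing the numerator and denominator from scratch each time.
        term, total, i = x, Fraction(0), 1
        eps = Fraction(1, 10 ** p)
        while abs(term) >= eps:
            total += term
            term *= -x * x             # next odd power of x, alternating sign
            term /= (i + 1) * (i + 2)  # next factorial factors
            i += 2
        return total

    print(float(sin_maclaurin(Fraction(1, 2), 15)))  # ~0.479425538604203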




Calculating the sine of $5\pi$ to 15 digits of precision (bad example, but it's to prove a point) using my program took 5 seconds across 34 iterations, since many later iterations end up multiplying integers greater than $1 \times 10^{900}$. $\pi$ has to be represented as a fraction, so there were very large integers involved.



I'm not sure where to ask for a better algorithm but here (or Stack Overflow, from which I've been banned for over a year, so that's not going to happen...). Even though I'm using rational arithmetic, I'm wondering if there's a more computationally efficient way to approximate a $\sin$ and $\cos$ to some amount of digits of precision. I know this isn't a programming site, but surely someone can help with the algorithm side? (preferably explained in plain English)



Note: I could offload the work onto some arbitrary-precision floating point library and then convert that back to a "rational", but that seems like cheating to me... it'll be more of a last resort.

Thursday 8 February 2018

exponentiation - Negative Number raised to fractional power

How would you evaluate a negative number raised to a fractional power $a/b$ if $b$ is odd and $a$ is even?
Ignoring imaginary numbers.



e.g. $(-1)^\frac23$: the calculator returns an error



$(-1)^\frac 13 \cdot (-1)^\frac 13 = (-1)\cdot(-1) = 1$ (by the law of indices)



or




$\left((-1)^\frac13\right)^2 = 1$



or



$\left((-1)^2\right)^\frac13 = 1$



What about other cases of $a$ and $b$?

Existence of a smallest integer greater than some real number

In Stephen Abbott's Understanding Analysis, when proving that the set of rational numbers is dense in the set of real numbers, Abbott picks an integer $m$ such that $$m - 1 \leqslant na < m,$$ where $n$ and $a$ are a natural number and a real number respectively.




Intuitively it makes sense that such an $m$ exists, but it seems to me that Abbott is taking liberties here. How do I know from the fact that the set of reals is a complete ordered field that such an $m$ exists? (Abbott doesn't discuss the real number axioms but I know them from elsewhere.)
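For the record, one standard justification runs as follows (a sketch using only the Archimedean property and well-ordering): by the Archimedean property, the set $S=\{k\in\mathbb Z : k>na\}$ is nonempty; it is also bounded below by $na$, so it has a least element $m$. Minimality of $m$ means $m-1\notin S$, which is exactly $m-1\le na<m$.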

combinatorics - A puzzle on coin weighing



We are given $(3^n-1)/2$ coins, and among those coins there is just one counterfeit coin. All the other coins weigh the same, but the counterfeit coin weighs slightly heavier or lighter (we don't know which is the case) than a normal coin.



Using a balance scale only, is it possible to identify the counterfeit coin, weighing not more than $n$ times?



My attempt : Using $-1, 0, +1$ instead of $0, 1, 2$, we may write each integer from $-(3^n-1)/2$ to $(3^n-1)/2$ in base 3, with digits among $-1, 0, +1$, in a unique manner. Label the coins from $1$ to $(3^n-1)/2$, and for each coin, labelled $k$, say, we designate one number in $\left\{-k,+k\right\}$ as being 'heavy' and the other one as being 'light'.



My claim is that it is possible to do this so that the resulting designation satisfies the following property :





For each $0\leq i\leq n-1$, the number of heavy numbers whose $3^i$-digit is -1 equals the number of heavy numbers whose $3^i$-digit is +1.




If this is the case, then the following weighing strategy works:
On the $i$th weighing, put all the coins whose 'heavy' representation has $-1$ in its $3^i$-digit on the left, and all the coins whose 'heavy' representation has $+1$ in its $3^i$-digit on the right.



Assuming the counterfeit coin is heavy, our $i$th weighing shows us the $3^i$-digit of the coin. After $n$ weighings, we may deduce which coin is counterfeit.



If the counterfeit coin is light, a similar reasoning also works, and it is clear from our construction that the final answer does not depend on whether the counterfeit coin is heavier or lighter than a normal coin.




I believe that this strategy makes sense, but I was not capable of proving that such a designation is possible (see the quote box). Am I on the right track? Is such a designation possible? Are there any more beautiful solutions to this innocent-looking puzzle?


Answer



Note that we are only asked to find the counterfeit coin (which we know exists), but strictly speaking need not find out if it is heavy or light.
Thus we might lay one coin aside and use the following theorem, interpreting "all coins weigh the same" as "the coin put aside is counterfeit".
If you think this is a loophole to the problem, the theorem allows us one extra weighing in that last case and we can use it to compare the counterfeit coin against one of the known good coins.



Theorem. Given $(3^n-3)/2$ coins, it is possible with $n$ weighings to arrive at one of the following results: "Coin $i$ is a heavy counterfeit" with $1\le i\le \frac{3^n-3}{2}$, or "Coin $i$ is a light counterfeit" with $1\le i\le \frac{3^n-3}{2}$, or to arrive at the result "all coins weigh the same" already after $n-1$ weighings.



Proof.

For $n=1$ this is clear: Given $0$ coins, it takes $0$ weighings to declare that they "all" weigh the same.



For $n>1$, (in your mind) glue the coins into groups of three (note that $3\mid \frac{3^n-3}{2}$). This leaves us with $\frac{3^{n-1}-1}2=\frac{3^{n-1}-3}2+1$ big coins. Ignoring the extra coin, we can either determine that the first $\frac{3^{n-1}-3}2$ big coins weigh the same with $n-2$ weighings, or determine with $n-1$ weighings which of the first $\frac{3^{n-1}-3}2$ big coins is counterfeit and how.



In the first case, "un-glue" the last coin into three coins $a,b,c$; also un-glue another coin into $d,e,f$. We know that $d,e,f$ are not counterfeit! Compare $a+b$ against $c+d$. In case of equilibrium, we know that all coins weigh the same and we used $n-1$ weighings for that.
If $a+b>c+d$, either $a$ or $b$ is too heavy or $c$ is too light. Compare $a$ against $b$; in case of equilibrium, $c$ is too light, and otherwise the heavier of the two coins is found. That is, we have determined the counterfeit coin and its type in $n$ weighings.



In the second case, we have found a counterfeit big coin and know if it is too heavy or too light. Un-glue it into three smaller coins $a,b,c$ and compare $a$ against $b$. In case of equilibrium, $c$ is counterfeit and in the same way the big coin was. And otherwise, we see which of $a,b$ is too light or too heavy, respectively. That is, we have determined the counterfeit coin and its type in $n$ weighings.



The theorem now follows by induction. $\square$



Wednesday 7 February 2018

limits - Sum of $\lim_{n\rightarrow \infty}\left(\frac{n}{n^2+1}+\frac{n}{n^2+2}+\cdots+\frac{n}{n^2+n}\right)$



Sum of $$\lim_{n\rightarrow \infty}\left(\frac{n}{n^2+1}+\frac{n}{n^2+2}+\cdots +\frac{n}{n^2+n}\right)$$



$\bf{My\; Try:}$ I have solved it using the Squeeze theorem



$$\lim_{n\rightarrow \infty}\sum^{n}_{r=1}\frac{n}{n^2+n}\leq \lim_{n\rightarrow \infty}\sum^{n}_{r=1}\frac{n}{n^2+r}\leq \lim_{n\rightarrow \infty}\sum^{n}_{r=1}\frac{n}{n^2+1}$$



So we get $$\lim_{n\rightarrow \infty}\sum^{n}_{r=1}\frac{n}{n^2+r} = 1$$




My question is: can we solve the above limit without using the Squeeze Theorem?



If yes, then please explain. Thanks.


Answer



Using harmonic numbers, $$S_n=\sum^{n}_{r=1}\frac{n}{n^2+r}=n \left(H_{n^2+n}-H_{n^2}\right).$$ Now, using the expansion $$H_p=\gamma+\log \left({p}\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ and applying it to each term, you should arrive at $$S_n=n\left(\frac{-6 n^3-6 n^2+2 n+1}{12 n^4 (n+1)^2}+\log \left(1+\frac{1}{n}\right)+O\left(\frac{1}{n^2}\right)\right).$$ Using Taylor again, $$S_n=1-\frac{1}{2 n}+O\left(\frac{1}{n^2}\right)$$



Edit




May be of interest
$$T_n=\sum^{n}_{r=1}\frac{n}{n^k+r}=n \left(H_{n^k+n}-H_{n^k}\right)$$ leads to $$T_n=n\left(\frac{n^{2-k}-6 n^{k+1}-6 n^2+2 n}{12 n^k\left(n^k+n\right)^2}+\log \left(1+\frac 1 {n^{k-1}}\right)\right)$$ which makes $$n^{k-2}T_n=1-\frac{1}{2 n^{k-1}}+O\left(\frac{1}{n^k}\right)$$
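A quick numerical check of the $k=2$ asymptotics (a minimal Python sketch):

    # S_n should match 1 - 1/(2n) up to O(1/n^2).
    for n in (10, 100, 1000):
        s = sum(n / (n * n + r) for r in range(1, n + 1))
        print(n, s, 1 - 1 / (2 * n))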


divisibility - The method of solving for a factor of $90!$


If $90! = (90)(89)(88)\cdots(2)(1)$, then what is the exponent of the highest power of $2$ which will divide $90!$?




How would I apply one of the easiest methods from Here?




I need help applying the linked method to this question.



I do not understand which case applies here, or how to solve the problem using the method.



I would appreciate it if someone showed how it is applied to this example.
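For reference, the standard tool for this kind of question is Legendre's formula, presumably the one behind the linked methods: the exponent of a prime $p$ in $n!$ is $\sum_{k\ge1}\lfloor n/p^k\rfloor$. A minimal Python sketch (the function name is just illustrative):

    def legendre(n: int, p: int) -> int:
        # Exponent of the prime p in n!: count multiples of p, p^2, p^3, ...
        e, q = 0, p
        while q <= n:
            e += n // q
            q *= p
        return e

    print(legendre(90, 2))  # 45 + 22 + 11 + 5 + 2 + 1 = 86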

Tuesday 6 February 2018

probability - What is the expected number of rolls made if a fair six-sided die is rolled until a number less than one of the previous rolls is rolled?

I recently came across this problem in probability




Roll a fair six-sided die until you roll a number that is less than one of your previous rolls. To three decimal places, what is the expected value of the number of rolls made?





This is all the question has to offer; any ideas?
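One way to get a feel for the problem is a quick Monte Carlo sketch (Python; stopping as soon as a roll is strictly smaller than the running maximum, which is equivalent to being smaller than some earlier roll):

    import random

    def trial() -> int:
        mx, rolls = 0, 0  # mx is the maximum of the previous rolls
        while True:
            r = random.randint(1, 6)
            rolls += 1
            if r < mx:     # strictly less than some earlier roll
                return rolls
            mx = r         # the rolls so far are non-decreasing, so r >= mx

    n = 200_000
    print(sum(trial() for _ in range(n)) / n)  # empirically ~2.99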

linear algebra - Same characteristic polynomial $iff$ same eigenvalues?



This proves: Similar matrices have the same characteristic polynomial. (Lay P277 Theorem 4)



I prefer https://math.stackexchange.com/a/8407/53259, but this proves that they have the same eigenvalues.



Are they equivalent? What about in general, even for matrices which are NOT similar?


Answer





  1. If $A$ and $B$ have the same characteristic polynomial, then clearly they have the same eigenvalues; these are the zeros of the characteristic polynomial.

  2. The converse is generally not true: for example
    $$
    A=\left[\matrix{1&0&0\cr
    0&0&1\cr 0&0&0}\right],\quad
    B=\left[\matrix{1&1&0\cr
    0&1&0\cr 0&0&0}\right]
    $$
    we have $\sigma(A)=\sigma(B)=\{0,1\}$, but $\chi_A(X)=X^2(X-1)$, $\chi_B(X)=X(X-1)^2$.
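A quick numerical check of this counterexample (a minimal numpy sketch; np.poly returns the coefficients of the characteristic polynomial):

    import numpy as np

    A = np.array([[1, 0, 0], [0, 0, 1], [0, 0, 0]])
    B = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
    print(np.linalg.eigvals(A), np.linalg.eigvals(B))  # eigenvalue sets are both {0, 1}
    print(np.poly(A))  # X^3 - X^2       = X^2(X - 1)
    print(np.poly(B))  # X^3 - 2X^2 + X  = X(X - 1)^2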



real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...