Saturday 31 August 2019

combinatorics - Closed form formula for the following sum

Does anyone know of a closed-form formula for the sum $\sum_{n = 1}^\infty x^{2^n-1}$? We can assume that $0 < x < 1$, so the series converges.

Thanks!

How to find the summation of this infinite series: $\sum_{k=1}^{\infty} \frac{1}{(k+1)(k-1)!}\left(1 - \frac{2}{k}\right)$

I've been trying to figure out the following sum for a while now:




$$\sum_{k=1}^{\infty} \frac{1}{(k+1)(k-1)!}\left(1 - \frac{2}{k}\right)$$





I'm pretty sure that this doesn't evaluate to $0$.



As $k$ increases the term tends to $0$, but the first few terms add up to give a non-zero number.



I'm just having trouble figuring out how to find that number. Any help would be appreciated.
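A quick numeric check (a sketch in Python, not part of the original post): the term simplifies to $\frac{k-2}{(k+1)!}$, and writing $k-2=(k+1)-3$ splits the sum against the series for $e$, giving $(e-1)-3(e-2)=5-2e$.

```python
import math

# Partial sums of sum_{k>=1} (1 - 2/k) / ((k+1)*(k-1)!)
total = 0.0
for k in range(1, 40):
    total += (1 - 2 / k) / ((k + 1) * math.factorial(k - 1))

# The simplification (1 - 2/k)/((k+1)(k-1)!) = (k-2)/(k+1)! gives
# sum = sum 1/k! - 3 * sum 1/(k+1)! = (e - 1) - 3(e - 2) = 5 - 2e.
closed_form = 5 - 2 * math.e
```

With 40 terms the remainder is far below double precision, so `total` should agree with `closed_form` to machine accuracy.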

Friday 30 August 2019

number theory - Deleting digits



How many four digit numbers have the following property?




‘For each of its digits, when this digit is deleted the resulting $3$ digit number is a factor of the original number’. There are multiple alternatives:
$$A)\ 5 \qquad B)\ 9 \qquad C)\ 14 \qquad D)\ 19 \qquad E)\ 23$$
Any ideas on how to tackle this problem?





Answer



This is the official answer:



Let the 4-digit integer be ‘abcd’. Deleting its last digit gives ‘abc’, and ‘abcd’ = 10 × ‘abc’ + d. Since ‘abc’ is a factor of ‘abcd’, we must have d = 0, and the integer is ‘abc0’. Similarly, deleting c gives ‘ab0’, and ‘abc0’ = 10 × ‘ab0’ + ‘c0’. But ‘c0’ can only be divisible by ‘ab0’ if c = 0. Thus the integer is ‘ab00’. Deleting a must leave the 3-digit factor ‘b00’, so we can’t have b = 0. Deleting b gives ‘a00’, and ‘ab00’ = 10 × ‘a00’ + ‘b00’, hence ‘b00’ must be a multiple of ‘a00’, and therefore b is a multiple of a.

Since ‘b00’ is a factor of ‘ab00’ and ‘ab00’ = ‘a000’ + ‘b00’, it must be that ‘b00’ divides ‘a000’, hence ‘a0’ is a multiple of b. For b to be a multiple of a and a factor of 10 × a, b must be a, 2a, 5a or 10a. But 10a is more than one digit long. When b = a we get 1100, 2200, 3300, 4400, 5500, 6600, 7700, 8800, 9900. When b = 2a, we get 1200, 2400, 3600, 4800. When b = 5a, we get 1500. This is 9 + 5 = 14 possibilities.
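The official answer can be confirmed by brute force (a sketch in Python; the only interpretation choice is that each deletion must leave a genuine 3-digit number, i.e. at least 100):

```python
# Count 4-digit numbers such that deleting any single digit leaves
# a 3-digit number dividing the original.
count = 0
for n in range(1000, 10000):
    s = str(n)
    ok = True
    for i in range(4):
        t = int(s[:i] + s[i + 1:])   # number with digit i deleted
        if t < 100 or n % t != 0:    # must be a genuine 3-digit factor
            ok = False
            break
    if ok:
        count += 1
```

The loop agrees with the derivation: answer C, 14 numbers.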


calculus - How can I calculate this limit without using L'Hopital's rule?



$$\lim_{x \to 0}\left(\frac{\sin{x}-\ln({\text{e}^{x}}\cos{x})}{x\sin{x}}\right)$$



Can this limit be calculated without using L'Hopital's rule?


Answer




Rewriting the numerator as $\sin x-x-\ln\cos x$, notice that $\sin x-x$ is an odd function; as the denominator is even, the ratio vanishes at $x=0$ and these terms can be ignored.



Then
$$\lim_{x\to0}-\frac{\ln\cos x}{x\sin x}=-\lim_{x\to0}\frac{\frac12\ln(1-\sin^2x)}{\sin^2x}\frac{\sin x}x=-\frac12\lim_{t\to0^+}\frac{\ln(1-t)}t=-\frac12\lim_{t\to0^+}\ln(1-t)^{1/t}\\=-\frac12\ln\left(\lim_{t\to0^+}(1-t)^{1/t}\right)=-\frac12\ln e^{-1}=\frac12.$$



UPDATE:



We need to show that the first limit exists. Without being allowed to use derivatives, we need some property of the trigonometric functions, and we admit $\sin x\le x\le \tan x$, so that
$$1\ge\frac{\sin x}x\ge\cos x\ge\cos^2x,$$
and from there

$$0\ge\frac{\sin x-x}x\ge\cos^2x-1,$$
$$0\ge\frac{\sin x-x}{x\sin x}\ge-\sin x.$$
As expected, the limit is $0$. As a byproduct, the same relations establish the limit of $\sin x/x$.
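A numeric sanity check of the claimed value $\frac12$ (a sketch; evaluating the original expression at a small $x$):

```python
import math

def f(x):
    # (sin x - ln(e^x cos x)) / (x sin x)
    return (math.sin(x) - math.log(math.exp(x) * math.cos(x))) / (x * math.sin(x))

value = f(1e-3)  # should already be close to 1/2
```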


Thursday 29 August 2019

proof verification - Testing divisibility by 7 of number in base 8




Is the number 3776543210123456773, in base 8, divisible by 7?



I started by expanding the number:
$$3 \cdot 8^0 + 7 \cdot 8^1 + 7 \cdot 8^2 + \cdots$$
$$3 \cdot (7+1)^0 + 7 \cdot (7+1)^1 + 7 \cdot (7+1)^2 + \cdots$$



Then, by using Newton's Binomial Theorem, I reasoned that for each binomial, only the first term wouldn't be divisible by 7 since it would be equal to 1. This led me to the conclusion that each digit of the initial number would be multiplied by one once. Therefore, if the sum of all digits was divisible by 7, the number would also be divisible by 7.



However, I converted this number to base 10 and the remainder of the division by 7 was equal to 2, instead of 0, which was the result I found using the reasoning above.




I can't seem to find where I made a mistake. :c


Answer



Your reasoning is correct. But the sum of the digits of your number is $76$, and the remainder of the division of $76$ by $7$ is $6$, not $0$. So the remainder of the division of your number by $7$ is $6$ too.



And, in base $10$, your number is $72\,011\,638\,983\,515\,643$, whose remainder, when divided by $7$, is $6$ too.
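The digit-sum rule and the direct computation can both be checked in a couple of lines (a sketch; Python's `int(..., 8)` parses base-8 directly):

```python
# In base 8, a number is congruent mod 7 to the sum of its octal digits,
# because 8 ≡ 1 (mod 7).
digits = "3776543210123456773"
n = int(digits, 8)                          # value in base 10
digit_sum = sum(int(d) for d in digits)     # should be 76
```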


elementary number theory - Help me understand how the rule of divisibility for 11 is calculated

If you want to find $b$ such that $a \equiv b \pmod{11}$, you write (assuming $a$ has $5$ digits $a_4a_3a_2a_1a_0$, for example):



$$a_4\cdot10^4+a_3\cdot10^3+a_2\cdot10^2+a_1\cdot10^1+a_0\cdot1$$



Then you calculate mod 11 for each product then add everything and do mod 11 again, like you do with other numbers.



The problem is that my theory book states that $10^k\equiv (-1)^k\pmod{11}$. I don't get this. Isn't the remainder of the division of $10$ by $11$ equal to $10$? I thought that to calculate the remainder when the divisor is greater than the dividend, you multiply the divisor by $0$ and the remainder is the dividend itself. For example, $10 = 0\cdot 11 + 10$. Have I been doing it wrong?
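There is no contradiction: $-1$ and $10$ are the same residue class mod $11$, so the book's $(-1)^k$ is just a convenient way of writing the alternating remainders $1, 10, 1, 10, \ldots$ A quick check (a sketch):

```python
# 10 ≡ -1 (mod 11), so 10^k ≡ (-1)^k (mod 11): remainders alternate 1, 10.
remainders = [pow(10, k, 11) for k in range(6)]
# (-1)^k reduced mod 11 is 1 for even k and 11 - 1 = 10 for odd k.
expected = [1 if k % 2 == 0 else 10 for k in range(6)]
```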

Wednesday 28 August 2019

general topology - A perfect nowhere dense set which intersects every open set with positive measure?



A perfect set is a closed set with no isolated points. A nowhere dense set is a set whose closure has empty interior. My question is, what is an example of a nonempty perfect nowhere dense subset of $[0,1]$ such that every intersection of the set with an open set is either empty or has positive Lebesgue measure?



Is the Fat Cantor Set an example of such a set? Or does no such set exist?


Answer



Any closed set $F\subseteq\mathbb R$ admits a decomposition $F=N\cup F_0$ where $N$ has Lebesgue measure zero and $F_0$ is a closed set whose intersection with every open set is either empty or has positive measure. Of course $F_0$ can have no isolated points. Moreover, if $F$ is nowhere dense then $F_0$ is nowhere dense, and if $F$ has positive measure then $F_0$ is nonempty. Therefore, any nowhere dense closed set $F$ of positive measure will contain a set $F_0$ with the properties you want. Also, if $F$ arises from the usual construction of a "fat Cantor set", then $N=\emptyset$ and $F_0=F$.




P.S. Given a closed set $F$, let $\mathcal U$ be the collection of all open intervals $I$ with rational endpoints such that $F\cap I$ has measure $0$. Then $U=\bigcup\mathcal U$ is an open set and $N=F\cap U$ has measure 0. Let $F_0=F\setminus N=F\setminus U$. Then $F_0$ is a closed set, and the intersection of $F_0$ with any open set, if nonempty, has positive measure.


logic - Paradox - What is wrong with this proof that proves a false assertion?



Theorem: Let $a_{n}=a_{n-1}+1, a_1=1$. For any $n$, in order to compute $a_n$, it is necessary to compute $a_i$ for each $i=1,\dots,n-1$, which takes $\Theta(n)$ time.



Proof: This is vacuously true for $n=1$. Assume true for $n=k-1$. Prove true for $n=k$. In order to compute $a_{k-1}+1$, it is necessary to compute $a_{k-1}$. Then since $a_k=a_{k-1}+1$, in order to compute $a_k$, it is necessary to compute $a_{k-1}$. By the induction hypothesis, in order to compute $a_{k-1}$, it is necessary to compute $a_i$ for each $i=1,\dots,k-2$. Hence, in order to compute $a_k$, it is necessary to compute $a_i$ for each $i=1\dots,k-1$. QED



What is wrong with this proof? It seems valid to me, even though the theorem is false.


Answer



In your induction step, you made the additional assumption (beyond the inductive hypothesis) that it was necessary to compute $a_{k-1}$ in order to compute $a_k$. That's hardly the case, as simple inspection of the recursive definition gives us the closed-form definition $a_n:=n$ for all $n$.


Tuesday 27 August 2019

elementary number theory - Prove true by using induction



Define a sequence $(t_i)$, where $i \in \mathbb{N}$, recursively by $t_1 = t_2 = t_3 = 1$ and, for all $n \ge 3$, $t_{n+1} = t_n + t_{n-1} + t_{n-2}$. Prove that $t_n < 2^n$
for all $n \in \mathbb{N}$.



I'm having trouble making advancements because I am stuck on the base cases and inductive step; do I assume that there will be $3$ base cases that are all less than $2^1$?


Answer



The required claim holds for $n\in\{1,\,2,\,3\}$. If it holds for $n=k$, for $n=k+1$ and for $n=k+2$, $t_{k+3}<2^k+2^{k+1}+2^{k+2}=7\times 2^k<2^{k+3}$.
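The three base cases plus the inductive step can be spot-checked by generating the sequence directly (a sketch):

```python
# t_1 = t_2 = t_3 = 1, t_{n+1} = t_n + t_{n-1} + t_{n-2};
# check t_n < 2^n for the first 300 terms.
t = [1, 1, 1]                      # t[0] holds t_1
for _ in range(3, 300):
    t.append(t[-1] + t[-2] + t[-3])
violations = [i + 1 for i, v in enumerate(t) if v >= 2 ** (i + 1)]
```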



integration - Looking for closed-form solution of the following integral

I have been trying to calculate the following triple integral:




$$ I(a,b,c) \,=\, \int_{x=0}^{a}\int_{y=0}^{b}\int_{z=0}^{c} \frac{dx\,dy\,dz}{(1+x^{2}+y^{2}+z^{2})^{3}} $$



I can find values numerically for given $a,b,c$ but, since I know that $I(a,b,c)\rightarrow\frac{\pi^{2}}{32}$ as $a,b,c\rightarrow\infty$, I wondered whether the integral has a closed-form solution for arbitrary $a,b,c$ ? I certainly haven't found one and hoped someone might be able to help.
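For reference, here is a crude midpoint-rule check of the stated limit (a sketch; `n` and the cutoff `a = 10` are my choices — the tail beyond $10$ contributes only about $5\times10^{-4}$):

```python
import math

# Midpoint-rule estimate of I(a, a, a) for a = 10; should be near pi^2/32.
n, a = 60, 10.0
h = a / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        for k in range(n):
            z = (k + 0.5) * h
            total += 1.0 / (1.0 + x * x + y * y + z * z) ** 3
approx = total * h ** 3
limit = math.pi ** 2 / 32          # ≈ 0.308425
```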

Monday 26 August 2019

calculus - Integrating the formula for the sum of the first $n$ natural numbers



I was messing around with some math formulas today and came up with a result that I found pretty neat, and I would appreciate it if anyone could explain it to me.



The formula for a finite arithmetic sum is $$\sum_{i=1}^{n}a_i=\frac{n(a_1+a_n)}{2},$$ so if you want to find the sum of the natural numbers from $1$ to $n$, this equation becomes $$\frac{n^2+n}{2},$$ and the roots of this quadratic are at $n=-1$ and $0$. What I find really interesting is that $$\int_{-1}^0 \frac{n^2+n}{2}dn=-\frac{1}{12}$$ There are a lot of people who claim that the sum of all natural numbers is $-\frac{1}{12}$, so I was wondering if this result is a complete coincidence or if there's something else to glean from it.


Answer



We have Faulhaber's formula:



$$\sum_{k=1}^n k^p = \frac1{p+1}\sum_{j=0}^p (-1)^j\binom{p+1}jB_jn^{p+1-j},~\mbox{where}~B_1=-\frac12$$




$$\implies f_p(x)=\frac1{p+1}\sum_{j=0}^p(-1)^j\binom{p+1}jB_jx^{p+1-j}$$



We integrate the RHS from $-1$ to $0$ to get



$$I_p=\int_{-1}^0f_p(x)~\mathrm dx=\frac{(-1)^p}{p+1}\sum_{j=0}^p\binom{p+1}j\frac{B_j}{p+2-j}$$



Using the recursive definition of the Bernoulli numbers,



$$I_p=(-1)^p\frac{B_{p+1}}{p+1}=-\frac{B_{p+1}}{p+1}$$




Using the well known relation $B_p=-p\zeta(1-p)$, we get



$$I_p=\zeta(-p)$$



So no coincidence here!
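The claim $I_p=\zeta(-p)$ can be verified exactly for small $p$ with rational arithmetic (a sketch; the Faulhaber polynomials for $p=1,2,3$ are hard-coded below):

```python
from fractions import Fraction as F

def integrate_neg1_to_0(coeffs):
    """Exactly integrate sum(c_k x^k) from -1 to 0; note ∫ x^k dx = (-1)^k/(k+1)."""
    return sum(c * F((-1) ** k, k + 1) for k, c in enumerate(coeffs))

# Faulhaber polynomials f_p(x), ascending coefficients:
f1 = [F(0), F(1, 2), F(1, 2)]                 # (x^2 + x)/2
f2 = [F(0), F(1, 6), F(1, 2), F(1, 3)]        # (2x^3 + 3x^2 + x)/6
f3 = [F(0), F(0), F(1, 4), F(1, 2), F(1, 4)]  # (x^4 + 2x^3 + x^2)/4
I1, I2, I3 = map(integrate_neg1_to_0, (f1, f2, f3))
# Expected: zeta(-1) = -1/12, zeta(-2) = 0, zeta(-3) = 1/120.
```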


Sunday 25 August 2019

calculator - What is a logarithm & how can a log table be constructed by me?

I'm studying the properties of logarithms but I don't understand how base $e$ works. Base $10$ looks simple when calculating with numbers that are powers of $10$; since other numbers are not powers of $10$, how can one calculate without using a log table? That's why I want to know how a log table is constructed and how base $e$ works.
I'm a very basic user; I don't know integration, derivatives or summations. Please answer theoretically or using raw concepts, since I want to construct a log table by myself.
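One classical table-building idea needs nothing beyond multiplication and shifting the decimal point: raise the number to the 10th power and count how many factors of $10$ fall out; each count is the next digit of $\log_{10}$. A sketch (the function name is mine):

```python
def log10_digits(x, num_digits):
    """Digits of log10(x) after the decimal point, for 1 <= x < 10.
    Each step: raise to the 10th power, then count and strip factors of 10;
    that count is the next decimal digit of the logarithm."""
    digits = ""
    for _ in range(num_digits):
        x = x ** 10
        d = 0
        while x >= 10:
            x /= 10
            d += 1
        digits += str(d)
    return digits

# log10(2) = 0.30102999..., so the first digits should be 3, 0, 1, 0, 2:
approx = log10_digits(2.0, 5)
```

Floating-point error grows about tenfold per step, so only a handful of digits are trustworthy this way; historical tables refined the idea with repeated square roots.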

Saturday 24 August 2019

elementary number theory - If $gcd(ab,c)=d$ and $c|ab$ then $c=d$



For all positive integers $a$, $b$, $c$ and $d$,

if $\gcd(ab, c) = d$ and $c | ab$, then $c = d$.



Need help proving this question, I know that $abx + cy = d$ for integers $x,y$
and that $c|ab$ can be $c=q\cdot ab$ but I'm not sure how to apply these facts or if they're even useful in this proof.



Any help to get me started would be great.


Answer



$c|ab$ means that $ab=q\cdot c$, not the other way around!



Therefore, you have $ab=q\cdot c$ and $ab x + c y = d$ which you can rewrite into $$qcx + cy = d\\
c(qx+y) = d$$



Can you continue from here?
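The statement itself is easy to confirm exhaustively for small values (a sketch; the point is that $c \mid ab$ already forces $\gcd(ab,c)=c$):

```python
from math import gcd

# If c | ab, then c is a common divisor of ab and c, and no common divisor
# of ab and c can exceed c, so gcd(ab, c) = c (= d).
ok = all(
    gcd(a * b, c) == c
    for a in range(1, 20)
    for b in range(1, 20)
    for c in range(1, 20)
    if (a * b) % c == 0
)
```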


complex numbers - Finding modulus of $\sqrt{6} - \sqrt{6}\,i$

I found the real part $=\sqrt{6}$.




But I don't know how to find imaginary part. I thought it was whatever part of the function that involved $i$, with the $i$ removed? Therefore the imaginary part would be $-\sqrt{6}$.



Meaning the modulus is equal to
\begin{align}
\sqrt{ (\sqrt{6})^2 + (-\sqrt{6})^2} = \sqrt{12}.
\end{align}
The answer was $2\sqrt{3}$.

limits - How to calculate $\lim_{n\rightarrow \infty} \frac{\frac{1}{n^n} \left(-\Gamma(n) n + e^n \Gamma(n+1,n)\right)}{\sqrt{n}}$



How can I compute the following limit?



$$\lim_{n\rightarrow \infty} \frac{\frac{1}{n^n} \left(-\Gamma(n) n + e^n \Gamma(n+1,n)\right)}{\sqrt{n}}$$




The answer appears to be about $1.25$.


Answer



This is the same as the one worked out in this question. We have
$$\Gamma(n+1,n) = \dfrac{n!}{e^n} \left(\sum_{k=0}^n \dfrac{n^k}{k!}\right) \sim \dfrac{n!}2$$ from this question.
Hence, we get that
$$\dfrac{e^n \Gamma(n+1,n)}{n^{n+1/2}} \sim \dfrac{e^n n!}{2n^{n+1/2}} \sim \dfrac{e^n \sqrt{2 \pi} n^{n+1/2}}{2n^{n+1/2} e^n} = \sqrt{\dfrac{\pi}2}$$
As I have in the comments, the first term can be thrown away. Hence, the limit is $$\sqrt{\dfrac{\pi}2} \approx 1.25$$
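Using $\Gamma(n+1,n)=n!\,e^{-n}\sum_{k=0}^n \frac{n^k}{k!}$ and $n\,\Gamma(n)=n!$, the whole expression equals $n!\left(S-1\right)/n^{n+1/2}$ with $S=\sum_{k=0}^n n^k/k!$, which can be evaluated at a moderate $n$ (a sketch; convergence is slow, with error on the order of $1/\sqrt n$):

```python
import math

# Evaluate n!(S - 1)/n^(n + 1/2) with S = sum_{k=0}^{n} n^k/k! at n = 200.
n = 200
term, S = 1.0, 1.0
for k in range(1, n + 1):
    term *= n / k          # term = n^k / k!
    S += term
# Use lgamma to keep n! out of overflow territory.
value = math.exp(math.lgamma(n + 1) - (n + 0.5) * math.log(n)) * (S - 1.0)
target = math.sqrt(math.pi / 2)    # ≈ 1.2533
```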


calculus - Is there any method to get a finite sum for $1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots$?




As we can see on Wikipedia, there are some algebraic methods that give us finite sums for Grandi's series



$$1-1+1-1+1-1+1-1+\cdots$$



Let $S$ be the sum of Grandi's series. Then




  • $S=(1-1)+(1-1)+\cdots=0+0+\cdots=0$


  • $S=1+(-1+1)+(-1+1)+\cdots=1+0+0+\cdots=1$


  • $1-S=S$ so that $S=1/2$





The algebraic manipulations above are not allowed because Grandi's series doesn't converge.



Is there something similar related to the harmonic series? In other words, is there any "incorrect algebraic way" to get a finite sum for



$$1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots \;?$$



Thanks.


Answer




Actually, if you only consider the plain harmonic series, with only positive terms, you can't find a rearrangement that converges to some value: since all terms are positive, changing their order cannot make the series converge.



However, you can prove that if you rearrange the terms of the series with general term $a_n = \frac{(-1)^{n-1}}{n}$ appropriately, you can make it converge to any real $x$.



Roughly: start with $a_1 = 1$; if $x<1$, add negative (even-denominator) terms until the partial sum $S_{f(n)}$ drops below $x$. Once that happens, add positive (odd-denominator) terms until $S_{f(n)} > x$ again. Repeat indefinitely; done carefully enough, this shows the rearranged series converges to $x$. Of course this is only a sketch, but the idea is here.
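The greedy procedure described above is short to implement (a sketch; `target` is an arbitrary choice):

```python
# Rearrange 1 - 1/2 + 1/3 - 1/4 + ... so its partial sums chase a target.
target = 0.3
odd, even = 1, 2       # next unused positive (1/odd) and negative (1/even) term
s = 0.0
for _ in range(100000):
    if s <= target:
        s += 1.0 / odd     # add the next positive term
        odd += 2
    else:
        s -= 1.0 / even    # add the next negative term
        even += 2
```

After many steps the partial sum stays within the size of the last term used, so `s` hugs the target.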


Friday 23 August 2019

calculus - Show a function whose derivative is bounded is also bounded in an interval



Suppose $g$ is differentiable over $(a,b]$ (i.e. $g$ is defined and differentiable over an open interval $(a,c) \supset (a,b]$), and $|g'(p)| \le M$ for some real number $M$ and all $p$ in $(a,b]$. Prove that $|g(p)| \le Q$ over $(a,b]$ for some real number $Q$.




I looked at a similar solution here: prove that a function whose derivative is bounded also bounded
but I'm not sure if they are asking the same thing, and I'm having trouble figuring out the case when $x\in(a,x_0)$ (x here is point p). Could someone give a complete proof of this problem?


Answer



Use the mean value theorem: for each $x \in (a,b]$ there is a $c$ between $x$ and $b$ with $g(x) = (x - b)g'(c) + g(b)$. Since $|x-b| \le b-a$ and $|g'(c)| \le M$, the RHS is bounded: $|g(x)| \le (b-a)M + |g(b)|$.


calculus - Determine $lim_{x to 0}{frac{x-sin{x}}{x^3}}=frac{1}{6}$, without L'Hospital or Taylor




How can I prove that $$\lim_{x \to 0}{\frac{x-\sin{x}}{x^3}}=\frac{1}{6}$$



without using L'Hospital or Taylor series?



thanks :)


Answer



Let $L = \lim_{x \to 0} \dfrac{x - \sin(x)}{x^3}$. We then have
\begin{align}
L & = \underbrace{\lim_{y \to 0} \dfrac{3y - \sin(3y)}{27y^3} = \lim_{y \to 0} \dfrac{3y - 3\sin(y) + 4 \sin^3(y)}{27y^3}}_{\sin(3y) = 3 \sin(y) - 4 \sin^3(y)}\\
& = \lim_{y \to 0} \dfrac{3y - 3\sin(y)}{27 y^3} + \dfrac4{27} \lim_{y \to 0} \dfrac{\sin^3(y)}{y^3} = \dfrac{3}{27} L + \dfrac4{27}
\end{align}
This gives us $24L = 4 \implies L = \dfrac16$
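A quick numeric confirmation of $L=\frac16$ (a sketch; cancellation is mild at these step sizes, so double precision is fine):

```python
import math

# (x - sin x)/x^3 should approach 1/6 as x -> 0.
values = [(x - math.sin(x)) / x ** 3 for x in (0.1, 0.01)]
```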


Thursday 22 August 2019

algebra precalculus - Prove $\left(1+\frac1{n}\right)^{n}<\left(1+\frac1{n+1}\right)^{n+1}$ by induction

I am trying to prove the following by mathematical induction:
$$\left(1+\frac1{n}\right)^{n}<\left(1+\frac1{n+1}\right)^{n+1}$$
Other proofs without induction are found here: I have to show $(1+\frac1n)^n$ is monotonically increasing sequence.
But I am curious whether it can be proved by induction as well.



What I've tried so far:



The original inequality is equivalent to
$$(n+1)^{2n+1}<n^n(n+2)^{n+1}$$
So I have to show:

$$(n+2)^{2n+3}<(n+1)^{n+1}(n+3)^{n+2}$$
And,
$$(n+1)^{n+1}(n+3)^{n+2}=(n+1)\color{red}{n^n}\left(1+\frac1n\right)^n\cdot(n+3)\color{red}{(n+2)^{n+1}}\left(1+\frac1{n+2}\right)^{n+1}$$$$>(n+1)(n+3)\cdot\color{red}{(n+1)^{2n+1}}\left(1+\frac1n\right)^n\left(1+\frac1{n+2}\right)^{n+1}$$$$=(n+1)(n+3)\left(n+2+\frac1n\right)^n\left(n+1+\frac{n+1}{n+2}\right)^{n+1}$$
and I am stuck.

elementary number theory - Last 3 digits of $3^{999}$



I know that it's $3^{999} \mod 1000$ and since $\varphi(1000) = 400$ and $3^{400}\equiv1 \mod1000$ it will be equivalent to $3^{199} \mod 1000$ but what should I do from then? Or am I wrong about this from the start?


Answer



Using the Carmichael function will be beneficial here, as

$\displaystyle\lambda(1000)=100$



$$\implies 3^{100n}\equiv1^n\equiv1\pmod{1000}$$ for any integer $n\ge0$



As $(3,1000)=1,$ this implies $$3^{100n-1}\equiv3^{-1}$$



As $3 \times 333 = 999 \equiv -1 \pmod{1000}$, we get $3^{-1} \equiv -333 \equiv 1000-333 = 667 \pmod{1000}$, so the last three digits are $667$.
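The whole argument is one built-in call away from verification (a sketch using Python's three-argument `pow`):

```python
# Last three digits of 3^999, plus the two facts the argument relies on:
last3 = pow(3, 999, 1000)
# lambda(1000) = 100, so 3^100 ≡ 1 (mod 1000); and 3 * 667 = 2001 ≡ 1,
# i.e. 667 is the inverse of 3 mod 1000.
carmichael_check = pow(3, 100, 1000)
inverse_check = (3 * 667) % 1000
```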


probability - Expected Value Of Dice Rolling Game

I am having trouble figuring out the expected value in situations were the examples are going to infinity.



Example:



I have a fair die with 8 sides. I keep a counter ($k$) of the rounds I play. Each round I increase the counter by 1 and roll the die. I keep doing this until I roll a 1 or an 8.




So I'll define a random variable $X$ to be the number of rounds played. I know that the probability of rolling a 1 or an 8 on an 8-sided die is $\frac1 4$, and otherwise it is $\frac3 4$. The way I am trying to find the expected value is by using a geometric series. This is where I am getting stuck; I think it should look like this:



$E(X)=X_1P_1+X_2P_2+X_3P_3+...+X_nP_n$



$E(X)=1*\frac1 4+2*\frac1 4+3*\frac1 4+...+n*\frac1 4$



I am unsure how to turn this into a sum.
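One note on the attempt above: each term's probability must also include surviving the earlier rounds, i.e. $P(X=n)=\left(\frac34\right)^{n-1}\frac14$, and the resulting arithmetico-geometric series sums to $1/p = 4$. A quick check (a sketch):

```python
# E[X] = sum_{n>=1} n * (3/4)^(n-1) * (1/4) = 1/p = 4 for p = 1/4.
p = 0.25
expectation = sum(n * (1 - p) ** (n - 1) * p for n in range(1, 2000))
```

The tail beyond 2000 terms is astronomically small, so the truncated sum is 4 to within rounding.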

Wednesday 21 August 2019

complex numbers - How to solve the equation $ z^2 = i-1 $?





$\ z^2 = i-1 \ $
Hey guys, I couldn't find a way to solve this problem. The question suggests I replace $\ z\ $ with $\ x+iy\ $ and then go from there, but I always end up having to complete the square and end up with a completely different answer to the back of the book. Can you please help?
Thanks


Answer



let $$z=x+iy$$ then we get

$$x^2-y^2+2xyi=i-1$$ so we get
$$x^2-y^2+1=0$$
and $$2xy-1=0$$
Can you solve this?
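The resulting system can be sanity-checked numerically with the standard library (a sketch using `cmath`):

```python
import cmath

# Solve z^2 = i - 1 numerically; the real and imaginary parts of the root
# should satisfy x^2 - y^2 + 1 = 0 and 2xy - 1 = 0.
w = complex(-1, 1)          # i - 1
z = cmath.sqrt(w)
x, y = z.real, z.imag
```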


calculus - Functions not differentiable but continuous

So I have another question about functions:



Question: If neither $f$ nor $g$ is differentiable at $a$, but both are continuous at $a$, then $f+g$ is not differentiable at $a$.



I know that we could have a function $f(a)=|a|$ and $g(a)=|a|$, where $a=0$, so this means $f$ and $g$ are both continuous but not differentiable at $a=0$.



But how do I show that $f+g$ is not differentiable now?
How would I go about this?
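One-sided difference quotients at $0$ make the corner visible numerically; note the claim as stated is actually false in general, since the hypothetical choice $g(x)=-|x|$ (my addition, not from the question) gives a differentiable sum. A sketch:

```python
def slope(F, h):
    # difference quotient of F at 0 with step h (h may be negative)
    return (F(h) - F(0)) / h

f = abs
g1 = abs                      # g = |x|  -> f + g = 2|x|, corner at 0
g2 = lambda x: -abs(x)        # g = -|x| -> f + g = 0, differentiable

h = 1e-6
# Right slope minus left slope: nonzero means the one-sided derivatives differ.
corner = slope(lambda x: f(x) + g1(x), h) - slope(lambda x: f(x) + g1(x), -h)
smooth = slope(lambda x: f(x) + g2(x), h) - slope(lambda x: f(x) + g2(x), -h)
```

For $2|x|$ the one-sided slopes are $2$ and $-2$, so the gap is $4$; for the sum $0$ the gap vanishes.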

real analysis - Proof that $lim_{x to 0} frac{sin(x) - x}{x^2} = 0$ (Without L'Hospital)




I was studying calculus and I got stuck in proving that





$$\lim_{x \to 0} \frac{\sin(x) - x}{x^2} = 0.$$




Using L'Hospital is easy. However, I want a proof where I don't use L'Hospital.



Help?


Answer



Since $\frac{\sin x-x}{x^2}$ is an odd function, it suffices to show that the right-limit converges to $0$. In this answer, we only use the following fact:




$$ \forall x \in (0, \pi/2), \quad \sin x \leq x \leq \tan x. $$



This inequality appears in the standard proof of $\frac{\sin x}{x} \to 1$ as $x \to 0$ in many calculus textbook, so I will skip the proof. And indeed, once we have this inequality, then $\frac{1}{\cos x} \leq \frac{\sin x}{x} \leq 1$ and letting $x \to 0$ together with the squeezing theorem gives $\frac{\sin x}{x} \to 1$.



Next, for $x \in (0, \pi/2)$,



$$ \frac{\sin x - \tan x}{x^2} \leq \frac{\sin x - x}{x^2} \leq 0. $$



But since $ \sin x - \tan x = \tan x(\cos x - 1) = - \frac{\tan x \sin^2 x}{1+\cos x} $ and $\frac{\sin x}{x} \to 1$ as $x\to 0$, we have




$$ \lim_{x \to 0} \frac{\sin x - \tan x}{x^2} = - \lim_{x \to 0} \frac{\tan x}{1+\cos x}\left(\frac{\sin x}{x}\right)^2 = 0. $$



So, by the squeezing theorem,



$$ \lim_{x\to0^+} \frac{\sin x - x}{x^2} = 0. $$


Tuesday 20 August 2019

math history - Approximation for $pi$



I just stumbled upon




$$ \pi \approx \sqrt{ \frac{9}{5} } + \frac{9}{5} = 3.141640786 $$



which is $\delta = 0.0000481330$ away from $\pi$. Although this is a rather crude approximation, I wonder if it has ever been used historically. Note that the above might also be related to the golden ratio $\Phi = \frac{\sqrt 5 + 1}{2} $ somehow (the $\sqrt5$ is common in both).



$$ \Phi = \frac{5}{6} \left( \sqrt{ \frac{9}{5} } + \frac{9}{5} \right) - 1 $$



or



$$ \Phi \approx \frac{5}{6} \pi - 1 $$




I would like to know if someone (known) has used this, or something similar, in their work. Is it at all familiar to any of you?



Related Question (link).


Answer



Ramanujan found this approximation, among many others, according to Wolfram MathWorld (equation 21 on the linked page).
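Both claims in the question check out numerically; in fact the $\Phi$ relation is exact, since $\frac{5}{6}\cdot\frac{3}{\sqrt5} = \frac{\sqrt5}{2}$. A sketch:

```python
import math

approx = math.sqrt(9 / 5) + 9 / 5          # ≈ 3.141640786
delta = abs(math.pi - approx)              # ≈ 4.81e-5

# Exact identity: (5/6)(sqrt(9/5) + 9/5) - 1 = sqrt(5)/2 + 1/2 = Phi.
phi = (math.sqrt(5) + 1) / 2
identity = 5 / 6 * approx - 1
```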


analysis - Determine whether this series is absolutely convergent, conditionally convergent or divergent?



The series $ \sum_{n=1}^{\infty} \frac{(-1)^n n}{n^2 + 1} $; is it absolutely convergent, conditionally convergent or divergent?



This question is meant to be worth quite a few marks so although I thought I had the answer using the comparison test, I think I'm supposed to incorporate the alternating series test.


Answer



Your series is convergent by the Leibniz (alternating series) test but not absolutely convergent, as you can see by comparison with $\sum \frac{1}{n+1}$.


probability - How many tries on average before I see the same value $N$ times in a row?



If I repeatedly roll a fair, $X$-sided die, on average how many rolls should I expect to make before I happen to roll the same value $N$ times in a row?




I've found questions on here with answers when $X=2$, or where $N=2$, or where you're looking for a specific result $N$ times in a row; I'm interested in the situation where I don't care what the specific value is, I just want it to be $N$ times in a row. The closest I can find is Suppose we roll a fair $6$ sided die repeatedly. Find the expected number of rolls required to see $3$ of the same number in succession., but I can't quite figure out how to generalize it beyond $N=3$.


Answer



Let $E_k$ denote the expected number of rolls that are still needed if $k$ rolls have passed with equal result. We write recurrence relations for $E_k,0\leq k\leq N-1$ and calculate the wanted expectation $E=E_0$ from them.




We obtain
\begin{align*}
E_0&=E_1+1\tag{1}\\
E_1&=\frac{1}{X}(E_2+1)+\frac{X-1}{X}(E_1+1)\\
E_2&=\frac{1}{X}(E_3+1)+\frac{X-1}{X}(E_1+1)\\
E_3&=\frac{1}{X}(E_4+1)+\frac{X-1}{X}(E_1+1)\tag{2}\\
&\ \ \ \vdots\\
E_{N-3}&=\frac{1}{X}(E_{N-2}+1)+\frac{X-1}{X}(E_1+1)\\
E_{N-2}&=\frac{1}{X}(E_{N-1}+1)+\frac{X-1}{X}(E_1+1)\\
E_{N-1}&=\frac{1}{X}+\frac{X-1}{X}(E_1+1)\tag{3}\\
\end{align*}




Comment:





  • In (1) we note that any roll of the $X$-sided die will do.


  • In (2) we are in the situation that a run of length $3$ is given. With probability $\frac{1}{X}$ we get a run of length $4$ and with probability $\frac{X-1}{X}$ we get a different result and start with a run of length $1$.


  • In (3) we get a run of length $N$ with probability $\frac{1}{X}$ and can stop, or we are back at the beginning and start again with a run of length $1$.





This system can be easily solved. We iteratively obtain, using all the equations above besides the last one (tagged with (3)):
\begin{align*}
E_1&=E_0-1\\
E_2&=E_0-(1+X)\\
E_3&=E_0-(1+X+X^2)\\
E_4&=E_0-(1+X+X^2+X^3)\\
&\ \ \ \vdots\\
E_{N-1}&=E_0-(1+X+X^2+\cdots+X^{N-2})\\
&=E_0-\frac{X^{N-1}-1}{X-1}\tag{4}\\
\end{align*}



From equation (4) and (3) we finally conclude
\begin{align*}
E_{N-1}&=E_0-\frac{X^{N-1}-1}{X-1}\\
(1-X)E_0+XE_{N-1}&=1
\end{align*}
from which
\begin{align*}
\color{blue}{E=E_0=\frac{X^N-1}{X-1}}
\end{align*}
follows.





Note: This approach is a generalisation of @lulu's answer to this MSE question. In fact, as a small plausibility check, with $N=3$ and $X=6$ we get the result
\begin{align*}
E=\frac{6^3-1}{6-1}=\frac{215}{5}=43.
\end{align*}
in accordance with @lulu's result.




[Add-on] The strongly related problem: Rolling a fair $X$-sided die until we get a run of length $N$ of a specific value can be solved by the system above with the first equation (1) replaced with
\begin{align*}
E_0&=\frac{1}{X}(E_1+1)+\frac{X-1}{X}(E_0+1)
\end{align*}
which gives the result
\begin{align*}
\color{blue}{E=E_0=\frac{X\left(X^N-1\right)}{X-1}}
\end{align*}
This is also plausible compared with the result above, since the probability to roll a specific value is $\frac{1}{X}$.
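A Monte-Carlo simulation agrees with the closed form (a sketch; seed and trial count are my choices):

```python
import random

# Simulate rolls of an X-sided die until some value appears N times in a row;
# the derived formula gives E = (X^N - 1)/(X - 1) = 43 for X = 6, N = 3.
random.seed(1)
X, N, trials = 6, 3, 20000
total = 0
for _ in range(trials):
    rolls, run, last = 0, 0, None
    while run < N:
        v = random.randint(1, X)
        rolls += 1
        run = run + 1 if v == last else 1
        last = v
    total += rolls
average = total / trials
```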



calculus - What would be the limit of $frac{2000000^{x}}{2^{x^2}}$ as $x$ approaches infinity?



I'm interested in the limit of the fraction:
$\frac{2000000^{x}}{2^{x^2}}$ as $x$ approaches infinity. Since the limit results in an indeterminate fraction of $\frac{\infty}{\infty}$, I was thinking of using l'hopitals rule but I don't think the derivatives would help much as the exponential would remain. How would this be computed?


Answer



We have $2000000<2^{21} $, so $\dfrac{2000000^{x}}{2^{x^2}}<\dfrac {(2^{21})^x}{2^{x^2}}= \dfrac {1}{2^{x^2-21x}}$. Can you take it from here?



Generalization: There is nothing special about $2000000$; it could have been any other positive number, including a number less than $1$ (raising a negative number to the power $x$ gets a little more tricky).




We want to show $\displaystyle \lim_{x \to \infty} \dfrac {a^x}{2^{x^2}}=0$. No matter what $a$ is, we can always find a number $n$ such that $a < 2^n$. Then $ \displaystyle 0 \le \displaystyle \dfrac {a^x}{2^{x^2}} \le \dfrac {(2^n)^x}{2^{x^2}} = \dfrac {1}{2^{x^2-nx}}$, so $0 \le \lim_{x \to \infty} \dfrac {a^x}{2^{x^2}} \le \lim_{x \to \infty} \dfrac {1}{2^{x^2-nx}}=0$, so $\displaystyle\lim_{x \to \infty} \dfrac {a^x}{2^{x^2}} =0$
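Numerically the decay is dramatic; working with $\ln f(x) = x\ln(2000000) - x^2\ln 2$ avoids overflow (a sketch):

```python
import math

# f(x) = 2000000^x / 2^(x^2); its log is x*ln(2000000) - x^2*ln(2) -> -inf.
def f(x):
    return math.exp(x * math.log(2000000) - x * x * math.log(2))

v30, v40 = f(30), f(40)   # already vanishingly small, and still shrinking
```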


Sunday 18 August 2019

notation - Proper use of the vinculum to indicate repeating decimal.

A nerdy debate with a colleague leads me to ask this question:




To be used correctly, does the vinculum HAVE to only be placed over the shortest sequence of digits that repeat after a decimal?



For example, to indicate 1/3, is it ok to write $0.\overline{33}$ instead of $0.\overline{3}$?



Thank you!

summation - Doubling sequences of the cyclic decimal parts of the fraction numbers




Is there any theory of why and when doubling sequences occur in the decimal parts of fractions? Take for example these small numbers:



1/7 = 0.[142857]
1/19 = 0.[052631578947368421]
1/49 = 0.[020408163265306122448979591836734693877551]


where the sequence inside [] is the repeating/cycling part.



For 1/7 we can see that the doubling starts with 14, followed by 28, 56, 112, 224, ..., where a number longer than 2 digits carries its leading part additively into the previous block. The same calculation can be written as:




$(2^1\cdot 7)/10^2+(2^2\cdot 7)/10^4+(2^3\cdot 7)/10^6+(2^4\cdot 7)/10^8+\cdots$



Or complete with summation function:



$$\sum_{n=1}^{\infty} \frac{2^n\cdot 7}{10^{2n}}=1/7$$



For 1/49 we find the repeating part $[020408163265306122448979591836734693877551]$, again a doubling series: $02, 04, 08, 16, 32, 64, 128, 256, \ldots$, whose summation function ends up being somewhat simpler:



$$\sum_{n=1}^{\infty} \frac{2^n}{10^{2n}}=1/49$$
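Both summation functions are geometric series with ratio $\frac{2}{100}$ and can be verified exactly (a sketch using rational arithmetic):

```python
from fractions import Fraction

# Geometric series with ratio r = 2/100, summed in closed form r/(1-r):
r = Fraction(2, 100)
sum_1_49 = r / (1 - r)        # sum_{n>=1} 2^n / 10^(2n)
sum_1_7 = 7 * sum_1_49        # sum_{n>=1} 7 * 2^n / 10^(2n)
```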




For 1/19 doubling can be seen in reverse order starting from the end of the sequence: $1, 2, 4, 8, 16, 32, 64, 128, \ldots$



I can't find a summation function for this, however. The calculation of the digits clearly comes from the summation of this sequence:



$$2^0/10^{18} + 2^1/10^{17} + 2^2/10^{16} + 2^3/10^{15} + 2^4/10^{14} + 2^5/10^{13} + 2^6/10^{12} + 2^7/10^{11} + 2^8/10^{10} + 2^9/10^9 + 2^{10}/10^8 + 2^{11}/10^7 + 2^{12}/10^6 + 2^{13}/10^5 + 2^{14}/10^4 + 2^{15}/10^3 + 2^{16}/10^2 + 2^{17}/10^1 + 2^{18}/10^0 + ...$$



which results in $275941.052631578947368421\ldots$. If we go further, adding fractions with $10^{-1}, 10^{-2},\ldots$ and so on, the repeating pattern becomes visible in the whole-number part. My secondary wonder is whether there is a summation function for $1/19$.
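There is such a closed form: the reverse-doubling sum is itself geometric with ratio $20$, since $\sum_{k=0}^{18} \frac{2^k}{10^{18-k}} = \frac{1}{10^{18}}\sum_{k=0}^{18}20^k = \frac{20^{19}-1}{19\cdot 10^{18}}$, and $19 \mid 20^{19}-1$ because $20 \equiv 1 \pmod{19}$. A sketch checking that the integer $(20^{19}-1)/19$ carries the period of $1/19$:

```python
# N = sum_{k=0}^{18} 20^k = (20^19 - 1)/19, an exact integer.
N = (20 ** 19 - 1) // 19
digits = str(N)   # should read 275941 followed by the period of 1/19
```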



The primary question is why and when this kind of forward or reverse doubling sequence occurs in a recurring decimal expansion.




I have read these materials, but the sequential doubling behavior of cyclic numbers is not really discussed in them:



http://thestarman.pcministry.com/math/rec/RepeatDec.htm



https://www.quora.com/Why-does-the-number-142857-have-such-interesting-properties



http://mathforum.org/library/drmath/view/63848.html



http://en.wikipedia.org/wiki/Cyclic_number




http://en.wikipedia.org/wiki/Recurring_decimal



http://www.researchgate.net/publication/259735461_An_Unanticipated_Decimal_Expansion



Examples table



G   SL  SDS SF              S
-----------------------------
7 6 27 14*2 142857

19 18 81 05^n and 2^n reverse (also 12*4 reverse) 052631578947368421
47 46 207 02 * 6 0212765957446808510638297872340425531914893617
49 42 189 02^n 020408163265306122448979591836734693877551
71 35 126 014 * 6 01408450704225352112676056338028169
79 13 54 1*8 reverse 126582278481
83 41 171 012 * 4 01204819277108433734939759036144578313253
89 44 198 fibonacci 01123595505617977528089887640449438202247191
97 96 432 03^n 010309278350515463917525773195876288659793814432989690721649484536082474226804123711340206185567
109 108 486 fibonacci reverse? 009174311926605504587155963302752293577981651376146788990825688073394495412844036697247706422018348623853211
199 99 405 005^n and 02^n reverse 005025125628140703517587939698492462311557788944723618090452261306532663316582914572864321608040201



G = Generator
SL = Sequence length
SDS = Sequence digit sum
SF = Summation function
S = Sequence


Answer



As your research shows, most discussion of repeating decimals focuses
on the simple repeating block of digits in the decimal expansion.

There is some interest in "cycling" numbers, but those are really just
the same repeating block starting at a different point:



\begin{align}
1/7 &= 0.\overline{142857} \\
&= 0.142857\overline{142857} \\
&= 0.14\overline{285714}.
\end{align}



The simple repeating block of $1/7$ is easily seen if you try to compute

$1/7$ by long division:



$$\require{enclose}
\begin{array}{rll}
0.142857 \\[-3pt]
7 \enclose{longdiv}{1.000000} \\[-3pt]
\underline{7}\phantom{00000} \\[-3pt]
30\phantom{0000} \\[-3pt]
\underline{28}\phantom{0000} \\[-3pt]
20\phantom{000} \\[-3pt]
\underline{14}\phantom{000} \\[-3pt]
60\phantom{00} \\[-3pt]
\underline{56}\phantom{00} \\[-3pt]
40\phantom{0} \\[-3pt]
\underline{35}\phantom{0} \\[-3pt]
50 \\[-3pt]
\underline{49} \\[-3pt]
1
\end{array}
$$




Once you find a remainder of $1$ after one of the subtraction steps,
you know the pattern will repeat.
(In general, for $1/n$, you don't necessarily ever find a remainder of $1$;
but as soon as you see any remainder that you have seen before,
you have found a repeating pattern.
For example, $1/6$ has the repeating remainder $4$.)



But long division also reveals shorter patterns that "repeat" in the
form of geometric series:




\begin{array}{rll}
0.14 \\[-3pt]
7 \enclose{longdiv}{1.00} \\[-3pt]
\underline{7}\phantom{0} \\[-3pt]
30\\[-3pt]
\underline{28} \\[-3pt]
2 \\[-3pt]
\end{array}




The remainder of $2$ indicates that the rest of the digits of $1/7$
will just be the digits of $2/7$ shifted two places to the right;
and the digits that occur there will just be the result of doubling
all the digits of $1/7$. In more algebraic notation, what happens is



\begin{align}
1 &= 7 \times 0.14 + 0.02 \\
&= 7 \times 0.14 + 0.02(7 \times 0.14 + 0.02) \\
&= 7 \times 0.14 + 0.02(7 \times 0.14 + 0.02(7 \times 0.14 + 0.02)) \\
&= 7 \times 0.14 + 0.02(7 \times 0.14 +

0.02(7 \times 0.14 + 0.02(7 \times 0.14 + 0.02))).
\end{align}



If we rewrite these same equations with all the multiplications fully
distributed over the terms in parentheses, the result is



\begin{align}
1 &= 7 \times 0.14 + 0.02 \\
&= 7 \times 0.14 + 7 \times 0.02 \times 0.14 + 0.02^2 \\
&= 7 \times 0.14 + 7 \times 0.02 \times 0.14

+ 7 \times 0.02^2 \times 0.14 + 0.02^3 \\
&= 7 \times 0.14 + 7 \times 0.02 \times 0.14
+ 7 \times 0.02^2 \times 0.14
+ 7 \times 0.02^3 \times 0.14 + 0.02^4.
\end{align}



As we continue writing equations in this pattern, we generate the terms
of the infinite geometric series



$$ \sum_{k=0}^\infty 7 \times 0.14 \times 0.02^k. $$




The method by which we develop this series suggests intuitively
that the limit of the sum as the number of terms goes to infinity is $1$,
but we can also use the known formula for the sum of a geometric series,
$ \sum_{k=0}^\infty r^k = \frac{1}{1 - r}$, to prove it:



$$ \sum_{k=0}^\infty 7 \times 0.14 \times 0.02^k
= 7 \times 0.14 \times \sum_{k=0}^\infty 0.02^k
= 0.98 \times \left( \frac{1}{1 - 0.02} \right) = 1. $$




So we have



\begin{align}
\frac17 = \frac17 \times 1
&= \frac17 \sum_{k=0}^\infty 7 \times 0.14 \times 0.02^k \\
&= \sum_{k=0}^\infty \frac{14}{100} \times \left( \frac{2}{100} \right)^k \\
&= \sum_{k=0}^\infty \frac{14 \times 2^k}{100^{k+1}} .
\end{align}



The factor of $2^k$ in each term of the sum produces the doubling phenomenon:




\begin{align}
1/7 = & \phantom{+0} 0.14 \\
& + 0.0028 \\
& + 0.000056 \\
& + 0.00000112 \\
& + 0.0000000224 \\
& + \cdots .
\end{align}




Of course, unlike the simple repeating block $142857$, after a few iterations
these doubling blocks of digits start to "overlap", meaning you actually have
to add them together rather than simply concatenating them.
You already knew that, of course, but I think it's important to emphasize that
these sequences and the simple repeating sequences come to the same
answer in different ways.
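You can watch the overlapping additions converge using exact rational arithmetic. A small sketch with Python's `fractions` module (the helper name is mine), summing the first few terms of $\sum_k 14\cdot 2^k/100^{k+1}$:

```python
from fractions import Fraction

def doubling_partial_sum(terms):
    """Partial sum of sum_{k=0}^{terms-1} 14 * 2^k / 100^(k+1)."""
    return sum(Fraction(14 * 2**k, 100**(k + 1)) for k in range(terms))

target = Fraction(1, 7)
for t in (1, 2, 5, 10):
    s = doubling_partial_sum(t)
    print(t, s, float(target - s))
```

Each extra term shrinks the error by a factor of $50$ (that is, $100/2$), since the leftover tail after $t$ terms is exactly $\frac{1}{7}\left(\frac{2}{100}\right)^t$.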



Notice also that $0.02^3 = 8 \times 10^{-6}$.
This would correspond to a remainder of $8$ at the end of a step of long division,
which we do not allow when dividing by $7$.

In effect, the long division algorithm uses the fact that
$$7 \times 0.02^2 \times 0.14 + 0.02^3
= 7 \times 0.000056 + 0.000008
= 7 \times 0.000057 + 0.000001$$
to stop the "doubling" effect and start from $1$ again.



You can also get a tripling effect in $1/7$ from the first step of the
long division,



\begin{array}{rll}

0.1 \\[-3pt]
7 \enclose{longdiv}{1.0} \\[-3pt]
\underline{7} \\[-3pt]
3
\end{array}



This corresponds to the equation



$$ 1 = 7 \times 0.1 + 0.3, $$




which can be further expanded to $7 \times 0.1 + 0.3(7 \times 0.1 + 0.3)$,
$7 \times 0.1 + 0.3(7 \times 0.1 + 0.3(7 \times 0.1 + 0.3))$,
and so forth, which generates the series



$$\sum_{k=0}^\infty \frac{3^k}{10^{k+1}}
= 0.1 + 0.03 + 0.009 + 0.0027 + 0.00081 + \cdots.$$



So long division is one way to discover a doubling or tripling pattern applied
to some digits (often many fewer digits than the simple repeated block contains),
and more than one such pattern can be discovered this way.







The last example (the tripling pattern of $1/7$)
is an example of the infinite sum
$\sum_{k=1}^{\infty} \frac{\left(10^x - n\right)^{k-1}}{\left(10^x\right)^k}$
which you mentioned in a comment, with $n=7$ and $x=1$.
This works as a representation of the fraction $1/n$ because
the sum is a geometric series, and




$$\sum_{k=1}^{\infty} \frac{\left(10^x - n\right)^{k-1}}{\left(10^x\right)^k}
= \frac 1n.$$



(This was shown in https://math.stackexchange.com/a/1372077/139123 using
a generalization of this formula, with $N$ instead of $10^x$
and $p$ instead of $n$.)



Let's generalize this formula in a slightly different way:



$$\sum_{k=0}^{\infty}

\frac{m\left(10^x - mn\right)^k}{\left(10^x\right)^{k+1}}.$$



Notice that this starts $k$ at $0$ instead of $1$ without changing
any of the terms of the series.
We can evaluate this as follows:



\begin{align}
\sum_{k=0}^{\infty}
\frac{m\left(10^x - mn\right)^k}{\left(10^x\right)^{k+1}}
& = \frac m{10^x} \sum_{k=0}^{\infty} \left(1 - \frac{mn}{10^x}\right)^k

\tag{A} \\
& = \frac m{10^x} \left(\frac{1}{1 - \left(1 - \frac{mn}{10^x}\right)}\right)
\tag{B} \\
& = \frac m{10^x} \left(\frac{10^x}{mn}\right)\\
& = \frac 1n.
\end{align}



Once again, we use the formula for the sum of an infinite
geometric series to get from step (A) to step (B).




This gives you a lot of flexibility in setting up geometric series to
find doubling, tripling, or other $N$-tupling digit sequences in a
repeating decimal fraction.
For example, setting $n = 7$, $m=14$, $x=2$,



$$
\frac 17
= \sum_{k=0}^{\infty} \frac{m\left(10^x - mn\right)^k}{\left(10^x\right)^{k+1}}
= \sum_{k=0}^{\infty}
\frac{14\left(10^2 - 14\times7\right)^k}{\left(10^2\right)^{k+1}}

= \sum_{k=0}^{\infty}
\frac{14 \times 2^k}{100^{k+1}}
$$
just as we found before.
Other examples are:



\begin{array}{rrrcl}
n & m & x && \qquad\text{decimal expansion} \\ \hline
19 & 5 & 2 &&
\displaystyle{\frac{1}{19}

= \sum_{k=0}^{\infty} \frac{5 \times 5^k}{100^{k+1}}}
= 0.05 + 0.0025 + 0.000125 + \cdots \\
47 & 2 & 2 &&
\displaystyle{\frac{1}{47}
= \sum_{k=0}^{\infty} \frac{2 \times 6^k}{100^{k+1}}}
= 0.02 + 0.0012 + 0.000072 + \cdots \\
49 & 2 & 2 &&
\displaystyle{\frac{1}{49}
= \sum_{k=0}^{\infty} \frac{2 \times 2^k}{100^{k+1}}}
= 0.02 + 0.0004 + 0.000008 + \cdots \\

71 & 14 & 3 &&
\displaystyle{\frac{1}{71}
= \sum_{k=0}^{\infty} \frac{14 \times 6^k}{1000^{k+1}}}
= 0.014 + 0.000084 + 0.000000504 + \cdots \\
83 & 12 & 3 &&
\displaystyle{\frac{1}{83}
= \sum_{k=0}^{\infty} \frac{12 \times 4^k}{1000^{k+1}}}
= 0.012 + 0.000048 + 0.000000192 + \cdots \\
97 & 1 & 2 &&
\displaystyle{\frac{1}{97}

= \sum_{k=0}^{\infty} \frac{1 \times 3^k}{100^{k+1}}}
= 0.01 + 0.0003 + 0.000009 + \cdots \\
199 & 5 & 3 &&
\displaystyle{\frac{1}{199}
= \sum_{k=0}^{\infty} \frac{5 \times 5^k}{1000^{k+1}}}
= 0.005 + 0.000025 + 0.000000125 + \cdots
\end{array}



There is a doubling pattern when the summation has $2^k$ in the numerator,
tripling when the summation has $3^k$ in the numerator, etc.
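Each table row can be checked mechanically with the closed form derived above; a sketch with exact rationals (the helper name is mine):

```python
from fractions import Fraction

def series_value(n, m, x):
    """Closed form of sum_{k>=0} m*(10^x - m*n)^k / (10^x)^(k+1):
    first term divided by (1 - common ratio) of the geometric series."""
    base = 10**x
    ratio = Fraction(base - m * n, base)
    return Fraction(m, base) / (1 - ratio)

rows = [(7, 14, 2), (19, 5, 2), (47, 2, 2), (49, 2, 2),
        (71, 14, 3), (83, 12, 3), (97, 1, 2), (199, 5, 3)]
assert all(series_value(n, m, x) == Fraction(1, n) for n, m, x in rows)
print("all table rows equal 1/n")
```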




The procedure for generating these examples was simply: given $n$,
choose $x$ and set $m = \left\lfloor \frac{10^x}{n} \right\rfloor$.
Whether the result is an "interesting" pattern depends partly on how
hard you want to look at the digits of the decimal fraction;
for example, with $1/7$ there is
a pattern with $x=1$, $m=1$ (powers of $10-7 = 3$),
a pattern with $x=3$, $m=142$ (powers of $1000-7\times142 = 6$),
and even patterns with $x=4$ and $x=5$, but the easiest pattern to see
(in my opinion) is for $x=2$ (powers of $2$).

The effect is enhanced when $10^x - mn$ is small.
It also seems desirable for $x$ to be a fraction of the period of the repeating decimal, and not very large.
You can find $10^x - mn$ for as many values of $x$ as you wish
by starting with $x=0$ (in which case $10^x - mn = 1$),
and for each successive value of $x$ ($x = 1,2,3,\ldots$),
multiply the previous result by $10$ and find the remainder
when you divide by $n$.
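The multiply-by-10-and-reduce procedure is easy to automate. A sketch that, for a given $n$, reports the $x$ for which $10^x - mn$ is small (the helper name and the cutoff of $10$ are my choices):

```python
def small_remainders(n, max_x=8, threshold=10):
    """For x = 1..max_x compute r = 10^x mod n; whenever 0 < r < threshold,
    report (x, m, r) with m = floor(10^x / n), so that r = 10^x - m*n."""
    found = []
    r = 1                       # 10^0 mod n
    for x in range(1, max_x + 1):
        r = (r * 10) % n
        if 0 < r < threshold:
            found.append((x, 10**x // n, r))
    return found

print(small_remainders(7))    # starts (1, 1, 3), (2, 14, 2): tripling, doubling, ...
print(small_remainders(17))   # includes (4, 588, 4), matching 1/17's 588 * 4^k pattern
```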



It does not seem simple to predict which $n$ will have small enough values
of $10^x - mn$ for small enough values of $x$

without going through this exercise (even setting aside the fact that
"small enough" is a subjective measurement here).
But by setting this up in a spreadsheet I found some additional
"interesting" patterns for $n < 200$:



$$
\frac{1}{17} = \sum_{k=0}^{\infty} \frac{588 \times 4^k}{10000^{k+1}},
\quad
\frac{1}{43} = \sum_{k=0}^{\infty} \frac{232558 \times 6^k}{10000000^{k+1}},
\quad

\frac{1}{51} = \sum_{k=0}^{\infty} \frac{196 \times 4^k}{10000^{k+1}},
$$
$$
\frac{1}{119} = \sum_{k=0}^{\infty} \frac{84 \times 4^k}{10000^{k+1}},
\quad
\frac{1}{127} = \sum_{k=0}^{\infty} \frac{7874 \times 2^k}{1000000^{k+1}},
\quad
\frac{1}{167} = \sum_{k=0}^{\infty} \frac{5988 \times 4^k}{1000000^{k+1}}.
$$




And of these, I would not consider $1/17$ or $1/43$ to be "easy" to see
even after you know what to look for.


probability - game theoretic die rolls

Suppose player X has a 6 sided die and player Y has a 10 sided die. They each get two rolls and they can each choose to stop rolling on either one of the rolls, taking the number on that roll. Whoever has the higher number wins, when there is a tie X wins. What is the probability that Y wins this game?



I even couldn't start computing probabilities since X's decision is based on Y's decision. I assume that they are both rational. How can we proceed?

combinatorics - Another Hockey Stick Identity



I know this question has been asked before and has been answered here and here.



I have a slightly different formulation of the Hockey Stick Identity and would like some help with a combinatorial argument to prove it. First I have this statement to prove:
$$
\sum_{i=0}^r\binom{n+i-1}{i}=\binom{n+r}{r}.

$$
I already have an algebraic solution here using the Pascal Identity:
$$
\begin{align*}
\binom{n+r}{r}&=\binom{n+r-1}{r}+\binom{n+r-1}{r-1}\\
&=\binom{n+r-1}{r}+\left[\binom{n+(r-1)-1}{(r-1)}+\binom{n+(r-1)-1}{r-2}\right]\\
&=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{(r-1)}+\left[\binom{n+(r-2)-1}{r-2}+\binom{n+(r-2)-1}{(r-2)-1}\right]\\
&\,\,\,\vdots\\
&=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{r-1}+\binom{n+(r-2)-1}{r-2}+\binom{n+(r-3)-1}{r-3}+\cdots+\left[\binom{n+1-1}{1}+\binom{n+1-1}{0}\right]\\
&=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{r-1}+\binom{n+(r-2)-1}{r-2}+\binom{n+(r-3)-1}{r-3}+\cdots+\binom{n+1-1}{1}+\binom{n-1}{0}\\

&=\sum_{i=0}^r\binom{n+i-1}{i}.
\end{align*}
$$



I have read both combinatorial proofs in the referenced answers above, but I cannot figure out how to alter the combinatorial arguments to suit my formulation of the Hockey Stick Identity. Basically, this formulation gives the "other" hockey stick. Any ideas out there?


Answer



Note that $\binom{n+r}{r}=\binom{n+r}{n}$ is the number of subsets of $\{1,2,\ldots,n+r\}$ of size $n$. On the other hand, for $i=0,1,2,\ldots,r$, $\binom{n+i-1}{i}=\binom{n+i-1}{n-1}$ is the number of subsets of $\{1,2,\ldots,n+r\}$ of size $n$ whose largest element is $n+i$.


abstract algebra - Constructing a finite field



I'm looking for constructive ways to obtain finite fields, for any given size $q=p^n$. For example, I know it suffices to find an irreducible polynomial of degree $n$ over $\mathbb{Z}_p$ (and then obtain the field as its quotient ring), but how can such polynomial be efficiently found?




Also, I know there are more ways to represent elements of finite fields - are they easier to use than the irreducible polynomial method? What is done in practice in computational mathematical libraries?


Answer



In practice, one guesses an $n$'th degree polynomial and tests it for irreducibility. As a random polynomial has about a $1/n$ probability of being irreducible, this does not take too long. For variations and improvements on this idea see this paper by Shoup (author of the NTL library, which you may also want to look at.)
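For small $p$ and $n$ the guess-and-test loop can be sketched directly. The irreducibility test below is deliberately naive, trial division by every monic polynomial of degree at most $n/2$, standing in for the fast tests discussed in Shoup's paper; all helper names are mine:

```python
import itertools
import random

def poly_mod_div(a, b, p):
    """Remainder of polynomial a divided by monic b, coefficients mod p.
    Polynomials are coefficient lists, lowest degree first."""
    a = a[:]
    while len(a) >= len(b) and any(a):
        factor = a[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[shift + i] = (a[shift + i] - factor * c) % p
        while a and a[-1] == 0:
            a.pop()
    return a                       # empty list means remainder zero

def is_irreducible(f, p):
    """Trial division by every monic polynomial of degree 1..deg(f)//2."""
    n = len(f) - 1
    for d in range(1, n // 2 + 1):
        for coeffs in itertools.product(range(p), repeat=d):
            g = list(coeffs) + [1]          # monic of degree d
            if not poly_mod_div(f, g, p):   # divisible => reducible
                return False
    return True

def random_irreducible(p, n, rng=random.Random(0)):
    """Keep guessing random monic degree-n polynomials until one passes."""
    while True:
        f = [rng.randrange(p) for _ in range(n)] + [1]
        if is_irreducible(f, p):
            return f
```

For example, `random_irreducible(2, 4)` returns one of the three irreducible quartics over $\mathbb{F}_2$; real libraries replace `is_irreducible` with much faster tests.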


Friday 16 August 2019

real analysis - T/F: a smooth function that grows faster than any linear function grows faster than $x^{1+\epsilon}$



Prove or find a counterexample to the claim that a smooth function that grows faster than any linear function grows faster than $x^{1+\epsilon}$ for some $\epsilon>0$.



My attempt: I understand that the first part of the problem claims $\lim_{x\rightarrow \infty}\frac{g(x)}{kx} = \infty, \forall k>0$. We want to show, then, that $\exists \epsilon >0$ and constant $l>0$ such that $\lim_{x\rightarrow \infty}\frac{g(x)}{lx^{1+\epsilon}} = \infty$.



I've tried using the definition of limits, but I get stuck trying to bound the function $\frac{1}{x^\epsilon}$. Also, I've tried using L'Hopital's rule to no avail. Any ideas?



Any help is appreciated!


Answer




Hint: It is false. Find a counterexample.



Followup hint: (place your mouse on the hidden text to show it)




The function $f\colon(0,\infty)\to\mathbb{R}$ defined by $f(x) = x\ln x$ is such a counterexample.




Followup followup hint: (pretty much the solution, with some details to fill in; place your mouse on the hidden text to show it)





For any $a>0$, $\frac{x\ln x}{a x} = \frac{1}{a}\ln x \xrightarrow[x\to\infty]{} \infty$. However, for any fixed $\epsilon > 0$, $$\frac{x\ln x}{x^{1+\epsilon}} = \frac{\ln x}{x^\epsilon}=\frac{1}{\epsilon}\frac{\ln(x^\epsilon)}{x^\epsilon} = \frac{1}{\epsilon}\frac{\ln t}{t}$$ for $t=x^\epsilon \xrightarrow[x\to\infty]{}\infty$.



Thursday 15 August 2019

calculus - Why is $\infty \cdot 0$ an indeterminate form, if $\infty$ can be treated as a very large positive number?




Why is $\infty \cdot 0$ indeterminate?



Although $\infty$ is not a real number, it can be treated as a very large positive number, and any number multiplied by $0$ is $0$. Why not in this case?


Answer




Why is $\infty \cdot 0$ indeterminate?




It's called indeterminate because if you get $\infty \cdot 0$ when evaluating a limit, then you can't conclude anything about the result.

Here's an example:
$$
\lim_{n \to \infty} (n^2) \cdot \left(\frac{1}{n}\right).
$$

Now, this limit is $\infty$. But the limit of the first thing $(n^2)$ is $\infty$, and the limit of the second $(1/n)$ is $0$. So we cannot evaluate $\infty \cdot 0$ to compute the limit.



More technically: If we have one sequence $a_1, a_2, a_3, \ldots$ of real numbers that approaches $\infty$, and another $b_1, b_2, b_3, \ldots$ that approaches $0$, the product sequence $a_1 b_1, a_2 b_2, a_3 b_3, \ldots$ might approach any real number, or it might not approach anything at all.
This is what it means for $\infty \cdot 0$ to be an "indeterminate form".
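The "might approach any real number" claim is easy to illustrate numerically. A sketch with three pairs of sequences, each with $a_n \to \infty$ and $b_n \to 0$:

```python
# In every pair a_n -> infinity and b_n -> 0, yet the products differ wildly.
pairs = {
    "product -> 0":        (lambda n: n,    lambda n: 1 / n**2),
    "product -> 5":        (lambda n: n,    lambda n: 5 / n),
    "product -> infinity": (lambda n: n**2, lambda n: 1 / n),
}
for label, (a, b) in pairs.items():
    n = 10**6
    print(label, a(n) * b(n))
```

By tuning $b_n = c/a_n$ you can make the product converge to any constant $c$ you like, which is precisely why no single value can be assigned to $\infty \cdot 0$.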





Although $\infty$ is not a real number, it can be treated as a very large positive number




No, I would not agree with this statement.
It may be helpful to intuitively think of $\infty$ as a very large positive number, but this is not what infinity is. $\infty$ is a sort of limit of higher and higher positive numbers.
In the same way $0$ is a limit of lower and lower positive numbers.




and any number multiplied by $0$ is $0$. Why not in this case?





As explained above, $\infty$ is not a very large number, but rather a limit of larger and larger numbers. So we cannot say that multiplying $\infty$ by $0$ is $0$.


The Complex Logarithm of a Function



For an analytic function $f$ that does not vanish on a simply connected region, we may define its logarithm to be the function:



$$\log f=g(z):=\int_{\gamma}\frac{f'}{f}\,dz+c_0.$$



Where $\gamma$ is some path starting at an arbitrary point in the region, and ending at $z$; while $c_0$ satisfies $e^{c_0}=f(z_0)$.



I believe that this logarithm should satisfy, under certain conditions: $$\log f=\log |f|+i\arg(f).$$




Am I right, or this is too difficult in general?


Answer



The function $g$ satisfies $g' = \frac{f'}{f}$ in the given domain, so that
$$
(f e^{-g} )' = f' e^{-g} - f g' e^{-g} = 0 \\
\implies f e^{-g} = \text{const} = f(z_0) e^{-g(z_0)} = f(z_0) e^{-c_0} = 1 \, .
$$

Therefore $e^g = f$, i.e. $g$ is “a holomorphic logarithm” of $f$ in the domain. In particular
$$

f(z) = e^{g(z)} = e^{\operatorname{Re} g(z)} e^{ i \operatorname{Im}g(z)}
$$

which implies that
$$
|f(z)| = e^{\operatorname{Re}g(z)} \implies \operatorname{Re}g(z) = \log |f(z)|
$$

and that $ \operatorname{Im}g(z)$ is an argument of $f(z)$. So
$$
g(z) = \log |f(z)| + i \operatorname{arg}f(z)
$$


in the sense that $\operatorname{arg}f(z)$ is a continuous function which is an argument of $f(z)$ for each $z$.
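The construction can be checked numerically: integrate $f'/f$ along a segment that avoids the zeros of $f$ and compare $e^g$ with $f$. A sketch for $f(z) = z^2 + 1$ on a path in the right half plane (the helper name and step count are my choices):

```python
import cmath

def log_of_f(f, fprime, z0, z, steps=20000):
    """g(z) = c0 + trapezoidal integral of f'/f along the segment z0 -> z,
    with c0 a logarithm of f(z0). Assumes f has no zeros near the segment."""
    h = (z - z0) / steps
    total = 0j
    for k in range(steps):
        a = z0 + k * h
        total += (fprime(a) / f(a) + fprime(a + h) / f(a + h)) * h / 2
    return total + cmath.log(f(z0))

f = lambda z: z * z + 1          # zeros at +-i, away from our path
fp = lambda z: 2 * z
g = log_of_f(f, fp, 2.0, 3 + 1j)
w = f(3 + 1j)

print(abs(cmath.exp(g) - w))              # ~ 0: e^g = f
print(abs(g.real - cmath.log(abs(w))))    # ~ 0: Re g = log|f|
```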


sequences and series - Convergence of $\sum_{n=1}^{\infty} \frac{\sin^2(n)}{n}$

Is the following sum convergent?



$$\sum_{n=1}^{\infty} \frac{\sin^2(n)}{n}$$ The integral test and Dirichlet's test don't apply. Any ideas?

mathematical induction (sum of squares of first $2n$ numbers)



I need to prove with mathematical induction the formula for the sum of the squares of the first $2n$ numbers.
The equation:
$$1^2 + 2^2 + 3^2 + \cdots + (2n)^2 = \dfrac{n(2n+1)(4n+1)}{3}.$$



The equation stands for $n \geqslant 1$.



Here is my solution:



$n=1$:
$1^2 + 2^2 = \dfrac{1 \cdot 3 \cdot 5}{3}$
$5 = 5$




$n = k$:
$1^2+2^2+\cdots+(2k)^2 = \dfrac{k(2k+1)(4k+1)}{3}$



$n = k + 1$:
$1^2+2^2+\cdots+(2k)^2 +(2k + 1)^2 + (2k + 2)^2 = \dfrac{(k+1)(2(k+1)+1)(4(k+1)+1)}{3}$



$\dfrac{k(2k+1)(4k+1)}{3} + \dfrac{3(2k + 1)^2}{3} + \dfrac{3(2k + 2)^2}{3} = \dfrac{(k+1)(2(k+1)+1)(4(k+1)+1)}{3}$



$\dfrac{8k^3 + 2k^2 + 4k^2 + k}{3} + \dfrac{3(4k^2 + 4k + 1)}{3} + \dfrac{3(4k^2 + 8k + 4)}{3} = \dfrac{(k+1)(2k+3)(4k+5)}{3}$



$\dfrac{8k^3 + 6k^2 + k + 12k^2 + 12k + 3 + 12k^2 + 24k + 12}{3} = \dfrac{8k^3 + 10k^2 + 12k^2 + 15k + 8k^2 + 10k + 12k + 15}{3}$




$\dfrac{8k^3 + 30k^2 + 37k + 15}{3} = \dfrac{8k^3 + 30k^2 + 37k + 15}{3}$



In the end, the terms are not equal, and they should be. What am I doing wrong?


Answer



You are forgetting to add the term $(2k+1)^{2}$ in the LHS,i.e. for $n=k+1$ the LHS is $$1^{2}+2^{2}+\cdots +(2k)^{2}+(2k+1)^{2}+(2k+2)^{2}$$


calculus - Evaluating the definite integral $int_0^infty x mathrm{e}^{-frac{(x-a)^2}{b}},mathrm{d}x$

To evaluate $\int_0^\infty x \mathrm{e}^{-\frac{(x-a)^2}{b}}\,\mathrm{d}x$, I have applied the substitution $u=\frac{(x-a)^2}{b}$, $x-a=(ub)^{1/2}$, and $\frac{\mathrm{d}u}{\mathrm{d}x}=\frac{2(x-a)}{b}$. I would first like to ask if $x-a$ should actually equal $\pm (ub)^{1/2}$ (i.e., is it valid to ignore the minus sign, and why?). Applying this substitution,
\begin{align}

I &=\int_0^\infty x \mathrm{e}^{-\frac{(x-a)^2}{b}}\, \mathrm{d}x \\
&=\int_0^\infty x \mathrm{e}^{-u} \frac{b\,\mathrm{d}u}{2(x-a)} \\
&=\int_{\frac{a^2}{b}}^\infty \left((ub)^{1/2} + a\right)\cdot{}\mathrm{e}^{-u} \frac{b\,\mathrm{d}u}{2(ub)^{1/2}} \\
&=\frac{b}{2}\int_{\frac{a^2}{b}}^\infty \mathrm{e}^{-u}\,\mathrm{d}u + \int_{\frac{a^2}{b}}^\infty \frac{a\sqrt{b}}{2}\,\mathrm{e}^{-u} u^{-1/2}\,\mathrm{d}u \\
&=\frac{b}{2}\int_{\frac{a^2}{b}}^\infty \mathrm{e}^{-u}\,\mathrm{d}u + \frac{a\sqrt{b}}{2}\int_{\frac{a^2}{b}}^\infty \mathrm{e}^{-u} u^{-1/2}\,\mathrm{d}u,
\end{align}
I find that I am unable to evaluate the second term because the domain is from a non-zero constant to $+\infty$. Were the domain $[0,+\infty)$, the second integral would simply be a $\Gamma$ function. Seeing that the substitution I have attempted has not worked, could someone please propose an alternate route to evaluating this definite integral? Thank you.

Wednesday 14 August 2019

discrete mathematics - Prove if a (mod n) has a multiplicative inverse, then it's unique





Assume that an integer $a$ has a multiplicative inverse modulo an integer $n$. Prove that this inverse is unique modulo $n$.




I was given a hint that proving this Lemma:
\begin{align}
n \mid ab \ \wedge \ \operatorname{gcd}\left(n,a\right) = 1
\qquad \Longrightarrow \qquad
n \mid b
\end{align}

should help me in finding the answer.




Here are my steps in trying to solve the problem:
\begin{align}
\operatorname{gcd}\left(n,a\right) = 1
& \qquad \Longrightarrow \qquad
sn + ta = 1
\qquad \Longrightarrow \qquad
sn = 1 - ta \\
& \qquad \Longrightarrow \qquad
1 \equiv ta \mod n

\qquad \Longrightarrow \qquad
ta \equiv 1 \mod n .
\end{align}

I know that having the GCD of m and a equal to 1 proves there is a multiplicative inverse mod n, but I'm not sure on how to prove $n \mid b$ with and how it helps prove the multiplicative inverse is unique.


Answer



For the first part note as $(n,a) = 1$, then there exist $x,y \in \mathbb{Z}$, s.t. $nx + ay = 1$. Multiply both sides by $b$ and you will get $nxb + (ab)y = b$. The LHS is obviously divisible by $n$, so then the RHS must be too. Hence $n \mid b$.



This thing proves that the inverse exist. To note that it's unique modulo $n$ assume that $aa_1 \equiv 1 \pmod n$ and $aa_2 \equiv 1 \pmod n$. Then we have that there exist integer $x,y$ s.t. $aa_1 + nx = 1$ and $aa_2 + ny = 1$. Subtracting the two equations we have that $a(a_1 - a_2) = n(y-x) \implies a(a_1 - a_2) \equiv 0 \pmod n \implies a_1 \equiv a_2 \pmod n$. Hence the multiplicative inverse is unique in $\mathbb{Z}_n$
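The coefficients $x, y$ with $nx + ay = 1$ come out of the extended Euclidean algorithm, so the existence half of the argument is constructive. A sketch (the helper name is mine):

```python
def mod_inverse(a, n):
    """Inverse of a modulo n via the extended Euclidean algorithm.
    Maintains the invariant old_s * a == old_r (mod n) throughout."""
    old_r, r = a % n, n
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a has no inverse modulo n (gcd != 1)")
    return old_s % n

print(mod_inverse(3, 7))    # 5, since 3 * 5 = 15 = 2*7 + 1
```

Reducing the result modulo $n$ at the end is exactly the uniqueness statement: all inverses of $a$ agree modulo $n$, so there is one canonical representative in $\{0,\ldots,n-1\}$.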


calculus - Solve limit containing (1/0) using L'Hopital's Rule

I was assigned a problem in my calculus 2 course that I can't seem to solve. We're going over indeterminate forms solved by L'Hopital's Rule.



$$
\lim_{x\to \pi^-}

\left( \frac{\csc(x)+x}{\tan(\frac{x}{2})} \right)
$$



This is in the indeterminate form $ \left[\frac{\infty}{\infty}\right]$, which is given, except I'm not sure why, as $\csc(\pi)$ is undefined. Is this because trig functions occur infinitely (within their range), e.g. $\sin(x) = 1$, occurs for infinite $x$'s?



After applying L'Hopital's Rule, I get:



$$
\lim_{x\to \pi^-}
\left( \frac{-\csc(x)\cot(x) + 1}{\frac{1}{2}\sec^2(\frac{x}{2})} \right)

$$



However, I don't understand how I'm supposed to solve from here. No matter how many times we derive, $\csc(x)$ won't go away and I'm not aware of any trigonometric identities that can simplify the undefined division away. Am I missing something obvious?

Monday 12 August 2019

sequences and series - How do I solve the chessboard problem?



(Note that I am a high school student and bad at maths. Please explain your answers thoroughly.)




And the man asked for grains of rice. He wanted one grain of rice on the first square of the chess board, two grains on the second, four grains on the third, eight grains on the fourth, so on and so on. I suppose you could ask a mathematics teacher if you really wished to learn how many pieces the man had after that was done for every square of the chess board.



Okay, so in school we were having some moral assembly or whatever, I didn't listen, too busy thinking about a more mathematical way to solve the problem above than just doubling, adding, etc. I can't ask a teacher yet... I need to work this out myself!



I'm an idiot, so I wanted help, and I was hoping someone on mathematics SE could help me think of a good mathematical way of solving this. Suppose there are 64 squares on the chess board. Argh! Damn this, I just found out that it's a duplicate. Oh well. The answers on that question have too little explanation.



Okay, there's 64 squares on our imaginary chessboard, and I'm trying to work out the thing above in a more mathematical way than just one by one, doubling it over and over again. So, if there is a more mathematical way of doing this, could someone tell me what it is and explain it.



Thanks!


Answer




Let's write down in a mathematical fashion what we have. If you think a while about it, you'll realize that what you want to calculate is the sum
$$S=\sum\limits_{n=0}^{63}2^n=1+2+4+\dots+2^{63}$$
We can quickly check that this is indeed true if we realize that for $n=0$ we get $2^0=1$, followed by $2^1=2$, etc. The sum only goes up to $63$ because, starting the count at $0$, you have already listed $64$ numbers by the time you reach $63$. How can we now calculate this sum? It requires a small trick, namely to realize that if we compare the sum given by
$$2\cdot S=2\cdot \sum\limits_{n=0}^{63}2^n=\sum\limits_{n=1}^{64}2^n$$
and $S$, their difference is just given by
$$2S-S=2^{64}-1$$
Using this and just rewriting $2S-S=S$ we find the final result
$$S=2^{64}-1$$







Additional comment:



We can also generalize this approach, which goes under the name of a geometric series. Consider a series of numbers which starts with $1$ and is constructed by repeatedly multiplying by a constant factor $q$. We do this procedure $N$ times. The corresponding sum is then given by
$$S=\sum\limits_{n=0}^{N-1}q^n$$
Using the same procedure as above we find
$$q\cdot S=\sum\limits_{n=1}^N q^n$$
and therefore
$$q\cdot S-S=q^N-1$$
This time we have to factor $S$ out of the left-hand term and find $S(q-1)$. If we now divide the equality by $(q-1)$ we find the more general result

$$S=\frac{q^N-1}{q-1}$$
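Both the square-by-square doubling and the closed form are easy to confirm with Python's arbitrary-precision integers (the helper names are mine):

```python
def grains_doubling(squares=64):
    """Sum 1 + 2 + 4 + ... by actually doubling square by square."""
    total, grains = 0, 1
    for _ in range(squares):
        total += grains
        grains *= 2
    return total

def grains_formula(squares=64, q=2):
    """Closed form S = (q^N - 1) / (q - 1) of the geometric series."""
    return (q**squares - 1) // (q - 1)

print(grains_formula())   # 18446744073709551615
assert grains_doubling() == grains_formula() == 2**64 - 1
```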


trigonometry - Verify Euler's formula

Verify Euler's formula for $e^{ix}$ by considering $\frac{dz}{dx}$ where $z=r(\cos x+i\sin x)$



I tried taking the derivative of z but could not get to Euler's from there.

How to find this limit $\lim_{x\to+\infty}\frac{e^{-2x}(\cos{x}+2\sin x)+e^{-x^2}(\sin{x})^2}{e^{-x}(\cos{x}+\sin{x})}$




Find this limit.



$$\lim_{x\to+\infty}\dfrac{e^{-2x}(\cos{x}+2\sin x)+e^{-x^2}(\sin{x})^2}{e^{-x}(\cos{x}+\sin{x})}$$



WolframAlpha can't produce a result: link



Maybe this limit does not exist? If so, how can one prove it?
Thank you


Answer




There are arbitrarily large $x$ at which our function is not even defined. And by taking $x$ close enough to $2n\pi+\frac{3\pi}{4}$, we can make our function arbitrarily large positive or negative.


Sunday 11 August 2019

sequences and series - Prove that the exponential function is sequentially continuous?

I am supposed to use the definition of the exponential function to prove that if $x$ is a real number with $|x| < 1$, then $|\exp(x) - 1| \le (e - 1)|x|$, and hence prove that the exponential function is sequentially continuous. I've managed to prove the former using the exponential power series, but I don't understand how to 'hence' prove the latter.

recursion - Showing that a Sequence is defined recursively

How can I show that the following recursively defined sequence converges?





Show that the recursively defined sequence $(x_n)_{n\in\mathbb{N}}$ with
$$x_1=1, \qquad\qquad x_{n+1}=\sqrt{6+x_n}$$
converges and determine its limit




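Before proving convergence, you can get a strong hint by iterating the recursion numerically; a monotone-and-bounded argument then shows the limit $L$ satisfies $L = \sqrt{6+L}$, i.e. $L^2 - L - 6 = 0$, whose positive root is $3$. A quick sketch:

```python
import math

x = 1.0
for _ in range(50):
    x = math.sqrt(6 + x)   # each step is a contraction near the fixed point

print(x)   # 3.0, the positive root of L^2 = 6 + L
```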

Saturday 10 August 2019

abstract algebra - Square root in finite degree extension of odd characteristic field.



Let $F$ be a field of odd characteristic. Let $\alpha,\beta$ in $F$ be non-zero. $\beta$ does not have a square root in F and $\alpha$ has a square root in $F[x]/(x^2-\beta)$. Prove exactly one of $\alpha,\alpha\beta$ has a square root in $F$.



I've tried to express the elements in $F[x]/(x^2-\beta)$ using a basis $\{1,\gamma\}$, where $\gamma^2-\beta=0$, but did not make a progress. In particular, I don't know how to use the condition of $F$ having odd characteristic. Any help will be appreciated.


Answer



Hint: write $\alpha = (f_1 + f_2 \gamma)^2$ where $\gamma^2 = \beta$ and $f_i \in F$. See what this tells you about $f_1, f_2$.



group theory - Commutative Matrix Multiplication of Invertible Matrices

I know that in general, the only matrices that commute (under multiplication) with all other matrices are the scalar multiples of $I$, the identity matrix.



But what about matrices that commute with all invertible matrices? Is that any different? I don't think so, but I'm not certain, and am struggling to prove it.



Simply, with $A$ and $B$ both being $n\times n$ matrices over the reals, what are all $A$ such that $AB = BA$ for every invertible $B$?




I suppose in group theory this could be phrased as finding the centre of the general linear group over the reals, $Z(GL_n(\mathbb{R}))$.

integration - Contour integral: $\int_{-\infty}^0 \frac{x^{1/3}}{x^5-1}\,\mathrm dx$



The integral is
$$

\int_{-\infty}^0 \frac{x^{1/3}}{x^5-1}\mathrm dx
$$



I have tried taking the typical half circle contour and finding the enclosed residues:
$$
\text{Res}(f,1)=\frac{1}{5}\\
\text{Res}(f,e^{\frac{2\pi i}{5}})=\frac{e^{\frac{8\pi i}{15}}}{5}\\
\text{Res}(f,e^{\frac{4\pi i}{5}})=\frac{e^{\frac{-2\pi i}{15}}}{5}
$$
By Jordan's lemma, the arc contributes nothing. However, I have a problem with the non integrable singularity at 1 on $\mathbb{R}^+$.




Can I deviate to avoid it while also keeping track of the contribution from the positive real axis? There doesn't seem to be great symmetry at $1$ thanks to the numerator. Is my contour not a good choice?



edit: Also tried changing variables to evaluate on the positive real axis, but this again introduces the bad singularity.


Answer



Assuming that $x^{1/3}$ is defined as $-(-x)^{1/3}$ on $\mathbb{R}^-$, the change of variable $x=-z^3$ brings the given integral into the form
$$ \int_{0}^{+\infty}\frac{3z^3}{z^{15}+1}\,dz $$
and by setting $\frac{1}{1+z^{15}}=u$ the previous integral can be computed through Euler's Beta function. By exploiting the reflection formula for the $\Gamma$ function, $\Gamma(z)\,\Gamma(1-z)=\frac{\pi}{\sin(\pi z)}$, such integral simplifies to
$$ \frac{\pi}{5\sin\frac{4\pi}{15}}=\frac{\pi}{5\cos\frac{7\pi}{30}}=\frac{4\pi}{5\sqrt{7-\sqrt{5}+\sqrt{6 \left(5-\sqrt{5}\right)}}}.$$
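The closed form can be sanity-checked with a crude numerical quadrature of $\int_0^\infty \frac{3z^3}{z^{15}+1}\,dz$ (midpoint rule; the cutoff and step count are my choices, and the tail beyond the cutoff is negligible since the integrand decays like $3z^{-12}$):

```python
import math

def integrand(z):
    return 3 * z**3 / (z**15 + 1)

exact = math.pi / (5 * math.sin(4 * math.pi / 15))

# Midpoint rule on [0, 30]; the tail beyond 30 is under 3/(11 * 30^11).
N, hi = 200000, 30.0
h = hi / N
approx = sum(integrand((i + 0.5) * h) for i in range(N)) * h

print(approx, exact)   # both approximately 0.8455
```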


complex numbers - What is wrong with my proof: $-1 = 1$?




I have some theories about why this could by wrong but I still haven't something that convinces me. What is wrong with this proof:




$ -1 = i^2 = i.i = \sqrt{-1}.\sqrt{-1} = \sqrt{(-1).(-1)}= \sqrt1 = 1 $



This would imply that:
$1 = -1$



Which is obviously false.



So my theory is that it's not a great idea to write $i = \sqrt{-1}$, but I'm not sure why...


Answer




The flaw is in assuming that the rule $\sqrt x\sqrt y=\sqrt{xy}$ holds for imaginary numbers. Your computation is itself a counterexample to that rule.


Thursday 8 August 2019

elementary number theory - Prove without induction that $3^{4n}-2^{4n}$ is divisible by $65$





My brother asked me this (for some reason).



My solution is:



$(3^{4n}-2^{4n})\bmod{65}=$



$(81^{n}-16^{n})\bmod{65}=$



$((81\bmod{65})^{n}-16^{n})\bmod{65}=$




$(16^{n}-16^{n})\bmod{65}=$



$0\bmod{65}$






I think that this solution is mathematically flawless (please let me know if you think otherwise).



But I'm wondering if there's another way, perhaps with the binomial expansion of $(81-16)^{n}$.




In other words, something like:



$3^{4n}-2^{4n}=$



$81^{n}-16^{n}=$



$(81-16)^{n}+65k=$



$65^{n}+65k=$




$65(65^{n-1}+k)$



How would I go from "$81^{n}-16^{n}$" to "$(81-16)^{n}+65k$"?


Answer



You can use the formula $$a^n-b^n = (a-b)\left(a^{n-1}+a^{n-2}b+a^{n-3}b^2+\ldots+ab^{n-2}+b^{n-1}\right)$$
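Both the congruence argument and the factorization can be sanity-checked for many $n$ at once (the helper name is mine):

```python
def cofactor(n, a=81, b=16):
    """The second factor in a^n - b^n = (a-b) * sum_i a^(n-1-i) * b^i."""
    return sum(a**(n - 1 - i) * b**i for i in range(n))

# Congruence check and exact factorization check, with a - b = 65
assert all((3**(4 * n) - 2**(4 * n)) % 65 == 0 for n in range(1, 50))
assert all(81**n - 16**n == 65 * cofactor(n) for n in range(1, 20))
print("both checks pass")
```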


arithmetic - Looking for a theorem talking about the remainder when a number is divided by 9




I have come across an interesting property of the number 9, which some people call it casting out nines.




This is the property: If any number is divisible by 9, then you can keep adding the digits until you get a 9. For example 9117 is divisible by 9, because 9+1+1+7=18, then from the 18 we can see that 1+8=9. But 9113 is not divisible by 9, because 9+1+1+3=14, and then from the 14 we deduce 1+4=5 which is the remainder when this number, 9113, is divided by 9.



My question is: Is there a theorem that summarises this concept? If the theorem exists, then what is its proof?



Thanks in anticipation.


Answer



It is essentially an extension of the rule that a number is divisible by 9 if and only if the digits in its base ten representation add up to a number divisible by 9. You can prove that the digits of a number have to add up to a multiple of 9 by using the fact that every power of 10 is congruent to 1 modulo 9, viz.\begin{equation} 10^n = 1+ 9\times \sum_{j=0}^{n-1} 10^j. \quad (1)\end{equation}
Hence for any natural number $d=\sum_{j=0}^n d_j 10^j$, we have that $$d \equiv l \pmod 9$$ if and only if $$\sum_{j=0}^n d_j 10^j \equiv l \pmod 9$$
if and only if $$\sum_{j=0}^n d_j 1^j =\sum_{j=0}^n d_j \equiv l \pmod 9,$$
where we used (1) to get to the last line.

Now use the fact that $\sum_{j=0}^n d_j 10^j > \sum_{j=0}^n d_j > 0$ (so repeated digit-summing strictly decreases the number while preserving its residue mod 9) and the result should be clear.
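In algorithmic form, repeatedly summing digits computes the "digital root", which equals $n \bmod 9$ except that positive multiples of $9$ give $9$ instead of $0$. A sketch (the helper names are mine):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def digital_root(n):
    """Repeatedly sum decimal digits until one digit remains.
    For n >= 1 this equals n mod 9, with 9 in place of 0."""
    while n > 9:
        n = digit_sum(n)
    return n

print(digital_root(9117), 9117 % 9)   # 9 0  -> divisible by 9
print(digital_root(9113), 9113 % 9)   # 5 5  -> remainder 5
```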


linear algebra - Find the eigenvalues of a matrix with ones in the diagonal, and all the other elements equal

Let $A$ be a real $n\times n$ matrix, with ones in the diagonal, and all of the other elements equal to $r$ with $0

How can I prove that the eigenvalues of $A$ are $1+(n-1)r$ and $1-r$,
with multiplicity $n-1$?
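The claimed eigenvalues can be verified exactly by noting that $Av = (1-r)v + r(\mathbf{1}^{\mathsf T}v)\mathbf{1}$: the all-ones vector gives $1+(n-1)r$, and any vector whose entries sum to zero gives $1-r$. A sketch with exact rationals (the helper name is mine):

```python
from fractions import Fraction

def matvec(n, r, v):
    """Apply the n x n matrix with 1 on the diagonal and r elsewhere:
    A v = (1 - r) v + r * (sum of entries of v) * (all-ones vector)."""
    s = sum(v)
    return [(1 - r) * x + r * s for x in v]

n, r = 5, Fraction(1, 3)

# All-ones eigenvector: eigenvalue 1 + (n-1) r
ones = [Fraction(1)] * n
assert matvec(n, r, ones) == [(1 + (n - 1) * r) * x for x in ones]

# Any zero-sum vector: eigenvalue 1 - r (an (n-1)-dimensional eigenspace)
diff = [Fraction(1), Fraction(-1)] + [Fraction(0)] * (n - 2)
assert matvec(n, r, diff) == [(1 - r) * x for x in diff]
print("eigenvalues:", 1 + (n - 1) * r, "and", 1 - r)
```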

Wednesday 7 August 2019

algebra precalculus - how to find out any digit of any irrational number?



We know that a rational number has an eventually periodic decimal expansion, while an irrational number does not.
This means that we can determine which digit appears in any position of a rational number.
But what about irrational numbers?
For example:
How can we find out which digit appears in the fortieth decimal place of $\sqrt{2}$, which equals $1.414213\dots$?
Is it possible to solve this kind of problem for any irrational number?


Answer




Let $\alpha$ be an irrational number. As long as there exists an algorithm that can decide whether $\alpha>q$ or $\alpha<q$ for every rational number $q$, we can compute the decimal digits of $\alpha$ one at a time (for instance by bisection).

For $\alpha=\sqrt 2$, the decision algorithm is quite simple: if $q=\frac nm$ with $n\in\mathbb Z, m\in\mathbb N$, then $\alpha<q$ if and only if $n>0\land n^2>2m^2$.
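To make this concrete, here is a sketch in Python that computes decimal digits of $\sqrt 2$ using only the integer comparison $n^2<2m^2$ and a binary search (the function name `sqrt2_digits` is mine):

```python
def sqrt2_digits(k):
    """First k decimal digits of sqrt(2), using only the decision rule
    above: for q = n/m with n, m > 0, sqrt(2) < n/m iff n^2 > 2*m^2.
    Returns the digit string of floor(sqrt(2) * 10^k), e.g. '1414213'."""
    m = 10 ** k
    lo, hi = 0, 2 * m            # 0 < sqrt(2) < 2
    while hi - lo > 1:           # binary search on the numerator n
        mid = (lo + hi) // 2
        if mid * mid < 2 * m * m:    # is mid/m < sqrt(2)?
            lo = mid
        else:
            hi = mid
    return str(lo)
```

The last character of `sqrt2_digits(40)` is then the fortieth decimal digit asked about in the question; exact integer arithmetic means no rounding issues arise.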


elementary number theory - What is $k$ so that $frac {1001times 1002 times ... times 2008} {11^k}$ will be an integer?



I found this question in last year's maths competition in my country. I've tried every approach I could think of, but it is just too hard.




  1. What is the largest integer $k$ such that the following equation is an integer?




$$\frac {1001\times 1002 \times ... \times 2008} {11^k}$$



(A) $100$ (B) $101$ (C) $102$ (D) $103$ (E) $105$



The first way I did it is calculate the multiple of $11$ in $1001$ to $2008$



$\lfloor (2008-1001) \div 11 \rfloor$ ($91$ multiples of $11$) $+$ $\lfloor (2008-1001) \div 121 \rfloor$ ($8$ multiples of $121$) $+$ $1$ (since $1331$ is $11^3$) $= 100$




Are there any mistakes or miscalculations? Are there any more effective ways to find the value of $k$?


Answer



It is almost correct. The number of multiples of 121 and 1331 are correct. However there are 92 multiples of 11 between 1001 and 2008. To see this, note that 1001 is a multiple of 11 since the alternating sum of the digits is 0.



Now calculate



$$\lfloor \frac{2008-1002}{11} \rfloor = 91$$



This means that there are 91 multiples of 11 between 1002 and 2008. For the numbers of multiples of 121 and 1331 you were lucky to get the correct counts. For example, for 121 you should have searched for the smallest multiple of 121 bigger than 1001, which is 1089. Now we have $$\lfloor \frac{2008-1089}{121} \rfloor = 7,$$ so there are $7+1=8$ multiples of $121$ in the range.




So the answer is 92+8+1=101.
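The counting generalizes: the exponent of a prime $p$ in $1001\times\cdots\times 2008$ is the number of multiples of $p$ in the range, plus the number of multiples of $p^2$, plus those of $p^3$, and so on (Legendre-style counting). A quick check in Python (the helper names are mine):

```python
def multiples_in(lo, hi, d):
    """Count the multiples of d in the closed interval [lo, hi]."""
    return hi // d - (lo - 1) // d

def prime_exponent(lo, hi, p):
    """Exponent of the prime p in the product lo * (lo+1) * ... * hi."""
    e, q = 0, p
    while q <= hi:
        e += multiples_in(lo, hi, q)   # multiples of p, p^2, p^3, ...
        q *= p
    return e
```

`multiples_in(1001, 2008, 11)` confirms the corrected count of $92$, and `prime_exponent(1001, 2008, 11)` returns $101$, answer (B).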


Tuesday 6 August 2019

integration - What is the correct way to solve calculus problems without treating derivatives as fractions?




A common question in mathematics is "why can't derivatives be treated as fractions?" I like the explanation that derivatives are not fractions: they are limits of fractions, which are not the same type of mathematical object, so it is wrong to assume they behave in the same way. The answers I've seen give examples where treating derivatives as fractions yields correct solutions and other cases where it does not, but they do not proceed to show how to tackle the problems of the latter case in the correct way.



My question is: what is the correct way to solve calculus problems without treating derivatives as fractions?



I expect the answer to be quite complex, as otherwise people wouldn't treat derivatives as fractions in the first place; they would do the problems properly. If possible, I would appreciate a simplified explanation (at the level of an A-level student) to preface the full explanation, which could be useful to more advanced users.


Answer



This is really an issue of how derivatives and differentials interact. The relationship between the $3$ quantities $dx$, $dy$, and $dy/dx$ is



$$dy = \frac{dy}{dx}\; dx.$$




So if you have a differential equation



$$\frac{dy}{dx} = g(x,y)$$



you can multiply both sides by $dx$



$$\frac{dy}{dx}\; dx = g(x,y)\; dx$$



and then replace the left side by the relationship given above




$$dy = g(x,y)\; dx.$$



This looks like we've "treated $dy/dx$ as a fraction" because it looks like the $dx$'s were canceled.
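One can also check numerically that the "multiply by $dx$" manipulation gives the right answer. For $dy/dx = xy$ with $y(0)=1$, separating variables gives $dy/y = x\,dx$ and hence $y=e^{x^2/2}$; a crude forward-Euler integration of the original equation (the step size $h$ below is an arbitrary choice of mine) lands on the same value:

```python
import math

def euler(h=1e-4, x_end=1.0):
    """Forward Euler for dy/dx = x*y with y(0) = 1."""
    x, y = 0.0, 1.0
    while x < x_end - 1e-12:
        y += h * (x * y)   # y_{n+1} = y_n + h * g(x_n, y_n)
        x += h
    return y

exact = math.exp(0.5)      # y(1) from the separated solution e^{x^2/2}
```

With the default step size, `euler()` agrees with `exact` to a few decimal places, as expected for a first-order method.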


Monday 5 August 2019

Discrete Math Informal Proofs Using Mathematical Induction



I need to give a proof by mathematical induction, in the four steps below, showing that the following statement is true for every positive integer $n$, using the weak principle of mathematical induction:
$2 + 6 + 18 + ... + 2\times{3^{n-1}} = 3^n-1$





  1. show that the base step is true


  2. What is the inductive hypothesis?


  3. what do we have to show?


  4. proof proper (Justify each step):



Answer



Base Step: $2 \cdot 3^{1-1} = 2 = 3^1 - 1$



The inductive hypothesis is: $\sum_{n=1}^{k} 2 \cdot 3^{n-1} = 3^k - 1$




We must show, under the assumption of the inductive hypothesis, that $$3^k - 1 + 2 \cdot 3^k = 3^{k + 1} - 1$$



We verify this as $$3^k - 1 + 2 \cdot 3^k = 3^k(1 + 2) - 1$$
$$= 3^{k+1} - 1$$
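The identity is also easy to spot-check alongside the proof; a one-liner in Python:

```python
def lhs(n):
    """2 + 6 + 18 + ... + 2 * 3^(n-1), the left-hand side of the claim."""
    return sum(2 * 3 ** (k - 1) for k in range(1, n + 1))

# lhs(n) == 3**n - 1 for every positive integer n (checked for small n)
```
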


Combinatorics problem



Five people are preparing for a Christmas party. They organize a draw to decide who will have to buy a present for whom. On each of five tickets the name of one person is written. Next the tickets are drawn. A draw is successful if nobody draws the ticket with his own name. How many successful draws are possible?



This is easy enough to do by hand, but this seems to be a combinatorics problem which can be generalized to $n$ people. How can I solve this for $n$ people using combinatorics/probability?


Answer



The underlying combinatorial notion for this problem is that of derangements. Just as $n!$ denotes the total number of permutations of an $n$-element set, $\{ 1 , \ldots , n \}$, the value $D_n$ denotes the total number of permutations of $\{ 1 , \ldots , n \}$ which leave no elements fixed. (In certain places you will see the notation $!n$ used instead, but as Douglas S. Stones remarks in a comment below, this notation may lead to ambiguity and is not very common in the literature.)




[The linked Wikipedia article says more about this topic than I know.]
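For the five-person draw the count is $D_5=44$. The standard recurrence $D_n=(n-1)(D_{n-1}+D_{n-2})$, with $D_0=1$ and $D_1=0$, computes these values directly; a small sketch:

```python
def derangements(n):
    """D_n via the recurrence D_n = (n-1) * (D_{n-1} + D_{n-2}),
    with base cases D_0 = 1 and D_1 = 0."""
    if n == 0:
        return 1
    if n == 1:
        return 0
    a, b = 1, 0                       # D_0, D_1
    for k in range(2, n + 1):
        a, b = b, (k - 1) * (a + b)   # slide the window forward
    return b
```

`derangements(5)` returns $44$, so 44 successful draws are possible for the Christmas party.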


Sunday 4 August 2019

Find a limit of a function W/OUT l'Hopital's rule.



I've got an expression: $\lim_{x\to 0} \dfrac{\log\left(6-\frac{5}{\cos x}\right)}{\sin^2 x}$




The question is: how to find limit without l'Hopital's rule?


Answer



Hint:



$$\dfrac{\ln\left(6-\dfrac5{\cos x}\right)}{\sin^2x}=\dfrac{\ln(6\cos x-5)}{\sin^2x}+\dfrac{\ln(1-\sin^2x)}{-2\sin^2x}$$



Now the second limit can be managed by $\lim_{h\to0}\dfrac{\ln(1+h)}h=1$



For the first limit $6\cos x-5=6\left(1-2\sin^2\dfrac x2\right)-5=1-12\sin^2\dfrac x2$




and $\sin^2x=4\sin^2\dfrac x2\cos^2\dfrac x2$
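Putting the hint's pieces together: the first fraction tends to $-3$ (since $1-12\sin^2\frac x2\approx 1-3x^2$) and the second to $\frac12$, so the limit is $-\frac52$. A quick numerical sanity check of that value:

```python
import math

def f(x):
    """The original expression log(6 - 5/cos x) / sin^2 x."""
    return math.log(6 - 5 / math.cos(x)) / math.sin(x) ** 2

# f(x) approaches -5/2 as x -> 0
```
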


real analysis - Show that $e^{varepsilon |x|^{varepsilon}}$ grows faster than $sum_{k=0}^{infty} {|x|^{2k}}/{(k!)^2}$

I am wondering whether we have for $$f(x):=\sum_{k=0}^{\infty} \frac{|x|^{2k}}{(k!)^2} $$ that




$$\lim_{x \rightarrow \infty} \frac{e^{\varepsilon |x|^{\varepsilon}}}{f(x)} = \infty$$
for any $\varepsilon>0$?



I assume that this is true, as factorials should somehow outgrow powers, but I do not see how to show this rigorously.



Does anybody have an idea?

calculus - Manipulation of some power series (probably integration or derivation).



Show that $\ln\Big(|\frac{1+x}{1-x}|\Big)=2\sum_{n=0}^{\infty}\frac{x^{2n+1}}{2n+1}$ for $|x|<1$. The previous exercise (which was within my limited reach) was to show that
$\frac{1}{1-x^2}=\sum_{n=0}^{\infty}x^{2n}$ for $|x|<1$. I suspect there is a (not overly subtle) connection here but, needless to say, I can't see it. I don't know why the absolute value signs are included, but it might be because the solution involves integrating some power series. The exercise is contained in a chapter on differentiation and integration of (convergent) power series; one is also assumed to be familiar with multiplication of power series.



Very thankful for any help.


Answer



Hint:




$1$. Write down the power series for $\ln(1+x)$.



$2$. Substitute $-x$ for $x$ to find the power series for $\ln(1-x)$.



If you don't know the power series for $\ln(1+x)$, but you probably do, use the power series for $\frac{1}{1+t}$, and the fact that
$\ln(1+x)=\int_0^x\frac{dt}{1+t}$. Expand, and integrate term by term.



$3$. We have $\ln\left(\frac{1+x}{1-x}\right)=\ln(1+x)-\ln(1-x)$.



The absolute value signs surrounding $\frac{1+x}{1-x}$ that you asked about are pointless. Since we are only working with $|x|<1$, the quantity $\frac{1+x}{1-x}$ is positive, so the absolute value signs do nothing.




Some detail: It is standard that $\frac{1}{1-u}=1+u+u^2+u^3+\cdots$, with the series converging when $|u|<1$. Put $u=-t$. We obtain
$$\frac{1}{1+t}=1-t+t^2-t^3+\cdots=\sum_{k=0}^\infty (-1)^kt^k.$$
It follows that (for $|x|<1$)
$$\ln(1+x)=\int_0^x\frac{dt}{1+t}=\int_0^x\left(\sum_{k=0}^\infty (-1)^kt^k \right)dt=\sum_{k=0}^\infty(-1)^k\frac{x^{k+1}}{k+1}.$$
We have now carried out Hint $1$.
Now substitute $-x$ for $x$ in our expression for $\ln(1+x)$. We get
$$\ln(1-x)=\sum_{k=0}^\infty(-1)^k\frac{(-x)^{k+1}}{k+1}=-\sum_{k=0}^\infty \frac{x^{k+1}}{k+1}.$$
(The terms $(-1)^k$ and $(-1)^{k+1}$ have product $(-1)^{2k+1}$, which is identically equal to $-1$.) We have now carried out Hint $2$.



Now use Hint $3$. We get

$$\ln\left(\frac{1+x}{1-x}\right)=\sum_{k=0}^\infty \left[(-1)^k-(-1)\right]\frac{x^{k+1}}{k+1}.$$
Consider the coefficient $[(-1)^{k}-(-1)]$ above. This is $2$ if $k$ is even and $0$ if $k$ is odd. To put it another way, we can put $k=2n$, since odd $k$ make no contribution to the sum. Thus
$$\ln\left(\frac{1+x}{1-x}\right)=\sum_{n=0}^\infty 2\frac{x^{2n+1}}{2n+1}.$$
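A numerical cross-check of the final identity, comparing partial sums of the series against the logarithm directly:

```python
import math

def series(x, terms=80):
    """Partial sum of 2 * sum_{n>=0} x^(2n+1)/(2n+1) for |x| < 1."""
    return 2 * sum(x ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

def direct(x):
    """ln((1+x)/(1-x)), the closed form derived above."""
    return math.log((1 + x) / (1 - x))
```

Inside $|x|<1$ the argument of the logarithm is positive, so (as noted above) the absolute value signs change nothing.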


Saturday 3 August 2019

probability - Is a Markov process uniquely determined?

Let




  • $E$ be a Polish space and $\mathcal E$ be the Borel $\sigma$-algebra on $E$

  • $I\subseteq[0,\infty)$ be closed under addition and $0\in I$




Please consider the following result:




Let $(\kappa_t:t\in I)$ be a Markovian semigroup on $(E,\mathcal E)$ $\Rightarrow$ There is a measurable space $(\Omega,\mathcal A)$ and a Markov process $X$ with distributions $(\operatorname P_x)_{x\in E}$ such that $$\operatorname P_x\left[X_t\in B\right]=\kappa_t(x,B)\;\;\;\text{for all }x\in E,B\in\mathcal E\text{ and }t\in I\;.\tag 1$$ Conversely, given a Markov process $X$ with distributions $(\operatorname P_x)_{x\in E}$ on a measurable space $(\Omega,\mathcal A)$, a Markovian semigroup $(\kappa_t:t\in I)$ is defined by $(1)$.




It turns out that $X$ in the first part of the statement can be constructed as the family of coordinate maps on $(\Omega,\mathcal A)=(E^I,\mathcal E^{\otimes I})$.



I've seen that many authors assume that Markov processes are such coordinate maps. Why can they do that?




The statement above doesn't state, that given $(\Omega,\mathcal A)$ there is one unique Markov process, does it? However, the finite-dimensional distributions of $X$, i.e. $$\operatorname P_x\left[X\in\;\cdot\;\right]\circ\pi_J^{-1}\;\;\;\text{for }J\subseteq I\text{ with }|J|<\infty\;,\tag 2$$ where $\pi_J:E^I\to E^J$ are the canonical projections, are uniquely determined by $(1)$.



Maybe $(\operatorname P_x)_{x\in E}$ (not only the finite-dimensional distributions) is uniquely determined by $(1)$ if $I\subseteq \mathbb N_0$, or if $I$ is at most countable, or if $E$ is at most countable.



So, why does the stated result allow us to think of $X$ as being uniquely determined?

integration - Evaluate $int_{-infty}^infty frac{1}{sqrt{z^2 + 1}}frac{1}{z - alpha} dz$.




Evaluate $$\int_{-\infty}^\infty \frac{1}{\sqrt{z^2 + 1}}\frac{1}{z - \alpha} dz\,.$$





What is an elegant way to evaluate this integral for $\operatorname{Im}\alpha >0$? I imagine the residue theorem will lead to an elegant solution, as in these related questions [1,2,3]. However, I've been unable to adapt them to this line integral.



One requirement is that $\frac{1}{\sqrt{z^2 + 1}}$ be analytic in a strip around the real line $(-\infty,\infty)$. In my eyes this implies that the branch cuts can not cross the real line. For example the principal branches (parallel to the real line) or the branches $[\mathrm{i},\mathrm{i}\infty)$ and $(-\mathrm{i} \infty, -\mathrm{i}]$.


Answer



This is probably not elegant, but you can probably find a place to use the residue theorem. Let $I$ denote the integral
$$\int_{-\infty}^{\infty}\frac{1}{\sqrt{x^2+1}\ (x-\alpha)}dx.$$
Then, $I$ equals
$$\int_0^\infty\frac{1}{\sqrt{x^2+1}}\left(\frac{1}{x-\alpha}-\frac{1}{x+\alpha}\right)dx=2\alpha\int_0^\infty\frac{1}{\sqrt{x^2+1}\ (x^2-\alpha^2)}dx$$

Take $x$ to be $\sinh(t)$. Then
$$I=2\alpha\int_0^\infty\frac{1}{\sinh^2(t)-\alpha^2}dt.$$
Since $\sinh(t)=\frac{e^t-e^{-t}}{2}$, by setting $s=e^t$, we have
$$I=2\alpha\int_1^\infty\frac{1}{\frac{1}{4}\left(s-\frac1s\right)^2-\alpha^2}\frac{ds}{s}=2\alpha\int_0^1\frac{1}{\frac{1}{4}\left(s-\frac1s\right)^2-\alpha^2}\frac{ds}{s}.$$
That is,
$$I=4\alpha\int_0^\infty\frac{s}{s^4-(4\alpha^2+2)s^2+1}ds.$$
Using partial fractions,
$$I=\int_0^\infty\left(\frac{1}{s^2-2\alpha s-1}-\frac{1}{s^2+2\alpha s-1}\right)ds.$$
(This is probably the place you can use the residue theorem but I am not too competent with that. Maybe you need to use a logarithm factor, and something like a keyhole contour.)




Since $$s^2-2\alpha s-1=(s-\alpha-\sqrt{\alpha^2+1})(s-\alpha+\sqrt{\alpha^2+1})$$ and $$s^2+2\alpha s-1=(s+\alpha-\sqrt{\alpha^2+1})(s+\alpha+\sqrt{\alpha^2+1})$$ (using the principal branch of $\sqrt{\phantom{a}}$), we get
\begin{align}I&=\frac{1}{2\sqrt{\alpha^2+1}}\int_0^\infty\left(\frac{1}{s-\alpha-\sqrt{\alpha^2+1}}-\frac{1}{s-\alpha+\sqrt{\alpha^2+1}}\right)ds\\
&\phantom{aaa}-\frac{1}{2\sqrt{\alpha^2+1}}\int_0^\infty\left(\frac{1}{s+\alpha-\sqrt{\alpha^2+1}}-\frac{1}{s+\alpha+\sqrt{\alpha^2+1}}\right)ds
\\&=-\frac{1}{2\sqrt{\alpha^2+1}}\ln\left(\frac{\alpha+\sqrt{\alpha^2+1}}{\alpha-\sqrt{\alpha^2+1}}\right)+\frac{1}{2\sqrt{\alpha^2+1}}\ln\left(\frac{\alpha-\sqrt{\alpha^2+1}}{\alpha+\sqrt{\alpha^2+1}}\right)\\&=\frac{1}{\sqrt{\alpha^2+1}}\ln\left(\frac{\alpha-\sqrt{\alpha^2+1}}{\alpha+\sqrt{\alpha^2+1}}\right)=-\frac{2\operatorname{arccosh}(-i\alpha)}{\sqrt{\alpha^2+1}}.\end{align}
(A branch caution for the last step: since $(\alpha-\sqrt{\alpha^2+1})(\alpha+\sqrt{\alpha^2+1})=-1$, the tempting simplification $\ln\frac{\alpha-\sqrt{\alpha^2+1}}{\alpha+\sqrt{\alpha^2+1}}=-2\ln(\alpha+\sqrt{\alpha^2+1})$ silently drops a $\ln(-1)$ term; the $\operatorname{arccosh}$ form keeps the branches consistent for $\operatorname{Im}\alpha>0$.)

The particular case $\alpha=i$ yields $I=2i$.
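The closed form can be sanity-checked numerically. The substitution $x=\tan\theta$ turns the integrand into $1/(\sin\theta-\alpha\cos\theta)$, which is smooth on $[-\pi/2,\pi/2]$ whenever $\alpha$ is non-real, so a simple midpoint rule suffices. This is my own verification sketch, not part of the derivation:

```python
import math
import cmath

def integral_numeric(alpha, n=100_000):
    """Midpoint rule for the original integral after x = tan(theta):
    dx/(sqrt(x^2+1)*(x - alpha)) becomes dtheta/(sin(theta) - alpha*cos(theta))
    on (-pi/2, pi/2), a bounded smooth complex-valued integrand."""
    a = -math.pi / 2
    h = math.pi / n
    total = 0j
    for k in range(n):
        t = a + (k + 0.5) * h
        total += 1 / (math.sin(t) - alpha * math.cos(t))
    return total * h

def closed_form(alpha):
    """-2*arccosh(-i*alpha)/sqrt(alpha^2 + 1), with principal branches."""
    return -2 * cmath.acosh(-1j * alpha) / cmath.sqrt(alpha ** 2 + 1)
```

For $\alpha=2i$ both routines agree to many digits; note that `closed_form` has a removable singularity at $\alpha=i$, where the limiting value is the $2i$ found above.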


Friday 2 August 2019

elementary number theory - General method for solving $axequiv bpmod {n}$ without using extended Euclidean algorithm?



Consider the linear congruence equation $$ax\equiv b\pmod { n}.$$ One way to solve it is solving a linear Diophantine equation
$$
ax+ny=b.
$$



I saw somebody solved it by another method somewhere I don't remember:





Solve $144x\equiv 22\pmod { 71}$.
$$\begin{align}
144x\equiv 93 &\pmod { 71}\\
48x\equiv 31&\pmod { 71}\\
48x\equiv -40&\pmod { 71}\\
6x\equiv -5&\pmod { 71}\\
6x\equiv 66&\pmod { 71}\\
x\equiv 11&\pmod { 71}
\end{align}
$$





Instead of solving a Diophantine equation using extended Euclidean algorithm, he uses the rules of congruence such as




If $a_1\equiv b_1\pmod {n}$ and $a_2\equiv b_2\pmod {n}$, then $a_1\pm a_2\equiv b_1\pm b_2\pmod {n}$.




Here are my questions:





  • Does the second method always work?

  • What's the general algorithm for solving $ax\equiv b\pmod {n}$ in this way?


Answer



For this example it is simpler to note that



$$\rm mod\ 71\!:\ \ \frac{22}{144}\: \equiv\: \frac{11}{72}\:\equiv\: \frac{11} 1 $$



When the modulus is prime one can employ Gauss's algorithm, for example




$$\rm mod\ 29\!:\ \ \frac{1}8\: \equiv \frac{4}{32}\: \equiv\: \frac{4}{3}\:\equiv\: \frac{40}{30}\: \equiv\: \frac{11}{1}$$



I.e. scale $\rm A/B\ \to AN/BN\ $ by the least $\rm\:N\:$ so that $\rm\ BN > 29\:.\ $ Then reduce the numerator and denominator $\rm\ mod\ 29,\:$ and iterate. You will eventually obtain a denominator of $1$, since each step reduces the denominator. Isn't that sweet? That's the key idea that led Gauss to the first proof of the Fundamental Theorem of Arithmetic, i.e. unique factorization of integers.
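Gauss's scaling step translates directly into code. A sketch for a prime modulus $p$ (the function name `gauss_solve` is mine): at each step, scale the "fraction" $b/a$ by the least $N$ with $aN>p$, reduce mod $p$, and repeat; the denominator strictly decreases, so it reaches $1$.

```python
def gauss_solve(a, b, p):
    """Solve a*x = b (mod p) for prime p with p not dividing a,
    by Gauss's algorithm: think of x as the 'fraction' b/a mod p."""
    a %= p
    b %= p
    while a != 1:
        n = p // a + 1        # least N with a*N > p
        a = (a * n) % p       # new denominator a*N - p < a, so it shrinks
        b = (b * n) % p
    return b
```

`gauss_solve(8, 1, 29)` reproduces $\frac18\equiv 11\pmod{29}$ from the example, and `gauss_solve(144, 22, 71)` recovers the solution $x\equiv 11\pmod{71}$ of the original congruence.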


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$
How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hopital's rule? I know that when I use l'Hopital I easily get $$ \lim_{h\rightarrow 0}...
How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...