Tuesday 30 October 2018

analysis - Prove that the Union of [1/n,n] = (0,∞) from n=1 to ∞



So I'm having trouble starting the proof mainly because I don't know which proof technique to use. I thought about using the Principle of Mathematical Induction but it doesn't seem like the correct option. Any help is appreciated.


Answer



We want to show two things:




(1) If $x$ is a number in the interval $(0,\infty)$ then it is in the set $\cup_{n=1}^\infty [1/n,n]$.



(2) If $x \in \cup_{n=1}^\infty [1/n,n]$ then $x \in (0,\infty)$.



The second one is easier. Take $x \in \cup_{n=1}^\infty [1/n,n]$. Then in particular there is some $N$ such that $x \in [1/N, N]$ because this is what set unions mean. Can you fill in the rest?



For showing that (1) is true, you need to use the Archimedean principle. The principle states that for any number $x>0$ there is an $N$ such that $1/N < x$. Does this mean that $x \in [1/N, N]$?



(The above is slightly tricky. It does not necessarily follow, but can you see why? If $N$ doesn't work, you can find an $M$ that does work, however. This is related to splitting up between the cases $0 < x \le 1$ and $x > 1$.)


To finish (1): if for our $x \in (0,\infty)$ we have an $N$ such that $x \in [1/N,N]$, can you see why $x \in \cup_{n=1}^\infty [1/n, n]$?
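The index hunt in (1) can also be sanity-checked numerically. Below is a small sketch of mine (not part of the original answer): given $x>0$, it produces an index $M$ with $x \in [1/M, M]$, mirroring the case split above, since $M$ must be at least $\lceil 1/x\rceil$ (the Archimedean part) and at least $\lceil x\rceil$.

```python
import math

def witness_index(x):
    """Return an index M with 1/M <= x <= M, for any x > 0.

    The Archimedean property supplies ceil(1/x); taking the max
    with ceil(x) also covers the case x > 1.
    """
    assert x > 0
    return max(math.ceil(1 / x), math.ceil(x), 1)

# Every positive x lands in some interval [1/M, M]:
for x in [0.001, 0.3, 1.0, 7.5, 1234.5]:
    M = witness_index(x)
    assert 1 / M <= x <= M
```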


Math induction, how does it work?




I'm reading a book (Concrete Mathematics) and it makes heavy use of the concept of mathematical induction.
Though I've been reading tutorials and examples about it, I'm still unable to understand it. The last 'basic' example I was reading is (my questions are the ones highlighted):




Prove $1+2+...+n=\frac{n(n+1)}{2}$



Then it indicates:




  • Step 1: Show it is valid for $n=1$: $1=\frac{1(1+1)}{2}=\frac{2}{2}=1$

  • Step 2: Inductive step, assume it is valid for $n=k$:




$1+2+...+k=\frac{k(k+1)}{2}$ //why do I need to substitute $n$ by $k$? To me it is exactly the same as doing nothing; I could work with $n+1$ instead




  • Step 3: Prove it is valid for $n=k+1$; the derivation is:



$1+2+...+\color{red}{(k+1)}=(1+2+...+k)+\color{red}{(k+1)}$



= $\frac{k(k+1)}{2}+k+1$




= $\frac{k(k+1)+2(k+1)}{2}$



= $\frac{(k+1)(k+2)}{2}$ //How do they get this from the previous equation?



= $\frac{(k+1)[(k+1) + 1]}{2}$



Any help is appreciated, this 'basic thing' is driving me nuts.


Answer



The principle of induction is based on the axioms of Peano, an Italian mathematician. It states that if a set $X$ is such that $1 \in X$ and the successor of every element of $X$ is also an element of $X$, that is, $n \in X \implies s(n) \in X$, then this set is the set of natural numbers!




The idea is quite intuitive: if every successor of a number in a set is also in the set, and the set contains the first number ($1$), then it must have its successor as an element ($2$), and the successor of the successor ($3$), and so on.



So what you do is: you don't really "substitute" anything. You test the validity of the expression for $n=1$, and then you check that the formula holds for $k+1$ (a successor), assuming it works for a given $k$.



So, you could actually work with $n$ and $n+1$. It's just a matter of mathematical rigour! :)



For your second question, how one gets that from the previous equation: it is factorization. Both terms of $k(k+1)+2(k+1)$ contain the common factor $(k+1)$, so the numerator equals $(k+1)(k+2)$.
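Since the closed form is exactly what the induction establishes, it is also easy to spot-check by machine; a minimal sketch (mine, not from the book):

```python
# Verify 1 + 2 + ... + n == n(n+1)/2 for a range of n.
for n in range(1, 201):
    lhs = sum(range(1, n + 1))   # the left-hand side, summed directly
    rhs = n * (n + 1) // 2       # the closed form proved by induction
    assert lhs == rhs
```

Of course this checks only finitely many cases; the induction argument is what covers all $n$ at once.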


Monday 29 October 2018

calculus - Alternative way to prove $\lim_{n\to\infty}\frac{2^n}{n!}=0$?




It follows easily from the convergence of $\sum_{n=0}^\infty\frac{2^n}{n!}$ that

$$
\lim_{n\to\infty}\frac{2^n}{n!}=0\tag{1}
$$
Other than Stirling's formula, are there any "easy" alternatives to show (1)?


Answer



Yes: note that
$$ 0\leq \frac{2^n}{n!}\leq 2\Big(\frac{2}{3}\Big)^{n-2}$$
for $n\geq 3$, and then use the squeeze theorem.
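To see where the bound comes from: for $n \ge 3$, $\frac{2^n}{n!} = \frac{2}{1}\cdot\frac{2}{2}\cdot\frac{2}{3}\cdots\frac{2}{n} \le 2\cdot 1\cdot\left(\frac{2}{3}\right)^{n-2}$, since every factor after the second is at most $2/3$. A numerical spot-check of the inequality (my own sketch, not part of the answer):

```python
from math import factorial

# Check 0 <= 2^n / n! <= 2 * (2/3)^(n-2) for n >= 3.
for n in range(3, 60):
    ratio = 2**n / factorial(n)
    bound = 2 * (2 / 3) ** (n - 2)
    assert 0 <= ratio <= bound
```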


geometry - How can points that have length zero result in a line segment with finite length?





I have been told that a line segment is a set of points.
How can even infinitely many points, each of length zero, make a line of positive length?



Edit:
As an undergraduate I assumed it was due to having uncountably many points.
But the Cantor set has uncountably many elements and it has measure $0$.



So having uncountably many points on a line is not sufficient for the measure to be positive.



My question was: what else is needed? It appears from the answers I've seen that the additional thing needed is the topology and/or the sigma algebra within which the points are set.




My thanks to those who have helped me figure out where to look for full answers to my question.


Answer



This may seem rather a strange thing to say, but I don't think it's helpful to think of lines as made up of points: the "lininess" of a line is an inherent property that points don't have, so it has some extra qualities that points don't, such as length.



The real numbers are basically the answer to the question "How can I augment the set of rational numbers so that I don't have to worry about whether limits that ought to exist really do exist?", from which one can then do calculus. One can wheel out $\sqrt{2}$, $\pi$ and so on if one so desires as an obvious example of a point where one needs this.



Perhaps a more helpful introduction of the real numbers is to say "I want to know how far I am along this line." You then ask "Am I halfway?", "Am I a quarter of the way?", "Am I 3/8ths of the way?", and so on. This gives you a way of producing binary expansions using closed intervals, and you can then introduce the idea of asking infinitely many of these questions (which will obviously be necessary, since $1/3$ has an infinite binary expansion). The object in which the infinite intersection of the decreasing family of closed intervals with rational endpoints, constructed by answering the sequence of questions, contains precisely one point is called the real numbers. Hence one ends up with the real numbers as describing locations on the line, while not actually being the line itself.



In fact, the construction of the real numbers also gives you some "lininess" as baggage from the construction: you produce a topology, which tells you about locations being close to one another. This gives the real numbers more "substance" than just being ordered and containing the rationals. One can define topologies on the rationals, but the real numbers' completeness in their topological construction is the key. Completeness forces there to be "too many" real numbers to be covered by arbitrarily small sets. (Obviously countable is too small since the rationals don't work, but the Cantor set shows that one can produce uncountable sets with zero "length".)




One large hole in this so far is what "length" actually is. To do things this way, one is forced to introduce a definition of the length of a rational interval $[p,q]$, which must of course be $q-p$. Since one is not concerned at that point about the interval actually being "full" of points, one can simply introduce this as an axiom of the theory: all of us at some point have owned a ruler and know how they work with integers and small fractions, and it's not too much of a stretch to stipulate that one can have a ruler with as small a rational subdivision as required, without having to resort to infinite subdivision. (Which is another point worth emphasising: without infinite processes, there is no need for the real numbers in toto: one can simply introduce "enough" rationals for the precision one requires, and work modulo this "smallest length".)



This way, one starts with "length" and ends up with "real numbers", rather than trying to go the other way, which is theoretically difficult and mentally taxing and counterintuitive (besides all the Cantorian stuff).


combinatorics - Given a set of digits, what is the biggest number we can make using exponentiation - numberphile noodle quiz

The question is motivated by a question on a can of number noodles.




[image: the can of number noodles]



Each item is a digit between $0$ and $9$. Clearly, if you form a string and consider it to represent a base $10$ integer, then you'll get the biggest number if you start with the high digits $999...$ and end with the zeros $...1111000000000$.



In this YouTube video, the user numberphile (who makes enthusiastic number theory videos for the layman) presents the digits of his can and brings forward the following generalized question:




Given a set of digits from my noodle can, what's the biggest number you can make using only




(a) addition,



(b) multiplication



and



(c) exponentiation?




So here my thoughts:




Generally, I'm given a multiset of digits and I can use up any of these together to form numbers greater than $9$. And each of the three operations is binary, i.e.



$(a,b)\mapsto a+b,\ \ \ \ \ \ $



$(a,b)\mapsto a\cdot b,\ \ \ \ \ \ $



$(a,b)\mapsto a^b,$



and only the last one isn't commutative and associative. The question seems to imply we can use bracketing as we like, so I won't bother about these.




I can decide the first two tasks simply by checking whether concatenation of digits beats the binary operation for achieving big numbers: Given $n+1$ digits $(d_0,d_1,d_2,...,d_n)$, their concatenation



$\text{con}:(d_n,d_{n-1},d_{n-2},\dots,d_1,d_0) \mapsto \sum_{k=0}^n d_k 10^k$



gives us summands involving higher and higher powers of $10$.



Since for $d_{n}\ne 0$ we have



$\sum_{k=0}^{n} d_k 10^k < 10^{n+1},$




we find



$d_{n+1}\cdot\text{con}(d_n,\dots) =d_{n+1}\sum_{k=0}^{n} d_k 10^k < d_{n+1}10^{n+1} < \sum_{k=0}^{n+1} d_k 10^k = \text{con}(d_{n+1},d_n,\dots).$



i.e. $\text{con}$ is stronger than multiplication for any free digit.






However, checking some pairs of digits for the third operation




$$2^3<3^2<23<32,$$



$$34<43<4^3<3^4$$



or



$$5^2=25<2^5<52$$



I can see no obvious pattern! The algorithm to construct the biggest number will depend on the values of the symbols. So my first question is this (the question in the video):





(c) Using concatenation and exponentiation, what is the biggest number you can make? How can you show that it really is the maximum?




I guess that proof should be done in general, but here are the specific digits of the problem:



[image: the specific digits from the can]



I'm just going to assume no available computer will be able to compute that absurd power, but I'd be interested in some of its characteristics nevertheless.







Further thoughts:



Now if there are $N$ digits in total, which are free to use in concatenation, you can use number sets of up to size $N$ for the binary operation. E.g. if there were only three digits $\{0,3,7\}$, the number sets of size $2$ would be $\{\{0,37\}, \{0,73\},\{3,70\},\{7,30\}\}$.
Side note: There is an obvious additional rule regarding zero, namely that you can't have strings starting with $0$.



For each number $N$, there is a fixed number of possible bracketings, e.g. $(a,((b,c),d))$, which determine the result, $a^{(b^c)^d}$ here. I guess the bracketing is a simple (downward) tree graph where each node has either zero or two (downward) children. One should be able to count these.





The number of different values one can generate with the binary operation (exponentiation is of primary interest here) is limited by the quantity of number sets you can build using concatenation of digits and the number of bracketings for each set size. How many different filled bracket structures can you have?




Notice that for the noncommutative exponentiation, the two-number sets $\{a,b\}$ have to be replaced by pairs $\{(a,b),(b,a)\}$ because $a^b\ne b^a$.



And much more difficult:




Taking into account that different such structures can result in the same number: how many different results can you get? Is there a workable method of working out the "kernel" of this construction?





Lastly (minor priority):




As we deal with a high number of digits here, the majority of results have relatively low Kolmogorov complexity. Given a certain operation (e.g. exponentiation), the biggest number we can generate, and the number of results we can have, can we deduce a pattern for the gap size between successive biggest numbers as a function of the number of digits we use as input?


sequences and series - Show that if $\sum_{k=1}^m c_k =0 $, then $\sum_{n=0}^{\infty} \sum_{k=1}^m \frac{c_k}{nm+k} $ converges.



This is a generalization of this:
Is this:$\sum_{n=1}^{\infty}{(-1)}^{\frac{n(n-1)}{2}}\frac{1}{n}$ a convergent series?




Here is my solution.



To show that if $\sum_{k=1}^m c_k = 0$, then $\sum_{n=0}^{\infty} \sum_{k=1}^m \frac{c_k}{nm+k}$ converges, let $s(n) = \sum_{k=1}^m \frac{c_k}{nm+k}$. Then



$\begin{array}\\
s(n)-\sum_{k=1}^m \frac{c_k}{nm}
&=\sum_{k=1}^m \frac{c_k}{nm+k}-\sum_{k=1}^m \frac{c_k}{nm}\\
&=\sum_{k=1}^m c_k(\frac1{nm+k}-\frac1{nm})\\

&=\sum_{k=1}^m c_k\frac{nm-(nm+k)}{(nm+k)nm}\\
&=-\sum_{k=1}^m c_k\frac{k}{(nm+k)nm}\\
&=-\frac1{n^2}\sum_{k=1}^m c_k\frac{k}{(m+k/n)m}\\
\end{array}
$



Since $\sum_{k=1}^m \frac{c_k}{nm} = \frac1{nm}\sum_{k=1}^m c_k = 0$,



$\begin{array}\\
|s(n)|
&=|\frac1{n^2}\sum_{k=1}^m c_k\frac{k}{(m+k/n)m}|\\
&\le\frac1{n^2}\sum_{k=1}^m \big|c_k\frac{k}{(m+k/n)m}\big|\\
&<\frac1{n^2}\sum_{k=1}^m \big|c_k\frac{k}{(m)m}\big|\\
&=\frac1{n^2m^2}\sum_{k=1}^m \big|kc_k\big|\\
&=\frac{C}{n^2m^2}\\
\end{array}

$



where $C = \sum_{k=1}^m \big|kc_k\big|$.

Therefore $\sum_{n=1}^{\infty} s(n)$ is absolutely convergent.


Answer



Another way can be this. Let $A_{k}=\sum_{s\leq k}c_{s}
$. We have by partial summation $$\sum_{k\leq m}\frac{c_{k}}{nm+k}=\sum_{k\leq m-1}\frac{A_{k}}{\left(nm+k\right)\left(nm+k+1\right)}=\frac{1}{n^{2}}\sum_{k\leq m-1}\frac{A_{k}}{\left(m+k/n\right)\left(m+\left(k+1\right)/n\right)}
$$ then $$\left|\sum_{k\leq m}\frac{c_{k}}{nm+k}\right|\leq\frac{1}{n^{2}}\sum_{k\leq m-1}\frac{\left|A_{k}\right|}{\left(m+k/n\right)\left(m+\left(k+1\right)/n\right)}\leq\frac{1}{n^{2}m^{2}}\sum_{k\leq m-1}\left|A_{k}\right|=
$$ $$=\frac{C}{n^{2}m^{2}}.
$$
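As an illustration (my own numerical sketch, not part of either solution), take $m=4$ and $c=(1,-1,1,-1)$, which satisfies $\sum c_k = 0$; the inner blocks $s(n)$ then obey the quadratic bound $|s(n)| \le C/(n^2m^2)$ derived above, which forces absolute convergence:

```python
m = 4
c = [1, -1, 1, -1]   # the c_k, chosen so that their sum is 0
assert sum(c) == 0

# C = sum |k c_k|, the constant from the bound
C = sum(abs(k * ck) for k, ck in enumerate(c, start=1))

def s(n):
    """Inner block s(n) = sum_{k=1}^m c_k / (n m + k)."""
    return sum(ck / (n * m + k) for k, ck in enumerate(c, start=1))

# |s(n)| <= C / (n^2 m^2), so the series over n converges absolutely.
for n in range(1, 200):
    assert abs(s(n)) <= C / (n**2 * m**2)
```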


calculus - Evaluating this integral using the Gamma function



I was wondering whether the following integral can be evaluated using the Gamma function.
$$\int_0^{\infty}t^{-\frac{1}{2}}\mathrm{exp}\left[-a\left(t+t^{-1}\right)\right]\,dt$$
I already have a tedious solution that doesn't exceed the scope of the first few semesters of calculus, but I want to tackle this with the Gamma function. I just don't know how, or if it's even possible.




If anyone can give a hint, I'd really like to finish it on my own.



EDIT:
You are allowed to use the fact that
$$
\int_{-\infty}^{\infty}\exp(-x^2)\,dx = \sqrt{\pi}
$$


Answer



Let $t=u^2$, and the integral becomes




$$2 \int_0^{\infty} du \, e^{-a \left (u^2 + \frac1{u^2} \right)} = 2 e^{2 a} \int_0^{\infty} du \, e^{-a \left ( u+\frac1{u} \right )^2}$$



Let $v=u+1/u$, then



$$u = \frac12 \left (v \pm \sqrt{v^2-4} \right ) $$



$$du = \frac12 \left (1 \pm \frac{v}{\sqrt{v^2-4}} \right ) dv $$



Now, it should be understood that as $u$ traverses from $0$ to $\infty$, $v$ traverses from $\infty$ down to a min of $2$ (corresponding to $u \in [0,1]$), then from $2$ back to $\infty$ (corresponding to $u \in [1,\infty)$). Therefore the integral is




$$e^{2 a} \int_{\infty}^{2} dv \left (1 - \frac{v}{\sqrt{v^2-4}}\right ) e^{-a v^2} + e^{2 a} \int_{2}^{\infty} dv \left (1 + \frac{v}{\sqrt{v^2-4}}\right ) e^{-a v^2} $$



which is



$$\begin{align}2 e^{2 a}\int_2^{\infty} dv \frac{v}{\sqrt{v^2-4}} e^{-a v^2} &= e^{2 a} \int_4^{\infty} \frac{dy}{\sqrt{y-4}} e^{-a y}\\ &= e^{-2 a} \int_0^{\infty} dq \, q^{-1/2} \, e^{-a q} \end{align}$$



I guess the gamma function comes from this last integral, but I find it easier to think in terms of Gaussian integrals.
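Following the chain of substitutions to its end, the closed form is $e^{-2a}\,\Gamma(\tfrac12)\,a^{-1/2} = \sqrt{\pi/a}\,e^{-2a}$. A quick numerical check of this value (my own sketch, using the $t=u^2$ form of the integral and a plain Riemann sum):

```python
import math

a = 1.0

# After t = u^2, the integral equals 2 * integral_0^inf exp(-a(u^2 + 1/u^2)) du.
# The integrand vanishes rapidly near 0 and past u ~ 10, so a grid on (0, 10] suffices.
h = 1e-3
numeric = 2.0 * h * sum(
    math.exp(-a * (u * u + 1.0 / (u * u)))
    for u in (i * h for i in range(1, 10_001))
)

# Closed form: e^{-2a} * Gamma(1/2) / sqrt(a) = sqrt(pi/a) * e^{-2a}
closed_form = math.sqrt(math.pi / a) * math.exp(-2.0 * a)
assert abs(numeric - closed_form) < 1e-3
```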


calculus - Why is $\lim_{x\to 0^+} \ln x = -\infty$?



Simple question, but I can't seem to find an intuitive explanation anywhere. The question is: why is
$$
\lim_{x\to 0^+} \ln x = -\infty.

$$



and why is



$$
\lim_{x\to \infty} \ln x = \infty.
$$
Honestly the second one is a tad easier to understand: you are trying to find the natural log of an arbitrarily large number, so the result would also be arbitrarily large. But the first one ($\ln{0}$) just does not make sense to me.



Thanks in advance.



Answer



I do not believe a statement like $\ln \infty = \infty$
has an exact, universally accepted mathematical meaning.
What I think we can agree on is that whatever that formula represents,
it is based on the truth of the generally-accepted
true mathematical statement,
$$
\lim_{x\to\infty} \ln x = \infty
$$
which in turn is not really an equation, but is rather a way of

saying that if we want the value of $\ln x$ to be greater than
some positive number (choose any number you want),
all we have to do is to ensure that $x$ is a large enough positive number.
There is no upper bound on how large we can force $\ln x$ to be,
and all we have to do in order to make $\ln x$ "large enough"
is name a number $N$ and assert that $x > N$.



So what we're really trying to explain is why
$$
\lim_{x\to 0^+} \ln x = -\infty.

$$
That is, to force $\ln x$ to be less than some arbitrarily large negative
number, all we have to do is make $x$ close enough to (but greater than) $0$.



Now, we know that
$$
\ln x = - \ln \frac1x,
$$
and we know that
$$

\lim_{x\to 0^+} \frac 1x = \infty.
$$
So by making $x$ close to zero, but positive,
we can make $\frac1x$ be as large a positive number as we like.
And therefore we can make $\ln \frac1x$ be as large a positive number
as we need to.



But if $\ln \frac1x$ is a large positive number,
then $\ln x = -\ln \frac1x$ is a large negative number.
And that's how we know that

$$
\lim_{x\to 0^+} \ln x = -\infty.
$$


Sunday 28 October 2018

arithmetic - Division with 4 digit number in denominator



I've got a question in my task sheet. The question is as follows.
$$
\frac{43\cdot93\cdot47\cdot97}{3007}=X
$$

Find the exact value of $X$. I've tried a lot, but couldn't find an easier way to do it without a calculator, which, of course, is not allowed in the exam. There are no options; they're just asking for the value of $X$.



Would love it if someone could help by giving a method to solve the problem. As I said, I know how to solve the above problem with the help of a calculator, and I've already found the factorization with the help of a calculator, but no luck in manual mode. :(



Thanks in advance. :)


Answer



HINT $\ \rm mod \; 97\!: \: 100 \;\equiv\; 3 $



Hence $\; 3007 \;\equiv\; 30 \cdot 100 + 7$




$\quad\quad\quad\quad\quad\quad\;\; \;\equiv\; 30 \:\;\cdot\;\: 3 \;\: + \; 7 \;$



$\quad\quad\quad\quad\quad\quad\;\; \;\equiv\; 0$



I.e. cast out 97's in analogy to cast out nines. See also here where I discuss casting out 91's.
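Working the hint out: casting out $97$'s shows $97 \mid 3007$, and indeed $3007 = 31\cdot 97$. Since $93 = 3\cdot 31$, the fraction collapses to $43\cdot 3\cdot 47 = 6063$ with no long division needed. A quick check (my own sketch):

```python
# 3007 factors as 31 * 97, which cancels against the 97 and against 93 = 3 * 31.
assert 31 * 97 == 3007
assert 3 * 31 == 93

numerator = 43 * 93 * 47 * 97
assert numerator % 3007 == 0          # the division is exact

X = numerator // 3007
assert X == 43 * 3 * 47 == 6063       # what remains after cancelling 31 * 97
```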


integration - How do I integrate (1/polynomial) without using partial fractions?

I had a lecture earlier today where the use of partial fractions was introduced. The lecturer applied partial fractions and a more 'brute force' method to $\int\frac{1}{x^2 + 5x + 6}\mathrm dx$. I could solve this using partial fractions, but I need to be reminded of the more difficult method (which I learned months ago) for my current maths subject's purposes. I've been trying to find a solution that yields $\ln\left|\frac{x + 2}{x + 3}\right| + C$ to no avail. Can anyone show me how to solve the problem without using partial fractions?
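No answer was posted, but the 'brute force' route presumably meant here is completing the square (my reconstruction, not from the lecture): write $x^2+5x+6=\left(x+\tfrac52\right)^2-\tfrac14$ and apply the standard form $\int\frac{\mathrm du}{u^2-a^2}=\frac1{2a}\ln\left|\frac{u-a}{u+a}\right|+C$ with $u=x+\tfrac52$ and $a=\tfrac12$, so that $\frac1{2a}=1$:

$$\int\frac{\mathrm dx}{x^2+5x+6}=\int\frac{\mathrm du}{u^2-\frac14}=\ln\left|\frac{u-\frac12}{u+\frac12}\right|+C=\ln\left|\frac{x+2}{x+3}\right|+C.$$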

abstract algebra - Given $K(\alpha)/K$ and $K(\beta)/K$ disjoint extensions with at least one of them of odd degree, then $K(\alpha,\beta)=K(\alpha\beta)$



I have problems with this exercise




Let $K(\alpha)/K$ and $K(\beta)/K$ be disjoint extensions with at least one of them of odd degree. Prove that $\alpha\beta$ is a primitive element for the extension $K(\alpha,\beta)/K$.





Some of my ideas were




  • Prove that $K(\alpha,\beta) \subset K(\alpha\beta)$ or that $K(\alpha) \subset K(\alpha\beta)$.


  • Use that in this situation $K(\alpha)=K(\alpha^2)$.


  • Tried to relate the irreducible polynomials from the extensions involved.




I didn't find anything useful. Can you help me?




Thank you in advance.


Answer



This is false - a counterexample is given by $ \alpha = \sqrt[3]{2} $, $ \beta = \sqrt[3]{3} $, $ K = \mathbf Q $. The fields $ \mathbf Q(\sqrt[3]{2}) $ and $ \mathbf Q(\sqrt[3]{3}) $ intersect trivially (left as an exercise), are both of degree $ 3 $ over $ \mathbf Q $, but $ \alpha \beta = \sqrt[3]{6} $ is of degree $ 3 $ over $ \mathbf Q $, so is not a primitive element of the extension $ \mathbf Q(\alpha, \beta)/\mathbf Q $, which is of degree $ 9 $.


real analysis - Show that $\{x_n\}$ is convergent and monotone



Question: For $c>0$, consider the quadratic equation

$$
x^2-x-c=0,\qquad x>0.
$$
Define the sequence $\{x_n\}$ recursively by fixing $x_1>0$ and then, if $n$ is an index for which $x_n$ has been defined, defining
$$
x_{n+1}=\sqrt{c+x_n}.
$$
Prove that the sequence $\{x_n\}$ converges monotonically to the solution of the above equation.



My incomplete solution: Generally speaking, the sequence ${\{x_{n+1}}\}$ is a subsequence of ${\{x_n}\}$. Hence, if $\lim_{n \to \infty} x_n = x_s$, then $\lim_{n \to \infty} x_{n+1} = x_s$ as well. So, from the sum and product properties of convergent sequences, (finally) it follows that $x_s=\sqrt{c+x_s}$, which is equivalent to the mentioned quadratic equation for $x>0$. To show that ${\{x_n}\}$ is monotone, it is enough to show that it is bounded, since ${\{x_n}\}$ is convergent. But I don't know how to show that ${\{x_n}\}$ is bounded.



Thank you in advance for a clear guidance/solution.



EDIT: (by considering the first two comments to this question) The question is: show that ${\{x_n}\}$ is (1) convergent and (2) monotone.


Answer



Let $f(x)=\sqrt{c+x}$, $x>0$. There is a unique fixed point $x^*$ such that $f(x^*)=x^*$. $x^*$ is in fact the positive solution of $x^2-x-c=0$. The sequence $x_n$ is defined by $x_{n+1}=f(x_n)$.



If $0<x<x^*$, then $x<f(x)<x^*$. From this it is easy to show that if $0<x_1<x^*$ then $x_n$ is increasing and bounded above by $x^*$. This implies that $x_n$ converges and the limit is $x^*$.

If $x>x^*$, then $x^*<f(x)<x$. From this it is easy to show that if $x_1>x^*$ then $x_n$ is decreasing and bounded below by $x^*$. This implies that $x_n$ converges and the limit is $x^*$.



If $x_1=x^*$, then $x_n=x^*$ for all $n$.
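The monotone convergence described above is easy to observe numerically. A sketch (mine, with the illustrative choice $c=2$, whose positive root is $x^*=2$):

```python
import math

c = 2.0
x_star = (1 + math.sqrt(1 + 4 * c)) / 2    # positive root of x^2 - x - c = 0

def f(x):
    """One step of the recursion x_{n+1} = sqrt(c + x_n)."""
    return math.sqrt(c + x)

for x1 in (0.5, 5.0):                      # one start below x*, one above
    xs = [x1]
    for _ in range(200):
        xs.append(f(xs[-1]))
    diffs = [b - a for a, b in zip(xs, xs[1:])]
    # increasing when x1 < x*, decreasing when x1 > x*
    assert all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs)
    assert abs(xs[-1] - x_star) < 1e-9     # converges to the fixed point
```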


probability - Expected number of dice rolls of an unfair dice to roll every side equally many sides

I am having trouble with solving the following problem:



The probability that a $d$-sided dice lands on its $k$th side is equal to $p_k$ for $k\in \{1,2,\dots,d\}$, and $p_1+p_2+p_3+...+p_d=1$. Roll this dice (at least once) until every side has been rolled equally many times. Find a function $F(p_1,p_2,...)$ which gives the expected number of rolls $n$ after which that happens.




I have attempted to solve it by counting distinct sequences, and then taking a weighted average: the sum of all $n$s times the probability of this occurring after $n$ rolls. Using this method, I managed to obtain a result for a two-sided dice (a coin) by using Catalan numbers. This does, however, not work for higher dimensions.



Does anyone have an idea on how this could be solved for at least a 3 or 4-sided dice?



Edit:
I gave my idea for counting the distinct sequences for an equality to occur after exactly $n$ steps in this post: Number of ways a dice can roll every side equally many times for the first time after x rolls

Saturday 27 October 2018

elementary number theory - mod Distributive Law, factoring $\bmod$: $\ ab\bmod ac = a(b\bmod c)$



I stumbled across this problem





Find $\,10^{\large 5^{102}}$ modulo $35$, i.e. the remainder left after it is divided by $35$




Beginning, we try to find a simplification for $10$ to get:
$$10 \equiv 3 \text{ mod } 7\\ 10^2 \equiv 2 \text{ mod } 7 \\ 10^3 \equiv 6 \text{ mod } 7$$



As these problems are meant to be done without a calculator, calculating this further is cumbersome. The solution, however, states that since $35 = 5 \cdot 7$, we only need to find $10^{5^{102}} \text{ mod } 7$. I can see (not immediately) the logic behind this. Basically, since $10^k$ is always divisible by $5$ for any sensible $k$, we have
$$10^k - r = 5\cdot 7\cdot q$$
for some integer $q$. But then it's not immediately obvious how/why the fact that $5$ divides $10^k$ helps in this case.




My question is: in general, if we have some mod system with $a^k \equiv r \text{ mod } m$ where $m$ can be decomposed into a product of numbers $m_1 \times m_2 \times m_3 \times ...$, do we only need to find the mod with respect to those factors $m_1, m_2, m_3, ...$ that don't divide $a$? (And if this is the case, why?) If this is not the case, then why/how is the solution justified in this specific instance?


Answer



The "logic" is that we can use a mod distributive law to pull out a common factor $\,c=5,\,$ i.e.



$$ ca\bmod cn =\, c(a\bmod n)\quad\qquad $$



This decreases the modulus from $\,cn\,$ to $\,n, \,$ simplifying modular arithmetic. Also it may eliminate CRT = Chinese Remainder Theorem calculations, eliminating needless inverse computations, which are much more difficult than above for large numbers (or polynomials, e.g. see this answer).



This distributive law is often more convenient in congruence form, e.g.




$$\quad \qquad ca\equiv c(a\bmod n)\ \ \ {\rm if}\ \ \ \color{#d0f}{cn\equiv 0}\ \pmod{\! m}$$



because we have: $\,\ c(a\bmod n) \equiv c(a\! +\! kn)\equiv ca+k(\color{#d0f}{cn})\equiv ca\pmod{\!m}$



e.g. in the OP: $\ \ I\ge 1\,\Rightarrow\, 10^{\large I+N}\!\equiv 10^{\large I}(10^{\large N}\!\bmod 7)\ \ \ {\rm by}\ \ \ 10^I 7\equiv 0\,\pmod{35}$



Let's use that. First note that exponents on $10$ can be reduced mod $\,6\,$ by little Fermat,



i.e. notice that $\ \color{#c00}{{\rm mod}\,\ 7}\!:\,\ 10^{\large 6}\equiv\, 1\,\Rightarrow\, \color{#c00}{10^{\large 6J}\equiv 1}.\ $ Thus if $\ I \ge 1\ $ then as above




$\phantom{{\rm mod}\,\ 35\!:\,\ }\color{#0a0}{10^{\large I+6J}}\!\equiv 10^{\large I} 10^{\large 6J}\!\equiv 10^{\large I}(\color{#c00}{10^{\large 6J}\!\bmod 7})\equiv \color{#0a0}{10^{\large I}}\,\pmod{\!35} $



Our power $\ 5^{\large 102} = 1\!+\!6J\ $ by $\ {\rm mod}\,\ 6\!:\,\ 5^{\large 102}\!\equiv (-1)^{\large 102}\!\equiv 1$



Therefore $\ 10^{\large 5^{\large 102}}\!\! = \color{#0a0}{10^{\large 1+6J}}\!\equiv \color{#0a0}{10^{\large 1}} \pmod{\!35}\ $
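Python's built-in modular exponentiation handles this directly (the exponent $5^{102}$ is only a 72-digit integer), so the conclusion can be verified in a few lines; a sketch of mine:

```python
# Verify 10^(5^102) == 10 (mod 35) via fast modular exponentiation.
exponent = 5**102                 # a 72-digit integer; pow() reduces as it goes
assert pow(10, exponent, 35) == 10

# Cross-check the ingredients of the argument above:
assert exponent % 6 == 1                       # 5^102 = 1 + 6J
assert pow(10, exponent, 5) == 0               # 5 divides 10^k for k >= 1
assert pow(10, exponent, 7) == pow(10, 1, 7)   # exponents reduce mod 6 by little Fermat
```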






Remark $\ $ For many more worked examples see the complete list of linked questions. Often this distributive law isn't invoked by name. Rather its trivial proof is repeated inline, e.g. from a recent answer, using $\,cn = 14^2\cdot\color{#c00}{25}\equiv 0\pmod{100}$




$\begin{align}&\color{#c00}{{\rm mod}\ \ 25}\!:\ \ \ 14\equiv 8^{\large 2}\Rightarrow\, 14^{\large 10}\equiv \overbrace{8^{\large 20}\equiv 1}^{\rm\large Euler\ \phi}\,\Rightarrow\, \color{#0a0}{14^{\large 10N}}\equiv\color{#c00}{\bf 1}\\[1em]
&{\rm mod}\ 100\!:\,\ 14^{\large 2+10N}\equiv 14^{\large 2}\, \color{#0a0}{14^{\large 10N}}\! \equiv 14^{\large 2}\!\! \underbrace{(\color{#c00}{{\bf 1} + 25k})}_{\large\color{#0a0}{14^{\Large 10N}}\!\bmod{\color{#c00}{25}}}\!\!\! \equiv 14^{\large 2} \equiv\, 96\end{align}$



This distributive law is actually equivalent to CRT as we sketch below, with $\,m,n\,$ coprime



$\begin{align} x&\equiv a\!\!\!\pmod{\! m}\\ \color{#c00}x&\equiv\color{#c00} b\!\!\!\pmod{\! n}\end{align}$
$\,\Rightarrow\, x\!-\!a\bmod mn\, =\, m\left[\dfrac{\color{#c00}x-a}m\bmod n\right] = m\left[\dfrac{\color{#c00}b-a}m\bmod n\right]$



which is exactly the same solution given by Easy CRT. But the operational form of this law often makes it more convenient to apply in computations versus the classical CRT formula.
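The congruence form of the law is also easy to test mechanically. A sketch (mine) checking $ca \equiv c(a \bmod n) \pmod{m}$ whenever $cn \equiv 0 \pmod{m}$, with the OP's values $m=35$, $c=10$, $n=7$, and the mod-$100$ example from the remark:

```python
def mod_distributes(c, n, m, a_range):
    """Check c*a == c*(a mod n) (mod m) for all a in a_range,
    under the hypothesis c*n == 0 (mod m)."""
    assert (c * n) % m == 0
    return all((c * a) % m == (c * (a % n)) % m for a in a_range)

# OP's setting: modulus 35 = 5*7, common factor c = 10, inner modulus 7.
assert mod_distributes(10, 7, 35, range(1000))

# The mod-100 example from the remark: c = 14^2, n = 25, m = 100.
assert mod_distributes(14**2, 25, 100, range(1000))
```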



sequences and series - Check whether $f(x)=\sum_{n=1}^{\infty}x^2(1-x^2)^{n-1}$ is continuous or not.

Define $f:[0,1]\to \Bbb R$ by



$$f(x)=\sum_{n=1}^{\infty}x^2(1-x^2)^{n-1}$$



Check whether $f$ is continuous or not.



Attempt: $$f(x)=\sum_{n=1}^{\infty}x^2(1-x^2)^{n-1}\\
=\lim\int_{0}^{1}x^2(1-x^2)^{n-1}$$
Now, Putting $x=\sin\theta$, then $\mathrm dx=\cos\theta\mathrm d\theta$, therefore integral reduces to




$$\lim\int_{0}^{\pi/2}\sin^2\theta(1-\sin^2\theta)^{n-1}\cos\theta\mathrm d\theta\\
=\lim\int_{0}^{\pi/2}\sin^2\theta\cos^n\theta\mathrm d\theta$$



Now from here the result will depend on $n$, i.e. on whether $n=2m$ or $n=2m+1$. In these two cases the result will be different; hence $f$ is not continuous.



am I right? Different approaches are invited. Thank you.
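Independent of the integral manipulation, note that the series is geometric: for $0 < x \le 1$ the ratio is $1-x^2 \in [0,1)$, so $f(x) = \frac{x^2}{1-(1-x^2)} = 1$, while $f(0) = 0$, so $f$ is indeed discontinuous at $0$. A numerical sketch of mine:

```python
def partial_sum(x, N):
    """First N terms of the series sum_n x^2 * (1 - x^2)^(n-1)."""
    return sum(x * x * (1 - x * x) ** (n - 1) for n in range(1, N + 1))

assert partial_sum(0.0, 500) == 0.0                    # f(0) = 0
for x in (0.05, 0.5, 1.0):
    assert abs(partial_sum(x, 10_000) - 1.0) < 1e-6    # f(x) = 1 for x != 0
```

The many terms needed at $x=0.05$ show why the convergence is not uniform near $0$, which is exactly what allows the discontinuous limit.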

Searching function starting exactly constant and approaching another constant

For the default of an R API parameter, I seek a function that has the property of yielding a good guess.




  • I want the function to be defined for $\mathbb Z^+$ (but there is no reason not to define it for $\mathbb R^+$, I guess)

  • I want it to be smooth

  • It should satisfy both




    $$\mathrm f(x_{small}) = 1 \forall x_{small} \in \left(0, k\right]$$
    $$\lim_{x \rightarrow \infty} \mathrm f(x) = l$$




edit to be clear: $k$ and $l$ should be constants appearing in the function definition.



E.g. with $k = 1000$ and $l = 0$: $f(x) = 1 \ \forall x \in (0, 1000]$. Then it should gradually decline and approach $0$.



To simplify, the function can be written as:




$$
\mathrm f(x) = \begin{cases}
1 & \text{if } x \le k \\
[\dots] & \text{otherwise}
\end{cases}
$$
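One standard construction (my suggestion, not from the thread) glues the constant plateau to the tail with a $C^\infty$ transition: take $h(t)=0$ for $t\le 0$ and $h(t)=t\,e^{-1/t}$ for $t>0$ (all derivatives of $h$ vanish at $0$, and $h(t)\to\infty$ as $t\to\infty$), then set $f(x)=l+(1-l)\,e^{-h((x-k)/s)}$, where $s>0$ is a scale parameter of my invention controlling how fast the decline happens. This is exactly $1$ on $(0,k]$ and tends to $l$:

```python
import math

def plateau(x, k=1000.0, l=0.0, s=1000.0):
    """Smooth function: exactly 1 on (0, k], tending to l as x -> infinity.

    h(t) = t * exp(-1/t) for t > 0 has all derivatives zero at t = 0,
    so the constant plateau joins the declining tail smoothly.
    """
    t = (x - k) / s
    if t <= 0:
        return 1.0
    h = t * math.exp(-1.0 / t)
    return l + (1.0 - l) * math.exp(-h)

assert plateau(1.0) == 1.0 and plateau(1000.0) == 1.0   # constant on the plateau
assert abs(plateau(1e9) - 0.0) < 1e-12                  # approaches l = 0
assert 0.0 < plateau(2000.0) < 1.0                      # gradual decline in between
```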

Friday 26 October 2018

algebra precalculus - Non-linear system of equations over the positive integers — more unknowns than equations

This exercise appeared on a German online tutoring board and caught my attention, but it stumped me for hours. The task is to find 6 distinct positive three-digit integers satisfying:



$x_{1}+x_{2}+x_{3}+x_{4}+x_{5}+x_{6}=4.198$
$x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}+x_{5}^{2}+x_{6}^{2}=3.215.224$
$x_{1}^{3}+x_{2}^{3}+x_{3}^{3}+x_{4}^{3}+x_{5}^{3}+x_{6}^{3}=2.600.350.972$



According to the power mean inequality or Cauchy-Schwarz the numbers must lie relatively closely together. However a brief search lead nowhere.
For simplicity I set $4.198=A$, $3.215.224=B$ and $2.600.350.972=C$, and then my approach was to manipulate the three equations and perhaps use the fact that no square is negative. For example,
$6B-A^{2}=\sum_{i<j}(x_i-x_j)^{2}$. If now $6B-A^{2}$ would result in something like
$5\cdot1^{2}+4\cdot2^{2}+3\cdot3^{2}+2\cdot4^{2}+1\cdot5^{2}=105$

we could tell exactly what our $x$ were. Unfortunately it gives $1.668.140$ and we cannot conclude much.
Similar reasoning applies to a factoring of $6C-AB=\sum_{i<j}(x_i-x_j)^{2}(x_i+x_j)$. If there exists such a factorization, my intuition says it would only make sense if the $x$ form an arithmetic sequence, otherwise we would get different factors on the right side that can't appear on the left side. (does this sound reasonable?) But substituting gives no solution. Also, I don't know what other, more sophisticated factorization would lead me to the solution.
I'm running out of ideas; how can this problem be solved?




Links to the original problem:
https://www.geocaching.com/geocache/GC69JE0_lotto-mal-anders
https://www.gutefrage.net/frage/matherechenart-gesucht-mehrere-variablen-mit-festen-ergebnis?foundIn=unknown_listing

Thursday 25 October 2018

sequences and series - Strictly speaking, is it true that $\zeta(-1)\ne1+2+3+\cdots$?



There are various notorious proofs that $1+2+3+\cdots=\frac{-1}{12}.$



Some of the more accessible proofs basically seem to involve labelling this series as $S=\sum\limits_{i=1}^{\infty} i$ and playing around with it until you can say $12S=-1$.




Even at High School, I could have looked at that and thought "well since you're dealing with infinities and divergent series, those manipulations of $S$ are not valid in the first place so you're really just rearranging falsehoods." It's a bit like the error in this proof that $1=0$, or $\forall x.(\text{false}\implies x)$, it's a collapse of logic.



Greater minds than mine have shown that $\zeta(-1)=\frac{-1}{12}$ and I have no argument with that, but I do dispute the claim that $\zeta(-1)=S$.



My thinking here is that, although the analytic continuation of $\zeta$ is well-defined, that analytic continuation is not the same thing as $\sum\limits_{i=1}^\infty i$.



Once you have





  1. defined $\zeta(s) =\sum\limits_{n=1}^\infty\frac{1}{n^s}$ where $\vert s\vert>1$

  2. defined $\zeta^\prime(s)=...$ by analytic continuation for all $s$



then you can only claim




  1. $\zeta(s)=\zeta^\prime(s)$ where $\vert s\vert>1$.




Basically, your nice, differentiable-everywhere definition of the Zeta function is not substitutable for the original series $S$ in the unrestricted domain.



Hence, $\zeta(-1)=\frac{-1}{12}\nRightarrow S=\frac{-1}{12}$.



Right? Convince me otherwise.


Answer



You only have



$$\zeta(s)=\sum_{n=1}^\infty n^{-s}$$




for $\mathfrak R(s)>1$. The right-hand side of the equation is not defined otherwise.



Like you said, $\zeta(s)$ is defined by analytic continuation on the rest of the complex numbers, so the formula $\zeta(s)=\sum_{n=1}^\infty n^{-s}$ is not valid on $\mathbb C \setminus \{z\in \mathbb C, \mathfrak R(z)>1\}$.



Therefore,



$$\frac{-1}{12}=\zeta(-1)\ne \sum_{n=1}^\infty n \quad\text{(which $=+\infty$ in the best case scenario)}.$$



So what you say is correct.
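A convergent route to the same number goes through the Dirichlet eta function, using the identity $\eta(s)=(1-2^{1-s})\zeta(s)$: the Abel mean of $1-2+3-4+\cdots$ tends to $1/4$ as the radius tends to $1$, and dividing by $1-2^{2}=-3$ gives $-1/12$. A numeric sketch (the function name and the radius $0.999$ are just illustrative choices):

```python
def abel_eta_minus1(r, terms=200000):
    """Abel-regularized value of 1 - 2 + 3 - 4 + ... at radius r < 1:
    sum_{n>=1} (-1)^(n-1) * n * r^n, which tends to 1/4 as r -> 1."""
    total = 0.0
    for n in range(1, terms + 1):
        total += (-1) ** (n - 1) * n * r ** n
    return total

# eta(s) = (1 - 2^(1-s)) * zeta(s); at s = -1 the factor is 1 - 4 = -3,
# so zeta(-1) = eta(-1) / (-3) = (1/4) / (-3) = -1/12.
eta = abel_eta_minus1(0.999)
zeta_minus1 = eta / (1 - 2 ** 2)
print(eta, zeta_minus1)  # eta ≈ 0.25, zeta_minus1 ≈ -1/12
```

Note that this assigns $-1/12$ through a regularization of a *different* (Abel-summable) series, which is consistent with the point above: the divergent series $S$ itself is never summed.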


number theory - Solving linear congruence (modular inverse or fraction) by solving Bezout equation

Isn't finding the inverse of $a$, that is, $a'$ in $aa'\equiv1\pmod{m}$ equivalent to solving the diophantine equation $aa'-mb=1$, where the unknowns are $a'$ and $b$? I have seen some answers on this site (where the extended Euclidean Algorithm is mentioned mainly) as well as looked up some books, but there is no mention of this. Am I going wrong somewhere, or is this a correct method of finding modular inverses? Also, can't we find the Bézout coefficients by solving the corresponding diophantine equation instead of using the extended Euclidean Algorithm?
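The Bezout-equation view can be made concrete: the extended Euclidean algorithm is exactly a systematic way of producing the coefficients $a'$ and $b$ in $aa'-mb=1$, so the two approaches coincide. A sketch (function names are just illustrative):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Solve a*a' ≡ 1 (mod m) by solving the Bezout equation a*a' - m*b = 1."""
    g, x, _ = ext_gcd(a, m)
    if g != 1:
        raise ValueError("inverse exists only when gcd(a, m) = 1")
    return x % m

print(mod_inverse(7, 40))  # 23, since 7*23 = 161 = 4*40 + 1
```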

Wednesday 24 October 2018

calculus - Ways to differentiate $(x)(x+y)$



I checked the differentiation of $(x)(x+y)$ using an online derivative tool which gives the result:




$\frac{d}{dx}\left(\left(x\right)\left(x+y\right)\right) = x+y+x\left(\frac{d}{dx}\left(y\right)+1\right)$



But using a different tool I found that derivate is:



$\frac{d}{dx}\left(\left(x\right)\left(x+y\right)\right) = 2x+y$



If there's no partial differentiation involved then does it mean there are different ways to interpret the given problem? i.e.



In the first case, it is interpreted as $f(g(x), h(x,y)) = (x)(x+y)$?
and in the second case it is $f(g(x), h(x)) = (x)(x+y)$?




I don't understand how the product rule is getting applied here, and why $y$ is treated as a constant in the second case.


Answer



In the first case, the online solver "thinks" $y$ is a function of $x$ and therefore leaves its derivative $\frac{d}{dx}(y)$ unevaluated, while in the second case the program treats $y$ as a variable independent of $x$. In both cases the product rule is being applied.



To be more explicit:



Let $f(x)=x$ and $g(x)=x+y$. Then $x(x+y)=f(x)g(x)$, and the product rule says that



$(f(x)g(x))'=f'(x)g(x)+f(x)g'(x)$.




Note that $f'(x)=1$ and that if $y$ is a variable independent of $x$ then $g'(x)=1+0=1$. Substituting in the product rule we obtain



$(f(x)g(x))'= 1(x+y)+x(1) = x+x+y=2x+y$.



However, if $y$ depends on $x$, then $g'(x)= 1 + \frac{d}{dx}(y)$. Substituting again in the product rule, you obtain the other result.


compactness - First-order Peano arithmetic and (the lack of) implicit definition of addition

I'm trying to show, through the existence of non-standard models of arithmetic, that the first-order Peano axioms (without those of multiplication) don't implicitly define addition in the sense of implicit definition as employed in the Beth definability theorem. I think I have it figured out, but am a bit unsure, and wanted to check if I am doing something misguided or wrong.







First, I'm understanding the relevant sense of implicit definition as follows:



Non-logical symbol $\alpha$ is implicitly definable in a theory $\Delta$ iff for all structures I and I' such that




  1. I and I' simultaneously satisfy $\Delta$

  2. domain of I = domain of I'

  3. $I(\beta) = I'(\beta)$ for all non-logical symbols $\beta$ except for $\alpha$

  4. $I(c) = I'(c)$ for every constant c




then $I(\alpha) = I'(\alpha)$ as well.



And the relevant Peano axioms are:



A1. $(\forall x)\; S(x)\neq 0$



A2. $(\forall x)(\forall y)\,(S(x)=S(y)\to x=y)$




A3. $(\forall x)\; {+}(x,0)=x$



A4. $(\forall x)(\forall y)\; {+}(x,S(y))=S({+}(x,y))$






Now, for the argument.



I know of the compactness argument that can be run to generate a non-standard number – call it 'a' – that is greater than any of the standard natural numbers (and brings along with an infinite number of successors and predecessors). Let's say we run this procedure to generate two other non-standard numbers, b and c, such that we have a set of standard and non-standard numbers that looks like:




0, 1, 2, 3, ... , a-2, a-1, a, a+1, a+2, ... , b-2, b-1, b, b+1, b+2, ..., c-2, c-1, c, c+1, c+2, ...



where the ellipses represent infinite descending and/or ascending chains of (non-)standard numbers. A3 will ensure that $a+0=a$, $b+0=b$, and $c+0=c$. But what about, e.g., $a+b$?



I think you can run another compactness argument, as follows: Consider the following set of sentences, in the language consisting of the standard language of arithmetic together with the constant symbols 'a', 'b', and 'c' (but not written in strict first-order notation):



$\{\text{true sentences of arithmetic}\} \cup \{a≠0, a≠1, a≠2, ..., b≠0, b≠1, b≠2, ... , c≠0, c≠1, c≠2, ...\} \cup \{a≠b, a≠b+1, a≠b-1, a≠b+2, ..., a≠c, a≠c+1, a≠c-1, a≠c+2, ... , b≠c, b≠c+1, b≠c-1, b≠c+2, ...\} \cup \{a+b=c, a+(b+1)=c+1, a+(b-1)=c-1, a+(b+2)=c+2, ...\}$



If I'm thinking about this correctly, you can find a model of any finite subset of this union, and so by compactness can model the whole set (and a fortiori model the true sentences of arithmetic).




But, can we not also replace the last set of the union, with a set of sentences where all instances of 'c' are replaced by 'c+1' – i.e., $\{a+b=c+1, a+(b+1)=c+1+1, a+(b-1)=c+1-1, a+(b+2)=c+1+2, ...\}$ – and run an analogous compactness argument?



And if so, would we have shown that the Peano axioms don't (Beth) implicitly define '+' (as $a+b=c$ in one case, and $a+b=c+1$ in the other, even though the domains are the same, and all other non-logical symbols are assigned the same values)? (I think I am a bit skeptical of this because I don't really understand how we individuate the non-standard numbers.)



I know there's a lot of sloppy text here, but any help would be greatly appreciated.



Thanks!

Monday 22 October 2018

integration - Evaluating the definite integral $\int_{-c}^{c} e^{-ax^2}\cos^2(bx) \,\mathrm{d}x$

How can one evaluate the following integral?



$$\int_{-c}^{c} e^{-ax^2}\cos^2(bx) \,\mathrm{d}x$$




Wolfram Alpha gives this. Is there not a more compact form?



If $\int_{-c}^{c} e^{-ax^2} \, \mathrm{d}x=k$, then can we express the first integral in terms of $k$? Thanks.
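Since $\cos^2(bx)=\tfrac12(1+\cos 2bx)$, the integral should equal $k/2$ plus half a Gaussian-cosine integral $\int_{-c}^{c}e^{-ax^2}\cos(2bx)\,\mathrm dx$. A quick numeric sanity check of that split with Simpson's rule (the values $a=1$, $b=2$, $c=3$ are arbitrary test values, and `simpson` is a hand-rolled helper):

```python
import math

def simpson(f, lo, hi, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

a, b, c = 1.0, 2.0, 3.0
lhs = simpson(lambda x: math.exp(-a * x * x) * math.cos(b * x) ** 2, -c, c)
k = simpson(lambda x: math.exp(-a * x * x), -c, c)
cos_part = simpson(lambda x: math.exp(-a * x * x) * math.cos(2 * b * x), -c, c)
rhs = k / 2 + cos_part / 2
print(lhs, rhs)  # the two agree
```

So a closed form in terms of $k$ alone does not drop out; the $\cos(2bx)$ piece is an independent ingredient.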

real analysis - Finding a counter example to one of four formulas for images and inverse images




So this is a question out of Real Analysis book that I'm working through, and I'm quite stuck.



Give a counterexample to one of the following four formulas for images and inverse images of sets (the other three are true)




  1. $f(X_1 \cup X_2) = f(X_1) \cup f(X_2)$,

  2. $f^{-1}(Y_1 \cup Y_2) = f^{-1}(Y_1) \cup f^{-1}(Y_2)$,

  3. $f(X_1 \cap X_2) = f(X_1) \cap f(X_2)$,

  4. $f^{-1}(Y_1 \cap Y_2) = f^{-1}(Y_1) \cap f^{-1}(Y_2)$




I went through Overview of basic results about images and preimages and from other resources I can find online, all 4 of these are true... I can't figure out what I am missing.



Thanks for the help


Answer



3 is false, if $f$ is a constant map and $X_1, X_2$ are disjoint nonempty sets.
Exercise. Prove for 3 that the
left hand side is a subset of the right hand side.


Sunday 21 October 2018

calculus - Why the limit of $\frac{\sin(x)}{x}$ as $x$ approaches 0 is 1?

I need a rigorous proof that verifies why the limit of $\dfrac{\sin(x)}{x}$ as $x$ approaches $0$ is $1$.
I tried before but I do not know how to start this proof.
I would appreciate it if somebody could help me. Thanks.

continuity - Discontinuous function counter example

When providing counter-examples for various things in Calculus, we often utilise piecewise functions because we can easily 'construct' something 'pathological' by doing that.



Somebody asked me




"To determine if a function is discontinuous or not, can't we just either see if its piecewise defined or if there are any fractions? If the function ISN'T piecewise defined or has any fractions (with x's in the denominator), then wouldn't it always be continuous then?"



I realised that I couldn't think of a counter example immediately, and I still cannot think of one now! Does there exist such a thing at an elementary level?

Friday 19 October 2018

calculus - Construct a function which is continuous in $[1,5]$ but not differentiable at $2, 3, 4$





Construct a function which is continuous in $[1,5]$ but not differentiable at $2, 3, 4$.




This question is just after the definition of differentiation and the theorem that if $f$ is finitely derivable at $c$, then $f$ is also continuous at $c$. Please help, my textbook does not have the answer.


Answer



$|x|$ is continuous, and differentiable everywhere except at 0. Can you see why?



From this we can build up the functions you need: $|x-2| + |x-3| + |x-4|$ is continuous (why?) and differentiable everywhere except at 2, 3, and 4.
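The non-differentiability at $2$, $3$, $4$ can also be seen numerically: the one-sided difference quotients of $|x-2|+|x-3|+|x-4|$ disagree at each of those points. A small sketch:

```python
def f(x):
    return abs(x - 2) + abs(x - 3) + abs(x - 4)

h = 1e-7
for p in (2, 3, 4):
    left = (f(p) - f(p - h)) / h    # left-hand difference quotient
    right = (f(p + h) - f(p)) / h   # right-hand difference quotient
    print(p, round(left), round(right))  # the two one-sided slopes differ by 2
```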



statistics - How do you calculate probability of rolling all faces of a die after n number of rolls?








I'm pretty new to the StackExchange, and posted this in statistics, and then discovered this site, and thought it was much more appropriate, so here I go again:



It is fairly easy to figure out the average number of rolls it would take to roll all faces of a die [$1 + 6/5 + 6/4 + 6/3 + 6/2 + 6/1 = 14.7$], but that got me thinking of a seemingly more complicated problem.



Say you roll a die 1-5 times; the odds of ALL faces showing is obviously 0. If you roll a die 6 times, the odds of all faces showing can easily be calculated like so:



1 * (5/6) * (4/6) * (3/6) * (2/6) * (1/6) = .0154 or 1.54%



Now is where I get stuck: how do I do it for 7 or more rolls, and how do I calculate it in terms of $n$?




Any tips are helpful!
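For general $n$, one standard route is inclusion-exclusion over the set of faces that never appear, which gives $P=\sum_{j=0}^{6}(-1)^j\binom{6}{j}\left(\frac{6-j}{6}\right)^n$. A sketch (the function name is illustrative):

```python
from math import comb

def p_all_faces(n, s=6):
    """P(all s faces appear in n rolls), by inclusion-exclusion
    over which faces are entirely missing."""
    return sum((-1) ** j * comb(s, j) * ((s - j) / s) ** n for j in range(s + 1))

print(round(p_all_faces(6), 4))  # 0.0154, matching the direct computation
print(round(p_all_faces(7), 4))  # ≈ 0.054
```

For $n=6$ this reduces to $6!/6^6 \approx 1.54\%$, agreeing with the product above.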

calculus - How to prove $\lim_{n \to \infty} \sqrt{n}(\sqrt[n]{n} - 1) = 0$?




I want to show that $$\lim_{n \to \infty} \sqrt{n}(\sqrt[n]{n}-1) = 0$$ and my assistant teacher gave me the hint to find a proper estimate for $\sqrt[n]{n}-1$ in order to do this. I know how one shows that $\lim_{n \to \infty} \sqrt[n]{n} = 1$, to do this we can write $\sqrt[n]{n} = 1+x_n$, raise both sides to the n-th power and then use the binomial theorem (or to be more specific: the term to the second power). However, I don't see how this or any other trivial term (i.e. the first or the n-th) could be used here.



What estimate am I supposed to find or is there even a simpler way to show this limit?



Thanks for any answers in advance.


Answer



The OP's attempt can be pushed to get a complete proof.
$$
n = (1+x_n)^n \geq 1 + nx_n + \frac{n(n-1)}{2} x_n^2 + \frac{n(n-1)(n-2)}{6} x_n^3 > \frac{n(n-1)(n-2) x_n^3}{6} > \frac{n^3 x_n^3}{8},
$$

provided $n$ is "large enough" [1]. Therefore (again, for large enough $n$), $x_n < 2 n^{-2/3}$,
and hence $\sqrt{n} x_n < 2n^{-1/6}$. Thus $\sqrt{n} x_n$ approaches $0$ by the sandwich (squeeze) theorem.






[1] In fact, you should be able to show that for all $n \geq 12$, we have
$$
\frac{n(n-1)(n-2)}{6} > \frac{n^3}{8} \iff \left( 1-\frac1n \right) \left( 1- \frac2n \right) > \frac34.
$$
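The resulting bound $\sqrt{n}\,x_n < 2n^{-1/6}$ can be checked numerically; the decay is slow but visible:

```python
# x_n = n**(1/n) - 1; the answer's bound gives x_n < 2 * n**(-2/3) for large n,
# hence sqrt(n) * x_n < 2 * n**(-1/6) -> 0.
for n in (10 ** 2, 10 ** 4, 10 ** 6, 10 ** 8):
    x_n = n ** (1 / n) - 1
    print(n, n ** 0.5 * x_n, 2 * n ** (-1 / 6))
```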


Thursday 18 October 2018

sequences and series - Evaluating $\sum_{k=0}^\infty\sin(kx)$ and $\sum_{k=0}^\infty\cos(kx)$



Playing with sines I wanted to find
$$
S(x)=\sum_{k=0}^\infty\sin(kx)
$$




Writing it as
$$
S(x)=\mathrm{Im}\big(A(x)\big),\quad\text{where}\quad A(x)=\sum_{k=0}^\infty e^{ikx}
$$


and using $z=e^{ix}=\cos(x)+i\sin(x)$, cheating about $\|z\|=1$ in order to write
$$
A(x)=\frac{1}{1-z},
$$



I found that




$$
2S(x)=\frac{\sin(x)}{1-\cos(x)}

$$




as WolframAlpha does.




My question is: why with the same method I found
$$
\sum_{k=0}^\infty\cos(kx)=\frac{1}{2}
$$


while WolframAlpha and G.H.Hardy on the book "Divergent Series" (pg. 2) give $-1/2$?
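One way to probe the discrepancy numerically is through Abel means $\sum_k r^k\cos(kx)$ with $r$ close to $1$: including the $k=0$ term gives values near $1/2$, while dropping it gives values near $-1/2$. This suggests (though it is only a numeric experiment, not a resolution) that the two sources are summing from different starting indices. A sketch:

```python
import math

def abel_cos_sum(x, r, start=0, terms=20000):
    """Truncated Abel mean: sum_{k=start}^{terms-1} r^k * cos(k*x)."""
    return sum(r ** k * math.cos(k * x) for k in range(start, terms))

x = 1.0
print(abel_cos_sum(x, 0.999, start=0))  # near 1/2 as r -> 1
print(abel_cos_sum(x, 0.999, start=1))  # near -1/2: same series minus the k=0 term
```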


mobius inversion - Sum over divisors of gcd of two numbers

How can I calculate this sum?




$\sum\limits_{d~|~(n_1, n_2)} \mu(d) \tau\left(\dfrac{n_1}{d}\right) \tau\left(\dfrac{n_2}{d}\right)$,



where $(n_1, n_2)$ is gcd of $n_1$ and $n_2$, $\mu$ is Mobius function and $\tau(n)$ is the number of positive divisors of $n$.



I tried to use Mobius inversion formula, but still can't manage to deal with the problem.



Any ideas? Thanks.



P.S. Note that sum in the above equation runs over positive divisors only.
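A quick numeric experiment (not a proof) suggests the sum has the closed form $\tau(n_1 n_2)$, which would then be a conjecture to verify via multiplicativity or Mobius inversion. The helper functions below are naive illustrative implementations:

```python
from math import gcd

def tau(n):
    """Number of positive divisors of n (naive)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def mu(n):
    """Mobius function via trial division (naive)."""
    if n == 1:
        return 1
    result, m = 1, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def f(n1, n2):
    """The sum in question, over positive divisors of gcd(n1, n2)."""
    g = gcd(n1, n2)
    return sum(mu(d) * tau(n1 // d) * tau(n2 // d)
               for d in range(1, g + 1) if g % d == 0)

# Exploratory check of the conjecture f(n1, n2) == tau(n1 * n2):
for n1 in range(1, 30):
    for n2 in range(1, 30):
        assert f(n1, n2) == tau(n1 * n2)
print("f(n1, n2) = tau(n1 * n2) for all tested pairs")
```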

calculus - Why does an anti-derivative "preserve" a translation from the function being integrated?



To illustrate my question, consider the function given by the following equation:



$f(x)=36(x-8)^{11}$



My AP Calculus AB teacher says the anti-derivative is $F(x) = 3(x-8)^{12}+C$. I understand easily using the reverse power rule that if we simply had $g(x)=36x^{11}$, then we could easily use the reverse power rule as well as the constant multiple rule to get an anti-derivative of $G(x) = 3x^{12}+C$. But what to do about the translation -- and why does what the teacher did work?




I know from class that $[f(x+a)]' = f'(x+a)$, which was justified to us graphically and seemed to make sense. But that is a differentiation rule, not an integration rule. Obviously, the two operations are related, but I've been itching to see a more formal proof of why the "horizontal translation" rule apparently holds in both cases... IF it universally holds, of course -- maybe it's in fact specific to certain types of functions, and just happens to hold in the particular problem my teacher gave us!



Is anyone willing to attempt to justify why what my teacher did to find the anti-derivative works, in a satisfying way? (The chain rule is, I suppose, okay if you must use it, although a proof without would be better because I haven't really learned the chain rule much yet.)


Answer



It really is as simple as: if $ g'(x) = h(x) $, then $ \int{h(x)\,dx} = g(x) + C $.



Now, in the above sentence, replace $ g(x) $ with $ f(x-c) $ and $ h(x) $ with $ f'(x-c) $. There's nothing more to say.
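The claim can also be sanity-checked numerically: the central difference quotient of $F(x)=3(x-8)^{12}$ reproduces $f(x)=36(x-8)^{11}$ at sample points. A sketch:

```python
def f(x):
    return 36 * (x - 8) ** 11

def F(x):
    return 3 * (x - 8) ** 12  # candidate antiderivative (with C = 0)

# The central difference quotient of F approximates F'(x) and should match f(x):
for x in (6.0, 7.5, 9.0):
    h = 1e-6
    approx = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(approx - f(x)) / max(1.0, abs(f(x))) < 1e-4
print("F'(x) matches f(x) numerically")
```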


real analysis - Continuity and almost everywhere convergence




(Folland 2.37) Suppose that $f_{n}$ and $f$ are measurable complex-valued functions,
and $\phi: \mathbb{C} \to \mathbb{C}$. If $\phi$ is continuous and
$f_{n} \to f$ a.e., then $\phi \circ f_{n} \to \phi \circ f$ almost
everywhere.




My Proof:



Since $f_{n} \to f$ a.e. it follows that $$N= \{x: \lim_{n \to \infty}
(f_{n} - f) \neq 0 \}$$ is a null set.



Since $\phi$ is continuous, it follows that $\forall \varepsilon >0$,
$\exists \delta >0$ s.t. $\forall x$, $\forall f_{k}(x) \in
(f_{n}(x))_{n\geq1}$ $$||f_{k}(x) - f(x)|| < \delta \implies || (\phi
\circ f_{k})(x) - (\phi \circ f)(x)|| < \varepsilon$$ Now suppose

$\exists w$ s.t. $\exists \varepsilon >0$ s.t. $\forall K \in
\mathbb{N}$, $\exists k \geq K$ with $|| (\phi \circ f_{k})(w) - (\phi
\circ f)(w)|| \geq \varepsilon$.



By continuity of $\phi$ it follows that $\exists \delta > 0$ such that
$\forall K \in \mathbb{N}$, $\exists k \geq K$ with $||f_{k}(w) -
f(w)|| \geq \delta$.



Thus $w \in N$, and so $\phi \circ f_{n} \to \phi \circ f$ everywhere
except on a set of measure zero.





My question is this: I am trying to be very specific with my definition of continuity and limits, and I am not sure if I have been successful. In particular, I am not sure if the section:




Now suppose $\exists w$ s.t. $\exists \varepsilon >0$ s.t. $\forall K
\in \mathbb{N}$, $\exists k \geq K$ with $|| (\phi \circ f_{k})(w) -
(\phi \circ f)(w)|| \geq \varepsilon$.



By continuity of $\phi$ it follows that $\exists \delta > 0$ such that
$\forall K \in \mathbb{N}$, $\exists k \geq K$ with $||f_{k}(w) -
f(w)|| \geq \delta$.




is valid. Any advice on this proof would be useful.


Answer



I'm not really following the notation in your proof. Here is how I would write it:



Suppose $\lim_{n\to\infty}f_n(x) = f(x)$ and let $\varepsilon>0$. By continuity of $\varphi$, we can choose $\delta>0$ such that $|\varphi(f(x))-\varphi (y)|<\varepsilon$ when $|f(x)-y|<\delta$. Then we can choose $N$ such that $n\geqslant N$ implies $|f(x)-f_n(x)|<\delta$. It follows that $$|\varphi(f(x))-\varphi(f_n(x))| < \varepsilon $$
for $n\geqslant N$, as desired.



number theory - Prove common divisors of $a,b$ divide $gcd(a,b)$ without Bezout, primes or guessing the form of the GCD



Every proof of this fact that I've seen relies on guessing a "formula" for the GCD first, such as "the smallest positive integer of the form $ax+by$" or $\frac{ab}{\text{lcm}(a,b)}$. Then one shows that the guess was indeed correct and proves the result. I don't find these proofs very intuitive and I would like to know if there's a simpler proof that doesn't involve guessing what the GCD looks like (this includes the fundamental theorem of arithmetic, which seems like overkill).



The proof should go like this:



The statement is trivially true for $1$ and $(a,b)$ itself. Let $(a,b)=d$. Suppose $\exists c$ such that $1 < c$, $c \mid a$ and $c \mid b$ but $c \not \mid d$. Since $c \not \mid d$, we have $1 \le (c,d) < c$. Suppose $(c,d)=1$. Then $a=dk$ and $c \mid a$ imply $c \mid k$, hence $cd \mid a$. In the same way $cd \mid b$, a contradiction.




Now suppose $1<(c,d)<c$. Then $\frac{c}{(c,d)} > 1$. I would like to show that $\frac{cd}{(c,d)} \mid a$, but here I get stuck. Can it be done with my restrictions? If not, why?



EDIT:



So my original proof only used multiplicative properties of $\Bbb Z$, but I have learned that the very existence of the GCD requires additive properties as well. However, I've found a new proof that doesn't seem to use any additive properties (not even duality with LCM). I believe it is closer to what I was looking for. The reasoning behind this proof relies on additive properties of $\Bbb Z$, but they seem to disappear in my formal proof. What's going on here? How is this proof equivalent to other proofs?



Proof. Let $c$ be a common divisor of $a$ and $b$ ($a<b$) but $c \not \mid d$.



Since $c \not \mid d$, we can't have $a=d$, so $a=kd$ for some $k>1$. Also $a=tc$, for some $t>k$. We have $kd=tc \implies c=\frac kt d$. Observe that $k \not \mid t$, otherwise $d=\frac tk c \implies c \mid d$. Let $v=(k,t)$; then $1 \le v < k$. Of course $(\frac kv, \frac tv)=1$.
Now, $$b=k'd=t'c=t' \frac kt d \implies k'=t' \frac kt= t' \frac{k/v}{t/v} \implies t/v \mid t'$$ But then $b= t' \frac kt d = t' \frac {k/v}{t/v} d=\frac {t'}{t/v} \left(\frac kv d \right)$. Also $a=kd=v \left( \frac kv d \right)$. This shows that $\frac kv d > d$ is a common divisor and completes the proof. $\square$




Note that $c \mid \frac kv d$ as well.



This proof is a formalization of the following hand-waving:



suppose that $a=4d=6c$. Then the respective times $d$ and $c$ are contained in any common multiple of $d$ and $c$ must always have a ratio of $2:3$. This means that there must be a factor of $2d$ (and therefore $3c$) in any common multiple. If, for example, $b=5d$, then $b=6c+d$. But $c \mid b$ and $c \mid 6c$ imply $c \mid d$. This is impossible, because $3c=2d \implies 3=2 \frac dc$, a contradiction. This situation arises every time there are two common divisors and neither divides the other.


Answer



This is not so much a direct answer to your question as an indication of how one of the standard approaches might naturally be motivated.



Suppose $c|a$ and $c|b$ then $c|ha+kb$ for any integer choice of $h$ and $k$.




It is natural to constrain $c$ as much as possible, and we do this by taking the least positive value of $ha+kb$. Let's call this $f$, so we have $c|f$.



Now let's think about how this relates to $a$. We have $f\le a$ since $1a+0b=a$ and so we can divide $a$ by $f$ to get $a=mf+n$ with $0\le n\lt f\le a$. But $n=a-mf=(1-mh)a-mkb$ can't be a positive value, so must be zero. We therefore have $f|a$. Likewise $f|b$.



We now know that any common factor of $a$ and $b$ divides $f$, and also that $f$ is a common factor.
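The "least positive value of $ha+kb$" characterization can be explored by brute force; on small examples it coincides with $\gcd(a,b)$ (the search `bound` is just a demo limit, not part of the mathematics):

```python
from math import gcd

def least_positive_combination(a, b, bound=50):
    """Brute-force the least positive value of h*a + k*b over a small
    window of integer coefficients h, k in [-bound, bound]."""
    best = None
    for h in range(-bound, bound + 1):
        for k in range(-bound, bound + 1):
            v = h * a + k * b
            if v > 0 and (best is None or v < best):
                best = v
    return best

for a, b in [(12, 18), (35, 21), (8, 5)]:
    assert least_positive_combination(a, b) == gcd(a, b)
print("least positive h*a + k*b equals gcd(a, b) on these examples")
```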






The tricky part of the proof, which you can do by uniqueness of prime factorisation as well, is to show that any common factor divides the highest common factor. Note that proving uniqueness of prime factorisation uses the additive properties of the integers and doesn't just depend on multiplicative properties.




So you will find that, at least implicit within your argument is an appeal to the additive properties of the integers.



This is quite a subtle point, and is the reason why the most efficient proofs are written the way they are. I agree they can seem a bit like magic, but they can also be motivated, as I have tried to illustrate.


Wednesday 17 October 2018

congruences - Solve $3y + 2 \equiv 3\ (5)$ in math exercise



I'm stuck at the ending part of a math exercise on congruences.



I must solve the following system of congruences $S$:



$x \equiv 2\ (3)$



$x \equiv 3\ (5)$




I was first asked to give the remainders of the division of $3y +2$ by 5, with knowing the remainders of the division of $y$ by 5.



Here's what I did:



-If $y \equiv 0\ (5)$, then $3y +2 \equiv 2\ (5)$



-If $y \equiv 1\ (5)$, then $3y +2 \equiv 0\ (5)$



-If $y \equiv 2\ (5)$, then $3y +2 \equiv 3\ (5)$




-If $y \equiv 3\ (5)$, then $3y +2 \equiv 1\ (5)$



-If $y \equiv 4\ (5)$, then $3y +2 \equiv 4\ (5)$



Here's the part of my exercise I'm stuck with:



I must find the solutions of $3y +2 \equiv 3\ (5)$, knowing that each solution $x$ is of the form $x = 15z + 8$ with $z$ an integer. Then, I will need to prove that each integer $x$ of the form $x = 15z + 8$ is a solution of the system $S$.



I really don't know what to do with this part of my exercise, what shall I do?




Thanks for your answers.


Answer



Hint: $15z+8\equiv 8 \pmod 3$ and $15z+8\equiv 8 \pmod 5$, because $3\mid 15$ and $5\mid 15$.
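Both parts of the exercise can be verified mechanically: $3y+2\equiv 3\ (5)$ forces $y\equiv 2\ (5)$ (matching the case table in the question), and every $x=15z+8$ satisfies the system. A sketch:

```python
# Solutions of 3y + 2 ≡ 3 (mod 5) among the residues 0..4:
ys = [y for y in range(5) if (3 * y + 2) % 5 == 3]
print(ys)  # [2]

# Every x = 15z + 8 solves the system x ≡ 2 (mod 3), x ≡ 3 (mod 5):
for z in range(-10, 11):
    x = 15 * z + 8
    assert x % 3 == 2 and x % 5 == 3
print("all x = 15z + 8 solve the system")
```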


Tuesday 16 October 2018

Series, computing the limit $\lim\limits_{n\to\infty}\frac{1+2+\cdots+n}{n^3}$




How to compute the following limit? The series is given by



$$\lim_{n\rightarrow\infty}\frac{1}{n^3}(1+2+\cdots+n)$$



Thanks for your help...


Answer



Just for fun:



$$\lim_{n\rightarrow\infty}\frac{1}{n}\left(\frac{1}{n}\frac{1}{n}+\frac{2}{n}\frac{1}{n}+\ldots+\frac{n}{n}\frac{1}{n}\right)=\lim_{n\rightarrow\infty}\frac{1}{n}\times\int_{0}^{1}x\,\text{d}x=0$$


real analysis - How discontinuous can a derivative be?



There is a well-known result in elementary analysis due to Darboux which says if $f$ is a differentiable function then $f'$ satisfies the intermediate value property. To my knowledge, not many "highly" discontinuous Darboux functions are known--the only one I am aware of being the Conway base 13 function--and few (none?) of these are derivatives of differentiable functions. In fact they generally cannot be since an application of Baire's theorem gives that the set of continuity points of the derivative is dense $G_\delta$.



Is it known how sharp that last result is? Are there known Darboux functions which are derivatives and are discontinuous on "large" sets in some appropriate sense?


Answer



What follows is taken (mostly) from more extensive discussions in the following sci.math posts:




http://groups.google.com/group/sci.math/msg/814be41b1ea8c024 [23 January 2000]



http://groups.google.com/group/sci.math/msg/3ea26975d010711f [6 November 2006]



http://groups.google.com/group/sci.math/msg/05dbc0ee4c69898e [20 December 2006]



Note: The term interval is restricted to nondegenerate intervals (i.e. intervals containing more than one point).



The continuity set of a derivative on an open interval $J$ is dense in $J.$ In fact, the continuity set has cardinality $c$ in every subinterval of $J.$ On the other hand, the discontinuity set $D$ of a derivative can have the following properties:





  1. $D$ can be dense in $\mathbb R$.


  2. $D$ can have cardinality $c$ in every interval.


  3. $D$ can have positive measure. (Hence, the function can fail to be Riemann integrable.)


  4. $D$ can have positive measure in every interval.


  5. $D$ can have full measure in every interval (i.e. measure zero complement).


  6. $D$ can have a Hausdorff dimension zero complement.


  7. $D$ can have an $h$-Hausdorff measure zero complement for any specified Hausdorff measure function $h.$





More precisely, a subset $D$ of $\mathbb R$ can be the discontinuity set for some derivative if and only if $D$ is an $F_{\sigma}$ first category (i.e. an $F_{\sigma}$ meager) subset of $\mathbb R.$



This characterization of the discontinuity set of a derivative can be found in the following references: Benedetto [1] (Chapter 1.3.2, Proposition, 1.10, p. 30); Bruckner [2] (Chapter 3, Section 2, Theorem 2.1, p. 34); Bruckner/Leonard [3] (Theorem at bottom of p. 27); Goffman [5] (Chapter 9, Exercise 2.3, p. 120 states the result); Klippert/Williams [7].



Regarding this characterization of the discontinuity set of a derivative, Bruckner and Leonard [3] (bottom of p. 27) wrote the following in 1966: Although we imagine that this theorem is known, we have been unable to find a reference. I have found the result stated in Goffman's 1953 text [5], but nowhere else prior to 1966 (including Goffman's Ph.D. Dissertation).



Interestingly, in a certain sense most derivatives have the property that $D$ is large in all of the ways listed above (#1 through #7).



In 1977 Cliff Weil [8] published a proof that, in the space of derivatives with the sup norm, all but a first category set of such functions are discontinuous almost everywhere (in the sense of Lebesgue measure). When Weil's result is paired with the fact that derivatives (being Baire $1$ functions) are continuous almost everywhere in the sense of Baire category, we get the following:




(A) Every derivative is continuous at the Baire-typical point.



(B) The Baire-typical derivative is not continuous at the Lebesgue-typical point.



Note that Weil's result is stronger than simply saying that the Baire-typical derivative fails to be Riemann integrable (i.e. $D$ has positive Lebesgue measure), or even stronger than saying that the Baire-typical derivative fails to be Riemann integrable on every interval. Note also that, for each of these Baire-typical derivatives, $\{D, \; {\mathbb R} - D\}$ gives a partition of $\mathbb R$ into a first category set and a Lebesgue measure zero set.



In 1984 Bruckner/Petruska [4] (Theorem 2.4) strengthened Weil's result by proving the following: Given any finite Borel measure $\mu,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has $\mu$-measure zero.



In 1993 Kirchheim [6] strengthened Weil's result by proving the following: Given any Hausdorff measure function $h,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has Hausdorff $h$-measure zero.




[1] John J. Benedetto, Real Variable and Integration With Historical Notes, Mathematische Leitfäden. Stuttgart: B. G. Teubner, 1976, 278 pages. [MR 58 #28328; Zbl 336.26001]



[2] Andrew M. Bruckner, Differentiation of Real Functions, 2nd edition, CRM Monograph Series #5, American Mathematical Society, 1994, xii + 195 pages. [The first edition was published in 1978 as Springer-Verlag's Lecture Notes in Mathematics #659. The second edition is essentially unchanged from the first edition with the exception of a new chapter on recent developments (23 pages) and 94 additional bibliographic items.] [MR 94m:26001; Zbl 796.26001]



[3] Andrew M. Bruckner and John L. Leonard, Derivatives, American Mathematical Monthly 73 #4 (April 1966) [Part II: Papers in Analysis, Herbert Ellsworth Slaught Memorial Papers #11], 24-56. [MR 33 #5797; Zbl 138.27805]



[4] Andrew M. Bruckner and György Petruska, Some typical results on bounded Baire $1$ functions, Acta Mathematica Hungarica 43 (1984), 325-333. [MR 85h:26004; Zbl 542.26004]



[5] Casper Goffman, Real Functions, Prindle, Weber & Schmidt, 1953/1967, x + 261 pages. [MR 14,855e; Zbl 53.22502]




[6] Bernd Kirchheim, Some further typical results on bounded Baire one functions, Acta Mathematica Hungarica 62 (1993), 119-129. [MR 94k:26008; Zbl 786.26002]



[7] John Clayton Klippert and Geoffrey Williams, On the existence of a derivative continuous on a $G_{\delta}$, International Journal of Mathematical Education in Science and Technology 35 (2004), 91-99.



[8] Clifford Weil, The space of bounded derivatives, Real Analysis Exchange 3 (1977-78), 38-41. [Zbl 377.26005]


calculus - Proving that ${n \over 2^n}$ converges to $0$




I'm completely clueless on this one.
I can easily calculate the limit using L'Hopital's rule, but proving that the sequence converges to $0$ is far more tricky.



$$a_n= {n \over 2^n}$$




Any help?


Answer



Using the fact that $2^n>n^2$ for $n > 4$, we have: $$0 \leq a_n < \frac{n}{n^2}=\frac{1}{n}.$$



Hence, $\displaystyle \lim_{n \to \infty}a_n=0.$
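The two inequalities used in the answer are easy to verify numerically for a range of $n$:

```python
# The answer's bound: for n > 4, 2**n > n**2, hence 0 <= n / 2**n < 1/n.
for n in range(5, 60):
    assert 2 ** n > n ** 2
    assert n / 2 ** n < 1 / n
print(50 / 2 ** 50)  # roughly 4.4e-14, already astronomically small
```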


sequences and series - An expression for the sum $\sum\limits_{k=1}^{n-1} k \, (n-k)^2$

I really don't know how to find the sum of the series: $$\sum\limits _{k=1}^{n-1} k \, (n-k)^2 = 1(n-1)^2+2(n-2)^2+3(n-3)^2+\dots+(n-1)1^2.$$



My attempt:
I tried the old-school approach for finding the sum of an arithmetico-geometric progression, but was unable to make it work.



Any help is appreciated!
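Numerically the sum appears to match the closed form $\frac{n^2(n^2-1)}{12}$; this is a conjecture one would then prove by expanding $k(n-k)^2=n^2k-2nk^2+k^3$ into standard power sums. A sketch:

```python
def s(n):
    """Direct evaluation of sum_{k=1}^{n-1} k * (n-k)^2."""
    return sum(k * (n - k) ** 2 for k in range(1, n))

# Exploratory check against the conjectured closed form n^2 (n^2 - 1) / 12:
for n in range(1, 200):
    assert s(n) == n * n * (n * n - 1) // 12
print(s(10))  # 825
```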

elementary number theory - Calculate $2^{5104} \bmod 10$ using mental arithmetic

I am practising interview for Jane Street's trader internship and I found the following question.




Question: Calculate $2^{5104} \bmod 10$ using mental arithmetic.





I know that $2^5 \equiv 2 \pmod{10}.$
So,
\begin{align*}
2^{5104} & = (2^5)^{1020} 2^4 \\
& \equiv 2^{1020}2^4 \\
& = (2^5)^{204}2^4 \\
& \equiv(2^5)^{40}2^8 \\
& \equiv (2^5)^8 2^8 \\

& \equiv (2^5)^3 2 \\
& \equiv 6 \bmod 10.
\end{align*}



However, I find the calculations above very taxing if I use mental arithmetic. I believe there should be a faster way to answer the question but I am not able to find one.
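A shorter mental route: the last digits of powers of $2$ cycle with period $4$ ($2, 4, 8, 6$ for exponents $\geq 1$), so only $5104 \bmod 4$ matters, and that can be read off the last two digits, $04$. A sketch:

```python
# The last digits of 2^k cycle with period 4: 2, 4, 8, 6 (for k >= 1).
cycle = [2 ** k % 10 for k in range(1, 5)]
print(cycle)  # [2, 4, 8, 6]

# 5104 ≡ 0 (mod 4), so 2^5104 ends in the same digit as 2^4, namely 6:
assert 5104 % 4 == 0
print(pow(2, 5104, 10))  # 6
```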

Monday 15 October 2018

discrete mathematics - Probability Mass Function: Geometric Distribution over an interval



Equation: Given a random variable $Z$, let $Z \sim \mathrm{Geometric}(\theta)$. Find $P(5 \leq Z \leq 9)$.




Attempt 1:



Try something like $P(5 \leq Z \leq 9) = P(Z = 9) - P(Z = 5) = \theta[(1-\theta)^9-(1-\theta)^5]$



I know this is a valid method for continuous distributions, but I wasn't sure if it would work the same way for a discrete function, like geometric. So I tried to compute it manually in attempt #2.



Attempt 2:



$P(5 \leq Z \leq 9) = P(Z = 5) + P(Z = 6) + P(Z = 7) + P(Z = 8) + P(Z= 9) = $Some answer




I know this one works, but I feel it's a bit too much in terms of computation. Like, if I was given an equation that asked for an interval from $3$ to $1000$, then there's no way I could manually compute that by hand.



Which is why I was wondering if there was a more efficient method to calculate the geometric distribution over a given interval? I tried to put everything into a summation and derive an equation, but I get stuck after pulling the theta out of the equation such that it's



$\sum\limits_{i=0}^n (1- \theta)^i\theta = $ $\theta \sum\limits_{i=0}^n (1- \theta)^i $



Anybody have a better solution?


Answer



Your original solution (first attempt) is not quite correct as written. Instead, you should be calculating $P(Z \geq 5) - P(Z > 9)$. Intuitively, this is "the chance that $Z$ is at least $5$, but not more than $9$." For a geometric distribution, you could write this as $P(Z > 4) - P(Z > 9)$. But it is actually not too difficult to compute $P(Z > x)$ for any geometric distribution; this holds iff the first $x$ trials are failures, so it happens with probability $P(Z > x) = (1 - \theta)^x$ (assuming $\theta$ is the probability of success in a single trial). In general, this means that, for geometric $Z$, we have:

$$
\begin{align*}
P(a \leq Z \leq b) &= P(Z \geq a) - P(Z > b) \\
&= P(Z > a-1) - P(Z > b) \\
&= (1-\theta)^{a-1} - (1 - \theta)^b
\end{align*}
$$



Also note that your summation (from the second attempt) works as well, once simplified. To find $P(a \leq Z \leq b)$, we note that, from the geometric distribution, this is exactly equal to
$$\sum_{k=a}^b \theta (1 - \theta)^{k-1}$$

But this is a geometric series with initial term $\theta(1 - \theta)^{a-1}$ and common ratio $1 - \theta$. The sum of such a finite geometric series with $b-a + 1$ terms is:



$$
\begin{align*}
\frac{\theta(1-\theta)^{a-1} \cdot \left( 1 - (1 - \theta)^{b-a + 1}\right)}{1 - (1 - \theta)} &= (1-\theta)^{a-1} \cdot \left( 1 - (1 - \theta)^{b-a + 1}\right) \\
&= (1-\theta)^{a-1} - (1 - \theta)^b
\end{align*}
$$

So the two methods agree.
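As a quick numeric sanity check (my own snippet, not part of the original answer), the closed form $(1-\theta)^{a-1}-(1-\theta)^b$ can be compared against the brute-force sum from the second attempt:

```python
# Compare the closed form for P(a <= Z <= b) against the brute-force PMF sum,
# for a geometric distribution with success probability theta and support
# starting at 1, i.e. P(Z = k) = theta * (1 - theta)**(k - 1).

def geom_interval_closed(theta, a, b):
    # Closed form derived in the answer above.
    return (1 - theta) ** (a - 1) - (1 - theta) ** b

def geom_interval_bruteforce(theta, a, b):
    # Direct sum of the PMF over k = a, ..., b.
    return sum(theta * (1 - theta) ** (k - 1) for k in range(a, b + 1))

theta = 0.3
print(geom_interval_closed(theta, 5, 9))      # ~0.19975
print(geom_interval_bruteforce(theta, 5, 9))  # same value
```

The value of $\theta$ here is arbitrary; any $0 < \theta < 1$ gives agreement to floating-point precision.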


Sunday 14 October 2018

sequences and series - Intuition behind $\zeta(-1) = \frac{-1}{12}$

When I first watched Numberphile's video on $1+2+3+\cdots = \frac{-1}{12}$, I thought the sum actually equalled $\frac{-1}{12}$ without really understanding it.



Recently I read some Wolfram Alpha pages and watched some videos, and now I understand (I think) that $\frac{-1}{12}$ is just a value associated to the sum of all natural numbers when you analytically continue the Riemann zeta function. 3Blue1Brown's video really helped. What I don't really understand is why it gives the value $\frac{-1}{12}$ specifically. The value $\frac{-1}{12}$ seems arbitrary to me and I don't see any connection to the sum of all natural numbers. Is there any intuition behind why you get $\frac{-1}{12}$ when you analytically continue the zeta function to $\zeta(-1)$?



EDIT (just to make my question a little clearer):
I'll use an example here. Suppose you somehow didn't know about radians and never associated trig functions like sine to $\pi$, but you knew about Maclaurin expansions. By plugging $x=\pi$ into the series expansion of sine, you would get $\sin(\pi) = 0$. You might have understood the process by which you get the value 0, the Maclaurin expansion, but you wouldn't really know the intuition behind this connection between $\pi$ and trig functions, namely the unit circle, which is essential in almost every branch of number theory.



Back to this question, I understand the analytic continuation of the zeta function and its continued form for $s < 0$ $$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$ and how when you plug in s = -1, things simplify down to $\frac{-1}{12}$ but I don't see any connection between the fraction and the infinite sum. I'm sure there is a beautiful connection between them, like the one between trig functions and $\pi$, but couldn't find any useful resources on the internet. Hope this clarified things.

probability - Independent Exponential Random Variables




I am currently trying to figure out a problem and it is using notation that I have never seen before so I am pretty stuck, any suggestions would be greatly appreciated!



Let $X, Y, Z$ be independent exponential random variables with the same mean, $\sigma$. Find the value of $\sigma$ so that $\Pr[\max(X,Y,Z)>1]=0.05$.



Any help with how to solve this problem or leading down the right path would be awesome as I am not to sure where to even start!



EDIT: So I have followed what André Nicolas has said and got a $\sigma$ of $0.25$. However, I am still not sure where he got this formula from: the probability they are all $\le a$ is $(1-e^{-a/\sigma})^3$.



I can't find anything like this in my notes or textbook; could anyone give a reference for where he got this?



Answer



A start: The probability that $\max(X,Y,Z)\gt 1$ is $1$ minus the probability they are all $\le 1$.



The probability they are all $\le a$ is $(1-e^{-a/\sigma})^3$.
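Putting the two facts together, $\Pr[\max(X,Y,Z)>1]=1-(1-e^{-1/\sigma})^3=0.05$, which can be solved for $\sigma$ in closed form. A small sketch of the calculation (my own check, not from the answer):

```python
import math

# Each variable is Exponential with mean sigma, so its CDF is F(a) = 1 - exp(-a / sigma).
# By independence, P(max(X, Y, Z) <= a) = F(a)**3, hence
# P(max > 1) = 1 - (1 - exp(-1 / sigma))**3 = 0.05 solves to:
sigma = -1 / math.log(1 - 0.95 ** (1 / 3))
print(round(sigma, 4))  # ~0.2453, i.e. roughly the 0.25 quoted in the edit

# Sanity-check the original condition:
p_max_gt_1 = 1 - (1 - math.exp(-1 / sigma)) ** 3
print(round(p_max_gt_1, 6))  # 0.05
```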


Prove using induction that $n^3 − n$ is divisible by 6 whenever $n > 0$.



Prove using induction that $n^3 − n$ is divisible by $6$ whenever $n > 0$




My attempt:



Base step: For $n=1$



$1^3 - 1 = 0$.



$0$ which is divisible by $6$.
Thus, $n= 1$ is true.



Assumption step: Let $n=k$




$k^3-k$



Inductive step: $f(k+1)-f(k)$



$(k+1)^3-(k+1)-(k^3-k)$



This is where I am stuck. How do I go forward to prove using induction that $n^3 − n$ is divisible by $6$?


Answer



In your assumption step, you need to assume the statement is true for $n=k$, i.e. $k^3-k$ is divisible by $6$.




In the induction step, expand and simplify $(k+1)^3-(k+1)$:
$$\begin{aligned}(k+1)^3-(k+1)&=k^3+3k^2+3k+1-k-1\\&=k^3-k+3k^2+3k\\&=(k^3-k)+3k^2+3k\\&=(k^3-k)+3k(k+1)\end{aligned}$$



Note that $k^3-k$ is divisible by $6$ by the inductive hypothesis (the assumption). $k$ and $k+1$ are consecutive integers, so their product must be even. Hence $3k(k+1)$ is also divisible by $6$. Thus, whenever $k^3-k$ is divisible by $6$, we also have that $(k+1)^3-(k+1)$ is divisible by $6$.
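A quick empirical check of the claim (my own snippet, not part of the induction proof):

```python
# Check that n**3 - n is divisible by 6 for every positive n in a range,
# matching the induction argument above.
assert all((n ** 3 - n) % 6 == 0 for n in range(1, 1000))
print("n^3 - n is divisible by 6 for n = 1..999")
```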


elementary number theory - $gcd(a,b) = gcd(a, a+2b)$ where $a$ is an odd integer

Let $a$ and $b$ be integers where $a$ is odd. Prove that $\gcd(a,b) = \gcd(a, a+2b)$.



I know from $\gcd$ and divisibility of integer combinations that $\gcd(a,b)=d$
and that $d\mid a$ and $d\mid(a+2b)$, therefore $d$ is a common divisor of $a$ and $a+2b$. I'm having trouble using the fact that $a$ is odd, and with showing that $d$ is the greatest common divisor. Thanks

sequences and series - Why does $\sum_{n = 0}^\infty \frac{n}{2^n}$ converge to 2?

Apparently,



$$
\sum_{n = 0}^\infty \frac{n}{2^n}
$$



converges to 2. I'm trying to figure out why. I've tried viewing it as a geometric series, but it's not quite a geometric series since the numerator increases by 1 every term.
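One standard route is to differentiate the geometric series $\sum_{n\ge 0} x^n = \frac{1}{1-x}$ and multiply by $x$, giving $\sum_{n\ge 0} n x^n = \frac{x}{(1-x)^2}$, which equals $2$ at $x=\frac12$. A quick numeric check (my own snippet):

```python
# Partial sums of sum_{n>=0} n / 2**n approach 2; the closed form comes from
# differentiating the geometric series: sum n*x**n = x / (1 - x)**2 at x = 1/2.
partial = sum(n / 2 ** n for n in range(60))
closed = 0.5 / (1 - 0.5) ** 2
print(partial, closed)  # both ~2.0
```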

Saturday 13 October 2018

elementary number theory - Write 100 as the sum of two positive integers




Write $100$ as the sum of two positive integers, one of them being a multiple of $7$, while the other is a multiple of $11$.




Since $100$ is not a big number, I followed the straightforward reasoning of writing all multiples up to $100$ of either $11$ or $7$, and then finding the complement that is also a multiple of the other. So then

$100 = 44 + 56 = 4 \times 11 + 8 \times 7$.



But is it the smart way of doing it? Is it the way I was supposed to solve it? I'm thinking here about a situation with a really large number that turns my plug-in method sort of unwise.


Answer



From Bezout's Lemma, note that since $\gcd(7,11) = 1$, which divides $100$, there exist $x,y \in \mathbb{Z}$ such that $7x+11y=100$.



A candidate solution is $(x,y) = (8,4)$.



The full family of solutions is given by $(x,y) = (8+11m,4-7m)$, where $m \in \mathbb{Z}$. Since we are looking for positive integers as solutions, we need $8+11m > 0$ and $4-7m>0$, which gives us $-\frac{8}{11} < m < \frac{4}{7}$. The only integer in this range is $m=0$, which recovers $(x,y)=(8,4)$.





If you do not like guessing a candidate solution, a more algorithmic procedure is to use Euclid's algorithm to obtain a solution to $7a+11b=1$, as follows.



We have
\begin{align}
11 & = 7 \cdot (1) + 4 \implies 4 = 11 - 7 \cdot (1)\\
7 & = 4 \cdot (1) + 3 \implies 3 = 7 - 4 \cdot (1) \implies 3 = 7 - (11-7\cdot (1))\cdot (1) = 2\cdot 7 - 11\\
4 & = 3 \cdot (1) + 1 \implies 1 = 4 - 3 \cdot (1) \implies 1 = (11-7 \cdot(1)) - (2\cdot 7 - 11) \cdot 1 = 11 \cdot 2-7 \cdot 3
\end{align}

This means the solution to $7a+11b=1$ from Euclid's algorithm is $(-3,2)$. Hence, a candidate solution to $7x+11y=100$ is $(-300,200)$. Now all possible solutions are given by $(x,y) = (-300+11n,200-7n)$. Since we need $x$ and $y$ to be positive, we need $-300+11n > 0$ and $200-7n > 0$, which gives us
$$\dfrac{300}{11} < n < \dfrac{200}7 \implies 27 \dfrac3{11} < n < 28 \dfrac47$$
The only integer in this range is $n=28$, which again gives $(x,y) = (8,4)$.
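The uniqueness claimed above can be confirmed by brute force (my own snippet, not from the original answer):

```python
# Enumerate all positive-integer solutions of 7x + 11y = 100 by brute force;
# Bezout's lemma plus the range argument above predicts exactly (x, y) = (8, 4).
solutions = [(x, y) for x in range(1, 100) for y in range(1, 100)
             if 7 * x + 11 * y == 100]
print(solutions)  # [(8, 4)]
```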


calculus - Show that $\int_{0}^\infty \frac{1}{(1+x-u)(\pi^2+\ln^2(x))}dx=\frac{1}{u}+\frac{1}{\ln(1-u)}$

I am trying to show that $$I=\int_{0}^\infty \frac{1}{(1+x-u)(\pi^2+\ln^2(x))}dx=\frac{1}{u}+\frac{1}{\ln(1-u)}$$



My first thought was to follow the substitution $m=\frac{1}{x}$, which led to

$$I=1+(u-1)\int_{-\infty}^\infty \frac{1}{(e^{-m}+(1-u))(\pi^2+m^2)}dm $$
But that unfortunately did not seem to go anywhere. I originally saw this integral in the Laplace transform section of Jack D'Aurizio's notes, but I, for the life of me, could not use the Laplace transform to evaluate the integral. I'm also aware that this is very similar to the integral involving the Gregory coefficients, but I want to avoid those if I can. I've attempted differentiation under the integral sign but couldn't find any reasonable way of doing it. I've tried series, but that did not work either. Any push in the right direction would be much appreciated.

Friday 12 October 2018

radicals - How to prove that $\sqrt 3$ is an irrational number?








I know how to prove $\sqrt 2$ is an irrational number. Can someone show me why $\sqrt 3$ is an irrational number?

integration - Absolutely continuous and differentiable almost everywhere

I've read the following claim and I wonder if someone can direct me to or provide me with a proof of it:




"A strongly absolutely continuous function which is differentiable
almost everywhere is the indefinite integral of strongly integrable
derivative"




It was in the context of Bochner integrable functions so I'm assuming that "strongly" means with respect to the norm.




Thanks!

Thursday 11 October 2018

real analysis - Completion of rational numbers via Cauchy sequences



Can anyone recommend a good self-contained reference for completion of rationals to get reals using Cauchy sequences?


Answer



Here is a link that came up in a Google search. You can also search for "construction of real numbers". Since this is standard material, you can find it in many books on analysis.




http://www.math.ucsd.edu/~tkemp/140A/Construction.of.R.pdf


algebra precalculus - 'Plus' Operator analog of the factorial function?







Is there a similar function for the addition operator as there is the factorial function for the multiplication operator?



For factorials it is 5! = 5*4*3*2*1, is there a function that would do 5+4+3+2+1?




Thanks,

Wednesday 10 October 2018

real analysis - Let $U\subset \mathbb{R}^n$ (open set) and $f:U\longrightarrow V$ a homeomorphism; can we say that $V$ is an open set in $\mathbb{R}^n$?



Let $U\subset \mathbb{R}^n$ be an open set, $V\subset \mathbb{R}^n$, and $f:U\longrightarrow V$ a homeomorphism. Can we then say that $V$ is an open set in $\mathbb{R}^n$?




Any hints would be appreciated.


Answer



Yes, this result goes by the name Invariance of Domain and is due to Brouwer. It's a non-trivial result.


Why are the complex numbers unordered?

first post on here!




I have been learning about complex numbers, and how they do not satisfy the trichotomy like real numbers do.



For example, there is no way to say $i<3$, $i>3$, or $i=3$.



But consider this: if $x<y$, then $\sqrt{x} < \sqrt{y}$. So let $x=-1$ and $y=9$.
Then $-1<9$, so $\sqrt {-1} < \sqrt 9$. In other words, $i<3$.



Can someone explain the flaw in this reasoning?




I imagine that it has something to do with $\sqrt{-1}$ not being real, so the statement about $x$ and $y$ doesn't apply, but I can't think of a way to say why it doesn't. It seems like it should, since $-1$ is real, even if its square root is not.



Thanks!

real analysis - Show that if $b_k\uparrow \infty$ and $\Sigma_{k = 1}^\infty a_kb_k$ converges, then $b_m \Sigma_{k = m}^\infty a_k \to 0$ as $m \to \infty$.

Suppose that $\Sigma_{k=1}^\infty a_k$ converges. Prove that if $b_k\uparrow \infty$ and $\Sigma_{k = 1}^\infty a_kb_k$ converges, then



$b_m \Sigma_{k = m}^\infty a_k → 0$ as $m → \infty$.



Attempt: Suppose $\Sigma_{k=1}^\infty a_k$ converges, and $\Sigma_{k = 1}^\infty a_kb_k$ does as well. Then we know that a series converges if and only if its sequence of partial sums converges. If we write $\Sigma a_k = \Sigma \frac{a_kb_k}{b_k}$, then because $b_k$ is increasing, $\frac{1}{b_k}$ decreases to zero. So we almost have a telescoping series.



Then let $c_k = \Sigma_{ j = k}^{\infty} a_jb_j$. Then $\Sigma_{k = n}^m a_k = \Sigma \frac{a_kb_k}{b_k}= \Sigma_{k = n}^m \frac{c_k - c_{k+1}}{b_k}$



I don't know how to continue. I am having trouble applying Abel's Formula and the subscripts are confusing. Can someone please help me? I am supposed to use Abel's Formula. Thank you very much.




Abel's Formula: Let $a_k,b_k$ be real sequences, and for each pair of integers $n \geq m \geq 1$ set $A_{n,m} = \Sigma_{k =m}^n a_k$. Then
$\Sigma_{k = m}^n a_kb_k = A_{n,m}b_n - \Sigma_{k = m}^{n-1} A_{k,m}(b_{k+1} -b_k)$

Tuesday 9 October 2018

convergence divergence - What is the $I_{0}(x)$ function?



While trying to calculate the following infinite sum:



$$ \sum_{k=0}^{\infty} \frac {4^k}{(k!)^{2}}$$




I got the result: $I_{0}(4) = 11.301...$



I've never encountered this function before ($ I_{0}(x) $), can someone please describe it and explain why the above infinite sum converges to an output of this function?



I expected something having to do with the exponential function since $$ \sum_{k=0}^{\infty} \frac {\mu^k}{k!} = e^\mu $$


Answer



The modified Bessel function of the first kind has a power series expansion
$$ I_{\alpha}(x)=\sum_{k=0}^{\infty}\frac{1}{k!\Gamma(k+\alpha+1)}\Big(\frac{x}{2}\Big)^{2k+\alpha} $$




Taking $\alpha=0$ and using $\Gamma(k+1)=k!$, and then setting $x=4$, we get
$$ I_0(4)=\sum_{k=0}^{\infty}\frac{4^k}{(k!)^2} $$
which is your sum.
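A quick numeric check of the partial sums (my own snippet) reproduces the quoted value $11.301\ldots$:

```python
import math

# Partial sum of I_0(4) = sum_{k>=0} 4**k / (k!)**2; the terms shrink
# super-exponentially, so a few dozen terms already give full precision.
s = sum(4 ** k / math.factorial(k) ** 2 for k in range(40))
print(s)  # ~11.30192, matching the I_0(4) = 11.301... quoted in the question
```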


calculus - Is there a name for a function with the exponential property $f(x+y)=f(x) \cdot f(y)$?



I was wondering if there is a name for a function that satisfies the conditions



$f:\mathbb{R} \to \mathbb{R}$ and $f(x+y)=f(x) \cdot f(y)$?



Thanks and regards!


Answer




If $f(x_0)= 0$ for some $x_0\in\mathbb{R}$, then for all $x\in\mathbb{R}$, $f(x)=f(x_0+(x-x_0))=f(x_0)\cdot f(x-x_0)=0$. Therefore, either $f$ is identically $0$ or never $0$. If $f$ is not $0$, then it is a homomorphism from the group $\mathbb{R}$ with addition to the group $\mathbb{R}\setminus\{0\}$ with multiplication. If $f(x)<0$ for some $x$, then $f(\frac{x}{2})^2\lt 0$, which is impossible, so $f$ is actually a homomorphism into the positive real numbers with multiplication. By composing with the isomorphism $\log:(0,\infty)\to\mathbb{R}$, such $f$ can be analyzed by first analyzing all additive maps on $\mathbb{R}$. Assuming continuity, these all have the form $x\mapsto cx$ for some $c\in \mathbb{R}$, and hence $f(x)=\exp(cx)$. Assuming the axiom of choice, there are discontinuous additive functions on $\mathbb{R}$ that can be constructed using a Hamel basis for $\mathbb{R}$ over $\mathbb{Q}$, and thus there are also discontinuous homomorphisms from $\mathbb{R}$ to $(0,\infty)$.



So for an actual answer to the question: Yes, they are called (the zero map or) homomorphisms from the additive group of real numbers to the multiplicative group of positive real numbers.
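The continuous solutions $f(x)=\exp(cx)$ described above can be spot-checked numerically (my own snippet; $c$ is an arbitrary constant chosen for the check):

```python
import math

# Continuous solutions of f(x + y) = f(x) * f(y) have the form f(x) = exp(c * x).
c = 1.7  # arbitrary constant for the check
f = lambda x: math.exp(c * x)

for x, y in [(0.3, 1.2), (-2.0, 0.5), (4.0, 4.0)]:
    # relative tolerance, since f can be large
    assert abs(f(x + y) - f(x) * f(y)) < 1e-9 * f(x + y)
print("f(x+y) = f(x)*f(y) holds for f(x) = exp(c*x)")
```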


measure theory - If $\mu(A)<\delta$, then $\sup_{f\in\mathcal{F}}\int_A|f|d\mu<\epsilon$




Let $(X,\Sigma,\mu)$ be a finite measure space. Let $\mathcal{F}$ be a family of measurable functions $f:X\to\mathbb{R}$. Prove that if $$\lim_{t\to\infty}\left(\sup_{f\in\mathcal{F}}\int_{\{x\in X:|f(x)|\ge t\}}|f|d\mu \right)=0,$$




then $$\sup_{f\in\mathcal{F}}\int_X|f|d\mu<\infty,$$



and for all $\epsilon >0$ there exists $\delta >0$ such that: $$A\in\Sigma,\mu(A)<\delta\Longrightarrow \sup_{f\in\mathcal{F}}\int_A|f|d\mu<\epsilon.$$




For the first part.



Let $t>0$ such that: $\displaystyle\sup_{f\in\mathcal{F}}\int_{\{x\in X:|f(x)|\ge t\}}|f|d\mu<1$.



Fix $f\in\mathcal{F}$. Then $$\displaystyle\int_X|f|d\mu=\int_{\{|f|\ge t\}}|f|d\mu+\int_{\{|f|


And $1+t\mu(X)$ does not depend of $f$, so we get $\sup_{f\in\mathcal{F}}\int_X|f|d\mu<\infty$.



Is that correct?



I don't know how to do the second part. Could it be true that $v(A):=\sup_{f\in\mathcal{F}}\int_A|f|d\mu$ is a finite measure? I wanted to try something similar to that known result when $\mathcal{F}$ is just one function (some call it absolutely continuous of the measure $v$, I think).



Any hint? Thank you.


Answer



Your first part is correct.




For the second part, try to bound $\int_A |f|\,d\mu$ similarly to how you bounded $\int_X |f|\,d\mu$ in the first part:



$$\int_A |f|\,d\mu=\int_{A\cap \{|f|< t\}}|f|\,d\mu+\int_{A\cap \{|f|\ge t\}}|f|\,d\mu\le t\mu(A)+\sup_{f\in\mathcal{F}}\int_{\{|f|\ge t\}}|f|\,d\mu.$$
Then try to make the right side as small as possible, by choosing $t$ and $\delta$ appropriately.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...