Friday 30 September 2016

real analysis - How to construct a bijection from $(0, 1)$ to $[0, 1]$?











I wonder if I can cut the interval $(0,1)$ into three pieces:
$(0, \frac{1}{3})\cup(\frac{1}{3},\frac{2}{3})\cup(\frac{2}{3},1)$, in which case I'm able to map the points $\frac{1}{3}$ and $\frac{2}{3}$ to $0$ and $1$ respectively.
The remaining question is how to build a bijection from those three intervals to $(0,1)$.



Or, my method just goes in a wrong direction. Any correct approaches?


Answer



Consider the sequence
$$
\frac 12, \frac 13, \frac 14, \frac 15, \dots, \frac 1n, \dots

$$
Define $f : (0,1) \to [0,1]$ by mapping every point that is not in this sequence to itself, and by mapping the above sequence to the corresponding points of this one:
$$
0, 1, \frac 12, \frac 13, \frac 14, \dots.
$$
In other words, map $\frac 12$ to $0$, $\frac 13$ to $1$, and then map $\frac 1n$ to $\frac 1{n-2}$ for $n \ge 4$.
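Collecting the pieces, the whole map can be written as a single piecewise formula (the same construction, just stated in one place):
$$
f(x)=\begin{cases}0 & x=\tfrac 12\\ 1 & x=\tfrac 13\\ \tfrac 1{n-2} & x=\tfrac 1n,\ n\ge 4\\ x & \text{otherwise,}\end{cases}
$$
and one can check directly that every point of $[0,1]$ has exactly one preimage.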



The reason why you can map a set bijectively onto a bigger set is precisely that they are infinite, so you must exploit this fact. If you don't, you have no chance.



Now to answer your actual question: the trick you are trying to use doesn't feel relevant to me; I'm not saying there is absolutely no way it could work, because I actually know there is, since your set (the union of the three intervals) and the interval $(0,1)$ have the same cardinality. The problem with your idea is that I don't think a construction will naturally come out of it. In general, to map a set bijectively onto a bigger one you must be "moving things around", so I expect any fairly understandable construction involving your idea to be similar to the one I've shown you.




Hope that helps,


calculus - Minima of a continuous function




A function $f:\mathbb{R}\to \mathbb{R}$ is continuous and smooth in $[a,b]$. How can I prove that the minimum of the function will be either at the end points or where the derivative goes to zero?



The extreme value theorem states that there is at least one minimum and one maximum. However, I was looking for a proof that the global minimum is either at the end points or at a point where the derivative is zero.



EDIT



As pointed out in the comments, it would be more appropriate to say: if $f(c)$ is a minimum, then how do we prove that $c$ is either an end point or $f'(c)=0$?


Answer



Proof by contradiction to show that the global minimum is either at the end points or at an interior point where the derivative is zero.




Suppose it is not at the end points (say it is at $x_0 \in(a,b)$) and the derivative there is non-zero. By the definition of the derivative, for all $\epsilon > 0$ there exists $\delta > 0$ such that
$$\left|\frac{f(x)-f(x_0)}{x-x_0}-f'(x_0)\right|<\epsilon$$
for $|x-x_0|<\delta$.
In other words,
$$f'(x_0)-\epsilon <\frac{f(x)-f(x_0)}{x-x_0}<f'(x_0)+\epsilon.$$
If $f'(x_0)<0$, we can choose $\epsilon$ small enough such that
$$\frac{f(x)-f(x_0)}{x-x_0}<0,$$
or in other words, for $x_0<x<x_0+\delta$,
$$f(x)<f(x_0).$$
Similarly, for $f'(x_0)>0$, we can choose $\epsilon$ small enough such that
$$\frac{f(x)-f(x_0)}{x-x_0}>0,$$
or in other words, for $x_0-\delta<x<x_0$,
$$f(x)<f(x_0).$$
Contradiction.


Find roots of polynomial with degree $\ge 5$

During our research we came up with the problem of computing the roots of a polynomial of degree $\ge 5$ exactly. The coefficients are, except for the linear and constant term, all non-negative and there are only terms with even degree. The only thing we know is that there are formulas for degrees up to 4 and no formula for higher degrees, but is it possible to compute the roots of a higher-degree polynomial exactly, too? If so, what is the complexity?



Here is some more information:



The equation looks like $A(n) - x - a_0 = 0$ for some arbitrary integer $n$. Here, $A(0)=x$ and $A(i) = a_i \cdot A(i-1) \cdot (A(i-1)+b_i)$ for $i \in \{1,\ldots,n\}$. All the values $a_i,b_i$ are non-negative real numbers for $i \in \{1,\ldots,n\}$, whereas $a_0$ is arbitrary.

discrete mathematics - Mod of numbers with large exponents




I've read about Fermat's little theorem and generally how congruence works. But I can't figure out how to work out these two:




  • $13^{100} \bmod 7$

  • $7^{100} \bmod 13$



I've also heard of this formula:




$$a \equiv b\pmod n \Rightarrow a^k \equiv b^k \pmod n $$



But I don't see how exactly to use that here, because from $13^1 \bmod 7$ I get 6, and $13^2 \bmod 7$ is 1. I'm unclear as to which one to raise to the kth power here (I'm assuming k = 100?)



Any hints or pointers in the right direction would be great.


Answer



The formula you've heard of results from the fact that congruences are compatible with addition and multiplication.



The first power $13^{100}$ is easy: $13\equiv -1\mod 7$, so
$$13^{100}\equiv (-1)^{100}=1\pmod 7.$$




The second power uses Lil' Fermat: for any number $a\not\equiv 0\mod 13$, we have $a^{12}\equiv 1\pmod{13}$, hence
$$7^{100}\equiv 7^{100\bmod12}\equiv 7^4\equiv 10^2\equiv 9\pmod{13}$$
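Both values can be double-checked with Python's built-in three-argument pow (modular exponentiation):

print(pow(13, 100, 7))   # 1
print(pow(7, 100, 13))   # 9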


real analysis - Does a differentiable function map open intervals to open intervals?



I know that preimages of open sets are open, since differentiability implies continuity. I suspect there is a counterexample to the above, though. This is not homework, just study.


Answer



No. Consider $f(x)=x^2$ and the open interval $(-1,1)$: its image is $[0,1)$, which is not open.


calculus - How to prove $\int_0^1 \frac{1+x^{30}}{1+x^{60}}\, dx = 1 + \frac{c}{31}$, where $0 < c < 1$



This is an exercise from Apostol (p.285) that I'm having trouble with (in fact, I'm having trouble with the whole section):




Prove that $\displaystyle{\int_0^1 \frac{1+x^{30}}{1+x^{60}}\,dx = 1 + \frac{c}{31}}, \qquad \text{where } 0 < c < 1.$




This comes from the section of Exercises following Taylor expansions, and Taylor's formula with error term. It seems like the approach should involve getting this as the error term of the Taylor expansion of a function that we know something about? I'm having trouble making much more progress than that.




Updated Progress: From the definition of the error term of the Taylor expansion we have:



$$E_n (x) = \frac{1}{n!} \int_a^x (x-t)^n f^{(n+1)} (t) dt$$



Alternatively, we may express this in the Lagrange form of the error term (derived from the weighted mean value theorem of integrals):



$$E_n (x) = \frac{f^{(n+1)}(c)}{(n+1)!} (x-a)^{n+1} \qquad \text{where } a < c < x.$$



Now, let $a = 0$, $x = 1$, $n = 30$, $f^{(n+1)}(t) = \dfrac{1+t^{30}}{(1+t^{60})(1-t)^{30}}$, and set the two alternative expressions of $E_n(x)$ equal to each other:




$$\begin{align*}
\implies & \frac{1}{n!} \int_a^x (x-t)^n f^{(n+1)}(t) dt & = & \dfrac{f^{(n+1)}(c)}{(n+1)!} (x-a)^{n+1}\\
\implies & \frac{1}{30!} \int_0^1 (1-t)^{30} \dfrac{1+t^{30}}{(1+t^{60})(1-t)^{30}} & = & \dfrac{1}{31!} \dfrac{1+c^{30}}{(1+c^{60})(1-c)^{30}}\\
\implies & \int_0^1 \dfrac{1+t^{30}}{1+t^{60}} & = & \frac{1}{31} \frac{1+c^{30}}{(1+c^{60})(1-c)^{30}}.
\end{align*}
$$



I cannot seem to get from here to $=1 + \dfrac{c}{31}$.




If someone can show me pretty explicitly, step-by-step how to tackle this problem, I'd appreciate it. Dealing with the error term on Taylor expansions is giving me quite a bit of trouble, so hopefully seeing a full solution will help clear things up.


Answer



The approach of anon is better, but we can use the power series expansion of $\frac{1}{1+x^{60}}$ to conclude that for $-1\le x\le 1$,
$$\frac{1+x^{30}}{1+x^{60}}=(1+x^{30})(1-x^{60}+x^{120}-x^{180}+\cdots).$$
Multiply out, and integrate term by term. We get that our integral $I$ is given by
$$I=1+\frac{1}{31}-\frac{1}{61}-\frac{1}{91}+\frac{1}{121}+\cdots.$$
Group terms by twos, either starting at the beginning, or starting after the first term. From the two groupings, we can see that
$$1<I<1+\frac{1}{31}.$$
We can use the same idea to get a string of increasingly more precise inequalities, such as
$$1+\frac{1}{31}-\frac{1}{61}-\frac{1}{91}<I<1+\frac{1}{31}-\frac{1}{61}-\frac{1}{91}+\frac{1}{121}+\frac{1}{151}.$$
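As a numerical sanity check (not part of the proof), a midpoint Riemann sum in Python puts the integral strictly between $1$ and $1+\frac{1}{31}$:

N = 100000
I = sum((1 + ((k + 0.5)/N)**30) / (1 + ((k + 0.5)/N)**60) for k in range(N)) / N
print(I, 1 < I < 1 + 1/31)   # about 1.014, True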

Thursday 29 September 2016

real analysis - Minimizer of a multivariable function



Let $f:[0,\infty)^n\to \mathbb{R}$ be a function continuously differentiable in the interior $(0,\infty)^n$ and that $\frac{\partial}{\partial x_j}f(\textbf{x})\to -\infty$ as $x_j\to 0^+$ for $j=1,\dots,n$.



Can it be shown rigorously that when this function is minimized over a set determined by a linear equation say $\{\textbf{x}=(x_1,\dots,x_n):\sum_j a_j x_j=b, x_j\ge 0\}$, the minimizer doesn't have a $0$ entry at a position when the constraint set allows non zero entries for that position?



Thanks.


Answer



This is false. A counterexample is given by




$$f(x,y)=\sqrt[4]{(x-1)^2+y^2}-1.1\sqrt y$$



with the linear constraint $x+y=1$. The partial derivative with respect to $y$ is



$$
\frac{\partial f}{\partial y} = \frac y{2\left((x-1)^2+y^2\right)^{3/4}}-\frac{1.1}{2\sqrt y}\;,
$$



which goes to $-\infty$ as $y\to0$ for fixed $x$ (including $x=1$). On the line $x+y=1$, we have $x=1-y$ and




$$
f(1-y,y)=\sqrt[4]{y^2+y^2}-1.1\sqrt y=\left(\sqrt[4]2-1.1\right)\sqrt y\;,
$$



which is minimal for $y=0$.


number theory - For prime $p>5$ and positive integer $k<p$, the decimal expansion of $\frac{k}{p}$ consists of $p-1$ repeating digits



Trying to prove:




Let $p > 5$ be a prime and let $k$ be any positive integer $< p$. Show that the decimal expansion of $\frac{k}{p}$ consists of $(p-1)$ repeating decimal digits. (Hint: use Fermat's little theorem and geometric series.)





I was trying to understand the theorem using an example, but I am not very sure why there are $p-1$ repeating decimal digits.



E.g. suppose $p = 11$ and $k = 4$, $5$, $7$.




  • $\frac{4}{11} = 0.36363636363636\ldots$

  • $\frac{5}{11} = 0.454545454545454545\ldots$

  • $\frac{7}{11} = 0.63636363636363636\ldots$





So where does $p-1 = 10$ come from? It seems the repeating length is always $2$.



Answer



If $10^{p-1} = 1 + mp$, where $1 < m < 10^{p-1}$, that says $$ \frac{1}{p} = \frac{m}{10^{p-1}-1} = \frac{m}{10^{p-1}(1-10^{-p+1})} = \sum_{j=1}^\infty \frac{m}{10^{j(p-1)}} $$
The decimal representation of this sum consists of "$0.$" followed by the digits of $m$ (padded at the front with zeros if necessary to length $p-1$) repeated: each term of the sum represents one block of $p-1$ digits.



Nobody said that $p-1$ has to be the smallest period. You can consider

$4/11$ as $0.(36)(36)(36)\ldots$ with period $2$, but you could also write it as
$0.(3636363636)(3636363636)\ldots$ with period $10=11-1$. An example where $p-1$ is the smallest period is
$$1/7 = 0.(142857)(142857)\ldots$$
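A small script (the function name is mine) that computes the least period of $k/p$, i.e. the multiplicative order of $10$ modulo $p$, illustrates both situations:

def order_of_10(p):
    # least d with 10^d = 1 (mod p); requires that p does not divide 10
    d, x = 1, 10 % p
    while x != 1:
        x = (x * 10) % p
        d += 1
    return d

print(order_of_10(11))   # 2  (so 4/11, 5/11, 7/11 repeat with period 2, which divides 10)
print(order_of_10(7))    # 6  (here the least period really is p - 1)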


calculus - How does $\exp(x+y) = \exp(x)\exp(y)$ imply $\exp(x) = [\exp(1)]^x$?




In Calculus by Spivak (1994), the author states in Chapter 18 p. 341 that
$$\exp(x+y) = \exp(x)\exp(y)$$ implies
$$\exp(x) = [\exp(1)]^x$$

He refers to the discussion in the beginning of the chapter where we define a function $f(x + y) = f(x)f(y)$; with $f(1) = 10$, it follows that $f(x) = [f(1)]^x$. But I don't get this either. Can anyone please explain this? Many thanks!


Answer



I think you should assume $x$ is an integer (since $a^x$ is defined using $\exp$ if $x$ is a positive real). You can write $\exp(x) = \exp(\underbrace{1+1+1+1+\cdots+1}_x)$.



Using the property of $\exp$, you find that $\exp(x) = \exp(1)\exp(1)\dots\exp(1) = (\exp(1))^x$.


algebra precalculus - Two Questions Regarding Polynomial Functions.

I am studying Polynomial functions and their Graphs.



I am currently looking at the definition for a polynomial function and I am trying to arrive at a deeper understanding; thus, please excuse questions that seem obvious.



Nonetheless,



A polynomial function of degree $n$ (what is $n$, an arbitrarily chosen variable?) is a function of the form:



$$

P(x) = a_n x^n + a_{n - 1} x^{n - 1} + \ldots + a_1 x + a_0
$$



where $n$ is a nonnegative integer and $a_n$ does not equal 0.



The numbers $a_0$, $a_1$, $a_2$, $\ldots$, $a_n$ are called the coefficients of the polynomial.



The number $a_0$ is the constant coefficient or constant term.



The number $a_n$, the coefficient of the highest power, is the leading coefficient, and the term $a_n x^n$ is the leading term.




Questions:



1) What do the subscripts indicate, such as $n$?



2) In algebra, I learned that constants are for example $1, 2, 3, 4$; however, they describe constants as constant coefficients in the book. Can someone explain the reason behind that?



As you can see, I am new to learning mathematics, so please be simple.

Testing A Series For Convergence

Determine whether the series
$\sum_{n=0}^{\infty} \frac{3n^2 + 2n + 1}{n^3 + 1}$
converges or diverges.



So far I thought about dividing the numerator by the denominator, but that got very messy.

I thought about comparing it to the series of $\frac{1}{k^3 + 1}$, but then I got stuck.



Also, a related question.
A theorem states that if the limit of the terms of a series as $n$ approaches infinity is not equal to zero, the series diverges. However, it states that the series runs from $n=1$ to infinity. Would it also apply in this case, where it goes from $n=0$ to infinity?



Thanks!
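As a quick numerical illustration (not a proof; the script is mine), the partial sums keep growing by roughly $3\ln 10\approx 6.9$ per decade, which is exactly what a comparison with $\frac{3}{n}$ suggests:

s = 0.0
for n in range(1_000_001):
    s += (3*n*n + 2*n + 1) / (n**3 + 1)
    if n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
        print(n, round(s, 2))   # each printed value exceeds the previous by about 6.9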

elementary number theory - Proof that the product of primitive Pythagorean hypotenuses is also a primitive Pythagorean hypotenuse



Just to be clear, I call an integer $c$ a 'primitive Pythagorean hypotenuse' if there exist coprime integers $a$ and $b$ satisfying $a^2+b^2 = c^2$. I noticed that the set of such primitive Pythagorean hypotenuses seems to form a closed set under multiplication (and indeed this is backed up by this Mathworld page -- see equations (25)-(26)). However, I can't find a proof of it (the link above provides a book reference but I don't have access to it), or come up with one myself.



Some thoughts: Euler showed that $c$ is a primitive Pythagorean hypotenuse iff it is odd and can be written as the sum of two squares $c = m^2+n^2$ for some coprime integers $m$ and $n$. It is straightforward to show that the product of two sums of two squares is also a sum of two squares (and hence the set of all Pythagorean hypotenuses is closed under multiplication), but I'm having trouble with the primitive/coprime part. In other words, I can show that if $c_1 = m_1^2 + n_1^2$ and $c_2 = m_2^2 + n_2^2$ then there exist $m_3$ and $n_3$ such that $c_1c_2 = m_3^2 + n_3^2$, but I can't show that if $\gcd(m_1,n_1)=\gcd(m_2,n_2)=1$ then it is possible to choose $m_3$ and $n_3$ such that $\gcd(m_3,n_3)=1$. Any hints or references would be appreciated!




Edit (in response to individ's comments): Note that the formulae
$$ m_3 = m_1 n_2 - n_1 m_2 \ , \ n_3 = m_1 m_2 + n_1 n_2 $$
and
$$ m_3 = m_1 n_2 + n_1 m_2 \ , \ n_3 = m_1 m_2 - n_1 n_2 $$
do not always produce coprime $m_3$ and $n_3$. The earliest example of this I could find is
$$ m_1 = 8, n_1 = 1, m_2 = 31, n_2 =92 $$
for which the first formula above gives $m_3 = 705, n_3 = 340$ (both divisible by 5), and the second formula above gives $m_3=767, n_3=156$ (both divisible by 13). However $m_3^2 + n_3^2 = 612,625$ is still the hypotenuse of a primitive triple because
$$ 199176^2 + 579343^2 = 612625^2 $$
and $gcd(199176,579343)=1$. (Sorry for the rather ugly example, but it was the simplest I could find (maybe because I don't know how to search efficiently!).)



Answer



Let consider the ring of Gaussian integers $\mathbb Z[i]$.
For $\zeta=a+ib\in \mathbb Z[i]$ we say that $\zeta$ is primitive if $\gcd(a,b)=1$.
Note that $\zeta$ is primitive if and only if for each $d\in\mathbb Z$, $d\mid\zeta$ implies $d=\pm 1$. Consequently, factors of primitive numbers are primitive.



An odd integer $x\in\mathbb Z$ is a primitive Pythagorean hypotenuse if and only if there exists a primitive $\xi\in\mathbb Z[i]$ such that $x=\xi\bar\xi$.



Let $x=\xi\bar\xi$ be a primitive Pythagorean hypotenuse and, since $\mathbb Z[i]$ is an UFD, take the prime factorization $\xi=\pi_1^{e_1}\cdots \pi_r^{e_r}$.
Since $\xi$ is primitive and $x$ is odd, each factor $\pi_j$ is primitive and has no common factor with $2$, hence $\Im(\pi_j^4)\neq 0$.
Since $\pi_j\bar\pi_j\mid x$, we can assume that $\Im(\pi_j^4)>0$ for each $j$.




Let $y=\eta\bar\eta$ be another primitive Pythagorean hypotenuse with $\eta=o_1^{d_1}\cdots o_s^{d_s}$ and $\Im(o_j^4)>0$.



We claim that $\xi\eta$ is primitive.
For suppose on the contrary that $d\mid\xi\eta$ for some $d\in\mathbb Z$ with $d\neq\pm 1$.
Let $\pi\in\mathbb Z[i]$ be an irreducible factor of $d$. Then, wlog, $\pi\mid\xi$.
Then $\pi$ is primitive and $\bar\pi\mid d\mid\xi\eta$, but $\bar\pi\not\mid\xi$, hence $\bar\pi\mid \eta$.
This is a contradiction, because $\Im(\bar\pi^4)<0$.



This proves that $xy$ is a primitive Pythagorean hypotenuse.
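A brute-force check of small cases (my own script; it simply searches for coprime legs) is consistent with this:

from math import gcd

def is_primitive_hypotenuse(c):
    return any(gcd(a, b) == 1
               for a in range(1, c)
               for b in range(a, c)
               if a*a + b*b == c*c)

hyps = [c for c in range(2, 100) if is_primitive_hypotenuse(c)]   # 5, 13, 17, 25, 29, ...
print(all(is_primitive_hypotenuse(c1*c2) for c1 in hyps[:4] for c2 in hyps[:4]))   # True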



Wednesday 28 September 2016

elementary number theory - Divisibility Proof with Induction - Stuck on Induction Step




I'm working on a problem that's given me the run around for about a weekend. The statement:



For all $m$ greater than or equal to $2$ and for all $n$ greater than or equal to $0$, $m - 1$ divides $m^n - 1$.



Using induction over $n$, the base step comes easy since $m^n - 1$ is $0$ when $n = 0$.



My induction hypothesis is letting $k \geq 0$ and assuming that $m - 1$ divides $m^k - 1$. In order to show that $m - 1$ divides $m^{k+1} - 1$, I obviously need to use the induction hypothesis. However, no matter where I try to use the fact that $m^k - 1 = (m-1)a$ for some integer $a$, I cannot manage to show that $m^{k+1} - 1$ equals $(m-1)b$ for some integer $b$.



In other words, I can't figure out any actually helpful way to apply the induction hypothesis with the goal of proving the next step.




Any help would be appreciated!


Answer



Hint:



$$m^{k+1}-1=m^{k+1}-m^k+m^k-1$$



Can you take it from there?
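For completeness, here is one way the hint plays out (a sketch of the remaining step):
$$m^{k+1}-1=m^k(m-1)+(m^k-1)=(m-1)m^k+(m-1)a=(m-1)\left(m^k+a\right),$$
so $m-1$ divides $m^{k+1}-1$.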


number theory - Are there/Why aren't there any simple groups with orders like this?



The orders of the simple groups (ignoring the matrix groups for which the problem is solved) all seem to be a lot like this:




2^46 3^20 5^9 7^6 11^2 13^3 17 19 23 29 31 41 47 59 71 


starts with a very high power of 2, then the powers decrease and you get a tail - it's something like exponential decay.



Why does this happen? I want to understand this phenomenon better.



I wanted to find counter-examples, e.g. a simple group of order something like




2^4 3^2 11^5 13^9


but it seems like they do not exist (unless it slipped past me!).



We have the following bound $|G| \le \left(\frac{|G|}{p^k}\right)!$, which allows $3^2 11^4$ but rules out orders like $3^2 11^5$, $3^2 11^6$, and so on. While this does give a finite bound, it is extremely weak when you have more than two primes; it really doesn't explain the pattern, but a much stronger bound of the same type might?



I also considered that it might be related to multiple transitivity: a group that is $t$-transitive has to have order a multiple of $t!$, and e.g. 20! =



2^18 3^8 5^4 7^2 11 13 17 19



which has exactly the same pattern, for reasons we do understand. But are these groups really transitive enough to explain the pattern?


Answer



I am posting the following counterexample to the question, as requested by caveman in the comments.



The Steinberg group ${}^2A_5(79^2)$ has order
$$ 2^{23}\cdot 3^4\cdot 5^6\cdot 7^2\cdot 11^1\cdot 13^3\cdot 43^1\cdot 79^{15}\cdot 641^1\cdot 1091^1\cdot 3121^1\cdot 6163^2.$$



There are other counterexamples, too. For example ${}^2A_9(47^2)$ has order

$$ 2^{43}\cdot 3^{13}\cdot 5^2\cdot 7^3\cdot 11^1\cdot 13^2\cdot 17^2\cdot 23^5\cdot 31^1\cdot 37^1\cdot 47^{45}\cdot 61^1\cdot 97^1\cdot 103^3\cdot 3691^1\cdot 5881^1\cdot 14621^1\cdot 25153^1\cdot 973459^1\cdot 1794703^1\cdot 4778021^2.$$



I would guess there are infinitely many counterexamples, but the numbers (of course) get very, very large!


Poisson Distribution (Influenza)

The number $X$ of people in a certain town who get the influenza follows a Poisson distribution. The proportion of people who did not get the flu is $0.01$. Find the probability mass function of $X$.



Ok so the probability mass function formula is $e^{-\lambda}\frac{\lambda^k}{k!}$. I saw the solution, but I do not understand how it was solved.

calculus - Probability of two fixed-length line segments intersecting within a circular domain

Imagine placing a line segment $P$ of length $a$ on the $XY$ plane such that its middle is at the origin, but its orientation is random (i.e. a random angle). Then suppose you placed another line segment $Q$ of length $b$ (which is less than or equal to $a$) such that its centre was randomly chosen within a radius $(a+b)/2$ of the origin, but its orientation was also random. What would the probability of these two line segments intersecting be?



I suspect there may be an integral over the circular surface to be done here, but am not quite sure what to do! Any help much appreciated. Thank you.



This image shows you what I mean. Here the two line segments don't intersect.

combinatorics - Solving ${{2x - 3}choose{1- x^2}} = 3$

I was given this problem by my sister; she took it from a past paper in her calculus class (12th grade).



As the title reads, we are asked to solve the equation:




$${{2x - 3}\choose{1- x^2}} = 3$$





Now this seems a particularly strange problem to me. I see no way to solve this, provided that they haven't studied anything regarding binomial expansions with rational coefficients.



After going through the base conditions that $x$ should satisfy and trying some combinatorial identities, I came up empty-handed. Wolfram Alpha provides a numerical solution that I don't see how to arrive at using pen and paper.



How would you solve this without making use of any mathematical apparatus (e.g. the gamma function) beyond high-school-level calculus?

Tuesday 27 September 2016

elementary number theory - Find remainder when $777^{777}$ is divided by $16$


Find remainder when $777^{777}$ is divided by $16$.




$777=48\times 16+9$. Then $777\equiv 9 \pmod{16}$.



Also by Fermat's theorem, $777^{16-1}\equiv 1 \pmod{16}$ i.e $777^{15}\equiv 1 \pmod{16}$.




Also $777=51\times 15+12$. Therefore,



$777^{777}=777^{51\times 15+12}={(777^{15})}^{51}\cdot777^{12}\equiv 1^{51}\cdot 9^{12} \pmod{16}$, and since $9^2=81\equiv 1 \pmod{16}$, this leads to $9^{12}\equiv 1 \pmod{16}$.



But the answer given for this question is $9$. Please suggest.
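A direct computation (a quick check in Python) agrees with the book; the issue is that Fermat's little theorem requires a prime modulus, and $16$ is not prime, so the step $777^{15}\equiv 1 \pmod{16}$ is not justified. Instead, since $9^2\equiv 1 \pmod{16}$ and the exponent is odd, the remainder is $9$:

print(pow(777, 777, 16))   # 9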

analysis - The definition of a measurable function

From my understanding there are two main ways of defining a measurable function from a measure space to a Banach space (whose base field contains the real numbers): a function for which the preimage of every Borel set is measurable, or a function that is a pointwise limit of simple functions.



I believe that these two definitions are equivalent when the Banach space is a finite-dimensional space over the real numbers. But if the Banach space is not finite dimensional, then being a pointwise limit of simple functions becomes a stronger condition. How does one prove (or where can one find a proof) that if a function is a pointwise limit of simple functions, then the preimage of every Borel set is measurable?



Also, what is the point of (ever) defining measurable functions to be functions for which preimages of Borel sets are measurable? Is it just because one implication is easier than the other?

real analysis - Is there a function that's continuous and has all directional derivatives as a linear function of direction, but still fails to be differentiable?



There are many standard examples of functions $f:\mathbb{R}^2\to\mathbb{R}$ that possess all directional derivatives at a point and yet fail to differentiable or even continuous there.



The most common counterexamples are not even continuous, e.g.

$$f(x,y)=\begin{cases}\frac{x^2y}{x^2+y^4}&(x,y)\neq(0,0)\\0&(x,y)=(0,0)\end{cases}$$
This example illustrates the easiest type of nondifferentiability given the existence of all directional derivatives, but I'm interested in more sophisticated pathologies -- how far we can go in improving the situation before we finally hit differentiability.



The natural next step would be to impose continuity and yet still fail to achieve differentiability. A good example here is
$$f(x,y)=\sqrt[3]{x^2y}$$
This function is continuous at the origin, unlike the last example, and all its directional derivatives exist there, but it still fails to be differentiable. The easiest way to show this is to note that the partials are zero, but there are directional derivatives $D_u$ that are nonzero, so the directional derivative $D_u$ cannot be given by a linear function of the direction $u$ for all $u$. The problem is that the directional derivative isn't a linear function of the direction.



The next step would be to ask for both continuity and all-directional-derivatives-exist-as-a-linear-function-of-direction and yet still fail to achieve differentiability. The example given in this answer is not continuous, so it shows only that the second condition alone doesn't imply differentiability. What if we impose continuity?



I'm almost certain there is such a counterexample, but I can't think of one, nor could I find one in a quick glance at Courant, Apostol, or Munkres. Showing nondifferentiability here would be harder, since the standard arguments, discontinuity and the failure of the directional derivative operator to be linear, aren't available. Presumably the trick will be to have something go wrong along a nonlinear path even though everything goes right linearly.



Answer



Try



$$ f(x,y) = \begin{cases} \sqrt{|x|} - \sqrt[4]{| x^2 - y|} & \text{for } 0 < y < 2 x^2 \\ 0 & \text{otherwise} \end{cases}$$



It is continuous and has $D_u f(0,0) = 0$ for all $u$, but fails to be
(Fréchet) differentiable at the origin because $f(x,x^2) = \sqrt{|x|} \ne O(\| (x,x^2)\|)$.


Pi irrationality repetition limits

I am not a mathematician at all and I had a thought about Pi that I can't work out.



  • Pi is irrational, with an infinite sequence of digits.

  • A recurring decimal is infinite.




Would it be theoretically possible that at one point down Pi's sequence, it just turned into an infinite sequence of one number?



If not, what is stopping this? I recognise through reductio ad absurdum that my proposition is impossible, as it means that Pi must at some point end in all of 1-recurring, 2-recurring, ..., 9-recurring simultaneously, but I lack the toolset to understand where my thinking breaks down.

set theory - What is the "opposite" of the Axiom of Choice?



One might think that, trivially, the "opposite" of AC is $\neg$AC. However, thinking about it differently, I'm not sure this is intuitively the case.



AC says that every set has a choice function. However, we can prove the existence of many choice functions (e.g. the finite ones) without AC. So, in particular AC just gives us the "non-constructive" choice functions. In general, throughout mathematics, it is often said that AC is only necessary to prove the existence of non-constructive sets. So, one might be tempted to say that the axiom which is intuitively the "opposite" of AC is V=L, which says that all sets are constructible. However, we've already known for a long time that V=L implies AC! So, is it just wrong to characterize AC as "nonconstructive"? Would a constructivist be wary of AC?



Intuitively, I think most people would characterize the "opposite" of AC to be the axiom that any set whose existence requires the Axiom of Choice doesn't exist. The intuitive thought behind this is much stronger than $\neg$AC, which just says there exists at least one set without a choice function. However, this intuition is pretty flawed in a number of ways. For one, it would have to quantify over non-existent sets! Barring this, I think the main difficulty is that, as evidenced by V=L, the relativity of the notion of set makes this idea unworkable. Is it unworkable? What is the best candidate for an axiom that is the opposite of AC? Can we do no better than just $\neg$AC?


Answer



First let me point out that you are mixing "constructive" and "constructible". There are plenty of non-constructive sets which are constructible. In particular because constructive usually refers to something more akin to the "definable from the usual structure of mathematical objects" and not "there is a set theoretic formula which defines it out of the blue".




The negation of the axiom of choice is as non-constructive as the axiom of choice. Just like the axiom of choice doesn't tell us what the choice function might be, the negation of the axiom of choice doesn't tell us what is the family of sets which does not have a choice function. It could be that the axiom of choice fails, for the first time, in some arbitrarily high von Neumann rank; or it could be that it only fails for very very very large families of sets; or it could be that it fails for particular type of families of sets, while it holds for other.



We can't say. In order to draw practical conclusions from the failure of the axiom of choice we usually have to assume more. For example "there is an infinite Dedekind-finite set of real numbers" is a failure of the axiom of choice which is more specific than just "the axiom of choice fails".



To solve this problem of non-constructivity you don't need to negate the axiom of choice, you need to change the system from a deeper place. Probably the law of excluded middle, which is responsible for us being able to assure that the non-constructive objects exist. This is why most constructive systems reject this law; and there is a theorem that the axiom of choice implies the law of excluded middle.






Within the confines of classical logic, and staying with $\sf ZF$:





  1. You might want to ensure that there are sets which cannot be injected into an ordinal; but you can also require that there are sets which cannot be injected into a power set of an ordinal; and you can continue in this fashion.



    So one option is to say the following: For each ordinal $\alpha$, there is a set which cannot be injected into $\mathcal P^\alpha(\eta)$ for any ordinal $\eta$. (Where $\mathcal P^\alpha$ is defined by iterating power sets, and taking union at limits.)


  2. You might want to say, the axiom of choice implies the existence of all sort of crazy sets of real numbers. I'll choose "Every set is Lebesgue measurable", or "Every set has Baire property" or "Every game is determined". These axioms usually bring "order" to the world of non-constructive sets of reals. These are all consistent with $\sf DC$, which means that you can even develop reasonable analysis in those models.



    And if the axiom of choice is mainly used for "There are some messed up sets of real numbers", things like the axioms above would essentially say "Every set of reals is well-behaved".


  3. You might want to keep your focus on the real numbers, and say "Screw everything, I'm going nuclear" and require that the real numbers are a countable union of countable sets. This has the benefit of destroying completely the usual ways we define measure and category for sets of reals.



    You can even go further than that, and require that for every ordinal $\kappa$, $\mathcal P(\kappa)$ is the countable union of sets of size $\kappa$. This is known as axiom $\sf (K)$, but we don't quite know if it's consistent (relative to very large cardinals, which are necessary to say the least). So you can instead choose "Every limit ordinal has cofinality $\omega$", which has some messed up consequences as well.



  4. You can decide that you want amorphous sets, since those are "the ultimate counterexample". These are infinite sets that cannot be partitioned into two infinite sets. Strongly amorphous sets have the additional property that any partition is all singletons (except finitely many parts). These sets cannot be linearly ordered, they cannot be mapped onto $\omega$, and their cofinite sets make a complete ultrafilter.



    Strongly amorphous sets cannot be endowed with any reasonable structure; but in general it is possible to have amorphous sets which are vector spaces over finite fields. Those spaces have the strange property that every proper subspace is finite.




And there are many other ways to say "I see the axiom of choice as responsible for ...", then the opposite would be "... fails in the most horrible way imaginable!".


calculus - Closed-form of $intlimits_0^1left(frac{left(x^2+1right)arcsin(x)}{sqrt{1-x^2}}+2xlnleft(x^2+1right)right)frac{ln x}{x^3+x},dx$



I've conjectured the following closed-form:
$$
I = \int\limits_0^1\left(\frac{\left(x^2+1\right)\arcsin(x)}{\sqrt{1-x^2}}+2x\ln\left(x^2+1\right)\right)\frac{\ln x}{x^3+x}\,dx = -2\,G\,\ln2,
$$
where $G$ is Catalan's constant.

Numerically
$$ I \approx -1.2697979381877088371491554851603117320986537271546606092465\dots$$
How to prove it?


Answer



Apply the obvious substitution $x\mapsto\sin{x}$ to the first integral
$$I=\int^\frac{\pi}{2}\limits_0\frac{x\ln(\sin{x})}{\sin{x}}\ {\rm d}x+\int\limits^1_0\frac{2\ln{x}\ln(1+x^2)}{1+x^2}\ {\rm d}x$$
The latter integral has been addressed here and is equivalent to
\begin{align}
\int\limits^1_0\frac{2\ln{x}\ln(1+x^2)}{1+x^2}\ {\rm d}x=4\Im{\rm Li}_3(1-i)-2\mathbf{G}\ln{2}+\frac{3\pi^3}{16}+\frac{\pi}{4}\ln^2{2}
\end{align}

As for the first integral,
\begin{align}
\int\limits^\frac{\pi}{2}_0\frac{x\ln(\sin{x})}{\sin{x}}\ {\rm d}x
&=2\int\limits^1_0\frac{\arctan{x}\ln\left(\frac{2x}{1+x^2}\right)}{x}\ {\rm d}x\\
&=2{\rm Ti}_2(1)\ln{2}+2\int\limits^1_0\frac{\arctan{x}\ln{x}}{x}\ {\rm d}x-2\int\limits^1_0\frac{\arctan{x}\ln(1+x^2)}{x}\ {\rm d}x\\
&=2\mathbf{G}\ln{2}-2\sum^\infty_{n=0}\frac{(-1)^n}{(2n+1)^3}+2\int\limits^1_0\frac{\ln{x}\ln(1+x^2)}{1+x^2}\ {\rm d}x+4\int\limits^1_0\frac{x\arctan{x}\ln{x}}{1+x^2}\ {\rm d}x
\end{align}
and the integral
\begin{align}
4\int\limits^1_0\frac{x\arctan{x}\ln{x}}{1+x^2}\ {\rm d}x=-8\Im{\rm Li}_3(1-i)-\frac{5\pi^3}{16}-\frac{\pi}{2}\ln^2{2}

\end{align}
has also been established in the link above. (Both integrals were covered in the evaluation of $\mathscr{J}_2$.) Hence the closed form is indeed
\begin{align}
I=-2\mathbf{G}\ln{2}
\end{align}


Number sequence as geometric sequence



In a number sequence, I've figured the $n^{th}$ element can be written as $10^{2-n}$.




I'm now trying to come up with a formula that describes the sum of this sequence for a given $n$. I've been looking at the geometric sequence, but I'm not sure how to connect it.


Answer



Hint: We have $10^{2-n}=10^2\cdot 10^{-n}=10^2\cdot(10^{-1})^n$ and $$\sum_{k=0}^n x^k=\frac{1-x^{n+1}}{1-x}.$$
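In case it helps to see the pieces assembled (assuming the sequence starts at $n=1$):
$$\sum_{n=1}^{N}10^{2-n}=10^2\sum_{n=1}^{N}\left(10^{-1}\right)^n=10^2\cdot\frac{10^{-1}\left(1-10^{-N}\right)}{1-10^{-1}}=\frac{100}{9}\left(1-10^{-N}\right);$$
for example, $N=2$ gives $10+1=11=\frac{100}{9}\left(1-\frac{1}{100}\right)$.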


functions - Is 'every exponential grows faster than every polynomial?' always true?

My algorithm textbook has a theorem that says



'For every $r > 1$ and every $d > 0$, we have $n^d = O(r^n)$.'




However, it does not provide a proof.



Of course I know that exponentials grow faster than polynomials in most cases, but is it true in all cases?



What if the polynomial function is something like $n^{100^{100}}$ and exponential is $2^n$? Will the latter outgrow the former at some point?

Monday 26 September 2016

sequences and series - $\cos^2(\frac{\pi}{101})+\cos^2(\frac{2\pi}{101})+\cos^2(\frac{3\pi}{101})+\cdots+\cos^2(\frac{100\pi}{101})=?$




Find the value: $$\cos^2\left(\frac{\pi}{101}\right)+\cos^2\left(\frac{2\pi}{101}\right)+\cos^2\left(\frac{3\pi}{101}\right)+\cos^2\left(\frac{4\pi}{101}\right)+\cos^2\left(\frac{5\pi}{101}\right)+\cdots \\ \cdots+\cos^2\left(\frac{99\pi}{101}\right)+\cos^2\left(\frac{100\pi}{101}\right)$$





My attempt: I've tried it by considering the sum
$$\sin^2\left(\frac{\pi}{101}\right)+\sin^2\left(\frac{2\pi}{101}\right)+\sin^2\left(\frac{3\pi}{101}\right)+\sin^2\left(\frac{4\pi}{101}\right)+\sin^2\left(\frac{5\pi}{101}\right)+\cdots \\ \cdots+\sin^2\left(\frac{99\pi}{101}\right)+\sin^2\left(\frac{100\pi}{101}\right)$$



along with



$$\cos^2\left(\frac{\pi}{101}\right)+\cos^2\left(\frac{2\pi}{101}\right)+\cos^2\left(\frac{3\pi}{101}\right)+\cos^2\left(\frac{4\pi}{101}\right)+\cos^2\left(\frac{5\pi}{101}\right)+\cdots \\ \cdots +\cos^2\left(\frac{99\pi}{101}\right)+\cos^2\left(\frac{100\pi}{101}\right)$$ which gives $100$ as the result, but I failed to separate out the sum of
$$\sin^2\left(\frac{\pi}{101}\right)+\sin^2\left(\frac{2\pi}{101}\right)+\sin^2\left(\frac{3\pi}{101}\right)+\sin^2\left(\frac{4\pi}{101}\right)+\sin^2\left(\frac{5\pi}{101}\right)+\cdots\\ \dots+\sin^2\left(\frac{99\pi}{101}\right)+\sin^2\left(\frac{100\pi}{101}\right)$$ at last.



I tried the next approach by using de Moivre's theorem but failed to separate the real and imaginary parts.




I've invested a great amount of time in this, so it would be great if someone could come up with an answer.


Answer



$$\cos\left(\frac{k\pi}{101}\right)= \frac{1}{2} \left(e^{i\frac{k\pi}{101}}+e^{-i\frac{k\pi}{101}} \right) \\
\cos^2\left(\frac{k\pi}{101}\right)= \frac{1}{4} \left(e^{2i\frac{k\pi}{101}}+e^{-2i\frac{k\pi}{101}} +2\right) \\
\sum_{k=1}^{100}\cos^2\left(\frac{k\pi}{101}\right)= \frac{1}{4} \sum_{k=1}^{100}\left(e^{2i\frac{k\pi}{101}}+e^{-2i\frac{k\pi}{101}} +2\right)
$$



Now,
$$1+\sum_{k=1}^{100}e^{2i\frac{k\pi}{101}}=\sum_{k=0}^{100}\left(e^{2i\frac{\pi}{101}}\right)^k=\frac{1-(e^{2i\frac{\pi}{101}})^{101}}{1-e^{2i\frac{\pi}{101}}}=0 \\
1+\sum_{k=1}^{100}e^{-2i\frac{k\pi}{101}}=\sum_{k=0}^{100}\left(e^{-2i\frac{\pi}{101}}\right)^k=\frac{1-(e^{-2i\frac{\pi}{101}})^{101}}{1-e^{-2i\frac{\pi}{101}}}=0 $$




Therefore
$$\sum_{k=1}^{100}\cos^2\left(\frac{k\pi}{101}\right)= \frac{1}{4} \left(-1-1+200\right) $$
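The bracket evaluates to $\frac{198}{4}=\frac{99}{2}=49.5$, which a direct numerical check confirms:

import math
print(sum(math.cos(k*math.pi/101)**2 for k in range(1, 101)))   # 49.5 (up to rounding)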


linear algebra - Determinant of Triangular Matrix

I understand that you can find the determinant of a matrix along its diagonal if it is in triangular form. For a matrix such as this:
$$
\begin{pmatrix}
1 & 5 & 0\\
2 & 4 & -1\\
0 &-2 & 0

\end{pmatrix}
$$
When put into triangular form I get:
$$
\begin{pmatrix}
1 & 5 & 0\\
0 & 1 & 1/6\\
0 & 0 & 1/3
\end{pmatrix}
$$

Since I multiplied row two by -1/6 during the row reduction I would expect the determinant to be
$$
1\cdot 1\cdot 1/3\cdot (-1/6),$$
but the answer for the determinant of the original matrix is -2. Where exactly am I going wrong?
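For reference: multiplying a row by $-1/6$ multiplies the determinant by $-1/6$, so assuming the other steps were row additions (which leave the determinant unchanged), the original determinant is $\frac{1/3}{-1/6}=-2$. A direct cofactor expansion (a quick check in Python, written by me) agrees:

A = [[1, 5, 0], [2, 4, -1], [0, -2, 0]]
det = (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
       - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
       + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))
print(det)   # -2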

linear algebra - Why we only need to verify additive identity, and closed under addition and scalar multiplication for subspace?



In the book Linear Algebra Done Right, it is said that to determine quickly whether a given subset of $V$ is a subspace of $V$, the three conditions, namely additive identity, closed under addition, and closed under scalar multiplication, should be satisfied. The other parts of the definition of a vector space are automatically satisfied.



I think I understand why commutativity, associativity, the distributive properties, and the multiplicative identity work, because their operations stay within the subset.



But why don't we need to verify the additive inverse, similar to verifying the additive identity? Could there be cases where, for some $v$ in the subset $U$, there is no $w \in U$ with $v + w = 0$?


Answer




You also need to check that the set is non-empty. In this case closure under scalar multiplication guarantees that the additive inverse of any $v$ in the set is also in the set, since for the scalar $-1$, $(-1)v$ is in the set.



EDIT: Similarly, for the scalar $0$, $0v={\bf 0}$ is in the set (by the closure of scalar multiplication), whenever the set contains an element/vector $v$.


real analysis - How to define a bijection between $(0,1)$ and $(0,1]$?




How to define a bijection between $(0,1)$ and $(0,1]$?
Or any other open and closed intervals?





If the intervals are both open like $(-1,2)\text{ and }(-5,4)$ I do a cheap trick (don't know if that's how you're supposed to do it):
I make a function $f : (-1, 2)\rightarrow (-5, 4)$ of the form $f(x)=mx+b$ by
\begin{align*}
-5 = f(-1) &= m(-1)+b \\
4 = f(2) &= m(2) + b
\end{align*}
Solving for $m$ and $b$ I find $m=3\text{ and }b=-2$ so then $f(x)=3x-2.$




Then I show that $f$ is a bijection by showing that it is injective and surjective.


Answer



Choose an infinite sequence $(x_n)_{n\geqslant1}$ of distinct elements of $(0,1)$. Let $X=\{x_n\mid n\geqslant1\}$, hence $X\subset(0,1)$. Let $x_0=1$. Define $f(x_n)=x_{n+1}$ for every $n\geqslant0$ and $f(x)=x$ for every $x$ in $(0,1)\setminus X$. Then $f$ is defined on $(0,1]$ and the map $f:(0,1]\to(0,1)$ is bijective.



To sum up, one extracts a copy of $\mathbb N$ from $(0,1)$ and one uses the fact that the map $n\mapsto n+1$ is a bijection between $\mathbb N\cup\{0\}$ and $\mathbb N$.
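For instance, taking $x_n=\frac1{n+1}$ in this construction gives the concrete bijection
$$
f(x)=\begin{cases}\tfrac 12 & x=1\\ \tfrac 1{n+1} & x=\tfrac 1n,\ n\ge 2\\ x & \text{otherwise.}\end{cases}
$$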


sequences and series - Polynomial for product over a set


Conjecture:
If $A$ is a nonempty, finite set with size $|A|=n$, and
$$P_A(x)=\prod_{a\in A}(x-a),$$
then $P_A(x)$ can be expanded as

$$P_A(x)=\sum_{k=0}^{n}(-1)^{n-k}x^k\sum_{{U\subseteq A}\atop{|U|=n-k}}\prod_{u\in U}u.\tag{1}$$




I have conjectured this based on algebraic evidence. That is, I expanded out the cases $n=1,...,4$ both manually and through $(1)$, and in each case the conjecture held. The problem is, I'm having difficulty proving this result. I'm fairly certain that a proof would involve induction, but I'm not very good at that, and have so far failed.



I was initially interested in this formula because I recognized that $(1)$ would imply that
$$k\ne0,n\iff \sum_{{U\subseteq S_n}\atop{|U|=k}}\prod_{u\in U}u=0\tag{2}$$
for
$$S_n=\left\{\exp\frac{2\pi ik}{n}:k=0,1,...,n-1\right\}.$$
It may seem un-intuitive at first, but $(2)$ is true because

$$P_{S_n}(x)=x^n-1,$$
(as $S_n$ is the set of roots of $x^n-1$) so each term of the expansion must vanish except for the cases $k=0,n$.



After seeing this, I was naturally curious about a proof of $(1)$. Could I have some help or hints? Thanks.
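A small numerical check of $(1)$ on one concrete set (a sketch; the set $A$ and the sample points are my own choices, and math.prod needs Python 3.8+):

from itertools import combinations
from math import prod

A = [2, 3, 5, 7]
n = len(A)
for x in (-1.5, 0.0, 2.5, 10.0):
    lhs = prod(x - a for a in A)
    rhs = sum((-1)**(n - k) * x**k * sum(prod(U) for U in combinations(A, n - k))
              for k in range(n + 1))
    assert abs(lhs - rhs) < 1e-9
print("formula (1) checks out for A =", A)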

measure theory - Using Fatou's Lemma



Let $(X,\Sigma,\mu)$ be a measure space and let $\{E_n\}_{n=1}^\infty$ be a sequence of sets in $\Sigma$. I try to show that $$\lim_{m\to \infty} \mu(\cap_{n=m}^\infty E_n)\leq \liminf \mu(E_n)$$ by using the Fatou's lemma.




Attempt: Let $f_n=\chi_{E_n}$. By Fatou's lemma, we have $\liminf\int f_n \ge\int\liminf f_n$. Clearly, $\liminf\int f_n=\liminf \mu(E_n)$. So it remains to show that $\int \liminf f_n \ge \lim_{m\to \infty}\mu(\cap_{n=m}^\infty E_n)$



I know this link has the same question but I want to conclude my attempt. Thanks!


Answer



Hint: Since $f_n$ only takes the values $0$ and $1$, the same is true of $\liminf f_n$. Hence $\liminf f_n$ is itself the characteristic function of some set. Try to find out which set.
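Spelling the hint out a little (a sketch): since $f_n=\chi_{E_n}$,
$$\liminf_n f_n=\chi_E,\qquad E=\bigcup_{m=1}^\infty\bigcap_{n=m}^\infty E_n,$$
and by continuity of $\mu$ from below, $\int\liminf f_n=\mu(E)=\lim_{m\to \infty}\mu\left(\cap_{n=m}^\infty E_n\right)$.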


number theory - Prove that for any integer $k>1$ and any positive integer $n$, there exist $n$ consecutive odd integers whose sum is $n^k$

Found these problems in a problem book and got stuck. The book doesn't have solutions, so I've come here for help.



(1) Prove that for any integer $k>1$ and any positive integer $n$, there exist $n$ consecutive odd integers whose sum is $n^k$.



(2) Let $n$ be a positive integer and $m$ any integer of the same parity as $n$. Prove that there exist $n$ consecutive odd integers whose sum is $mn$.

integration - Inverse Laplace transform involving an Laplace transform and an absolute sine function



I want to compute this inverse Laplace transform, which itself involves a Laplace transform. Is there a general approach for this? And how can I find what the inverse looks like?



$$\mathcal{L}_\text{s}^{-1}\left[\frac{1}{1+\text{L}\cdot\text{C}_2\cdot\text{s}^2}\cdot\mathcal{L}_t\left[\left|\sin\left(\omega t+\theta\right)\right|\right]_{\left(\text{s}\right)}\right]_{\left(t\right)}=$$
$$\mathcal{L}_\text{s}^{-1}\left[\frac{1}{1+\text{L}\cdot\text{C}_2\cdot\text{s}^2}\cdot\left\{\int_0^\infty e^{-\text{s}t}\cdot\left|\sin\left(\omega t+\theta\right)\right|\space\text{d}t\right\}\right]_{\left(t\right)}=$$
$$\mathcal{L}_\text{s}^{-1}\left[\frac{1}{1+\text{L}\cdot\text{C}_2\cdot\text{s}^2}\cdot\left\{\int_0^\infty e^{-\text{s}t}\cdot\left(\frac{2}{\pi}-\frac{4}{\pi}\sum_{\text{n}\ge1}\frac{\cos\left(2\text{n}\left(\omega t+\theta\right)\right)}{4\text{n}^2-1}\right)\space\text{d}t\right\}\right]_{\left(t\right)}$$




Thanks in advance


Answer



According to the convolution theorem:




$$\mathcal{L}[f_1(t)*f_2(t)]=F_1(s)F_2(s)$$




where $*$ is convolution.




So here



$$\begin{align}
\mathcal{L}^{-1}\left[\frac{1}{1+{L}C_2 {s}^2}\cdot\mathcal{L}\left[\left|\sin\left(\omega t+\theta\right)\right|\right]\right]&=\mathcal{L}^{-1}\left[\frac{1/{L}{C}_2}{1/{L}{C}_2+{s}^2}\right]*|\sin\left(\omega t+\theta\right)|\\[10pt]
&=\frac{1}{\sqrt{LC_2}}\sin(\frac{1}{\sqrt{LC_2}}t)*|\sin\left(\omega t+\theta\right)|
\end{align}$$


Sunday 25 September 2016

number theory - How does one get that $1^3+2^3+3^3+4^3+\cdots=\frac{1}{120}$?




While watching interesting mathematics videos, I found one of the letters of Srinivasa Ramanujan to G. H. Hardy in which he had written $1^3+2^3+3^3+4^3+\cdots=\frac{1}{120}$.



The problem is that every term on the left is more than $\frac{1}{120}$ yet the sum is $\frac{1}{120}$. How is that ???




I know that there are many more interesting things presented by Ramanujan (like $1-1+1-1+1-\cdots=\frac{1}{2}$ and $1+2+3+4+\cdots=\frac{-1}{12}$), but for now I am interested in the summation in the title.
Any idea/hint is heartily welcome. Thanks.



Here is the video I'm talking about.


Answer



I think I have rediscovered it (after watching the video you linked and reading the Wikipedia biography of Ramanujan).



Start with $$
\frac{1}{x+1} =1 -x +x^2 -x^3 +-\cdots, \quad |x| <1.

$$ and differentiate to get $$
-\frac{1}{(x+1)^2} =-1 +2x -3x^2 +4x^3 -+\cdots, \quad |x| <1. \\
\frac{2}{(x+1)^3} =2 \cdot 1 -3 \cdot 2x +4 \cdot 3 x^2 -+\cdots, \quad |x| <1. \\
-\frac{6}{(x+1)^4} =-3 \cdot 2 \cdot 1 +4 \cdot 3 \cdot 2x -5 \cdot 4 \cdot 3 x^2 +-\cdots, \quad |x| <1.
$$



Take a magic mushroom and, ignoring $|x| <1$, let us take $x=1$ in each one. $$
\frac{1}{2} =1 -1 +1 -1 +- \cdots \\
-\frac{1}{4} =-1 +2 -3 +4 -+ \cdots \\
\frac{1}{4} =2 \cdot 1 -3 \cdot 2 +4 \cdot 3 -+ \cdots \\

-\frac{3}{8} =-3 \cdot 2 \cdot 1 +4 \cdot 3 \cdot 2 -5 \cdot 4 \cdot 3 +- \cdots
$$ Or more formally, $$
\sum_{m=1}^\infty (-1)^{m+1} m =\frac{1}{4}. \\
\sum_{m=1}^\infty (-1)^{m+1} m (m+1) =\frac{1}{4}. \\
\sum_{m=1}^\infty (-1)^{m+1} m (m+1) (m+2) =\frac{3}{8}.
$$



But notice $$ \begin{align}
\sum_{m=1}^\infty (-1)^{m+1} m^2
=&\sum_{m=1}^\infty (-1)^{m+1} m (m+1) -\sum_{m=1}^\infty (-1)^{m+1} m \\

=&\frac{1}{4} -\frac{1}{4} =0.
\end{align}
$$ and $$ \begin{align}
\sum_{m=1}^\infty (-1)^{m+1} m^3
=&\sum_{m=1}^\infty (-1)^{m+1} m (m+1) (m+2) -3\sum_{m=1}^\infty (-1)^{m+1} m^2 -2\sum_{m=1}^\infty (-1)^{m+1} m \\
=&\frac{3}{8} -3 \cdot 0 -2 \cdot \frac{1}{4}
=-\frac{1}{8} \quad \quad \ldots \spadesuit
\end{align}
$$




On the other hand, $$
\zeta(-3) :=1^3 +2^3 +3^3 +\cdots \\
2^4 \zeta(-3) =2 \cdot 2^3 +2 \cdot 4^3 +2 \cdot 6^3 +\cdots \\
$$ Subtract them, aligning the 2nd, 4th, 6th terms like Ramanujan did in his notebooks (shown in the video). $$
-15 \zeta(-3) =1^3 -2^3 +3^3 -+\cdots \quad \quad \ldots \heartsuit
$$ $\heartsuit$ and $\spadesuit$ together give us: (Hold your breath.) $$
\sum_{m=1}^\infty m^3 =\frac{1}{120}.
$$
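The value agrees with the analytic continuation of the zeta function (a quick check, assuming the mpmath library is available):

from mpmath import zeta
print(zeta(-3), 1/120)   # both print as 0.00833333...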



Recently I also found a proof of Riemann conjecture, but the answer box is too narrow for me to type all that down.




P.s. Seriously, I think Ramanujan's effort is a sort of search for an interpretation of divergent series under which they have a real value, while their manipulation remains consistent with our usual notions of arithmetic: rearranging, addition, expanding, etc.
Maybe this can be compared to the attempt to define the quaternions as an extension of the complex numbers, while inevitably discarding the commutative law?


elementary number theory - Ordered pair solutions of $x^2-y^2 equiv a pmod p.$



This is the full question:





show that if $p$ is an odd prime then the number of ordered pair solutions of the congruence$x^2-y^2 \equiv a \pmod p,$ is $p-1$ unless $a \equiv 0 \pmod p,$ in which case number of solutions is $2p-1$.




Considering $x,y \in \mathbb{Z}$: in the second case, since $a \equiv 0 \pmod p,$ it follows that $p\mid (x-y)(x+y)$, but then there will be infinitely many solutions of this congruence relation, because no bounds on $x$ and $y$ are mentioned in the question.



So is the question incomplete, or is it implicitly stated that $0\leq x,y < p$? For this bound, do we get $2p-1$ solutions?


Answer



ETA: For whatever reason I thought the question here was to prove the statement in the shaded yellow box in the OP's thread, which is what I did below.



You already answered your own question for the case where $a$ is zero so we now consider the case where $a$ is nonzero i.e., $a \in (\mathbb{F}_p)^{\times}$.





  1. Let $a \in (\mathbb{F}_p)^{\times}$. For every such nonzero $b \in (\mathbb{F}_p)^{\times}$, there is exactly one $c \in (\mathbb{F}_p)^{\times}$ satisfying $a=bc$.


  2. The above equation factors to $(x-y)(x+y) = a; x,y \in (\mathbb{F}_p)^{\times}$. By the above, for each nonzero $b=(x-y)\in (\mathbb{F}_p)^{\times}$, there is exactly one $c=(x+y)\in (\mathbb{F}_p)^{\times}$ such that $(x-y)(x+y)=a$.


  3. So for each nonzero $b\in (\mathbb{F}_p)^{\times}$, there is exactly one pair $(x,y); x,y \in (\mathbb{F}_p)^{\times}$ such that both $(x-y)=b$ and $(x-y)(x+y)=a$.




Can you finish from here?
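A brute-force count over $\mathbb{F}_p$, i.e. with $0\le x,y<p$ (a quick check written by me), illustrates both counts for a small prime:

p = 11
for a in range(p):
    count = sum(1 for x in range(p) for y in range(p) if (x*x - y*y) % p == a)
    print(a, count)   # a = 0 gives 2p - 1 = 21; every nonzero a gives p - 1 = 10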


calculus - Test the series $\sum_{k=0}^{\infty}\frac{(-1)^k 1 \cdot3\cdot5 \cdots(2k+1)}{1\cdot4\cdot7\cdots(3k+1)}$



Use the ratio test for absolute convergence to determine
whether the series converges absolutely or diverges.



$$\sum_{k=0}^{\infty}\frac{(-1)^k 1 \cdot3\cdot5 \cdots(2k+1)}{1\cdot4\cdot7\cdots(3k+1)}$$




I don't understand how the general term becomes this $$\lim_{k\to\infty} \frac{a_{k+1}}{a_k} = \lim_{k\to\infty} \frac{2k+3}{3k+4}$$



Obviously this converges absolutely; I just don't get the second step.


Answer



Let us write
$$a_k=\frac{ 1 \cdot3\cdot5 \cdots(2k+1)}{1\cdot4\cdot7\cdots(3k+1)}.$$
Then, we have
$$\begin{align}
\frac{a_{k+1}}{a_k}&=\frac{1\cdot3\cdot5\cdots [2(k+1)+1]}{1\cdot 4\cdot 7\cdots [3(k+1)+1]}\quad\cdot\quad\frac{1\cdot 4\cdot 7\cdots (3k+1)}{1\cdot3\cdot5\cdots(2k+1)}\\
&=\frac{1\cdot3\cdot5\cdots (2k+3)}{1\cdot 4\cdot 7\cdots (3k+4)}\quad\cdot\quad\frac{1\cdot 4\cdot 7\cdots (3k+1)}{1\cdot3\cdot5\cdots(2k+1)}\\

&=\frac{1\cdot3\cdot5\cdots(2k+1) (2k+3)}{1\cdot 4\cdot 7\cdots (3k+1)(3k+4)}\quad\cdot\quad\frac{1\cdot 4\cdot 7\cdots (3k+1)}{1\cdot3\cdot5\cdots(2k+1)}\\
&=\frac{1\cdot3\cdot5\cdots(2k+1)}{1\cdot 4\cdot 7\cdots (3k+1)}\quad\cdot\quad\frac{2k+3}{3k+4}\quad\cdot\quad\frac{1\cdot 4\cdot 7\cdots (3k+1)}{1\cdot3\cdot5\cdots(2k+1)}\\
&=\frac{2k+3}{3k+4}.
\end{align}$$
This proves the 2nd step you asked.


elementary set theory - What could be said about the cardinality of $\bigcup_{i\in I} A_i$ if $I$ and all the $A_i$ have cardinality $2^{\aleph_0}$



A countable union of countable sets is countable. Does the same hold for sets with cardinality $|\mathbb R|$? More specifically, if the $A_i$ are sets of the same cardinality as the real numbers, and $I$ is an index set also with cardinality $|\mathbb R|$, is $|\bigcup_{i\in I} A_i| = |\mathbb R|$?


Answer



This question can be reduced to the question "is it true there is a bijection between $\mathbb{R}$ and $\mathbb{R}^2$?" The answer is yes.


calculus - Apparently smooth function, discontinuous derivative




This question is about the existence of discontinuous derivatives, but it doesn't provide many examples beyond this one and $y = |x|$.



The function



$$f(x) = \left\{ \begin{array}{lr}
\cos(ax) & 0 \leq x \leq c\\
\cos(ac) e^{-b (x - c)} & x > c
\end{array}
\right.$$




has $[0, +\infty)$ as its domain. $a, b, c \in (0,+\infty)$ are real constants. Its first derivative is



$$f'(x) = \left\{ \begin{array}{lr}
-a \sin(ax) & 0 \leq x \leq c\\
- b \cos(ac) e^{-b (x - c)} & x > c
\end{array}
\right.$$



$f'(x)$ is continuous only when $a/b = \cot(ac)$ and not in general.




$f(x)$ is a continuous function, without a vertical tangent, infinitely differentiable in its domain, and $a,b$ can be chosen such that the function doesn't have corners in $x = c$; despite this, it has a discontinuous derivative.



1) How can (even graphically) the derivative be not continuous where the function is so?



2) Are there any other similar examples that can be done?


Answer



Disclaimer: This is not exactly an answer to the two questions that you asked, but rather an explanation of what may have caused your confusion.



There are two incorrect statements in your question, which may be part of the reason you're confused. First of all, the correct derivative of the function in your example is
$$f'(x) = \left\{\begin{array}{lr} -a\sin(ax) & 0\le x\color{red}{<}c \\ -b\cos(ac)e^{-b(x-c)} & x>c \end{array}\right.$$

Placing "$\le$" instead of the red "$<$" was wrong precisely because in general the derivative of this function at $c$ doesn't exist (except in the special case when $a/b=\cot(ac)$, as you observed).



Because of that, your statement that




$f(x)$ is a continuous function, without a vertical tangent, $\color{red}{\text{infinitely differentiable}}$ in its domain




is false, since in fact this function isn't even differentiable once at $c$ (again, in general).




There's a bit of ambiguity in these discussions of "discontinuous" derivatives. The example that you linked to and $y=|x|$ are two very different things!




  • The derivative of $y=|x|$ doesn't exist at $x=0$; and your example is similar to it, since the derivative of your function doesn't exist at $x=c$ either.


  • But the linked example, as well as the great answer from @ElliotG, address a much more interesting, but a different question: functions for which the derivative exists everywhere but is discontinuous.



calculus - Solve in terms of the Gamma function



Show:
\begin{align*}
\int\limits_0^1\sqrt{\frac{1-x^2}{1+x^2}}\,\mathrm d x
&=\frac{\sqrt \pi}{4}\left(\frac{\Gamma \left(\frac14\right)}{\Gamma\left(\frac34\right)}-4\frac{\Gamma\left(\frac34\right)}{\Gamma\left(\frac14\right)}\right)
\end{align*}







First attempt:
\begin{align*}
\int\limits_0^1\sqrt{\frac{1-x^2}{1+x^2}}\,\mathrm d x
&=\int\limits_0^1{\left(1-x^2\right)}^{1/2}{\left(1+x^2\right)}^{-1/2}\,\mathrm d x\\
\qquad\text{let } x=u^{1/2}\\
\,\mathrm d x =\frac 1 2 u ^{-1/2}\,\mathrm d u\\
&=\frac 1 2 \int\limits_0^1{\left(1-u\right)}^{1/2}{\left(1+u\right)}^{-1/2}u^{-1/2}\,\mathrm d u\\\qquad \text{let }1-u= y\\ u= 1-y,\\
\,\mathrm du=-\,\mathrm dy\\

&=-\frac 1 2 \int_0^1y^{1/2}\left(2-y\right)^{-1/2}(1-y)^{-1/2}\,\mathrm d y\\
\\&=\quad\color{orange}{????}
\\
\end{align*}






Second attempt:
\begin{align*}
\\&\int\limits_0^1\sqrt{\frac{1-x^2}{1+x^2}}\,\mathrm dx\\

x^2=\cos(2\theta)\\
\mathrm dx=\frac{-\sin(2\theta)}x\,\mathrm d\theta\\
x=0\rightarrow\theta=\pi\\
x=1\rightarrow\theta=0\\
&=\int\limits_\pi^0\sqrt{\frac{1-\cos(2\theta)}{1+\cos(2\theta)}}\cdot\frac{-\sin(2\theta)}{\sqrt{\cos(2\theta)}}\,\mathrm d\theta\\
2\sin^2(x)=1-\cos(2x)\\
2\cos^2(x)=1+\cos(2x)\\
\sin(2x)=2\sin(x)\cos(x)\\
&=\int\limits_\pi^0\sqrt{\frac{2\sin^2\theta }{2\cos^2\theta}}\cdot\frac{-2\sin(\theta)\cos(\theta)}{\sqrt{\cos(2\theta)}}\,\mathrm d\theta\\
&=2\int\limits^\pi_0\tan(\theta)\cdot\frac{\sin(\theta)\cos(\theta)}{\sqrt{\cos(2\theta)}}\,\mathrm d\theta\\

&=2\int\limits^\pi_0\frac{\sin^2(\theta)}{\sqrt{\cos(2\theta)}}\,\mathrm d\theta\\
\text{let } \theta=2\phi\\
\mathrm d\theta =2\mathrm d\phi\\
\theta=0\rightarrow\phi=0\\\theta=\pi\rightarrow\phi=\pi/2\\
&=2\int\limits_0^{\pi/2}\frac{\sin^2(2\phi)}{\sqrt{\cos(4\phi)}}2\,\mathrm d\phi\\
&=4\int\limits_0^{\pi/2}\frac{\sin^2(2\phi)}{\sqrt{\cos(4\phi)}}\,\mathrm d\phi\\
\\& =\quad\color{green}{????}\\
\end{align*}







@doraemonpaul's Method:
\begin{align*}
\int\limits_0^1\sqrt{\frac{1-x^2}{1+x^2}}\,\mathrm dx
&=\int\limits_0^1\sqrt{\frac{1-x^2}{1+x^2}}\times\sqrt{\frac{1-x^2}{1-x^2}}\,\mathrm dx\\
&=\int\limits_0^1\frac{1-x^2}{\sqrt{1-x^4}}\,\mathrm dx\\
&=\int\limits_0^1(1-x^4)^{-1/2}\,\mathrm dx-\int\limits_0^1x^2(1-x^4)^{-1/2}\,\mathrm dx\\
\text{let }x=u^{1/4}\\
\mathrm dx=\frac14 u^{-3/4}\,\mathrm du\\
&=\int\limits_0^1(1-u)^{-1/2}\cdot\frac14 u^{-3/4}\,\mathrm du-\int\limits_0^1u^{1/2}(1-u)^{-1/2}\cdot\frac14 u^{-3/4}\,\mathrm du\\

&=\frac14\int\limits_0^1u^{-3/4}(1-u)^{-1/2}\cdot \,\mathrm du-\frac14\int\limits_0^1u^{-1/4}(1-u)^{-1/2}\,\mathrm du\\
\small{\text B(m,n)=\int\limits_0^1t^{m-1}(1-t)^{n-1} \,\mathrm d t}\\
\small{m_1-1=-3/4\quad n_1-1}&\small{=-1/2\quad m_2-1=-1/4\quad n_2-1=-1/2}\\
\small{m_1=1/4\quad \quad n_1=1/2\quad}&\small{\quad m_2=3/4\quad\quad n_2=1/2}\\
\\
&=\frac14\text B\left(\tfrac 14,\tfrac12\right)-\frac14\text B\left(\tfrac34,\tfrac12\right)\\
\small{\text B(m,n)=\frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)}}\\
&=\frac 14\left(\frac{\Gamma(\frac 14)\Gamma(\frac 12)}{\Gamma(\frac 34)}-\frac{\Gamma(\frac34)\Gamma(\frac 12)}{\Gamma(\frac 54)}\right)\\
&=\frac {\sqrt\pi}4\left(\frac{\Gamma(\frac 14)}{\Gamma(\frac 34)}-\frac{\Gamma(\frac34)}{\Gamma(\frac 54)}\right)\\
&=\frac {\sqrt\pi}4\left(\frac{\Gamma(\tfrac 14)}{\Gamma(\tfrac 34)}-\frac{\Gamma(\tfrac34)}{\tfrac 14\Gamma(\tfrac 14)}\right)\\

&=\frac{\sqrt\pi}4\left(\frac{\Gamma(\tfrac 14)}{\Gamma(\tfrac 34)}-4\frac{\Gamma(\tfrac34)}{\Gamma(\tfrac 14)}\right)\\
\end{align*}



NB: this question has been edited following the comments.


Answer



I think you are treating this question as more complicated than it is.



First, note that an integral of the form $\int_0^1(1-x^2)^a(1+x^2)^b~dx$ generally admits no easy substitution transforming it into an integral of the form $\int_0^1x^p(1-x)^q~dx$ (i.e. a beta function).



Second, note that a trigonometric substitution also often makes this type of question more complicated.




But I admit that this question involves a degree of luck or trickery.



$\int_0^1\sqrt{\dfrac{1-x^2}{1+x^2}}~dx$



$=\int_0^1\dfrac{1-x^2}{\sqrt{1-x^2}\sqrt{1+x^2}}~dx$



$=\int_0^1\dfrac{1-x^2}{\sqrt{1-x^4}}~dx$



$=\int_0^1(1-x^4)^{-\frac{1}{2}}~dx-\int_0^1x^2(1-x^4)^{-\frac{1}{2}}~dx$




$=\int_0^1(1-x)^{-\frac{1}{2}}~d(x^{\frac{1}{4}})-\int_0^1x^{\frac{1}{2}}(1-x)^{-\frac{1}{2}}~d(x^{\frac{1}{4}})$



$=\dfrac{1}{4}\int_0^1x^{-\frac{3}{4}}(1-x)^{-\frac{1}{2}}~dx-\dfrac{1}{4}\int_0^1x^{-\frac{1}{4}}(1-x)^{-\frac{1}{2}}~dx$



$=\dfrac{B\left(\dfrac{1}{4},\dfrac{1}{2}\right)}{4}-\dfrac{B\left(\dfrac{3}{4},\dfrac{1}{2}\right)}{4}$



$=\dfrac{\Gamma\left(\dfrac{1}{4}\right)\Gamma\left(\dfrac{1}{2}\right)}{4\Gamma\left(\dfrac{3}{4}\right)}-\dfrac{\Gamma\left(\dfrac{3}{4}\right)\Gamma\left(\dfrac{1}{2}\right)}{4\Gamma\left(\dfrac{5}{4}\right)}$



$=\dfrac{\sqrt{\pi}~\Gamma\left(\dfrac{1}{4}\right)}{4\Gamma\left(\dfrac{3}{4}\right)}-\dfrac{\sqrt{\pi}~\Gamma\left(\dfrac{3}{4}\right)}{\Gamma\left(\dfrac{1}{4}\right)}$




$=\dfrac{\sqrt{\pi}~\Gamma\left(\dfrac{1}{4}\right)}{4\times\dfrac{\sqrt{2}\pi}{\Gamma\left(\dfrac{1}{4}\right)}}-\dfrac{\sqrt{\pi}~\Gamma\left(\dfrac{3}{4}\right)}{\dfrac{\sqrt{2}\pi}{\Gamma\left(\dfrac{3}{4}\right)}}$



$=\dfrac{1}{4\sqrt{2\pi}}\Gamma^2\left(\dfrac{1}{4}\right)-\dfrac{1}{\sqrt{2\pi}}\Gamma^2\left(\dfrac{3}{4}\right)$
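
As an optional sanity check, not part of the derivation above, one can compare a crude numerical quadrature of the integral against the closed form; a Python sketch using only the standard library (the two printed numbers should agree to several decimal places):

import math

f = lambda x: math.sqrt((1 - x*x) / (1 + x*x))

# midpoint rule on [0, 1]; the integrand is bounded, so this is accurate enough for a check
N = 200_000
numeric = sum(f((k + 0.5) / N) for k in range(N)) / N

closed_form = (math.sqrt(math.pi) / 4) * (math.gamma(0.25) / math.gamma(0.75)
                                          - 4 * math.gamma(0.75) / math.gamma(0.25))
print(numeric, closed_form)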


calculus - Limit $\lim_{x \to \infty}x^e/e^x$



I had this problem on my math test, and was stuck on it for quite some time.



$\lim_{x \to \infty}x^e/e^x$



I knew that the bottom grew faster than the top, but I didn't know how to prove it. I wrote that the limit approaches 0, but I am not sure how to prove it mathematically.


Answer




Show first that it is in indeterminate form.



Then apply L'Hôpital's rule three times, differentiating the top and bottom each time; after the third application the power of $x$ in the numerator is $e-3<0$, so the numerator no longer grows.



$$\lim_{x \to \infty} \frac{x^e}{e^x}= \lim_{x \to \infty} \frac{ex^{e-1}}{e^x}= \lim_{x \to \infty} \frac{e(e-1)x^{e-2}}{e^x}= \lim_{x \to \infty} \frac{e(e-1)(e-2)x^{e-3}}{e^x}=e(e-1)(e-2)\lim_{x \to \infty} \frac{1}{x^{3-e}e^x}=0$$
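
For intuition (not part of the proof), a tiny numerical check in Python shows how fast the ratio collapses:

import math

for x in (10.0, 20.0, 50.0, 100.0):
    print(x, x**math.e / math.exp(x))   # the ratio shrinks rapidly towards 0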


abstract algebra - What do the elements of the field $\mathbb{Z}_2[x]/(x^4+x+1)$ look like? What is its order?



Background: I'm looking at old exams in abstract algebra. The factor ring described was described in one question and I'd like to understand it better.



Question: Let $F = \mathbb{Z}_2[x]/(x^4+x+1)$. As the polynomial $x^4+x+1$ is irreducible over $\mathbb{Z}_2$, we know that $F$ is a field. But what does it look like? By that I am asking if there exists some isomorphism from $F$ into a well-known field (or where it is straightforward to represent the elements) and about the order of $F$.



In addition: is there something we can in general say about the order of fields of the type $\mathbb{Z}_2[x]/p(x)$ (with $p(x)$ being irreducible in $\mathbb{Z}_2[x]$)?


Answer




The elements of $F$ are $\{ f(x) + (x^4 + x + 1) \mid f(x) \in \mathbb{Z}_2[x], \deg f < 4 \}$. There are $2^4$ of them. Any field of order $2^4$ is isomorphic to $F$.



In general, if $p(x) \in \mathbb{Z}_2[x]$ is irreducible of degree $k$, then $\mathbb{Z}_2[x]/(p(x))$ is a field of order $2^k$.



There is a notation that makes this field more convenient to work with. Let $\alpha = x + (x^4 + x + 1) \in F$. Then for $f(x) \in \mathbb{Z}_2[x]$, $f(\alpha) = f(x) + (x^4 + x + 1)$. So, for example, we can write the element $x^2 + 1 + (x^4 + x + 1)$ as $\alpha^2 + 1$. In this notation,



$$F = \{ f(\alpha) \mid f(x) \in \mathbb{Z}_2[x], \deg f < 4 \}.$$
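
Concretely, arithmetic in $F$ is just polynomial arithmetic over $\mathbb{Z}_2$ reduced modulo $x^4+x+1$. Here is a minimal Python sketch (illustrative only; elements are encoded as 4-bit integers whose bit $i$ is the coefficient of $\alpha^i$, and the name gf16_mul is mine). It also lists the powers of $\alpha$; since $x^4+x+1$ happens to be a primitive polynomial, they run through all $15$ nonzero elements.

def gf16_mul(a, b):
    # multiply the two polynomials over Z_2 (carry-less multiplication)
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    # reduce modulo x^4 + x + 1, i.e. binary 10011
    for deg in range(6, 3, -1):
        if prod & (1 << deg):
            prod ^= 0b10011 << (deg - 4)
    return prod

alpha = 0b0010                      # the coset of x
print(gf16_mul(0b1000, alpha))      # alpha^3 * alpha = alpha^4 = alpha + 1, printed as 3

p, powers = 1, []
for _ in range(15):
    p = gf16_mul(p, alpha)
    powers.append(p)
print(sorted(powers))               # [1, 2, ..., 15]: every nonzero element is a power of alpha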



An isomorphic field is the nimber field of nimbers less than 16. The representation of the elements is simpler, but I'm finding nim-multiplication to be harder than polynomial multiplication (maybe there's a trick to it that I don't know).


calculus - Limit of $\dfrac{\sin(\theta)}{\theta}$ in degrees

What does $\lim \limits_{\theta\to0}\dfrac{\sin(\theta)}{\theta}$ equal when $\theta$ is expressed in degrees?



I know that $\theta$ degrees is $\frac{\pi}{180}\theta$ radians, but I don't get the final answer of $0.01745$.

Saturday 24 September 2016

real analysis - Construction of a function which is not the pointwise limit of a sequence of continuous functions



This is somewhat linked to a prior question of mine which was looking to see if a proof of mine regarding the Dirichlet function was correct (it wasn't). I now have an answer to the question which can be answered without using directly using the Baire category theorem or the likes; as it is sufficiently different to my original approach, I feel that a new question would be the best way of going about this.



The question is to




Construct a function $f:[0,1] \rightarrow \mathbb{R}$ which is not the pointwise limit of any sequence $(f_n)$ of continuous functions





to which I will have an answer below (while I didn't come up with much of this myself, I feel it's an interesting result to discuss).



Finally, as a warning to those who are seeing this as a result of searching for answers to their example sheet/homework questions, please have a think about the question before reading the answer below.


Answer



See this answer for examples. (A function is in Baire class one iff it is the pointwise limit of continuous functions, in Baire class two iff it is the pointwise limit of Baire class one functions, etc. The answer shows "natural" examples of functions in Baire class two but not Baire class one.)



One can in fact do better, and show that the sequence of Baire classes is rather long (it has length $\omega_1$, the first uncountable ordinal). A high level sketch of this fact uses some ideas of descriptive set theory. I follow here A.C.M. van Rooij, and W.H. Schikhof, A second course on real functions, Cambridge University Press, 1982. Stronger results can be found in A. Kechris, Classical descriptive set theory, Springer, 1995.



First, given a class $\mathcal A$ of functions on $\mathbb R$, define $\mathcal A^*$ as the class of pointwise limits of functions from $\mathcal A$, so if $\mathcal A$ is the class $\mathcal B^0$ of continuous functions (that is, Baire class zero functions), then $\mathcal A^*=\mathcal B^1$ is the the class of Baire class one functions, $(\mathcal A^*)^*=\mathcal B^2$ is the class of Baire class two functions, etc.




The Borel measurable functions are the closure of the continuous functions under the operation of taking pointwise limits.



Call a function $F:\mathbb R^2\to\mathbb R$ a catalogue of $\mathcal A$ iff $F$ is Borel measurable and for every $f\in\mathcal A$ there is an $s\in[0,1]$ such that $f(x)=F(x,s)$ for all $x$. (We can think of $s$ as a "code" for $f$.) Note that we are not requiring that for each $s\in[0,1]$, the function $f(x)=F(x,s)$ be in $\mathcal A$.




Theorem.




  1. If $\mathcal A_1,\mathcal A_2,\dots$ have catalogues, then so does $\bigcup_n \mathcal A_n$.


  2. If $\mathcal A$ has a catalogue, so does $\mathcal A^*$.



  3. The class $\mathcal B^0$ of continuous functions has a catalogue.


  4. The class of Borel measurable functions has no catalogue.


  5. If $\mathcal A$ is a class of functions that contains all the continuous functions and has a catalogue, then $\mathcal A^*\ne\mathcal A$.





The proof of item 4. is a diagonal argument: If $F:\mathbb R^2\to\mathbb R$ is a catalogue for the Borel measurable functions $f:\mathbb R\to\mathbb R$, then $g(x)=F(x,x)$ and $1+g$ are Borel measurable. So there must be $s$ such that $1+g(x)=F(x,s)$ for all $x$, in particular for $x=s$, so $1+g(s)=g(s)$, a contradiction.



Item 5. follows, because if $\mathcal A$ contains the continuous functions, and $\mathcal A=\mathcal A^*$, then $\mathcal A$ contains the Borel measurable functions. If $\mathcal A$ admits a catalogue, then so would the subclass of Borel measurable functions, but we just proved this is not the case.




For item 1, suppose $F_n$ is a catalogue of $\mathcal A_n$ for all $n$. The functions $(x,y,z)\mapsto F_n(x,y)$ and $(x,y,z)\mapsto\chi_{\{1/n\}}(z)$ are Borel measurable functions on $\mathbb R^3$, but then so is
$$ H(x,y,z)=\sum_n F_n(x,y)\chi_{\{1/n\}}(z). $$
Recall now that there are continuous functions $p_1,p_2:\mathbb R\to[0,1]$ such that if $p(x)=(p_1(x),p_2(x))$, then the restriction of $p$ to $[0,1]$ is a surjection of $[0,1]$ onto $[0,1]^2$.
Let $$ F(x,s)=H(x,p_1(s),p_2(s)). $$
Then $F$ is a catalogue for $\bigcup_n\mathcal A_n$.



For item 2., we can similarly find continuous functions $p_1,p_2,\dots$ such that for any $y_1,y_2,\dots\in[0,1]$ there is an $x\in[0,1]$ such that $p_i(x)=y_i$ for all $i$. Now define
$$ F^*(x,s)=\left\{\begin{array}{cl}\lim_n F(x,p_n(x))&\mbox{ if the limit exists,}\\ 0&\mbox{ otherwise.}\end{array}\right. $$
If $F$ is Borel measurable, so is $F^*$, and it is easy to see that if $F$ is a catalogue for $\mathcal A$, then $F^*$ is a catalogue for $\mathcal A^*$.




Finally, we reach item 3., the crux of the matter. We want to show that the class of continuous functions has a catalogue. For this, note first that $\{f\}$ has a catalogue for each continuous $f$. It follows that the class $\mathcal P$ of polynomials with rational coefficients has a catalogue, by item 1. But then, by item 2, so does $\mathcal P^*$. Now we can use Weierstrass approximation theorem to see that $\mathcal P^*$ contains all the continuous functions, and we are done.






To close, note that if the goal is simply to show that there are functions not in $\mathcal B^1$ (rather than to display the long Baire hierarchy, or to indicate explicit examples as in the link above), then a simple cardinality argument is possible: There are $\mathfrak c=2^{\aleph_0}=|\mathbb R|$ many continuous functions: There can be no less, because the constant functions are continuous, and we do not have more, because a continuous function is determined by its values on the countable set of rationals. Now, there are only $$|\mathbb R^{\mathbb Q}|=|\mathbb R|^{|\mathbb N|}=(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}=\mathfrak c$$ many functions from the rationals to the reals, and the continuous functions correspond to a (proper) subset of them.



Finally, if a function is a pointwise limit of continuous functions, then it is determined by a countable sequence of such functions, and there are, again, only $\mathfrak c^{\mathbb N}=\mathfrak c$ many such sequences. So $|\mathcal B^1|=\mathfrak c$. But there are $\mathfrak c^{\mathfrak c}>\mathfrak c$ many functions from $\mathbb R$ to itself, so not all of them can be in $\mathcal B^1$.



(Continuing this argument further, we see that, even if we take all the Baire functions together, not just those of class $1$, we still only have $\mathfrak c$ many functions, so there are functions that are not in any of the Baire classes.)


Solving a Diophantine equation of the form $x^2 = ay^2 + byz + cz^2$ with the constants $a, b, c$ given and $x,y,z$ positive integers



Is there any procedure for determining if an infinite amount of solutions exist for an equation of the type $x^2 = ay^2 + byz + cz^2$ for arbitrary integer constants $a, b, c$ and variables $x, y, z \in \mathbb{Z_+}$? If not, does knowing at least one non-trivial solution of the equation help determine if an infinite amount of solutions exist?



Example (if it helps): let $x^2 = 202 y^2+14yz+9z^2$. Here one solution is $y=z=1$ and $x=15$, do there exist infinitely many other solutions?


Answer



TLDR; If there exists one nonzero solution, then there exist infinitely many solutions, parametrized by $\Bbb{Z}^2$. Popular candidates to test are triplets $(0,y,z)$ where $y\mid c$ and $z\mid a$, though such solutions need not exist.







Every integral solution to your equation with $x\neq0$ yields a rational solution to
\begin{equation}aY^2+bYZ+cZ^2=1,\end{equation}
by setting $Y:=\tfrac yx$ and $Z:=\tfrac zx$. Conversely, every rational solution to this equation yields an integral solution to your equation by multiplying out the denominators. This also shows that multiplying an integral solution through by an integer yields another integral solution.



Given a nonzero rational solution $(Y_0,Z_0)$ to your equation, also
$$(aY_0+bZ_0,-aZ_0),$$
is a rational solution, and moreover for every $k\in\Bbb{Z}$ also
$$\left((aY_0+bZ_0)k^2+2cZ_0k-cY_0,-aZ_0k^2+2cY_0k+bY_0+cZ_0\right),$$
is a rational solution. These are in fact all rational solutions. This then in turn yields all integral solutions, except those with $x=0$. For these we have

$$Y=\frac{-b\pm\sqrt{b^2-4ac}}{2a}Z,$$
so such solutions exist if and only if $\frac{-b\pm\sqrt{b^2-4ac}}{2a}$ are rational, i.e. if and only if $ax^2+bx+c$ has a rational root, which is easy to test.
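
Independently of the parametrization above, a quick brute-force search on the example from the question (a Python sketch; the search bound $200$ is arbitrary) turns up further primitive solutions with positive $y,z$ besides $(x,y,z)=(15,1,1)$, for instance $(595,11,183)$:

import math

a, b, c = 202, 14, 9
for y in range(1, 200):
    for z in range(1, 200):
        if math.gcd(y, z) != 1:          # keep only primitive (y, z)
            continue
        n = a*y*y + b*y*z + c*z*z
        x = math.isqrt(n)
        if x*x == n:
            print((x, y, z))             # e.g. (15, 1, 1), (595, 11, 183), ...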


calculus - $ \lim_{x \to \infty} \frac {e^{x/2}}{2^x} $



How do I find the following limit:



$$ \lim_{x \to \infty} \frac {e^{x/2}}{2^x} $$



Because with L'Hôpital's rule it is impossible to tell, as the $n$th derivatives of both $ {e^{x/2}} $ and $ {2^x}$ still tend to infinity...
I could use a hint that helps me think this through.


Answer






$$\large\frac{e^{x/2}}{2^x}=\frac{e^{x/2}}{e^{x\ln(2)}}=e^{x\left[\frac12-\ln(2)\right]}\to\ ...$$


gcd and lcm - Suppose $b,c \in \textbf Z^+$ are relatively prime (i.e., $\gcd(b,c) = 1$), and $a \,|\, (b+c)$. Prove that $\gcd(a,b) = 1$ and $\gcd(a,c) = 1$

Suppose $b,c \in \textbf Z^+$ are relatively prime (i.e., $\gcd(b,c) = 1$), and $a \,|\, (b+c)$. Prove that $\gcd(a,b) = 1$ and $\gcd(a,c) = 1$.



I've been trying to brainstorm how to prove this. I have determined that $\gcd(b, b + c) = 1$, but I am not sure if this fact will aid in proving this statement at all.

Friday 23 September 2016

sequences and series - $\lim\limits_{n\to\infty} \frac{n}{\sqrt[n]{n!}} =e$





I don't know how to prove that
$$\lim_{n\to\infty} \frac{n}{\sqrt[n]{n!}} =e.$$
Are there other different (nontrivial) nice limit that gives $e$ apart from this and the following
$$\sum_{k = 0}^\infty \frac{1}{k!} = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n = e\;?$$


Answer




In the series for $$e^n=\sum_{k=0}^\infty \frac{n^k}{k!},$$
the $n$th and biggest(!) of the (throughout positive) summands is $\frac{n^n}{n!}$.
On the other hand, all summands can be estimated as
$$ \frac{n^k}{k!}\le \frac{n^n}{n!}$$
and especially those
with $k\ge 2n$ can be estimated
$$ \frac{n^k}{k!}<\frac{n^{k}}{(2n)^{k-2n}\cdot n^{n}\cdot n!}=\frac{n^{n}}{n!}\cdot \frac1{2^{k-2n}}$$
and thus we find
$$\frac{n^n}{n!}<e^n\le (2n+1)\,\frac{n^n}{n!}+\frac{n^n}{n!}\sum_{k\ge 2n}\frac1{2^{k-2n}}=(2n+3)\,\frac{n^n}{n!}.$$
Taking $n$th roots we find

$$ \frac n{\sqrt[n]{n!}}\le e\le \sqrt[n]{2n+3}\cdot\frac n{\sqrt[n]{n!}}.$$
Because $\sqrt[n]{2n+3}\to 1$ as $n\to \infty$, we obtain $$\lim_{n\to\infty}\frac n{\sqrt[n]{n!}}=e$$
from squeezing.
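
A quick numerical illustration of this limit (a Python sketch that uses $\log n!=\texttt{lgamma}(n+1)$ to avoid huge integers) shows the rather slow convergence towards $e\approx 2.71828$:

import math

for n in (10, 100, 1_000, 10_000, 100_000):
    # n / (n!)^(1/n), computed via logarithms
    print(n, n / math.exp(math.lgamma(n + 1) / n))
print("e =", math.e)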


calculus - Does the series converge: $\sum_2^\infty\frac{\log n}{n(\log \log n)^2}$



Does the series $$\sum_2^\infty\frac{\log n}{n(\log \log n)^2}$$
converge?



The root and ratio tests are inconclusive, the integral test may be too difficult to apply. I've tried the limit comparison test with $\log n/n$ but that is also inconclusive since $$\frac{\frac{\log n}{n(\log \log n)^2}}{\frac{\log n}{n}}=\frac{1}{(\log \log n)^2}\to 0.$$


Answer



Using the Cauchy condensation test (cf. Wikipedia), your series diverges.
The transformed series reads:
$$ \sum_{n} 2^n \frac{n}{2^n \log^2 n} = \sum_{n}\frac{n}{\log^2 n}$$




Remark: I interpreted $\log$ as a base-two logarithm, which does not affect the result.


infinity - Why does Michio Kaku say that $\frac{1}{0} = \infty$?

Why does Michio Kaku say that $\frac{1}{0} = \infty$?



http://youtu.be/AJ4zlvqOtE8?t=4m43s




Instead of saying that $\frac{1}{0}$ is not defined, so we don't know?

calculus - proving $\lim\limits_{x\to \infty }\frac{x^2}{x^2+\sin^2 x}=1$



I have to prove the following equation for homework
$$\lim_{x\to \infty }\frac{x^2}{x^2+\sin^2 x}=1$$



The proof must be done by showing that for every $\epsilon > 0$ there exists an $M > 0$ such that for every $x > M$, $|f(x)-1| < \epsilon$ holds.



I can't seem to figure this one out.




I would greatly appreciate anyone who tries to help me out :) Thanks


Answer



$$\left| \frac{x^{2}}{x^{2}+\sin^{2}x}-1 \right| = \frac{\sin^{2}x}{x^{2}+\sin^{2}x} \leq \frac{1}{x^2}$$



Now requiring $\frac{1}{x^2} \leq \epsilon$, i.e. taking $M = \frac{1}{\sqrt{\epsilon}}$, gives you the $M$.
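
To see the definition in action, here is a tiny numerical sketch in Python (illustrative only): pick an $\epsilon$, take $M=1/\sqrt{\epsilon}$ as above, and check that $|f(x)-1|<\epsilon$ at a few sample points $x>M$.

import math

def f(x):
    return x*x / (x*x + math.sin(x)**2)

eps = 1e-4
M = 1 / math.sqrt(eps)              # M = 100 for this epsilon
for x in (M + 1, 2*M, 10*M, 1000*M):
    print(x, abs(f(x) - 1), abs(f(x) - 1) < eps)   # all True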


elementary set theory - Bijection between $[0,1]$ and $[0,1]\times[0,1]$

I know that $|\mathbb R|=|\mathbb R\times\mathbb R|$, and that $|[0,1]|=|\mathbb R|$, which suggests that $|[0,1]|=|[0,1] \times [0,1]|$ but I would like to know a bijection between the interval and square.

limits - Find $\lim_{x\rightarrow -\infty } \frac{3x^{2}-\sin(5x)}{x^{2}+2}$





$\lim_{x\rightarrow -\infty } \frac{3x^{2}-\sin(5x)}{x^{2}+2}$



A. 0



B. 1



C. 2



D. 3





After using L'hopital's rule again and again I got this expression:



$$\lim_{x\rightarrow -\infty } \frac{6+25\sin(5x)}{2}$$



But how do we proceed, what is the value of $\sin(-\infty)$?



Any help will be appreciated!


Answer



Applying L'Hôpital's rule for the first time gives,




$$\lim_{x \to -\infty} \frac{6x-5\cos (5x)}{2x}$$



We can try to apply it again, but it won't be useful: $\lim_{x \to -\infty} \sin (x)$ does not exist, because $\sin x$ keeps oscillating between $1$ (at $\frac{\pi}{2}+2\pi k$) and $-1$ (at $\frac{3\pi}{2}+2\pi k$), so L'Hôpital's rule will not give a sensible answer.



Instead we may try a more elementary approach and write it as follows.



$$\lim_{x \to -\infty} \frac{6-5\frac{\cos(5x)}{x}}{2}$$



At this point note that the quantity $\frac{\cos (5x)}{x}$ goes to zero, because the numerator is bounded between $-1$ and $1$ while the denominator tends to $-\infty$.




So the answer is $3$.


Thursday 22 September 2016

discrete mathematics - Trouble understanding algebra in induction proof



I'm on hour 20 of studying for the discrete math midterm tomorrow, and I've got to be honest I'm a little panicked. In particular I'm having trouble with induction proofs, not because I don't understand the proofs but because of the algebra.




Here is an example of a problem/solution my professor put up:



6. Give an induction proof to prove the quantified predicate, 
(All n, n >= 1, 1/2 + 1/2^2 + ... + 1/2^n = (2^n - 1)/2^n)

Sol:
Let P(n) be the predicate
1/2 + 1/2^2 + ... + 1/2^n = (2^n - 1)/2^n


Base Case: n=1:
P(1) <=> 1/2 = (2-1)/2 <=> 1/2 = 1/2 <=> T

Induction Step:
(All n, n >= 1, P(n) => P(n+1))

P(n)
=> {definition of P(n)}
1/2 + 1/2^2 + ... + 1/2^n = (2^n - 1)/2^n
=> {add 1/2^(n+1) to both sides of "="}

1/2 + 1/2^2 + ... + 1/2^(n+1) = (2^n - 1)/2^n + 1/2^(n+1)
=> {arithmetic}
1/2 + 1/2^2 + ... + 1/2^(n+1) = (2*(2^n - 1) + 1)/2^(n+1)
=> {arithmetic}
1/2 + 1/2^2 + ... + 1/2^(n+1) = (2^(n+1) - 1)/2^(n+1)
=> {definition of P(n+1)}
P(n+1)


After step two of the induction step I get confused about these transformations on the right hand side:




$$(2^n - 1)/2^n + 1/2^{n+1}$$
$$(2\cdot(2^n - 1) + 1)/2^{n+1}$$
$$(2^{n+1} - 1)/2^{n+1}$$



I'll admit I'm not the best at math, but I really need to do well in this class. Could someone explain the logic of the transformations to me? I know the point is to replace the 'n' with 'n+1', but I don't see the logic of him multiplying by two in the second transformation.



Thanks!


Answer



So we're starting with this:




$$\frac{2^n - 1}{2^n} + \frac{1}{2^{n+1}}.$$



Note that the denominators are not the same. They need to be the same in order to combine them to get to the next step. To do this we multiply the numerator and denominator of the first term by $2$. This makes the denominators the same, because $2^n \cdot 2 = 2^{n+1}$:



$$\frac{2 \cdot \left(2^n - 1\right)}{2^{n+1}} + \frac{1}{2^{n+1}}.$$



This gets us to the next expression:



$$\frac{2 \cdot \left(2^n - 1\right) + 1}{2^{n+1}}.$$




Finally, we multiply the $2$ through in the numerator, and combine terms:



$$2 \cdot \left(2^n - 1\right) + 1 = 2 \cdot 2^n - 2 + 1 = 2^{n+1}-1,$$



which gives you the last expression.
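
If it helps to build confidence, the identity itself is easy to sanity-check with exact rational arithmetic, e.g. with Python's fractions module (a sketch, not required for the proof):

from fractions import Fraction

for n in range(1, 11):
    lhs = sum(Fraction(1, 2**k) for k in range(1, n + 1))   # 1/2 + 1/4 + ... + 1/2^n
    rhs = Fraction(2**n - 1, 2**n)
    print(n, lhs, lhs == rhs)                               # True for every n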



Hope that helps! Good luck on your exam.


calculus - Radius of convergence of $\sum_{n=1}^\infty(n!)^2x^{n^2}$ and $\sum_{n=1}^\infty \frac {x^{n^2}}{n!}$



This is a question from an exam I recently failed. 😢



What is the radius of convergence of the following power series? $$(a) \sum_{n=1}^\infty(n!)^2x^{n^2}$$ and $$(b) \sum_{n=1}^\infty \frac {x^{n^2}}{n!}$$




Edit: Here's my attempt at the first one, if someone could tell me if it's any good...



$\sum_{n=1}^\infty(n!)^2x^{n^2}$=$\sum_{n=1}^\infty a_nx^n$ where $a_n= 0$ for any $n\notin \{k\in N| \exists t\in N: k=t^2\}$ and $a_n = (n!)^2$ else. So the radius of convergence would be the inverse of $\lim_{n\rightarrow \infty}{(n!)^{2/n}}=\lim e^{2/n\cdot log(n!) }$. The exponent with log of factorial becomes a series, $\sum_{n=1}^{\infty} \frac{logn}{n}$ which diverges by comparison test with $\frac{1}{n}$, so the radius of convergence would be equal to $0$.



Second edit: This is wrong. Correct answer below.


Answer



Let's consider the first series $\sum_{n=1}^{\infty} (n!)^2 x^{n^2}$. The easiest way to find the radius of convergence is to forget this is a power series and treat $x$ as a constant. Let's assume $x > 0$ so that the terms of the series are positive and we can use any test we wish for the convergence/divergence of a non-negative series. Alternatively, replace $x$ with $|x|$ and check for absolute convergence instead. Using the ratio test, we get the expression
$$ \frac{(n+1)!^2 x^{(n+1)^2}}{(n!)^2 x^{n^2}} = (n+1)^2 x^{2n + 1}. $$
This expression converges to $0$ if $0 < x < 1$, while it diverges if $x \geq 1$. This implies that the series converges if $0 < x < 1$ and diverges if $x \geq 1$. But this is a power series, so the only possible value for the radius of convergence is $R = 1$ (because it should converge when $|x| < R$ and diverge when $|x| > R$).
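
As a purely numerical illustration (not a proof), one can look at the logarithm of the general term, $2\log n!+n^2\log x$: for $x$ slightly below $1$ it eventually tends to $-\infty$ (so the terms die off), while for $x\ge1$ it grows without bound, consistent with $R=1$. A Python sketch:

import math

def log_term(n, x):
    # log of (n!)^2 * x^(n^2)
    return 2 * math.lgamma(n + 1) + n * n * math.log(x)

for x in (0.9, 1.0, 1.1):
    print(x, [round(log_term(n, x)) for n in (10, 50, 100, 200)])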



elementary number theory - Criterion for sum/difference of unit fractions to be in lowest terms




Pick two nonzero integers $a$ and $b$, so $(a,b)\in (\mathbb{Z}\setminus\{0\})\times(\mathbb{Z}\setminus\{0\})$. We want to add the fractions $1/a$ and $1/b$ and use the standard algorithm. First carefully find the least common multiple of $a$ and $b$ (it is only well-defined up to a sign but that will not be important). Then convert each of the two fractions to get this common number as their denominators. Finally, add the two new numerators and keep the denominator.



For the scope of this question, let's call $(a,b)$ "easy" iff the resulting sum fraction is already in lowest terms (irreducible fraction).



Examples: If $a=12$ and $b=16$, then



$$\frac{1}{12}+\frac{1}{16}=\frac{4}{48}+\frac{3}{48}=\frac{7}{48}$$



so $(12,16)$ is "easy". On the other hand, since




$$\frac{1}{10}+\frac{1}{15}=\frac{3}{30}+\frac{2}{30}=\frac{5}{30}$$



the pair $(10,15)$ is not "easy".



The question: Is there an equivalent or simpler way to define this "easiness"? For example in terms of the signs and prime factorizations of $a$ and $b$? Does this property already have a conventional name?



Note that signs seem to matter since $(3,6)$ is not easy while $(3,-6)$ is.


Answer



Let $\gcd(a,b)=d$, and write $a=da', b=db'$, where $\gcd(a',b')=1$. Then you have $$\frac{1}{a}+\frac{1}{b}=\frac{1}{da'}+\frac{1}{db'}=\frac{b'+a'}{da'b'}$$




Now, $\gcd(a'+b',a'b')=1$ because if prime $p|\gcd(a'+b',a'b')$ then $p|a'b'$ then wlog $p|a'$; but then $p|(a'+b')$ and hence $p|b'$, a contradiction. Hence the sum is "easy" exactly when $$\gcd(a'+b',d)=1$$



In the example $a=10, b=15$, we have $a'=2, b'=3, d=5$ and $a'+b'=5$.



In the example $a=3, b=\pm 6$, we have $a'=1, b'=\pm 2, d=3$, and $a'+b'$ is $3$ or $-1$.
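
A small Python sketch (illustrative only; the function names are mine) comparing the direct definition of "easy" with the criterion $\gcd(a'+b',d)=1$ on the pairs mentioned above:

from math import gcd

def easy(a, b):
    # add 1/a and 1/b over the lcm and test whether the result is already in lowest terms
    d = gcd(a, b)
    L = abs(a * b) // d                # least common multiple (up to sign)
    num = L // a + L // b              # numerator over the common denominator L
    return gcd(abs(num), L) == 1

def criterion(a, b):
    d = gcd(a, b)
    return gcd(abs(a // d + b // d), d) == 1

for pair in [(12, 16), (10, 15), (3, 6), (3, -6)]:
    print(pair, easy(*pair), criterion(*pair))   # the two columns always agree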


Uniqueness of Pexider's functional equation



Let $f:\mathbb{R}\rightarrow\mathbb{R}$, $g:\mathbb{R}\rightarrow\mathbb{R}$, and $h:\mathbb{R}\rightarrow\mathbb{R}$ and consider Pexider's equation,
$$
f(x) + g(y) = h(x + y) \qquad \qquad (1)
$$
where $f$, $g$ and $h$ are unknown. I assume (for simplicity) that $f$, $g$, and $h$ are twice continuously differentiable. Then, we can find a solution by differentiating with respect to $x$ and $y$,
$$
h''(x+y)=0

$$
Integrating out, substituting back into $(1)$ and equating coefficients we get,
$$
h(z)=c_1z +c_2 + c_3 \qquad f(x)=c_1x +c_2 \qquad g(y)=c_1y +c_3
$$
for arbitrary constants $c_1$, $c_2$ and $c_3$.



I have now found a solution to $(1)$. How do I prove that this solution is unique? It seems immediate, but can't quite convince myself.


Answer



Suppose $h$ is twice continuously differentiable. Therefore,




$h''(x+y)=0$ if and only if $h(x+y)=(x+y)k+l$, where $k,l$ are constants (1)



Since $f(x)+g(y)=h(x+y)$, taking the mixed partial derivative of both sides gives
$0=\frac{\partial^2}{\partial x\,\partial y}\bigl(f(x)+g(y)\bigr)=\frac{\partial^2}{\partial x\,\partial y}\,h(x+y)=h''(x+y)$ (2)



Therefore by (1) and (2) we have $f(x)+g(y)=(x+y) k+l$, and since $x,y$ are arbitrary, $f,g$ have to be linear functions.



So, the solution is unique in this sense.
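
For what it's worth, both ingredients can be checked symbolically; a small sketch assuming SymPy is available: it solves $h''=0$ and verifies that the resulting affine family does satisfy Pexider's equation.

import sympy as sp

# step (1): h'' = 0 forces h to be affine
t = sp.symbols('t')
h = sp.Function('h')
print(sp.dsolve(sp.Eq(h(t).diff(t, 2), 0), h(t)))        # h(t) = C1 + C2*t

# the family f(x) = c1 x + c2, g(y) = c1 y + c3, h(z) = c1 z + c2 + c3 solves the equation
x, y, c1, c2, c3 = sp.symbols('x y c1 c2 c3')
print(sp.simplify((c1*x + c2) + (c1*y + c3) - (c1*(x + y) + c2 + c3)))   # 0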


Prove the inequality $n! \geq 2^n$ by induction



I'm having difficulty solving an exercise in my course.



The question is:




Prove that $n!\geq 2^n$.





We have to do this by induction. I started like this:




  1. The lowest natural number where the assumption is correct is $4$ as: $4! \geq 2^4 \iff 24 \ge 16$.

  2. The assumption is: $n! \ge 2^n$.



Now proof for $(n+1)$ which brings me to: $(n+1)! \ge 2^{(n+1)}$



I think I can rewrite it somehow like this:




$$ {(n+1)} \times {n!} \stackrel{\text{(definition of factorial)}}{\ge} 2^n \times 2 $$



$$ (n+1) \times 2^n \ge 2^n \times 2 $$



Then I think I can eliminate the $2^n$ and have something like this: $n+1 \ge 2$, or $n \ge 1$.



But I think I'm wrong here some where and was hoping somebody has some advice on this. How can I prove the above assumption?



Any help would be appreciated, kind regards.



Answer



In the induction step you want to show that if $k!\ge 2^k$ for some $k\ge 4$, then $(k+1)!\ge 2^{k+1}$. Since you already know that $4!\ge 2^4$, the principle of mathematical induction will then allow you to conclude that $n!\ge 2^n$ for all $n\ge 4$. You have all of the necessary pieces; you just need to put them together properly. Specifically, you can argue as follows.



Suppose that $k!\ge 2^k$, where $k\ge 4$; this is your induction hypothesis. Then $$\begin{align*}
(k+1)! &= (k+1)k!\text{ (by the definition of factorial)}\\
&\ge (k+1)2^k\text{ (by the induction hypothesis)}\\
&> 2\cdot2^k\text{ (since }k\ge 4\text{)}\\
&= 2^{k+1}.
\end{align*}$$ This completes the induction step: it shows that if $k\ge 4$, then $$k!\ge 2^k \implies (k+1)!\ge 2^{k+1}.$$


abstract algebra - Generalized Rationalization in Finite Radical Field Extensions

In the square root case of a radical extension of, say, $\mathbb{Q}$, we have that $\mathbb{Q}(\sqrt{2}) = \{a + b \sqrt{2} | a, b \in \mathbb{Q} \}$.



The only semi-hard axiom to prove is that inverses exist. We reason that this is a field because the inverse of $a + b \sqrt{2}$ can be found by rationalizing the denominator. Specifically:



$$ \frac{1}{a + b \sqrt{2}} = \frac{a - b \sqrt{2}}{a^2 - 2b^2} = \frac{a}{a^2 - 2b^2} - \frac{b}{a^2 - 2b^2} \sqrt{2},$$ which is clearly of the form $x + y \sqrt{2}$, for $x, y \in \mathbb{Q}$, by the closure of the rationals on arithmetic operations.



Let's say we want to consider $\mathbb{Q}(\sqrt{2}, \sqrt[3]{2})$. My intuition says that this should be something similar -- at least we know that clearly $\{a + b \sqrt{2} + c \sqrt[3]{2} | a, b, c \in \mathbb{Q} \} \subseteq \mathbb{Q}(\sqrt{2}, \sqrt[3]{2})$ (though I'm not sure exactly).




I'm just learning field extensions, so I'm not sure if this is right, but I believe that you can say $[\mathbb{Q}(\sqrt{2}, \sqrt[3]{2}): \mathbb{Q}]= 2 \cdot 3$ because the degree of the square root extension is 2, and then you can show that $x^3 - 2$, a polynomial of degree 3, is irreducible over $\mathbb{Q}(\sqrt{2})$.



Can we generalize some form of "rationalization" to help us prove that inverses exist "directly" in the way we do from the square root case? At the very least, does the existence of a finite degree field extension prove that algebraic rationalizations of this form exist?

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...