Friday 28 February 2014

summation - What's the formula for the 365 day penny challenge?




Not exactly a duplicate since this is answering a specific instance popular in social media.



You might have seen the viral posts about "save a penny a day for a year and make $667.95!" The mathematicians here already get the concept, while some others may be going, "what?" What the challenge refers to is adding to a jar a number of pennies equal to the day you're on. So:



Day 1 = + .01
Day 2 = + .02
Day 3 = + .03

Day 4 = + .04


So that in the end, you add it all up like so:



1 + 2 + 3 + 4 + 5 + 6 + ... + 365 = 66795


The real question is: what's a simple formula for getting the sum of consecutive integers, starting at 1, without having to actually count it all out?!


Answer




Have had a lot of friends ask about this lately, as it is all over Facebook. The formula is actually quite simple:



(N (N + 1)) / 2, where N = highest value


Or simply $\frac {n(n+1)}{2}$



Thus



(365 (365 + 1)) / 2 = 66795



Divide that by 100 (because there are 100 pennies in a dollar) and voilà! $667.95
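As a quick sanity check, here is a small Python sketch (the function names are my own) comparing the brute-force sum against the closed form:

```python
def penny_total(days):
    """Brute force: add 1 + 2 + ... + days pennies, reported in dollars."""
    return sum(range(1, days + 1)) / 100

def penny_total_formula(days):
    """Same amount via the closed form n(n+1)/2, in dollars."""
    return days * (days + 1) // 2 / 100

print(penny_total(365))          # 667.95
print(penny_total_formula(365))  # 667.95
```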



Now, this is OLD math (think 6th century BC), wherein these results are referred to as triangular numbers, in part because as you add them up, you can stack the results in the shape of a triangle!



1 = 1
*

1 + 2 = 3
*
* *

1 + 2 + 3 = 6
*
* *
* * *

1 + 2 + 3 + 4 = 10
*
* *
* * *
* * * *







NoChance also has a fun story and answer to this question!



A little info on his lesson: -{for the super nerdy!}-

"...Carl Friedrich Gauss is said to have found this relationship in his early youth, by multiplying $n/2$ pairs of numbers in the sum by the values of each pair $n+1$. However, regardless of the truth of this story, Gauss was not the first to discover this formula, and some find it likely that its origin goes back to the Pythagoreans in the 5th century BC..." - wikipedia

"...The mathematical study of figurate numbers is said to have originated with Pythagoras, possibly based on Babylonian or Egyptian precursors. Generating whichever class of figurate numbers the Pythagoreans studied using gnomons is also attributed to Pythagoras. Unfortunately, there is no trustworthy source for these claims, because all surviving writings about the Pythagoreans are from centuries later. It seems to be certain that the fourth triangular number of ten objects, called tetractys in Greek, was a central part of the Pythagorean religion, along with several other figures also called tetractys. Figurate numbers were a concern of Pythagorean geometry..." - wikipedia







See? Fun stuff, numbers!


real analysis - Show $x_1=2,x_{n+1}=\frac{x_n^2+2}{2x_n}\ge\sqrt{2}$

I want to show that $x_1=2,x_{n+1}=\frac{x_n^2+2}{2x_n}$ is a convergent sequence and thus want to bound it below by $\sqrt{2}$. I tried to use induction, but my problem is that if $x_n>\sqrt{2}$ then the numerator $x_n^2+2$ is greater than $4$ while the factor $\frac{1}{2x_n}$ is less than $\frac{1}{2\sqrt{2}}$, so they compete against each other. How can I solve this issue?

probability - How to give rigorous proofs of these two limit statements?

Let $X$ be a random variable with cumulative distribution function $F(x)$. Then how to rigorously prove the following two limit statements?




  1. $\lim_{x \to - \infty} F(x) = 0$.


  2. $\lim_{x \to + \infty} F(x) = 1$.


abstract algebra - Finding an isomorphism between these two finite fields

Let $\alpha$ be a root of $x^3 + x + 1 \in \mathbb{Z}_2[x]$ and $\beta$ a root of $x^3 + x^2 + 1 \in \mathbb{Z}_2[x]$. Then we know that $$\mathbb{Z}_2(\alpha) \simeq \frac{\mathbb{Z}_2[x]}{(x^3 + x + 1)} \simeq \mathbb{F}_8 \simeq \frac{\mathbb{Z}_2[x]}{(x^3 + x^2 + 1)} \simeq \mathbb{Z}_2(\beta). $$




I need to find an explicit isomorphism $\mathbb{Z}_2(\alpha) \to \mathbb{Z}_2(\beta). $



I was thinking of finding a basis for $\mathbb{Z}_2(\alpha)$ and $\mathbb{Z}_2(\beta)$ over $\mathbb{Z}_2$.



I let $\left\{1, \alpha, \alpha^2\right\}$ and $\left\{1, \beta, \beta^2\right\}$ be these two bases. Now suppose I have a field morphism $$ \phi: \mathbb{Z}_2(\alpha) \to \mathbb{Z}_2(\beta) $$ which maps $1$ to $1$. how can I show that the image of $\alpha$, i.e. $\phi(\alpha)$, completely determines this map?

calculus - Prove: If $\lim_{n\rightarrow\infty}|a_n| = 0$, then $\lim_{n\rightarrow\infty}a_n = 0$




Using the squeeze theorem, prove the following:



If $\lim_{n\rightarrow\infty}|a_n| = 0$, then $\lim_{n\rightarrow\infty}a_n = 0$.




Let $f(x) = |a_n|$. Let $g(x) = a_n$ such that $\forall x : g(x) \leq f(x)$. How do I continue this proof using the squeeze theorem? I want to construct a function $h(x)$ that lies in between those two functions.



Answer



Hint : $-|x|\le x\le |x|{}{}{}{}{}{}$


summation - Sum the series $(1^2+1)1! + (2^2+1)2! + (3^2+1)3! + \cdots + (n^2+1)n!$

Problem:



Sum the series: $$(1^2+1)1! + (2^2+1)2! + (3^2+1)3! + \cdots + (n^2+1)n!$$



Source: a book on algebra. I came across this interesting-looking series and decided to tackle it.




My try :



All I have tried is taking the $r^{th}$ term and summing it, but in vain:
$$ T_r = (r^2+1)r!$$
$$T_r = r^2\cdot r! + r!$$
Now I don't know how to sum either of these terms. I'm not familiar with uni-level math, as I'm a high school student. All help appreciated!

Thursday 27 February 2014

discrete mathematics - Using Direct Proof. $1+2+3+\ldots+n = \frac{n(n + 1)}{2}$




I need help proving this statement. Any help would be great!


Answer



Here is an approach.



$$ s_n =1+2+3+\dots+(n-1)+n \\
s_n =n+(n-1)+(n-2)+\dots+1 . $$



Adding the above gives




$$2s_n = (1+n)+(2+(n-1))+(3+(n-2))+\dots+(1+n) $$



$$ =(1+n)+(1+n)+\dots+(1+n) $$



The above is nothing but adding $(1+n)$ a total of $n$ times, and the result follows



$$ \implies s_n = \frac{n(n+1)}{2}. $$


complex numbers - When does $\sqrt{wz}=\sqrt{w}\sqrt{z}$?



There exists a unique function $\sqrt{*} : \mathbb{C} \rightarrow \mathbb{C}$ such that for all $r \in [0,\infty)$ and $\theta \in (-\pi,\pi]$ it holds that $$\sqrt{r\exp(i\theta)}=\sqrt{r}\exp(i\theta/2),$$



where $\sqrt{r}$ denotes the usual principal square root of a real number $r$.



Let's take this as our definition of the principal square root of a complex number. Thus $i=\sqrt{-1}.$



Now. We know that, for all positive real $w$ and $z$, it holds that $\sqrt{wz}=\sqrt{w}\sqrt{z}$. We also know that this fails for certain complex $w$ and $z$. Otherwise, we'd be allowed to argue as follows:




$$-1 = i\cdot i = \sqrt{-1} \cdot \sqrt{-1} = \sqrt{-1 \cdot -1} = \sqrt{1}=1$$



My question: for which complex $w$ and $z$ does it hold that $\sqrt{wz}=\sqrt{w}\sqrt{z}$?


Answer



The solution to this question requires the definition of the unwinding number. Check the paper The unwinding number by Corless and Jeffrey, SIGSAM Bulletin 116, pp. 28-35.



The unwinding number is defined by
$$\ln(e^z) = z + 2 \pi i \mathcal{K}(z).$$
Obviously, $\mathcal{K}(z) \in \mathbb{Z}$.




For your question, Theorem 5 is the most relevant one, along with the point 1 in the second list in section 5.2:




  1. $\sqrt{zw}$. By theorem (5c) we would expect this to expand to
    $$\sqrt{z}\sqrt{w}e^{\pi i \mathcal{K}(\ln z + \ln w)}$$
    and this would not simplify further unless the assume system knew that $-\pi < \arg z + \arg w \le \pi$, in which case $\mathcal{K}$ would simplify to $0$.



Read the paper for a deeper insight.



analysis - $f:[0,1]\rightarrow [0,1]$ and $f\circ f$ are not identically zero but $f(f(f(x)))=0$ for all $x\in [0,1]$. Does $f$ exist?




Let $f:[0,1]\rightarrow [0,1]$ be a non-identically-zero function such that $f\circ f$ is not identically zero but $f(f(f(x)))=0$ for all $x\in [0,1]$. Does such a function exist?
I am thinking that such a function does not exist, but I don't know how to start.



If $f$ is continuous, then we know it has a fixed point, say $x_0$. If $x_0\neq 0$, this contradicts the assumption that $f(f(f(x)))=0$. Otherwise it does not give us more information about $f$ (even if $f$ assumed to be continuous).



Please give some hints to solve.



Thanks in advance.


Answer



Hint: Split the domain into three intervals of equal length and construct a piecewise function (there does exist such a function that is continuous).
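To make the hint concrete, here is one continuous function of my own choosing that fits the three-interval idea, $f(x)=\max(x-\tfrac13,0)$, checked in Python with exact rational arithmetic so boundary points are handled without floating-point noise:

```python
from fractions import Fraction

def f(x):
    # shift down by 1/3 and clamp at 0; continuous on [0, 1]
    return max(x - Fraction(1, 3), Fraction(0))

xs = [Fraction(i, 1000) for i in range(1001)]   # sample points in [0, 1]
assert any(f(x) > 0 for x in xs)                # f is not identically zero
assert any(f(f(x)) > 0 for x in xs)             # f∘f is not identically zero
assert all(f(f(f(x))) == 0 for x in xs)         # but f∘f∘f vanishes on [0, 1]
```

The point: each application of $f$ pushes values down by $1/3$ (clamped at $0$), so three applications empty out all of $[0,1]$, while one or two do not.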



probability - How to Calculate CDF of a Random Variable that Depends on $\omega$



Suppose that $X$ is a random variable defined as such where $\Omega = (0,1]$



$$X(\omega)=
\begin{cases}
\frac{1}{3}&\, 0 < \omega \leq \frac{1}{3}\\
\omega &\, \frac{1}{3}\leq \omega\leq \frac{2}{3}\\
\omega^2&\, \frac{2}{3} < \omega\leq 1\\
\end{cases}$$



I am not sure how, or if, it is possible to assign a CDF (or PDF) to this random variable. It seems to me that this mapping is not unique. Do we assume that $\omega$ is uniform?



For $E(X)$ I calculated $$E(X)=\int_{0}^{\frac{1}{3}}\frac{1}{3}\,d\omega +\int_\frac{1}{3}^\frac{2}{3}\omega\,d\omega + \int_\frac{2}{3}^1\omega^2\,d\omega$$



Is this correct? Or even going down the right path? Thanks



Answer



First we have to check continuity. As we can see, $X$ is not continuous at $2/3$, so the CDF of this does not exist. And for the expectation you have to integrate $x f(x)$, not only $f(x)$.


Scaling a part of a function without altering the rest of it



The question arises from the need to plot the function



\begin{equation}
I\left(x, y\right) = I_0\operatorname{sinc}^2\left(x\right)\operatorname{sinc}^2\left(y\right).
\end{equation}

where
\begin{equation}
\operatorname{sinc}(x) = \begin{cases}\frac{\sin(\pi x)}{\pi x} &\text{ if } x \neq 0\, \\ 1 &\text{ if } x = 0 \end{cases}
\end{equation}
If you have already seen this function, you know it is highly spiked at the origin of the axes and decays very fast for values greater than 2, as you can see in the plot below



enter image description here



Now I would like to give a nice representation of the function by scaling the central spike (or, equivalently, the rest of the function) in order to make the height of the subsequent interference fringes visible. Initially I thought to scale the plot in the circle around the origin which contains the central spike, but this does not really work well, since it changes the relative scale in a discontinuous way, which results in a bad behaviour in the plot




enter image description here



So I thought that there could be a continuous function which can do the job I want, something for example like
\begin{equation}
{\rm scalingF}\left(x, y\right) \sim A\left(1-e^{-B\left(x^2 + y^2\right)}\right),
\end{equation}



namely the positive opposite of a gaussian, so that the outer part of the plot is stretched out more than the inner part; but unfortunately this method ends up cutting off the highest part, even if the rest of the plot is now precisely how I want it:
\begin{equation}
{\rm scalingF}\left(x, y\right) I\left(x, y\right)

\end{equation}
gives
enter image description here



Does anybody know a way to scale just a part of this function maintaining it continuous and with a much more similar shape to the initial one (meaning that the profiles of the wave should remain the same)?



Thanks for your attention!


Answer



So, I started by considering that the gaussian curve is not precisely what I needed. In fact, scaling with this function tends to create an "inverse spike" in the function, rather than scaling all the points of the function in a given circular/square region by the same amount. This behaviour can be easily visualised by plotting the function suggested by @Théophile with the A and B parameters set so as to scale the whole graph enough, except for the central spike:
$$I\cdot{\rm scalingF}\left(x, y\right) = I\cdot\frac{(A-1)\left(1-e^{-B\left(x^2 + y^2\right)}\right) + 1}A$$

will produce (e.g. for $A\gg1$, because I need a big scaling factor, and $B\approx0.1$, as I just need a small region centred around the origin)



enter image description here



That's because it scales the center to $1/A$, but the stuff around it is scaled not the same amount or less, but more.



Then a function which stays the same over an interval and then grows really fast is needed. Plus, the regions not around the axes (the blackest ones) must be scaled more than the regions along the axes. Now, knowing that a $\operatorname{rect}(x)$ function would create a bad discontinuity (the first solution I tried), I thought that the nicest solution was a "continuous" $\operatorname{rect}(x)$ function, which can easily be obtained by thinking of it as the sum of two $\arctan(x)$ functions "going in opposite directions"



enter image description here




in which the arguments of $\arctan(x)$ have been chosen to cut the intensity in the region $\approx[-3.5, 3.5]$, which is more or less the one in which the original function has that big spike.
At this point there are two possibilities. The first is to multiply the two "$\operatorname{rectarctan}(x)$" functions in order to get a distribution of intensity
\begin{equation}
{\rm scalingF1}\left(x, y\right) = (\arctan[5 x - 20] - \arctan[5 x + 20] + 1.01 \pi)\cdot (\arctan[5 y - 20] - \arctan[5 y + 20] + 1.01 \pi)
\end{equation}



enter image description here



where the $1.01\pi$ factors control the proportion between the central spike and the rest of the scaled graph: if they become too big, the scaling applied to the central spike becomes greater than the one applied to the rest.




Anyway, as is clear to see from figure 3, the path along the axes is not stretched out, so it can be annoying to have the spikes far from the axes bigger than the ones along the axes, as you can see below



enter image description here



But a simple modification is possible in order to remedy this little bug, by substituting the multiplication in the scaling function with a sum
\begin{equation}
{\rm scalingF2}\left(x, y\right) = (\arctan[5 x - 20] - \arctan[5 x + 20] + 1.01 \pi) + (\arctan[5 y - 20] - \arctan[5 y + 20] + 1.01 \pi)
\end{equation}

whose graph is



enter image description here



This function modulates both the content along the axes and the content outside, cutting the high intensities in the center, as can be seen in the final output



enter image description here


general topology - Finite union of $k$-dimensional rectangles




Definition



If a set $A\subset\mathbb{R}^k$ satisfies $B\subset A\subset\overline{B}$, then we call $A$ a $k$-dimensional standard rectangle.





Here $B$ is an open $k$-dimensional rectangle, that is to say,



$$B=(a_1,b_1)\times(a_2,b_2)\times\cdots\times(a_k,b_k)$$
We can see



$$\overline{B}=[a_1,b_1]\times[a_2,b_2]\times\cdots\times[a_k,b_k]$$



Using my topology knowledge, I have proved that for a $k$-dimensional standard rectangle $A$, we have $\operatorname{Int}\,\overline{A}=\operatorname{Int}\,A$, and $\overline{\operatorname{Int}\,A}=\overline{A}$.



But if we let $P$ be a finite union of $k$-dimensional standard rectangles (I call it a "simple figure"), I can only prove $\overline{\operatorname{Int}\,P}=\overline{P}$.




Question: I believe that $\operatorname{Int}\,\overline{P}=\operatorname{Int}\,P$ (or $\partial\overline{P}=\partial P$) is also correct, but I cannot give a strict proof. Any help would be appreciated.


Answer



That is actually untrue. Let us work in $\mathbb{R}$. Take $P = ]0,1[ \cup ]1,2[$. This clearly is a union of open rectangles. Clearly, $\overline{P} = [0,2]$. Thus, $\operatorname{Int}\overline{P} = ]0,2[ \neq ]0,1[ \cup ]1,2[ = \operatorname{Int} P$.


complex analysis - De Moivre's Theorem and a related formula?



$$(\cos(\theta)+i\sin(\theta))^n=\cos(n\theta)+i\sin(n\theta), n \in \mathbb{Z}$$



is De Moivre's Theorem. It is useful in calculating integer angle trigonometric identities such as $\cot(4\theta)$ by taking the real part of $(\cos(\theta)+i\sin(\theta))^4$ divided by its imaginary part, both parts obtained using binomial expansion.




However, from another post on this site, I've also seen that you can calculate $\cot(4\theta)$ by taking the real part of $(1+i\tan(\theta))^4$ divided by its imaginary part, likewise both parts obtained using binomial expansion.



So does that mean $(1+i\tan(\theta))^n=1+i\tan(n\theta)$? What's the relation between this formula and De Moivre's theorem?



Does this also mean I can flip the formulae to give me $(\sin(\theta)+i\cos(\theta))^4$ or $(\tan(\theta)+i)^4$ and take the imaginary part divided by the real part this time to still get the same answer for $\cot(4\theta)$?


Answer



Perhaps this may clarify.



$$(1+i \tan \theta)^4 = \sec^4 \theta (\cos \theta + i \sin \theta)^4 = \sec^4\theta (\cos 4\theta + i \sin 4\theta)$$
where we have used a variant of de Moivre's theorem in the last step. This also means that, as the ratio of the real to the imaginary part of the RHS is $\cot 4\theta$, you can use the binomial theorem on the LHS and find the same ratio there to get an equivalent expression.




It does not imply $(1+i \tan \theta)^4$ is the same as $1 + i \tan 4\theta$.
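A quick numerical check of the identity above (a Python sketch of my own; the angle $0.3$ is an arbitrary choice where all the tangents are defined):

```python
import cmath
import math

theta = 0.3  # arbitrary angle with tan(theta) and tan(4*theta) defined
lhs = (1 + 1j * math.tan(theta)) ** 4
rhs = (1 / math.cos(theta)) ** 4 * cmath.exp(4j * theta)  # sec^4(θ)(cos4θ + i sin4θ)
assert abs(lhs - rhs) < 1e-12

# the Re/Im ratio of the LHS does give cot(4θ) ...
assert abs(lhs.real / lhs.imag - 1 / math.tan(4 * theta)) < 1e-9
# ... but (1 + i tanθ)^4 is NOT equal to 1 + i tan(4θ)
assert abs(lhs - (1 + 1j * math.tan(4 * theta))) > 0.1
```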


elementary number theory - Divisibility rule for 43




Let $a_ka_{k-1}\dots a_1a_0$ be the decimal expression of the number $n$. Prove $n$ is divisible by $43$ if and only if $a_ka_{k-1}\dots a_1-30a_0$ is divisible by $43$.






Proof:



Let $\boldsymbol{x=a_ka_{k-1}\dots a_1}$ and $\boldsymbol{m=x-30a_0}$ then:



\begin{split}
43\mid n \Leftrightarrow 43 \mid 10x+a_0 \Leftrightarrow & 10x&+&a_0 &\equiv 0 \ (\text{mod } 43) \\
\Leftrightarrow & 50x&+&5a_0 &\equiv 0 \ (\text{mod } 43) \\
\Leftrightarrow & 7x&+&5a_0 &\equiv 0 \ (\text{mod } 43) \\
\Leftrightarrow & 42x&+&30a_0 &\equiv 0 \ (\text{mod } 43) \\
\Leftrightarrow & x &-& 30a_0 &\equiv 0 \ (\text{mod } 43) \Leftrightarrow 43 \mid x-30a_0 \Leftrightarrow 43\mid m
\end{split}






Is my proof correct? Is there a better proof?


Answer




Your proof is perfectly fine. Maybe a faster way to prove it is to multiply everything by $13$ in the first step. So you have:



$$10x + a_0 \equiv 0 \pmod{43} \iff 130x + 13a_0 \equiv 0 \pmod{43} \iff x - 30a_0 \equiv 0 \pmod{43}$$



If you wonder how we came up with $13$, note that $10 \cdot 13 \equiv 1 \pmod {43}$, so $13$ is the multiplicative inverse of $10$ modulo $43$.
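The whole argument can be spot-checked in a few lines of Python (my own sketch):

```python
# 13 is the inverse of 10 modulo 43, and -30 ≡ 13 (mod 43),
# so n = 10x + a0 is divisible by 43 exactly when x - 30*a0 is.
assert (10 * 13) % 43 == 1
assert (-30) % 43 == 13

for n in range(1, 100_000):
    x, a0 = divmod(n, 10)          # strip the last decimal digit
    assert (n % 43 == 0) == ((x - 30 * a0) % 43 == 0)
print("rule verified for n < 100000")
```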


indeterminate forms - What is undefined times zero?

Einstein's energy equation (after substituting the equation of relativistic momentum) takes this form:
$$E = \frac{1}{{\sqrt {1 - {v^2}/{c^2}} }}{m_0}{c^2}$$
Now if you apply this form to a photon (I know this is controversial, in fact I would not do it, but I just want to understand the consequences), you get the following:
$$E = \frac{1}{0}\,0\,{c^2}$$
On another note, I understand that after dividing by zero:





  • If the numerator is any number other than zero, you get an "undefined" = no solution, because you are breaching mathematical rules.

  • If the numerator is zero, you get an "indeterminate" number = any value.



Here it seems we would have an "indeterminate" [if $(1/0)$ times $0$ equals $0/0$], although I would prefer to have an "undefined" (because I think that applying this form to a photon breaches physical/logical rules, so I would like the outcome to breach mathematical rules as well). To support this, I have read that if a subexpression is undefined (which would be the case here with $\gamma = 1/0$), the whole expression becomes undefined. Is this right, and if so, does it apply here?



So what is the answer in strict mathematical terms: undefined or indeterminate?

Wednesday 26 February 2014

functional equations - $g(x+y)=g(x)g(y)$ for all $x,y \in\mathbb{R}$. If $g$ is continuous at $0$, prove that $g$ is continuous on $\mathbb{R}$

$g(x+y)=g(x)g(y)$ for all $x,y \in\mathbb{R}$.



If $g$ is continuous at $0$, prove that $g$ is continuous on $\mathbb{R}$.

real analysis - Constructing a bijection

I need help with this question.



Construct an explicit bijection from [0,1] to (0,1) over the real numbers.



I have one that works over the naturals, but I cannot figure it out over the reals. Any and all help is appreciated.

limits - Dividing by zero results negative infinity



I was reading about limits and stumbled upon this site and saw the author doing something funny:



$$ \lim_{x\rightarrow 0} \frac{x^3-7x}{x^3} $$



It is quite clear that you can apply l'Hospital at this point but the author does something else:




$$ \lim_{x\rightarrow 0} \frac{x\left(x^2-7\right)}{x\left(x^2\right)} $$



$$ \lim_{x\rightarrow 0} \frac{x^2-7}{x^2} = \quad'' \quad \frac{-7}{0^+} \quad'' = -\infty $$



I looked up indeterminate forms and could not find anything like this. What is going on here?



With l'Hospital I don't get much further either.


Answer



L'Hospital's rule is unnecessary here, as simple algebra gets rid of the indeterminate form. Just cancel an $x$ from the top and bottom as the author does:

$$
\lim_{x\rightarrow 0} \frac{x^3 - 7x}{x^3} = \lim_{x\rightarrow 0} \frac{x^2 - 7}{x^2}.
$$
Now, when the author writes "$-7/0^+$", he's saying that the limit of the numerator is $-7$, and the limit of the denominator is $0$, with the denominator positive on both sides. Whenever you have a finite nonzero limit for the numerator and a $0$ limit for the denominator, the limit will either be nonexistent (if the signs don't match on opposite sides) or $\pm \infty$ (if they do). In this case the denominator is positive on both sides, so, since the numerator is negative, the limit will be $-\infty$. It is this sense in which $-7/0^+ = -\infty$. Similarly, in this sense $-7/0^- = \infty$, while $-7/0$ remains undefined.
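The sign analysis is easy to see numerically (a small sketch; the sample points are my own choice):

```python
# (x^2 - 7)/x^2: the numerator tends to -7, while the denominator tends
# to 0 through POSITIVE values on both sides, so the quotient runs off
# to -infinity from either direction.
for x in (0.1, 0.01, 0.001, -0.1, -0.01, -0.001):
    print(x, (x**2 - 7) / x**2)

assert (0.001**2 - 7) / 0.001**2 < -1_000_000
x = -0.001
assert (x**2 - 7) / x**2 < -1_000_000
```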


sequences and series - Consider the geometric progression ...



I think I'm on the right track with this but not entirely confident.




$a_1 = 4$ , $a_2 = 4z$ , $a_3 = 4z^2$, ...




  1. The 6th element $a_6$

  2. The sum of the first 7 elements



I'm sure this works differently to arithmetic progression and uses ratios but a little stuck even with Googling like mad.



Thanks in advance.



Answer



Okay, so we need to find out what the terms have in common (the common ratio). You do this by dividing. Notice $$4z/4=z, \quad 4z^2/4z=z $$ so it is clear that our formula is $a_n=4z^{n-1}$



So what is $a_6$?



$a_6=4z^{6-1}=4z^{5}$



To sum, we use the formula $\sum_{k=0}^{n-1}ar^k=a\left(\frac{1-r^n}{1-r}\right)$



Where $a$ is the first term in the series, and $r$ is the common ratio. $n$ is what you want to sum up to.




Specifically, for the first 7 elements, we have $n=7$,



so $\sum_{k=0}^{6}(4z^k)$=$$4(\frac{1-z^7}{1-z})$$
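Both results can be double-checked numerically (a Python sketch with a sample ratio $z$ of my choosing):

```python
z = 0.5  # sample common ratio
terms = [4 * z ** (n - 1) for n in range(1, 8)]   # first 7 elements a_1 .. a_7

assert terms[5] == 4 * z ** 5                     # a_6 = 4z^5
closed_form = 4 * (1 - z ** 7) / (1 - z)          # a(1 - r^n)/(1 - r) with n = 7
assert abs(sum(terms) - closed_form) < 1e-12
print(closed_form)
```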


elementary set theory - Bijection between $\mathbb{R}$ and $\mathbb{R}/\mathbb{Q}$




I was wondering if it is possible to produce an explicit bijection $h\colon \mathbb{R} \rightarrow \mathbb{R}/\mathbb{Q}$. If we can produce an explicit injection $i\colon \mathbb{R} \rightarrow \mathbb{R}/\mathbb{Q}$, can the Cantor-Bernstein-Schroeder Theorem be used constructively?



It is clear that the two sets have the same cardinality, so the existence of such a bijection is trivial. What I am really looking for is a nice-looking bijection, or a proof that no such nice-looking bijection exists, for a definition of "nice-looking" which I cannot quite figure out.



One problem which I think makes finding such a bijection difficult is that any natural injection of $\mathbb{R}/\mathbb{Q}$ (ie, those injections in which one representative is chosen from each coset) produces a non-measurable set, specifically a Vitali set.



I hate to ask such a vague question, but I'm really not sure about whether the correct answer is constructive, or whether it is a proof that any such bijection is in some sense "very complicated."



As a final note, the motivation for this question came from this discussion, in which I was somewhat astonished to see such a clear, constructive bijection given between $\mathbb{R}$ and $\mathbb{R} \setminus S$, where $S$ is countable.



Answer



It is not possible.



It is consistent with set theory without choice that $\mathbb R/\mathbb Q$ has strictly larger cardinality than $\mathbb R$. (This looks counter-intuitive, since $\mathbb R/\mathbb Q$ is a quotient.)



This is the case because, using a fact that goes back to Sierpiński (Sur une proposition qui entraîne l'existence des ensembles non mesurables, Fund. Math. 34, (1947), 157–162. MR0023318 (9,338i)), in any model of $\mathsf{ZF}$ where all sets of reals have the Baire property, it is not even possible to linearly order $\mathbb R/\mathbb Q$.



(Sierpiński argues in terms of Lebesgue measure. The argument in terms of the Baire property is analogous, and has the additional technical advantage of not requiring any discussion of consistency strength issues.)



A couple of years ago, Mike Oliver gave a nice talk on this topic (How to have more things by forgetting where you put them); he is not exactly using $\mathbb R/\mathbb Q$, but the arguments easily adapt. The point of the talk is precisely to give some intuition on why we expect the quotient to be "larger".




[Of course, in the presence of choice, the two sets have the same size. The argument above shows that the use of choice is unavoidable.]


functions - Conditions Equivalent to Injectivity





Let $A$ and $B$ be sets, where $f : A \rightarrow B$ is a function. Show that the following properties are equivalent*:




  1. $f$ is injective.

  2. For all $X, Y \subset A$ the following holds: $f(X \cap Y)=f(X)\cap f(Y)$

  3. For all $X, Y \subset A$ the following holds: $f(X \setminus Y)=f(X) \setminus f(Y)$.





I do know what injective means, but I thought properties (2) and (3) were valid for any kind of function. Just to see if I understood this right:



$f(X\cap Y)$ means: first take the intersection of $X$ and $Y$, then map it into $B$ via $f$.

$f(X)\cap f(Y)$ means: first map $X$ and $Y$ via $f$, then intersect the two images.



Aren't both of those properties valid for all functions? I can't think of a counterexample. Thanks in advance, guys!



*edited by SN.


Answer




No they are not true for any function:



Take a function $f:\{0,1\}\to\{0,1\}$ such that $f(0)=1$ and $f(1)=1$. Then $f[\{0\}\cap\{1\}]=f[\varnothing]=\varnothing$ but $f[\{0\}]\cap f[\{1\}]=\{1\}\cap\{1\}=\{1\}$. This function provides a counterexample for the second case as well: $f[\{0,1\}\setminus\{0\}]=\{1\}$, while $f[\{0,1\}]\setminus f[\{0\}]=\{1\}\setminus\{1\}=\varnothing$.



Note that for any function $f$ it is true that $f[X\cap Y]\subseteq f[X]\cap f[Y]$ and it is also true that $f[X]\setminus f[Y]\subseteq f[X\setminus Y]$.
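The counterexample above is easy to machine-check, encoding the sets as Python sets and $f$ as a dict (my own encoding):

```python
f = {0: 1, 1: 1}                     # f(0) = f(1) = 1, not injective

def image(S):
    return {f[a] for a in S}

X, Y = {0}, {1}
assert image(X & Y) == set()                 # f[X ∩ Y] = f[∅] = ∅
assert image(X) & image(Y) == {1}            # f[X] ∩ f[Y] = {1}

A = {0, 1}
assert image(A - {0}) == {1}                 # f[A \ {0}] = {1}
assert image(A) - image({0}) == set()        # f[A] \ f[{0}] = ∅
```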



As for the equivalence, a function $f$ is called injective exactly when $x\neq y$ implies $f(x)\neq f(y)$ (or equivalently $f(x)=f(y)$ implies $x=y$). They are sometimes called $1-1$:



$1\Rightarrow 2$. Let $f:A\to B$ be injective. We just need to show that $f[X]\cap f[Y]\subseteq f[X\cap Y]$. Let $x\in f[X]\cap f[Y]$. Then there is some $a\in X$ and some $b\in Y$ such that $f(a)=f(b)=x$. By the definition of injective functions $a=b$ thus $a\in X\cap Y$ or $x\in f[X\cap Y]$.




$2\Rightarrow 1$. Now let $f[X]\cap f[Y]=f[X\cap Y]$. Let $f(a)=f(b)$. We have that $f[\{a\}\cap\{b\}]=f[\{a\}]\cap f[\{b\}]$. Thus $f[\{a\}\cap\{b\}]$ is not empty (since it is equal to $f[\{a\}]\cap f[\{b\}]$ which is not). Therefore $\{a\}\cap\{b\}$ is not empty, which means that $a=b$.



$1\Rightarrow 3$. Let $f:A\to B$ be injective. We just need to show that $f[X\setminus Y]\subseteq f[X]\setminus f[Y]$. Let $x\in f[X\setminus Y]$. Of course $x\in f[X]$. We have that there is some $a\in X\setminus Y$ such that $f(a)=x$. For every $b\in Y$ we have that $a\neq b$ thus $f(a)\neq f(b)$. Thus $x\notin f[Y]$ and thus $x\in f[X]\setminus f[Y]$.



$3\Rightarrow 1$. Conversely assume that $f[X\setminus Y]= f[X]\setminus f[Y]$. Let $f(a)=f(b)$. Then $f[\{a,b\}\setminus\{b\}]=f[\{a,b\}]\setminus f[\{b\}]$. The second set is empty, thus $f[\{a,b\}\setminus\{b\}]$ is empty. Then $\{a,b\}\setminus\{b\}$ is empty, which means $a=b$.


Tuesday 25 February 2014

sequences and series - Closed form of $\sum_{k=0}^n\binom{2k}{k}(-1/4)^k$



Loosely related to my last question, I was trying to find a closed form of the finite sum



$$a_n:=\sum_{k=0}^n\binom{2k}{k}\left(-\frac{1}4\right)^k$$




This is not too different from the well-known expression



$$\sum_{k=0}^n\binom{2k}{k}\left(\frac{1}4\right)^k=\binom{n+\frac12}{n}=\frac{2n+1}{2^{2n}}\binom{2n}{n}$$



so



$$
\sum_{k=0}^n\binom{2k}{k}\Big(\frac{-1}4\Big)^k=\sum_{k=0}^n\binom{2k}{k}\Big(\frac{1}4\Big)^k-2\sum_{\substack{k=0\\k\text{ odd}}}^n\binom{2k}{k}\Big(\frac{1}4\Big)^k\\
=\frac{2n+1}{2^{2n}}\binom{2n}{n}-\frac12\sum_{l=0}^{\lfloor \frac{n-1}2 \rfloor}\binom{4l+2}{2l+1}\frac1{16^l}.
$$




However, the second sum does not seem to be easier to handle at first glance. Also, trying a similar ansatz $a_n= \binom{2n}{n}p_n/2^{2n}$ led to seemingly unstructured $p_n$ given by (starting from $p_0$):



$$
1,1,\frac73,\frac95,\frac{72}{35},\frac{9}{7},\frac{185}{77},\frac{227}{143},\frac{5777}{2145},\ldots
$$




So I was wondering (a) if there already is a known closed form for the $a_n$ (I've searched quite a bit but sadly wasn't successful) and if not, then (b) what is a good way to tackle this problem - or does a "simple" (sum-free) form maybe not even exist?





Thanks in advance for any answer or comment!






Edit: I am aware of the solution



$$
a_n=(-1)^n 2^{-2n+2}\binom{2n+2}{n+1}\, {}_2F_1(1,n+\frac32;n+2;-1)+\frac1{\sqrt2}
$$




that Mathematica presents, where ${}_2F_1$ is the hypergeometric function. But as the latter is given by an infinite sum, I was hoping for something simpler by connecting it to the known, non-alternating problem as described above - although I'd also accept that this is not possible.


Answer



$$\frac{(-1)^k}{4^k}\binom{2k}{k}=[x^k]\frac{1}{\sqrt{1+x}}$$
implies that
$$ \sum_{k=0}^{n}\frac{(-1)^k}{4^k}\binom{2k}{k} = [x^n]\frac{1}{(1-x)\sqrt{1+x}}=\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}[x^n]\frac{1}{1+x+\sqrt{2+2x}} $$
hence $a_n$ is positive and convergent to $\frac{1}{\sqrt{2}}$, and the hypergeometric representation is straightforward.
The Maclaurin coefficients of $g(x)=\frac{1}{(1-x)+\sqrt{2}\sqrt{1-x}}$ are positive and they behave like $\frac{1}{\sqrt{2\pi n}}$ for large values of $n$, hence we have the approximate identity
$$ a_n \approx \frac{1}{\sqrt{2}}+\frac{(-1)^n}{2\sqrt{\pi n}} $$
for $n\to +\infty$.
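One can watch this convergence numerically; in the sketch below (my own implementation) the terms are built from the ratio $t_k/t_{k-1} = -(2k-1)/(2k)$ to avoid forming huge binomial coefficients:

```python
import math

def a(n):
    """Partial sums of sum_k C(2k,k) (-1/4)^k, terms computed by their ratio."""
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= -(2 * k - 1) / (2 * k)   # C(2k,k)(-1/4)^k from the previous term
        total += term
    return total

target = 1 / math.sqrt(2)
for n in (10, 100, 1000):
    assert abs(a(n) - target) < 1 / math.sqrt(n)   # oscillating convergence

# the asymptotic correction term matches as well
approx = target + 1 / (2 * math.sqrt(math.pi * 1000))
assert abs(a(1000) - approx) < 1e-3
```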


complex numbers - Find the roots of the polynomial? (Cardano's Method)

$y^3-\frac7{12}y-\frac7{216}$



This is part of Cardano's method, so I've gotten my first root to be:




$y_1=\sqrt[3]{\frac7{432}+i\sqrt{\frac{49}{6912}}}+\sqrt[3]{\frac7{432}-i\sqrt{\frac{49}{6912}}}$



I am at a loss on how to evaluate this. I know this is one of three real roots, but I don't know what to do now.

algebra precalculus - A geometric sequence has first term $256$ and ratio $0.75$. Find the smallest $n$ for which the sum of the first $n$ terms exceeds $1000$.


Consider a geometric sequence whose first term is $256$ and whose common ratio is $0.75$. Find the smallest number $n$ for which the sum of the first $n$ terms of the sequence exceeds $1000$.




My turn:

$$S_n = \frac{256(1-(0.75)^n)}{1-0.75} > 1000$$
$$n \log{0.75} < \log{\frac{3}{128}}$$
$$n < 13.04 $$ then
$$n = 13$$




What is wrong with my solution? $13$ does not satisfy the requirement, but $14$ does.
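The partial sums point at the issue: $\log 0.75$ is negative, so dividing the inequality by it reverses its direction, giving $n > 13.04$ rather than $n < 13.04$. A short Python sketch (my own) confirms both facts:

```python
import math

def S(n):
    # geometric partial sum: first term 256, common ratio 0.75
    return 256 * (1 - 0.75 ** n) / (1 - 0.75)

assert S(13) < 1000 < S(14)       # 13 terms are not enough, 14 are

# dividing n*log(0.75) < log(3/128) by the NEGATIVE number log(0.75)
# flips the inequality to n > 13.04...
n_star = math.log(3 / 128) / math.log(0.75)
assert math.log(0.75) < 0
assert 13 < n_star < 14
```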


elementary number theory - Solve congruence equation $3373x \equiv 2^{485} \pmod{168}$



$$3373x \equiv 2^{485} \pmod{168}$$



Uhm...I don't even know how to start.




$\gcd(3373,168)=1$, so a solution exists.



Usually I would use the extended Euclidean algorithm and get the outcome, but that would require multiplying by $2^{485}$ later, so... well, is there some easier way?


Answer



First of all, $168=2^3\cdot3\cdot7$, so you want to solve the three congruences:



$3373x\equiv2^{485} \pmod 8 \\
3373x\equiv2^{485} \pmod 3 \\
3373x\equiv2^{485} \pmod 7$




The first of these three is really $3373x\equiv_8 0$, which just implies $x\equiv_8 0$



For the second, $3373\equiv_3 1$, and any odd power of $2$ is congruent modulo $3$ to $2$, so we have $x\equiv_32$



Thirdly, $3373\equiv_76$, and $2^3\equiv_71 \Rightarrow 2^{485}\equiv_72^2=4$. Thus, we have $6x\equiv_74$, or $x\equiv_73$



Finally, we use the Chinese Remainder Theorem to find a number satisfying:



$x\equiv_80 \\
x\equiv_32 \\
x\equiv_73$



In this way, we obtain $x\equiv 80 \pmod {168}$.
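The whole computation is easy to cross-check by brute force, using Python's built-in three-argument `pow` for the modular power (an illustrative sketch):

```python
# Verify that x ≡ 80 (mod 168) solves 3373*x ≡ 2^485 (mod 168)
m = 168
rhs = pow(2, 485, m)  # 2^485 reduced mod 168

# Since gcd(3373, 168) = 1, the solution mod 168 is unique
solutions = [x for x in range(m) if (3373 * x) % m == rhs]
print(solutions)  # expect [80]
```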


Monday 24 February 2014

calculus - Proving that a function grows faster than another



I'm told to prove or disprove that $4^{\sqrt{n}}$ grows faster than $\sqrt{4^n}$ as $n$ tends to infinity.




From my previous year's calculus I know that if I take the derivative of two functions, and one is bigger, then it must grow faster.



$f(n) = 4^{\sqrt{n}}$



$g(n) = \sqrt{4^n}$



if $f'(n) > g'(n)$, then $f(n)$ grows faster than $g(n)$ as $n\to\infty$.



My Attempt:




$f'(n) = \dfrac{\ln(4)\times 4^{\sqrt n}}{2\sqrt n}$



$g'(n) = \dfrac{\ln(4)\times 4^{n/2}}{2}$



But now I'm not sure if $f(n)$ does grow faster; how would I know that its first derivative is bigger here, just by plugging in numbers?


Answer



Hint: Look at the ratio $\dfrac{4^{\sqrt{n}}}{\sqrt{4^n}}$. This is equal to

$$\left(\frac{4}{4^{\sqrt{n}/2}}\right)^{\sqrt{n}}.$$
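Numerically, the hint's ratio collapses very quickly, which suggests that $\sqrt{4^n}$ is in fact the faster-growing of the two. A small illustration, computed in log space to avoid overflow:

```python
import math

def ratio(n: float) -> float:
    # 4^sqrt(n) / sqrt(4^n) = exp( (sqrt(n) - n/2) * ln 4 )
    return math.exp((math.sqrt(n) - n / 2) * math.log(4))

for n in [1, 4, 16, 100, 400]:
    print(n, ratio(n))  # equals 1 at n = 4, then decays rapidly to 0
```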


finite fields - Addition and Multiplication in $F_4$



Could anyone explain the example below? Why is $F_4 = \{0,1,x,x+1\}$? (I was learning that it should be $F_4 = \{0,1,2,3\}$.) And how do we get the two tables?



[image: addition and multiplication tables for $\Bbb{F}_4$]


Answer



$\Bbb{F}_4$ is the finite field of order $4$. It is not the same as $\Bbb{Z}_4$, the integers modulo 4. In fact, $\Bbb{Z}_4$ is not a field. $\Bbb{F}_4$ is the splitting field over $\Bbb{F}_2 = \Bbb{Z}_2$ of the polynomial $X^4 - X$. You get the addition table by observing that $\Bbb{F}_4$ is a 2-dimensional vector space over $\Bbb{F}_2$ with basis $1$ and $x$ where $x$ is either of the roots of $X^4 - X = X(X - 1)(X^2 + X + 1)$ that is not in $\Bbb{F}_2$. You get the multiplication table by using $x^2 + x + 1= 0$ to simplify the expressions for the products of $x$ and $x + 1$.
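The tables can be generated mechanically by doing arithmetic with polynomials over $\Bbb{F}_2$ and reducing products with $x^2 = x + 1$ (which is $x^2 + x + 1 = 0$ in characteristic $2$). A small sketch, where the pair $(a, b)$ stands for $a + bx$:

```python
# Elements of F_4 as pairs (a, b) = a + b*x with a, b in {0, 1}
elems = [(a, b) for a in (0, 1) for b in (0, 1)]

def add(u, v):
    # coordinatewise addition mod 2 (F_4 is a 2-dim vector space over F_2)
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    # (a + bx)(c + dx) = ac + (ad + bc)x + bd*x^2, then reduce x^2 = x + 1
    a, b = u
    c, d = v
    const = a * c + b * d        # the bd*x^2 term contributes bd*1
    lin = a * d + b * c + b * d  # ...and bd*x
    return (const % 2, lin % 2)

# e.g. x * (x + 1) = x^2 + x = (x + 1) + x = 1
print(mul((0, 1), (1, 1)))
```

Looping `add` and `mul` over all pairs of elements reproduces exactly the two tables from the example.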


elementary number theory - Exponent of Prime in a Factorial

I was just trying to work out the exponent for $7$ in the number $343!$. I think the right technique is $$\frac{343}{7}+\frac{343}{7^2}+\frac{343}{7^3}=57.$$ If this is right, can the technique be generalized to $p$ a prime number, $n$ any positive integer, then the exponent of $p$ in $n!$ will be $$\sum_{k=1}^\infty\left\lfloor\frac{n}{p^k}\right\rfloor\quad ?$$ Here, $\lfloor\cdot\rfloor$ denotes the greatest integer less than or equal to $\cdot$ .



Obviously the sum is finite, but I didn't know if it was correct (since its veracity depends on my first solution anyway).
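The general formula is Legendre's formula, and both it and the direct count are easy to cross-check for small cases (an illustrative sketch):

```python
import math

def prime_exponent_in_factorial(p: int, n: int) -> int:
    # Legendre's formula: sum of floor(n / p^k) over k >= 1
    total, pk = 0, p
    while pk <= n:
        total += n // pk
        pk *= p
    return total

def direct(p: int, n: int) -> int:
    # Cross-check: divide p out of n! until it no longer divides
    f = math.factorial(n)
    e = 0
    while f % p == 0:
        f //= p
        e += 1
    return e

print(prime_exponent_in_factorial(7, 343))  # 49 + 7 + 1 = 57
```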

probability - Solving birthday problem without complement

I'm trying to find the probability of at least 2 people in a room of 4 sharing the same birthday (without using complements).



I began by breaking the problem down into 4 cases:



Let E = the event that at least 2 people share the same birthday in a room of 4.



Our sample size: $365^4$




Case 1: 4 people share the same birthday: 365 ways



Case 2: 3 people share the same birthday, 1 distinct birthday: $365 \cdot 364 \cdot C(4,3)$



Case 3: 2 people share a birthday, another 2 people share some other birthday: $365 \cdot 364 \cdot \frac{C(4,2)}{2}$



Case 4: 2 people share same birthday, 2 distinct birthdays: $365 \cdot 364 \cdot 363 \cdot C(4,2) \cdot 2$



After adding up all the cases and dividing by the sample size to find probability the answer had an over-count. I checked my answer by doing $$P(E) = 1- \frac{365 \cdot 364 \cdot 363 \cdot 362}{365^4}$$




Where did I have an over-count? Thank you!






Here is an example that works with n = 3 people and at least 2 people share same birthday.



Case 1: 3 people share same birthday: 365



Case 2: 2 Same birthdays, 1 different: $365 \cdot 364 \cdot \binom{3}{2}$




$$P(E) = \frac{365 + (365 \cdot 364 \cdot \binom{3}{2})}{365^3} \equiv 1 - \frac{365 \cdot 364 \cdot 363}{365^3}$$



Those are both equivalent answers because in the complement we're subtracting away the event that all birthdays are distinct.
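With a smaller hypothetical "year" the four cases can be brute-forced exhaustively, which pinpoints where the over-count sits: in Case 4 the two leftover distinct birthdays are already ordered by the factor $364 \cdot 363$, so the extra $\cdot\, 2$ double-counts them. A sketch with $D$ days in the year ($D$ and the variable names are my own choices):

```python
from itertools import product
from math import comb

D = 10  # a shortened year, small enough to enumerate all D**4 outcomes

# Brute force: count 4-tuples with at least one repeated birthday
brute = sum(1 for t in product(range(D), repeat=4) if len(set(t)) < 4)

# The four cases, with Case 4 written WITHOUT the extra factor of 2
case1 = D
case2 = D * (D - 1) * comb(4, 3)
case3 = D * (D - 1) * comb(4, 2) // 2
case4 = D * comb(4, 2) * (D - 1) * (D - 2)
print(brute, case1 + case2 + case3 + case4)  # both 4960 for D = 10
```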

real analysis - Indeterminate Solutions




I was watching an old video by Mathologer talking about various problems involving 0 and $\infty$, but at the end of the video, at roughly 11:40, he concludes




...if you want to make sense of $\frac30$ you also do this by sneaking up on $0$ and of course you know that things explode magnitude-wise...In higher level calculus it actually makes sense to treat infinity like a number and to actually write equations like $\frac30=\infty$, and you really mean it...$3$ as a number divided by infinity as a number is equal to infinity.




He then goes on to say




In other branches of mathematics, you sometimes find it actually does make sense to set $\frac00$ equal to 1





He said he would make a video elaborating on these last claims, but after digging through his playlists I don't think he has. The things said here sound like mathematical heresy according to what I've been told by math professors. My first guess right now is to not take his exact words at face value, but consider the gist of what he's trying to say is analogous to say, $=$ signs having different meanings in different contexts, i.e., a regularized sum vs assigned sum. Maybe what he's saying really is true. But I want to make sure, considering I don't think he ever made a follow-up.



So, can anybody confirm that he is wrong or that I'm just missing some crucial context?


Answer



Obviously in real analysis you cannot claim that $\frac {1}{0} =\infty$ because as $x\to 0$, $\frac {1}{x}$ takes extremely large positive and extremely large negative (in magnitude) values.



The story of $\frac {1}{\infty}=0$ is a different one, because as $x\to \infty$, $1/x\to 0$.




The story of $\frac {0}{0} $ is the most interesting one because that is what we do when we take derivative of a function at a point.



We divide $f(x+h)-f(x)$ by $h$ and let $h\to 0$, so we are trying to find the limit of the difference quotient as both top and bottom tend to $0$.



As you know not every derivative is $1$ so saying that in advanced courses we may define $\frac {0}{0}=1$ does not make sense.



Word of Wisdom:



Listen to your professors and do not believe the video stuff.


Expectation of minimal value in a k element subset



The problem statement is as follows-



Compute $f(n,k)$ which is the mathematical expectation of the minimal element among all $k$-element subsets of a set $[1,2,...n]$.



My approach is as follows-



$1$ is the minimal element in $\binom{n-1}{k-1}$ subsets, $2$ is the minimal element in $\binom{n-2}{k-1}$ subsets, and so on.




It can be proved that $\sum_{i=1}^{n+1-k}\binom{n-i}{k-i}=\binom{n}{k}.$



So according to me the expected value should be $\frac{\sum_{i=1}^{n+1-k}i.\binom{n-i}{k-1}}{k}$



I multiply each term with $\frac{1}{k}$ because the probability of choosing the minimal element is $\frac{1}{k}$.



The answer happens to be $\frac{n+1}{k+1}$. I do not know why my logic is wrong. As far as I am aware, this is the definition of expectation.



This is part of a problem on http://codeforces.com/problemset/problem/840/A.


Answer




I'm not too sure about the statement where you say that $i$ is the minimal element in $\binom{n-i}{k-i+1}$ ways. Shouldn't it be $\binom{n-i}{k-1}$ ways? Because how I see it, you choose the remaining $k-1$ elements of your desired set from exactly the $n-i$ terms that are greater than $i$. Then the expectation becomes
\begin{equation}
\mathbb{E}[X]=\frac{\sum^{n-k+1}_{i=1}\binom{n-i}{k-1}i}{\binom{n}{k}}
\end{equation}

You have the right expression for the numerator, so maybe I misunderstood you. But you are essentially counting the number of ways to get a set with $i$ as the minimal element, in which case you must divide by the total number of ways; essentially, each term in the sum in the numerator (barring the factor of $i$) gives you the number of ways to do this, and dividing by the total number of ways gives you the probability.
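The closed form $\frac{n+1}{k+1}$, and the corrected denominator $\binom{n}{k}$, can both be checked by enumerating every $k$-subset exactly (an illustrative sketch with exact rational arithmetic):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def expected_min(n: int, k: int) -> Fraction:
    # Average of min(S) over all k-element subsets S of {1, ..., n}
    subsets = list(combinations(range(1, n + 1), k))
    return Fraction(sum(min(s) for s in subsets), len(subsets))

print(expected_min(10, 3))  # (n+1)/(k+1) = 11/4
```

The numerator identity $\sum_{i} i\binom{n-i}{k-1} = \binom{n+1}{k+1}$ also falls out of the same enumeration.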


Sunday 23 February 2014

calculus - Confirmation of Proof: $f(x)= x^5 - x - 16$ has at most three real roots.



I've been struggling to find a way to solve this question for a while now.



I can prove that there is at least one real root using IVT but have no idea how to prove that there can exist more.




Currently my idea is to prove that there cannot be $5$ real roots somehow and then I can say that as complex numbers come with conjugates, there cannot only be one complex root and so there can be at most $3$ real roots.



I had another idea to assume that there are at most $2$ real roots. Then use the fact that complex conjugates come in pairs to say there must be at most 3 roots as we have already shown there are at least $2$. But then there would be nothing to account for if there are $5$ real roots.



Am I just overthinking this and there's an easier way to go about it?


Answer



We can actually do this directly. Note that between each pair of adjacent but distinct real roots, the polynomial must have a local maximum or a local minimum, since it has to turn around to get back to zero. Thus between each pair of adjacent and distinct real roots, the derivative of $f$ has a real root. So let's look at $f'(x)=5x^4-1$. We can see immediately that this has the four fourth roots of $\frac{1}{5}$ as its roots: $\frac{1}{\sqrt[4]{5}}$, $\frac{i}{\sqrt[4]{5}}$, $\frac{-1}{\sqrt[4]{5}}$, $\frac{-i}{\sqrt[4]{5}}$. Since only two of these are real, $f$ can only have at most three distinct real roots.



All that remains is to check that $f$ has no repeated roots. We can do this by checking that $f$ and $f'$ are relatively prime. Dividing $f$ by $f'$, we get $f(x)=f'(x)(\frac{1}{5}x) -\frac{4}{5}x - 16$.

Then since the remainder is linear, it divides $f'$ if and only if they share a root, but the root of $-\frac{4}{5}x-16$ is the same as the root of $x+20$, and we know $-20$ is not a root of $f'(x)$. Thus $f$ and $f'$ are relatively prime, so $f$ has no repeated roots.



Hence $f$ has at most three real roots.
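The critical-point argument is easy to confirm numerically: $f'$ vanishes only at $\pm 5^{-1/4}$, and $f$ is negative at both points, so in fact $f$ crosses zero exactly once (consistent with, and stronger than, "at most three"). A quick sketch:

```python
# f(x) = x^5 - x - 16 and its derivative
f = lambda x: x**5 - x - 16
fprime = lambda x: 5 * x**4 - 1

c = 5 ** (-0.25)  # the two real critical points are +/- c
print(f(-c), f(c))  # both negative => f crosses zero only once

# Count sign changes of f on a coarse grid as a sanity check
xs = [x / 10 for x in range(-100, 101)]
sign_changes = sum(1 for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0)
print(sign_changes)
```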


Problem on modular arithmetic




I can't quite figure out how to title this question, but since this isn't quite a proof, I'm not sure if there's a better way to title this.



This is a problem from Norman Bigg's Discrete Mathematics textbook that I can't manage to figure out.



"Show that the equation $x = x^{-1}$ in $\mathbb{Z}_p$ implies that $x^2 - 1 = 0$, and deduce that $1$ and $-1$ are the only elements of $\mathbb{Z}_p$ that are equal to their own inverse."



First, $\mathbb{Z}_p$ elsewhere in the text refers to the integers modulo (some prime), though the author always took care to specify that $p$ was a prime, though that isn't the case here. This is throwing me off quite a bit, as I just worked out another example in $\mathbb{Z}_8$ where $5$ was its own inverse (as $5 \cdot 5 = 25 \equiv 1 \ \text{(mod $8$)}$. So, this doesn't hold with $p = 8$, so it seems that it must be the case that there are some restrictions on $p$, most likely that it is prime. (If I'm wrong on this, and this could apply to any $p \in \mathbb{N}$, please let me know.)



From here, I can't quite seem to string together these equations. I've been able to piece together, from our assumptions, these facts:




(a) $xx^{-1} \equiv 1 \ \text{(mod $p$)}$ since $x$ and $x^{-1}$ are inverses.



(b) $x^2 \equiv 1 \ \text{(mod $p$)}$ since $x = x^{-1}$ by assumption.



(c) By the definition of modulus, $x^2 - 1$ is a multiple of $p$, so $\exists$ some $m \in \mathbb{N}$ such that $x^2 - 1 = mp$.



(d) We need to somehow deduce that $mp$ must equal $0$. The only possible way I could think to do this is to assume that $p$ is prime and conclude that $x^2 -1$ clearly cannot be prime as it's a difference of squares. But, even then, it could certainly be a multiple of a prime, if I'm not mistaken. This doesn't seem to make much sense.



From here, it seems to me that the second part of the problem would follow. If equality in $\mathbb{Z}_p$ implies $x^2 - 1 = 0$, and we can factor the left-hand side, a difference of squares, into $(x+1)(x-1)$, then clearly $x = 1$ or $x = -1$. Assuming this is correct--please tell me if it isn't--the only real issue seems to be deducing that $x^2 - 1 = 0$.




Any helpful thoughts and hints would be very appreciated.


Answer



Your reasoning is on the right track. Since $(x-1)(x+1) = x^2 -1 \equiv 0 \pmod{p}$, $p$ divides $(x-1)(x+1)$ and since $p$ is prime (as you noted, otherwise the result can be false), either $p \mid (x+1)$ or $p\mid(x-1)$, that is, either $x \equiv 1 \pmod{p}$ or $x \equiv -1 \pmod{p}$.



If you've taken $x$ to be in $\{-1,\dots,p-2\}$, then from here you can conclude that $x^2 -1 = 0$ since $x = 1$ or $x = -1$. If not, that's not necessarily true. For example, by the above calculation $p+1$ verifies $(p+1)^2-1 \equiv 0 \pmod{p}$ but $(p+1)^2 -1 \neq 0$ as integers.



As a side note, $[x] \in \mathbb{Z}_n$ has an inverse if and only if $x$ is coprime with $n$. To talk about inverses in a general setting, one must first assure their existence. One way of doing that is taking $n$ prime, so that any non zero element has an inverse (in other words, $\mathbb{Z}_p$ is a field if $p$ prime).


elementary number theory - Comparing the Least Common Multiple for $n!$ to the square root of $n!$



I have been thinking about the Least Common Multiple and how it compares to $n!$




For $n=\{1,2,3,4,5,6\}$, the lcm for $n!$ is greater than $\sqrt{n!}$.





Is this always the case?



I would be interested in understanding how to analyze this.



Here's the thinking about this that I've done to this point:




Let $v(p,n)$ be the largest exponent such that $p^{v(p,n)}$ is less than or equal to $n$.




The least common multiple for $n!$ is:



$$\prod\limits_{p \le n}p^{v(p,n)}$$




I find it is much easier to reason about $\frac{(x+n)!}{x!}$ where I find in all cases:




$$\frac{(x+n)!}{x!\prod\limits_{p \le n}p^{v(p,x+n)}} \le (n-1)!$$





But applying this to the case of $n!$ is not interesting:




$$\frac{n!}{\prod\limits_{p \le n}p^{v(p,n)}} \le (n-1)!$$





  • the lcm of $n!$ increases each time a new prime or a higher power of a prime is encountered.

  • Primes get rarer as $n$ increases.

  • Larger powers of primes also get rarer as $n$ increases.



What method would be used to compare the lcm of $n!$ with $\sqrt{n!}$? Under what conditions would the lcm of $n!$ be less than $\sqrt{n!}$?


Answer



Let $L_n=\text{lcm}(1,2,\ldots,n)$. Then $\ln L_n=\psi(n)$ where
$$\psi(n)=\sum_{p\le n}\left\lfloor\frac{\ln n}{\ln p}\right\rfloor\ln p.$$
It is well-known that the Prime Number Theorem implies $\psi(n)\sim n$.
By Stirling's formula, $\ln\sqrt{n!}\sim\tfrac{1}{2}n\ln n$, which grows faster than $n$. Hence $L_n<\sqrt{n!}$ for all large $n$.
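The crossover happens quite early, and it can be located exactly with integer arithmetic by comparing $L_n^2$ with $n!$ instead of taking square roots (an illustrative sketch):

```python
from math import factorial, gcd

def lcm_upto(n: int) -> int:
    # L_n = lcm(1, 2, ..., n)
    L = 1
    for k in range(1, n + 1):
        L = L * k // gcd(L, k)
    return L

# First n with L_n < sqrt(n!), i.e. with L_n**2 < n!
for n in range(1, 31):
    if lcm_upto(n) ** 2 < factorial(n):
        print(n)  # prints 15
        break
```

So $L_{14}=360360>\sqrt{14!}$ but already $L_{15}=360360<\sqrt{15!}$, in line with the asymptotic comparison above.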



real analysis - Example of function $f:\mathbb R\to \mathbb R$ which is differentiable and bijective but its inverse is not differentiable.


Example of a function $f:\mathbb R\to \mathbb R$ which is differentiable and bijective but whose inverse is not differentiable.





First of all, I do not know whether the above is true, since for an inverse function to fail to be differentiable there must exist some point at which $f'(x)=0$, and I thought that was not possible due to bijectiveness.



Where am I going wrong?



Any help will be appreciated.

How to change the order of summation?

I have stumbled, multiple times, upon cases where I need to change the order of summation (usually of finite sums).



One problem I saw was simple
$$
\sum_{i=1}^{\infty}\sum_{j=i}^{\infty}f(i,j)=\sum_{j=1}^{\infty}\sum_{i=1}^{j}f(i,j)
$$



and I can go from the first sum to the second by noting that the

constraints are
$$
1\leq i\leq j<\infty
$$
so the first double sum does not constrain $i$, and constrains
$j$ to $j\geq i$. The second double summation doesn't put any constraints
on $j$, but constrains $i$ relative to $j$ $(1\leq i\leq j)$.



While this approach works for simple examples such as this. I am having
problems using it where the bounds are more complicated.




The current problem interchanges the following
$$
\sum_{i=1}^{n-1}\sum_{k=2}^{n-i+1}\to\sum_{k=2}^{n}\sum_{i=1}^{n+1-k}
$$



I started by writing
$$
k\leq n-i+1
$$




and got
$$
i\leq n-k+1
$$



but the other bounds are not clear to me.



The problem is that I can't use this technique here, since I can't write
the inequalities in the form

$$
1\leq i\leq f(j)\leq n
$$



where $n$ is some bound (possibly $\infty$).



My question is how to approach the second example by a technique that
should be able to handle similar cases
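One sanity check that always works for finite sums: evaluate both orders of summation on an arbitrary test function and compare. A sketch for the second example (the test function is an arbitrary choice of mine):

```python
def lhs(n, f):
    # sum_{i=1}^{n-1} sum_{k=2}^{n-i+1} f(i, k)
    return sum(f(i, k) for i in range(1, n) for k in range(2, n - i + 2))

def rhs(n, f):
    # sum_{k=2}^{n} sum_{i=1}^{n+1-k} f(i, k)
    return sum(f(i, k) for k in range(2, n + 1) for i in range(1, n + 2 - k))

f = lambda i, k: i**2 * k + 3 * i + k  # any test function will do
print(lhs(7, f), rhs(7, f))  # the two orders agree
```

The two sums agree because both orders enumerate exactly the index set $\{(i,k) : 1\le i,\ 2\le k,\ i+k\le n+1\}$.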

probability theory - Finite expectation of a random variable

$X \geq 0$ be a random variable defined on $(\Omega,\mathcal{F},P)$. Show that $\mathbb{E}[X]<\infty \iff \Sigma_{n=1}^\infty P(X>n) < \infty $.



I got the reverse direction but I am struggling with the $"\implies"$ direction. So far, I have the following worked out:




$\mathbb{E}[X]<\infty$



$\implies \int_0^\infty (1-F(x)) dx < \infty$ (where $F$ is the distribution function of the random variable X)



$\implies \int_0^\infty (1-P(X\leq x)) dx < \infty$



$\implies \int_0^\infty P(X>x) dx < \infty$



Consider $\int_0^\infty P(X>x) dx$




$= \Sigma_{n=1}^\infty \int_{n-1}^n P(X>x) dx$



This is the point I am stuck at. Any help will be deeply appreciated!

Saturday 22 February 2014

algebra precalculus - Prove that $2\sqrt{n}\sqrt{n+1} < 2n + 1$ for all positive integers.



I've been testing this with many values and it seems to always be true. I've been trying to rework the inequality into a form where it's much more obvious that the left hand side is always less than the right, but can't seem to do it. Can anyone help me out here?




Thanks.


Answer



$$
2\sqrt{n}\sqrt{n+1}=\sqrt{4n^2+4n}<\sqrt{4n^2+4n+1}=2n+1.
$$
Note. This inequality holds for every non-negative real $n$ (not only integer.)


group theory - What is $\operatorname{Aut}(\mathbb{R},+)$?




I was solving some exercises about automorphisms. I was able to show that $\operatorname{Aut}(\mathbb{Q},+)$ is isomorphic to $\mathbb{Q}^{\times}$. The isomorphism is given by $\Psi(f)=f(1)$, but when I try to do the same thing with $\operatorname{Aut}(\mathbb{R},+)$ I got stuck.



My question is: What is $\operatorname{Aut}(\mathbb{R},+)$?



I would appreciate your help.


Answer



First note that for every $\alpha\in\mathbb R^\times$ we have that $x\mapsto\alpha\cdot x$ is an automorphism of $(\mathbb R,+)$.



Also note that if $f(x+y)=f(x)+f(y)$ then $f(2)=f(1)+f(1)$, and by induction $f(n)=n\cdot f(1)$ for $n\in\mathbb N$, equally $f(k)=k\cdot f(1)$ for $k\in\mathbb Z$. This carries to rationals as well, so $f\left(\frac{p}{q}\right)=\frac{p}{q}\cdot f(1)$.




Now, if $f$ is continuous then for every $x\in\mathbb R$ we have $f(x)=x\cdot f(1)$. So setting $\alpha=f(1)$ gives us that the continuous solutions are the solutions defined by $\mathbb R^\times$, therefore $\mathrm{Aut}(\mathbb R,+)\cap\{f\in\mathbb R^\mathbb R\mid f\text{ continuous}\}\cong(\mathbb R^\times,\cdot\ )$.



The existence of non-continuous solutions requires some axiom of choice, since such solutions generate Lebesgue non-measurable sets. So if we assume that every set is Lebesgue measurable (e.g. in Solovay's model, or in models of determinacy) then indeed there are no other solutions.



However, assuming the axiom of choice we can generate a basis for the vector space $\mathbb R$ over $\mathbb Q$. Note that every permutation of this basis can be extended to an automorphism of the vector space, namely $(\mathbb R,+)$.



The cardinality of such a basis (known as a Hamel basis) is $2^{\aleph_0}$, so we have $2^{2^{\aleph_0}}$ many non-continuous solutions if we assume that such a basis exists.



For further reading:





  1. Is there a non-trivial example of a $\mathbb Q-$endomorphism of $\mathbb R$?

  2. Horst Herrlich, The Axiom of Choice. Springer, 2006. (In particular section 5.1)


calculus - Dirichlet integral.




I want to prove $\displaystyle\int_0^{\infty} \frac{\sin x}x \,\mathrm{d}x = \frac \pi 2$, and $\displaystyle\int_0^{\infty} \frac{|\sin x|}x \,\mathrm{d}x \to \infty$.




And I found in wikipedia, but I don't know, can't understand. I didn't learn differential equation, laplace transform, and even inverse trigonometric functions.



So tell me easy, please.


Answer



About the second integral: Set $x_n = 2\pi n + \pi / 2$. Since $\sin(x_n) = 1$ and
$\sin$ is continuous in the vicinity of $x_n$, there exist $\epsilon, \delta > 0$ so that $\sin(x) \ge 1 - \epsilon$ for $|x-x_n| \le \delta$. Thus we have:
$$\int_0^{+\infty} \frac{|\sin x|}{x} dx \ge 2\delta\sum_{n = 0}^{+\infty} \frac{1 - \epsilon}{x_n} = \frac{2\delta(1-\epsilon)}{2\pi}\sum_{n=0}^{+\infty} \frac{1}{n + 1/4} \rightarrow \infty $$
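A similar arch-by-arch bound works over the intervals $[k\pi,(k+1)\pi]$: each arch of $|\sin x|$ has area $2$, so the integral over the $k$-th arch is at least $2/((k+1)\pi)$, and summing gives a harmonic-type divergent lower bound. A numerical sketch of that lower bound (illustrative only):

```python
import math

def lower_bound(N: int) -> float:
    # sum of per-arch lower bounds 2/((k+1)*pi) for k = 1..N
    return sum(2 / ((k + 1) * math.pi) for k in range(1, N + 1))

for N in [10, 1000, 100000]:
    print(N, lower_bound(N))  # grows like (2/pi)*log(N): unbounded
```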


real analysis - "Prove that $\sup\{\sqrt2,\sqrt{2+\sqrt2},\cdots\}=2$": How to show that the set is bounded above?

It is clear that $\sup\{\sqrt2,\sqrt{2+\sqrt2},\cdots\}=2$. However, I am struggling to prove this without using limits or sequences. I know that I need to show that the set is bounded above, and then we can determine that it has a supremum which I can then show is equal to $2$. How can I show that this set is bounded above in an elementary way? We have not yet covered sequences, limits, or convergence in my analysis class, nor have we covered the trigonometric functions.

logic - Is Euclidean Geometry complete and unique

Please help me understand this concept of completeness of geometry and set me on the right path.



This is my context: From wikipedia, a formal system is complete if every tautology is also a theorem. For me this means that a system of axioms is complete if every statement that is true can be proved from the axioms (the question is tautologies in what kind of logic, first order, second order,...?). Anyway, it seems then that the concept of completeness depend on the system of axioms.



So, in the case of Euclidean Geometry, its completeness depends on the chosen axioms (for example Euclid's Axioms, Hilbert's Axioms, Tarski's Axioms, etc.). Comparing for example the axioms of Hilbert and the axioms given by Tarski, I can see that they are essentially different in that Hilbert uses second order logic and Tarski's only first order logic. Also Tarski proved that his system of axioms is complete.



Now, the incompleteness theorem of Gödel stated that the system of axioms of arithmetic is incomplete. So, when we consider $R^2$ and $R^3$ as a representation of Euclidean Geometry we are talking about a system that is not complete because we are using the real numbers. So, are we speaking of different Euclidean Geometries? I'm totally confused.

Friday 21 February 2014

calculus - Convergence of $\sum\limits_{n=1}^\infty \left(e^\frac{1}{n} -1 -\frac{1}{n}\right)$


I've got in my assignment to show if the following series converges or diverges.
$$\sum_{n=1}^\infty \left(e^\frac{1}{n} -1 -\frac{1}{n}\right) $$





Attempt:
\begin{align*}
\sum_{n=1}^\infty \left(e^\frac{1}{n} -1 -\frac{1}{n}\right) &=\sum_{n=1}^\infty \left( 1 + \frac{1}{n} + \frac{1}{2!n^2}+\frac{1}{3!n^3}+\dots-1-\frac{1}{n}\right)\\
&=\sum_{n=1}^\infty \left(\frac{1}{2!n^2}+\frac{1}{3!n^3}+\dots\right)
\end{align*}



At this point I'm lost. I tried using d'Alembert's ratio test as follows:




$$\lim \frac{a_{n+1}}{a_n}=\lim\frac{\sqrt[n+1]{e}-1-\frac{1}{n+1}}{\sqrt[n]{e}-1-\frac{1}{n}}$$



which I tried to simplify with basic limit laws (hopefully correctly):



$$\lim\frac{\sqrt[n+1]{e}-1}{\sqrt[n]{e}-1} = 1$$



I don't know where to go from here. Thank you for all your help in advance.
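A comparison test would settle it: from the expansion above, $0 \le e^{1/n}-1-\frac1n = \sum_{k\ge2}\frac{1}{k!\,n^k} \le \frac{e-2}{n^2}$ for $n\ge1$ (since $n^k\ge n^2$ for $k\ge2$), so the series converges by comparison with $\sum 1/n^2$. The bound is easy to sanity-check numerically (illustrative sketch):

```python
import math

# term a_n = e^(1/n) - 1 - 1/n, bounded above by (e - 2)/n^2
def a(n: int) -> float:
    return math.exp(1 / n) - 1 - 1 / n

# check the comparison bound on a range of n (n = 1 gives equality)
assert all(0 <= a(n) <= (math.e - 2) / n**2 for n in range(2, 10001))
print(sum(a(n) for n in range(1, 100000)))  # partial sums stabilize
```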

real analysis - Sufficient conditions for this function being linear





Let $f$ be a real-valued function for which, for every real $x,y$:



$$f(x+y) = f(x)+f(y)$$



Does this imply that $f$ is a linear function ($f(x)=a\cdot x$)?




  1. If $f$ is differentiable, I think the answer is yes. Take the derivative of $f(x+y)$ by $x$: $f'(x+y)=f'(x)$. This is true for all $y$, hence $f'$ is a constant function. Additionally, $f(0)=f(x+0)-f(x)=0$. Hence $f(x)=ax$. (is this correct?)


  2. If $f$ is continuous, I think the answer is also yes. First, note that $f(0)=0$. Let $a=f(1)$. By addition, for every integer $x$, $f(x)=a\cdot x$. This also must be true for every rational $x$. Hence, by continuity, it must be true for all $x$. (is this correct?)


  3. What is the answer for general $f$?




Answer



It is quite easy to prove for a continuous function, as you pointed out, and in fact also holds true for measurable functions. This is the most general setting.


elementary number theory - Find the least $n$ such that the fraction is reducible




So I have this type of question I've never seen before. It smells like Number Theory to me, and I've never studied Number Theory, but I know a few very basic Number Theory facts. For instance I know Fermat's Little Theorem, that for any two integers their GCD is some linear combination of the numbers, and I have some basic facility with manipulating and reasoning with integers modulo $n$ (or are they called "residues"? I've heard both words, and I'm not super clear on the difference).



Anyway, the problem is to find the least positive $n$ such that $\dfrac{n-13}{5n+6}$ is a positive reducible fraction. I can see that this means finding the least $n$ such that $\gcd(n-13, 5n+6)>1$ but I don't know what to do with that. I've been trying to think about it as: the numerator "starts" at 1 (for $n=14$ to get a positive numerator) and the denominator starts at 76. From there the numerator increments by 1 and the denominator increments by 5, as $n$ increments by 1. Again, though, since I want to stop searching when I've found any factor, I'm not sure what to do with this either.



I know I can get an upper bound on the number by choosing $n$ a factor of both 13 and 6 which would have $\operatorname{lcm}(6,13) = 78$. But checking manually to find a number between 14 and 78 to see if there's anything lower seems like it's not the kind of thing we should be doing. But I don't see any guarantee that there wouldn't be some other solution in that interval.



Any thoughts or suggestions please?


Answer



Euclidean division is the key. Since
$$5n+6 = 5(n-13) + 71$$

We see that $\text{gcd}(5n+6, n-13)$ must divide 71, which is prime. As you say, we want $\text{gcd}(5n+6, n-13)$ to be greater than $1$, so it must be $71$. In particular
$$n-13 = 71k \quad \Longrightarrow \quad n = 13+71k$$
The least value is $n=84$ for $k=1$, and therefore
$$
\frac{n-13}{5n+6} = \frac{84-13}{5\cdot 84 + 6} =\frac{71}{426}
$$
which is reducible because
$$\frac{71}{426} = \frac{1}{6}$$
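Brute force confirms that nothing smaller than $84$ works; for $14 \le n < 84$ the numerator $n-13$ is less than $71$, so it cannot share the only possible common factor. A quick illustrative check:

```python
from math import gcd

# least n making (n - 13)/(5n + 6) a positive reducible fraction
least = next(n for n in range(14, 1000) if gcd(n - 13, 5 * n + 6) > 1)
print(least, gcd(least - 13, 5 * least + 6))  # 84, with common factor 71
```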


real analysis - Proof that limit of convergent sequence lies in [a,infinity)



I have a sequence $(a_n)_{n=1}^\infty$, which converges, and has $a_n \in [a,\infty)$. I need to prove that the limit of $a_n$ lies in $[a,\infty)$.



I thought that doing this by contradiction would be the best way to go. I said that $\lim_{n\rightarrow \infty}a_n = L$. $L$ should be in $[a,\infty)$ (with all the other elements of $a_n$).



For the contradiction, I'm saying that $L \notin [a,\infty)$. If this is the case, then $L \in (-\infty, a)$



So I figured to get the contradiction, I would use this 'fact' about L, and then find that the sequence does not converge if that's the case, but I'm not sure how to do that. I've tried using the $\epsilon - N$ definition of a limit but that didn't really get me anywhere.




Thanks for your time


Answer



Yes, that's the way to go. Take $\varepsilon=a-L$. This makes sense because $a-L>0$. Now let $N\in\mathbb N$. Then there is an $n\geqslant N$ such that $|a_n-L|\geqslant\varepsilon$. Indeed, you can take $n=N$, because $$L<L+\varepsilon=a\leqslant a_N \implies |a_N-L|=a_N-L\geqslant\varepsilon.$$ Hence no $L<a$ can be the limit of the sequence, which is the desired contradiction.

reference request - Dummit and Foote as a First Text in Abstract Algebra

I'm wondering how Dummit and Foote (3rd ed.) would fare as a first text in Abstract Algebra. I've researched this question on this site, and found a few opinions, which conflicted. Some people said it is better as a reference text, or something to read after one has a fair deal of exposure to the main ideas of abstract algebra, while others have said it is fine for a beginner. Is a text such as Herstein's Topics in Algebra, Artin's Algebra, or Fraleigh's A First Course in Abstract Algebra a better choice?



Here's a summary of the parts of my mathematical background that I presume are relevant. I've covered most of Spivak's famed Calculus text (in particular the section on fields, constructing $\mathbf{R}$ from $\mathbf{Q}$, and showing the uniqueness of $\mathbf{R}$ which is probably the most relevant to abstract algebra) so I am totally comfortable with rigorous proofs. I also have a solid knowledge of elementary number theory; the parts that I guess are most relevant to abstract algebra are that I have done some work with modular arithmetic (up to proving fundamental results like Euler's Theorem and the Law of Quadratic Reciprocity), the multiplicative group $(\mathbf{Z}/n\mathbf{Z})^{\times}$ (e.g. which ones are cyclic), polynomial rings such as $\mathbf{Z}[x],$ and studying the division algorithm and unique factorization in $\mathbf{Z}[\sqrt{d}]$ (for $d \in \mathbf{Z}$). I have only a little bit of experience with linear algebra (about the first 30 pages or so of Halmos' Finite Dimensional Vector Spaces and a little bit of computational knowledge with matrices) though.




With this said, I don't have much exposure to actual abstract algebra. I know what a group, ring, field, and vector space are but I haven't worked much with these structures (i.e. I can give a definition, but I have little intuition and only small lists of examples). I have no doubt that Dummit and Foote is comprehensive enough for my purposes (I hope to use it mostly for the sections on group theory, ring theory, and Galois Theory), but is it a good text for building intuition and lists of examples in abstract algebra for someone who has basically none of this? Will I, more than just learning theorems and basic techniques, develop a more abstract and intuitive understanding of the fundamental structures (groups, rings, modules, etc.)? It is a very large and supposedly dense text, so will the grand "picture" of group theory, for example, be lost? I've heard it is a book for people who have some basic intuition in group and ring theory, and I hesitate to put myself in this category given my description of my relevant knowledge in the paragraph above. Do you think the text is right for me, or would I be more successful with one of the three texts I mentioned in the first paragraph?



Thanks for reading this (lengthy) question. I look forward to your advice!

Thursday 20 February 2014

Is this function bijective, surjective and injective?

$\lfloor\cdot\rfloor: \mathbb Q \rightarrow\mathbb Z$ with $\lfloor x\rfloor :=$ floor of $x$.



I know a function is injective by using $f(x_1)=f(x_2) \Rightarrow x_1=x_2$
and a function is surjective if each element of the codomain, $y\in Y$, is the image of some element in the domain $x\in X$,
and bijective if the function is both injective and surjective.



I don't know what floor of $x$ is.
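For orientation, $\lfloor x\rfloor$ is the greatest integer less than or equal to $x$. A quick experiment already suggests the shape of the answer: many distinct rationals share a floor (so injectivity fails), while every integer $z$ satisfies $\lfloor z\rfloor = z$ (so every element of $\mathbb Z$ is hit). An illustrative sketch:

```python
import math

# floor is not injective: distinct inputs, same output
print(math.floor(0.25), math.floor(0.75))  # both 0

# floor is surjective onto Z: every integer z is its own floor,
# and every integer is a rational number
print(all(math.floor(z) == z for z in range(-5, 6)))  # True
```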

calculus - Negation of the Definition of Limit of a Function



Question



Suppose we are dealing with real valued functions $f(x)$ of one real variable $x$. We say that the limit of $f(x)$ at point $a$ exists and equals to $L$ if and only if




$$\exists L:\left( {\forall \varepsilon > 0,\exists \delta > 0:\left( {\forall x,0 < \left| {x - a} \right| < \delta \implies \left| {f(x) - L} \right| < \varepsilon } \right)} \right)$$



and show this by the symbolism



$$\lim_{x \to a} f(x) = L$$



Now, what do we say when we want to state that the limit of function $f(x)$ does not exist at $a$? In fact, what is the negation of the above statement? I am interested to obtain the negation with a step by step approach using tautologies in logic. To be specific, I want to start from



$$\neg \left[ {\exists L:\left( {\forall \varepsilon > 0,\exists \delta > 0:\left( {\forall x,0 < \left| {x - a} \right| < \delta \implies \left| {f(x) - L} \right| < \varepsilon } \right)} \right)} \right]$$




and then go through it to get the final form of negation (See the example below).






My Thought



I just wrote down the two following negations without going through a step by step approach.



$$\forall L,\exists \varepsilon > 0:\left( {\forall \delta > 0,\exists x:0 < \left| {x - a} \right| < \delta \implies \left| {f(x) - L} \right| \ge \varepsilon } \right)$$




or



$$\nexists L:\left( {\forall \varepsilon > 0,\exists \delta > 0:\left( {\forall x,0 < \left| {x - a} \right| < \delta \implies \left| {f(x) - L} \right| < \varepsilon } \right)} \right)$$



I want to know whether these are true or not.






Example




I will give an example of what I mean by a step by step approach. Consider the following statement



$$P \implies R$$



I want to take a step by step approach to obtain the negation of the above statement. Here is what they usually do in logic



\begin{align}
\neg \left( {P \implies R} \right) &\iff \neg \left( {\neg P \vee R} \right) & \text{Conditional Disjunction} \\
&\iff \neg \neg P \wedge \neg R & \text{De Morgan's Law} \\
&\iff P \wedge \neg R & \text{Double Negation}
\end{align}


Answer



The first negation is almost completely right. You forgot to negate the implication at the end.



Remember that the negation of an "if, then" statement is not an "if, then" statement. $A \implies B$ has negation "$A \land \neg B$" (read: $A$ and not $B$).



So, "if the sky is blue, then I love cheese" has negation "the sky is blue and I do not love cheese."



We say $\lim \limits_{x \to a} f(x) = L$ if $$\forall \epsilon > 0\text{, }\exists \delta > 0 \text{ such that }\forall x\text{, }|x - a| < \delta \implies |f(x) - L| < \epsilon.$$ Then the negation of this is: $$\exists \epsilon > 0\text{ such that }\forall \delta > 0\text{, }\exists x\text{ such that }|x - a| < \delta \textbf{ and } |f(x) - L| \geq \epsilon.$$







UPDATE Here is how to negate the following statement step by step




Negation of "$\lim \limits_{x \to a} f(x)$ exists", i.e., of $$\exists L\forall \epsilon > 0\exists \delta > 0\forall x:(|x - a| < \delta \implies |f(x) - L| < \epsilon).$$




We say $\lim \limits_{x \to a} f(x)$ does not exist if:




$\neg[\exists L\forall \epsilon > 0\exists \delta > 0\forall x:(|x - a| < \delta \implies |f(x) - L| < \epsilon)]$



$\forall L \neg[\forall\epsilon > 0\exists \delta > 0\forall x:(|x - a| < \delta \implies |f(x) - L| < \epsilon)]$



$\forall L\exists \epsilon > 0\neg[\exists \delta > 0\forall x:(|x - a| < \delta \implies |f(x) - L| < \epsilon)]$



$\forall L\exists \epsilon > 0\forall \delta > 0\neg[\forall x:(|x - a| < \delta \implies |f(x) - L| < \epsilon)]$



$\forall L\exists \epsilon > 0 \forall \delta > 0\exists x: \neg[|x - a| < \delta \implies |f(x) - L| < \epsilon]$




$\forall L\exists \epsilon > 0 \forall \delta > 0\exists x: |x - a| < \delta \land \neg(|f(x) - L| < \epsilon)$ (Negation of implication)



$\forall L\exists \epsilon > 0 \forall \delta > 0\exists x: |x - a| < \delta \land |f(x) - L| \geq \epsilon$


soft question - 'Obvious' theorems that are actually false




It's one of my real analysis professor's favourite sayings that "being obvious does not imply that it's true".



Now, I know a fair few examples of things that are obviously true and that can be proved to be true (like the Jordan curve theorem).



But what are some theorems (preferably short ones) which, when put into layman's terms, the average person would claim to be true, but which are actually false
(i.e. counter-intuitively false theorems)?



The only ones that spring to my mind are the Monty Hall problem and the divergence of $\sum\limits_{n=1}^{\infty}\frac{1}{n}$ (counter-intuitive for me, at least, since $\frac{1}{n} \to 0$
).




I suppose, also, that $$\lim\limits_{n \to \infty}\left(1+\frac{1}{n}\right)^n = e=\sum\limits_{n=0}^{\infty}\frac{1}{n!}$$ is not obvious, since one 'expects' that $\left(1+\frac{1}{n}\right)^n \to (1+0)^n=1$.



I'm looking just for theorems and not their (dis)proof -- I'm happy to research that myself.



Thanks!


Answer



Theorem (false):




One can arbitrarily rearrange the terms in a convergent series without changing its value.
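A numeric sketch of the classical counterexample: the alternating harmonic series sums to $\ln 2 \approx 0.693$, but rearranging it to take two positive terms for each negative one pushes the partial sums toward $\tfrac{3}{2}\ln 2 \approx 1.040$ instead.

```javascript
// Partial sums of the alternating harmonic series, plain vs rearranged.
const N = 200000;

let plain = 0;
for (let n = 1; n <= N; n++) plain += (n % 2 ? 1 : -1) / n;

let rearranged = 0;
let odd = 1, even = 2;
for (let k = 0; k < N / 3; k++) {
  rearranged += 1 / odd; odd += 2;   // two positive (odd-denominator) terms
  rearranged += 1 / odd; odd += 2;
  rearranged -= 1 / even; even += 2; // one negative (even-denominator) term
}

console.log(plain.toFixed(3), rearranged.toFixed(3)); // 0.693 1.040
```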




geometry - How to get center coordinates of circles on edge bigger circle?



I want to draw 8 smaller circles on the edge of a big circle. The distance between the edges of adjacent small circles should be 20. How can I find the center coordinates of the circles? I already know the center coordinates of the first small circle.




What I know:




  • Center coordinates of the big circle are (100,100)

  • Big circle radius is 200

  • There are 8 small circles

  • Distance between edges of adjacent small circles is 20

  • Center coordinates of the first small circle

  • Size of all small circles is equal




What I want to know:




  • Center coordinates of small circles



Sketch problem:




[sketch of the big circle with the eight small circles omitted]


Answer



You already have a formula for finding the center of one of the
small circles:



x = xBigCircle + Math.round(200 * Math.cos(phi));
y = yBigCircle + Math.round(200 * Math.sin(phi));


Since you want the small circles all to be the same size and each one

is the same distance from each of its neighbors, they will be evenly
spaced around the circle. Since one full turn around the circle is
the angle 2 * Math.PI, you want one eighth of that, which is
0.25 * Math.PI. Stepping by that angle around the circle eight
times, starting at the first circle, gets you back to the first circle
while finding seven other equally-spaced points.



The centers of the small circles should be at



x = xBigCircle + Math.round(200 * Math.cos(phi + n * 0.25 * Math.PI));

y = yBigCircle + Math.round(200 * Math.sin(phi + n * 0.25 * Math.PI));


where n ranges from $0$ through $7$, inclusive
($0 \leq n < 8$).
The value $n=0$ is just the center of the first small circle,
which you already know.



To make a "gap" of size $20$ between each pair of small circles,
just set the radius of the small circles accordingly.

The distance between adjacent centers is 200 * 2 * Math.sin(Math.PI/8);
subtract $20$ from that for the desired gap, then divide by $2$
to get the desired radius.
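Putting the pieces together with the numbers from the question (big-circle center $(100,100)$, radius $200$, eight circles, gap $20$) gives a short sketch; the start angle `phi` is assumed to be $0$ here, since the question only says the first center is known.

```javascript
const xBigCircle = 100, yBigCircle = 100, R = 200, count = 8, gap = 20;
const phi = 0; // assumed start angle; replace with the real first-circle angle

const step = 2 * Math.PI / count; // 0.25 * Math.PI for eight circles
const centers = [];
for (let n = 0; n < count; n++) {
  centers.push({
    x: xBigCircle + Math.round(R * Math.cos(phi + n * step)),
    y: yBigCircle + Math.round(R * Math.sin(phi + n * step)),
  });
}

// Chord between adjacent centers; subtract the gap, split between two radii.
const chord = 2 * R * Math.sin(Math.PI / count);
const smallRadius = (chord - gap) / 2;

console.log(centers[0], smallRadius.toFixed(1)); // { x: 300, y: 100 } 66.5
```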


Can one eigenvalue have two different eigenvectors?

I think the answer is no, but to be precise: is it correct to assume that if we have one eigenvalue that is repeated, then the eigenvectors for it have to be the same too?




For example,



$$\begin{gathered}
T(1,0,0) = (0, - 2,0) \hfill \\
T(0,1,0) = (0,0.5,0) \hfill \\
T(0,0,1) = (0,1.5,0) \hfill \\
\end{gathered}$$



The transformation matrix $T$ has two eigenvalues that are zero, but can this be the case? The other eigenvalue is $0.5$.
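By linearity, the images above determine the map as $T(x,y,z) = (0,\,-2x + 0.5y + 1.5z,\,0)$. A small sketch shows two linearly independent vectors both sent to zero, so the repeated eigenvalue $0$ really does have two independent eigenvectors.

```javascript
// The transformation from the question, acting on vectors [x, y, z].
const T = ([x, y, z]) => [0, -2 * x + 0.5 * y + 1.5 * z, 0];

// Two linearly independent vectors in the kernel, i.e. eigenvectors for 0.
console.log(T([1, 4, 0]));  // [ 0, 0, 0 ]
console.log(T([0, 3, -1])); // [ 0, 0, 0 ]
```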

Wednesday 19 February 2014

number theory - Solving $f(x)f(y) = f(x + y)$





I am a little lost trying to derive what form $f(x)$ must have if we know $f(x)f(y) = f(x + y)$ for real inputs $x, y$.



My attempt so far:



Set $y=0$ and we have $f(x)f(0) = f(x)$ meaning either $f(x) = 0$ or $f(0) = 1$. Not sure what to do with this.



What about setting $y=x$? Then $f(x)^2 = f(2x)$. Multiply both sides by $f(x)$ and then $f(x)^3 = f(2x)f(x) = f(2x + x) = f(3x)$ and so on, so $f(x)^n = f(nx)$ for some integer $n \geq 2$. But it's also true for $n=1$ because $f(x)^1 = f(1 \cdot x) = f(x)$ and it's also true for $n=0$ (if we assume $f(0) = 1$) since $f(x)^0 = f(0 \cdot x) = f(0) = 1$, so $f(x)^n = f(nx)$ holds for integer $n \geq 0$.



For $n > 0$: raise both sides to $1/n$ and we get




$f(x) = f(nx)^{1/n}$



I don't really know where I am going with this or if it's even the right track. Am I even allowed to do that in the first place? Am I supposed to be assuming $f(x)$ is real? Or complex? Or positive? Or something? Should I be assuming $x$ and $y$ are complex? I don't really know what assumptions to make exactly. I'm just trying to prove/show that this all implies $f(x)$ has some exponential form but pretending I don't know that yet.



Could use any corrections or a push in the right direction.


Answer



You're doing fine. So far you've managed to identify that either




  1. $f$ is everywhere zero, or



  2. $f(0) = 1$, in which case $f(nx) = f(x)^n$ for every positive integer $n$.




You can probably also manage to show that for every positive integer $k$, you have $f(x/k) = f(x)^{1/k}$, and then combine these to conclude that for any rational number $r$, $f(rx) = f(x)^r$.



A good next place to look is to say "let's say $f(1) = A$." Then we can work out $f(2), f(3), \ldots$ and $f(1/2), f(1/3), \ldots$, and maybe even $f(r)$ for every rational number $r$ with a little cleverness.



But what about irrationals? To say anything useful there, I believe you need an added assumption like "$f$ is continuous".



Post-comment additions




For things like this problem, it can be really helpful to write down everything in detail, rather than just as notes. You could, for instance, say this:



I'm studying the functional equation
$$
f(x + y) = f(x)f(y), \tag{1}
$$

which I'll assume is defined for $x$ a real number, and that the values taken by $f$ are also real, i.e., that I have
$$
f: \Bbb R \to \Bbb R : x \mapsto f(x)

$$



Lemma 1: If $f(0) = 0$, then $f(x) = 0$ for all $x \in \Bbb R$.



Proof: From equation 1, we have $f(x) = f(x + 0) = f(x) f(0) = f(x)\cdot 0 = 0$. QED.



Lemma 2: Assuming $c = f(0) \ne 0$, we have $f(0) = 1$.
Proof: $f(0) = f(0 + 0) = f(0)^2$, so $c = c^2$, hence $c - c^2 = c(1-c) = 0$, whence $c = 0$ or $c = 1$. We've assumed $c \ne 0$, hence $c = 1$. QED.



Henceforth we'll assume $f(0) = 1$ and ignore the always-zero solution.




Lemma 3: For any $x\in \Bbb R$, $f(2x) = f(x)^2; f(3x) = f(x)^3$.



Proof: $f(2x) = f(x + x) = f(x) f(x)$ by equation 1. Similarly, breaking up $f(3x) = f(2x + x) = f(2x)f(x)$ establishes the second claim.



Lemma 4: For any positive integer $n$, $f(nx) = f(x)^n$.
Proof, by induction: Let $P(m)$ be the statement that for the positive integer $m$, and for every real number $x$, $f(mx) = f(x)^m$. We know that for any real $x$, $f(1x) = f(x) = f(x)^1$, so $P(1)$ is true. Suppose that for some integer $k$, we know $f(kx) = f(x)^k$ (this is our induction hypothesis $P(k)$). Then let's examine $f((k+1) x)$:
\begin{align}
f((k+1)x) &= f(kx + x) \\
&= f(kx)f(x) & \text{By equation 1} \\
&= f(x)^kf(x) & \text{By the induction hypothesis}\\
&= f(x)^{k+1}.
\end{align}

We see that $P(k)$ implies $P(k+1)$; combining this with the fact that $P(1)$ is true, we find (by induction) that $P(n)$ is true for all positive integers $n$.



...and you continue in this vein. It really helps to know what assumptions you're making in each step.
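As a numeric sanity check of where this is heading (a sketch, with an arbitrary base $A$ chosen for illustration): any function of the form $f(x) = A^x$ does satisfy equation 1.

```javascript
// For a continuous, not-everywhere-zero solution one expects f(x) = A**x
// with A = f(1). Check f(x) * f(y) == f(x + y) up to floating-point error.
const A = 2.5; // arbitrary positive base, just for illustration
const f = x => Math.pow(A, x);

const pairs = [[0.3, 1.7], [-2, 5], [0, 0.123]];
const ok = pairs.every(([x, y]) => Math.abs(f(x) * f(y) - f(x + y)) < 1e-9);
console.log(ok); // true
```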


elementary number theory - Help with indirect proof $gcd(9k+4,2k+1)=1$



Show: $\gcd(9k+4,2k+1)=1 ~~ \forall k\in \mathbb Z$




Indirect proof.



If $1\neq d=\gcd(9k+4,2k+1)$ for some $k\in \mathbb Z$,
then $d$ has to be of the form $2m+1$ for an integer $m$.



That somehow throws me back at the beginning.


Answer



Please excuse me for this messed up question,
but I've come up with an answer as compensation:



Just apply the Euclidean algorithm, and in three lines you get $1$ as the gcd.
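For completeness, here is a sketch of the three Euclidean-algorithm lines the answer alludes to:

```latex
\begin{align}
9k+4 &= 4\,(2k+1) + k \\
2k+1 &= 2\,k + 1 \\
k &= k \cdot 1 + 0
\end{align}
```

so $\gcd(9k+4,\,2k+1)=\gcd(2k+1,\,k)=\gcd(k,\,1)=1$.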


probability - throwing a dice repeatedly so that each side appear once.

Pratt is given a fair die. He repeatedly
throws the die until he has seen each number (1 to 6) at least once.




Define the random variable $X$ to be the total number of trials, that is,
how many times Pratt has to throw the die so that each side of the die appears at least once.
Determine the expected value $E(X)$.
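This is the classical coupon-collector setup: the wait for the $i$-th new face is geometric with success probability $(7-i)/6$, so $E(X) = 6\left(\tfrac11+\tfrac12+\cdots+\tfrac16\right)$. A one-loop sketch of that formula:

```javascript
// E(X) = sum_i 6/(7 - i) = 6 * (1/1 + 1/2 + ... + 1/6) = 6 * H_6.
let expected = 0;
for (let k = 1; k <= 6; k++) expected += 6 / k;

console.log(expected.toFixed(1)); // 14.7
```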

trigonometry - Deriving sine from cosine

If $\theta$ is an angle lying between $90^{\circ}$ and $180^{\circ}$ and $\cos(\theta) = - \frac{4}{5}$, why does this mean that $\sin(\theta) = \frac{3}{5}$?
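A sketch of the Pythagorean-identity argument the question is after:

```latex
\sin^2\theta = 1 - \cos^2\theta = 1 - \left(-\tfrac{4}{5}\right)^2 = \tfrac{9}{25}
\quad\Longrightarrow\quad
\sin\theta = \pm\tfrac{3}{5},
```

and since $90^{\circ} < \theta < 180^{\circ}$ places $\theta$ in the second quadrant, where sine is positive, $\sin\theta = \tfrac{3}{5}$.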

limits - Why does $\lim\limits_{x\to0}\sin\left(\left|\frac{1}{x}\right|\right)$ not exist?



Can someone explain, in simple terms, why the following limit doesn't exist?



$$\lim \limits_{x\to0}\sin\left(\left|\frac{1}{x}\right|\right)$$



The function is even, so the left hand limit must equal the right hand limit. Why does this limit not exist?



Answer



The function is indeed even, but as we will see, this does not prove the existence of the limit.



Assume that the limit exists, and consider



$$ \lim_{x\to 0^+} \sin\left(\frac1x \right) $$



This is the same as your limit because, if the limit exists, the left-hand and right-hand limits are equal. Since $x > 0$, I removed the absolute value sign.



$$ \lim_{u\to\infty} \sin u $$




Using $u = 1/x$, we see that the limit certainly does not exist.
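Numerically (a sketch): along two different sequences tending to $0^+$, $\sin(1/x)$ takes the values $0$ and $1$ respectively, so no single limit can work.

```javascript
// a_n -> 0 with sin(1/a_n) = sin(n*pi) = 0; b_n -> 0 with sin(1/b_n) = 1.
const a = n => 1 / (n * Math.PI);
const b = n => 1 / (Math.PI / 2 + 2 * n * Math.PI);

for (const n of [10, 100, 1000]) {
  // Tiny deviations from 0 and 1 are floating-point noise.
  console.log(Math.sin(1 / a(n)), Math.sin(1 / b(n)));
}
```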


real analysis - Show that a certain function is continuous

Suppose that $S^1$ is the unit circle in $\mathbb{C}$ and suppose that $g\colon [0,2\pi]\rightarrow \mathbb{C}$ is continuous such that $g(2\pi) = g(0)$.



I have to show that $h\colon S^1 \rightarrow \mathbb{C}$, $x \mapsto g(t(x))$, is continuous, where $t(x)$ is the number such that $x = e^{it(x)}$.



I guess that I have to show that if $y \in B(x,\epsilon)$, then $y \in B(t(x),\epsilon)$ (if $\epsilon$ is small enough). This directly implies the result. Can anyone help me to show this?

Tuesday 18 February 2014

real analysis - Proving Injectivity




The problem is to show the function $f:\mathbb{R}^2\rightarrow\mathbb{R}^2$ given by



$$f(x,y)=(\tfrac{1}{2}x^2+y^2+2y,\,x^2-2x+y^3)$$ is injective on the set



$$M=\{(x,y)\in\mathbb{R}^2:|x-1|+|y+1|<\tfrac{1}{6}\}.$$



My idea is to consider the following map (here, $u,v\in\mathbb{R}^2$):



$$\phi_v(u)=u-f(u)+v,\quad v\in M$$




If I manage to show that




  1. $\phi_v:D\rightarrow D$ is well-defined for some closed sets


  2. $\phi_v$ is a contraction (Lipschitz constant $<1$) on $D$




then by the Contraction Mapping Theorem, $\phi_v$ has a unique fixed point. Hence, each $v$ has a unique preimage $u$; i.e., $f$ is injective as desired.




But I ran into troubles when I attempted to find a suitable closed set $D$. Obviously it depends on the domain $M$. $M$ given here is really weird so I am not too sure how to proceed.


Answer



I'll present an approach along the lines of my comment. First, I'll normalize the derivatives at $(1,-1)$ by dividing the second component by $3$:
$$\tilde f(x,y) = (x^2/2+y^2+2y, (x^2-2x+y^3)/3)$$
This is done so that the Jacobian matrix of $\tilde f$ at $(1,-1)$ is the identity. Now split $\tilde f=L+g$ where $L(x,y)=(x-3/2,y+1/3)$ is the linear part and $g$ is the rest. If we can show that $g$ is Lipschitz with a constant less than 1, we are done.



The first component of $g$ is $g_1(x,y)=x^2/2+y^2+2y-x+3/2$, with the gradient $\nabla g_1=\langle (x-1),2(y+1)\rangle$. We estimate the gradient by $|\nabla g_1|< \sqrt{5}/6<1/2$.



The second component of $g$ is $g_2(x,y)= (x^2-2x+y^3-3y-1)/3$, with the gradient $\nabla g_2=\langle 2(x-1)/3,y^2-1\rangle$. Since $|y^2-1|\le (|y+1|+2)|y+1|<13/36$, we obtain $|\nabla g_2|\le \sqrt{1/81+(13/36)^2}<1/2$.




Since both components have Lipschitz constant $<1/2$, the map $g$ has Lipschitz constant $<1$. (Of course a more precise bound can be given, but this suffices.)
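A numeric spot-check of these gradient bounds (a sketch: points are sampled at random from the diamond $M$, so this is a sanity check, not a proof):

```javascript
// Gradient norms of g1 and g2 from the answer, sampled over
// M = { (x, y) : |x - 1| + |y + 1| < 1/6 }.
const g1Grad = (x, y) => Math.hypot(x - 1, 2 * (y + 1));
const g2Grad = (x, y) => Math.hypot(2 * (x - 1) / 3, y * y - 1);

let worst1 = 0, worst2 = 0;
for (let i = 0; i < 10000; i++) {
  // Sample the diamond: |dx| < 1/6, then |dy| < 1/6 - |dx|.
  const dx = (Math.random() - 0.5) / 3;
  const rest = 1 / 6 - Math.abs(dx);
  const dy = (Math.random() - 0.5) * 2 * rest;
  const x = 1 + dx, y = -1 + dy;
  worst1 = Math.max(worst1, g1Grad(x, y));
  worst2 = Math.max(worst2, g2Grad(x, y));
}

console.log(worst1 < 0.5 && worst2 < 0.5); // true
```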


Integration by substitution limits confusion



If I have the integral: $\displaystyle\int_{0}^{\infty}t^{-\frac{1}{2}}e^{-t} dt$



Am I allowed to make the substitution $t=x^2$, because I am then not sure what the limits of integration would be as for $t$ positive $x$ could be negative or positive?


Answer



Hint. You may instead perform the change of variable $x=\sqrt{t}>0$, giving $dx=\frac{1}{2}t^{-\frac{1}{2}}dt$, to get
$$
\int_{0}^{\infty}t^{-\frac{1}{2}}e^{-t} dt=2\int_{0}^{\infty}e^{-x^2} dx
$$ then you may use the standard Gaussian result.
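A numeric sketch confirming the substitution: both sides should equal $\Gamma(1/2)=\sqrt{\pi}\approx 1.77245$. Here the right-hand integral is approximated by a midpoint rule on $[0,10]$ (the tail beyond $10$ is negligible).

```javascript
// Approximate 2 * integral_0^infty e^{-x^2} dx with a midpoint rule.
const h = 1e-4;
let sum = 0;
for (let x = h / 2; x < 10; x += h) sum += Math.exp(-x * x) * h;

console.log((2 * sum).toFixed(5));          // 1.77245
console.log(Math.sqrt(Math.PI).toFixed(5)); // 1.77245
```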



real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...