Sunday 30 June 2013

elementary set theory - Saving a failed bijection between $\mathbb{N}$ and $\mathbb{Q^+}$



I just tried to have some fun on a late evening and create a bijection between $\mathbb{N}$ and $\mathbb{Q^+}$ (after finding a challenge in a text I read).




The very first thing I did, and which is the elementary mistake I guess, was looking for a way to generate the sequence $(1, \frac{1}{2}, \frac{1}{3}, \frac{2}{3}, \frac{1}{4}, \frac{2}{4}, \frac{3}{4}, \dots)$.



I then proceeded to find two functions, $d,n : \mathbb{Z} \cap [1, \infty) \to \mathbb{Z} \cap [1, \infty)$, such that $d$ generates the sequence $(2, 3, 3, 4, 4, 4, 5, 5, 5, 5 \dots )$ and $n$ generates the sequence $(1, 1, 2, 1, 2, 3, 1,2, 3, 4 \dots)$, which in the end gives the function



$$h(x) =
\begin{cases}
x & \text{if}\> x = 0 \>\text{or}\>x=1 \\
\frac{n(x)}{d(x)} & \text{else}
\end{cases}$$




The functions $n$ and $d$ are given by:



$$
d(x) = \left\lceil\frac{1}{2}(\sqrt{8(x-1)+1}-1) +1\right\rceil\\
n(x) = (x-1) - \frac{(d(x) - 1)(d(x)-2)}{2}
$$



Some quick programming verified that these are indeed the functions I am looking for. However, to my horror I realised a tad too late that this is in fact not a bijection between $\mathbb{N}$ and $\mathbb{Q^+}$, as the sequence I used as a starting point is not the (positive) rational numbers, but instead a subset of all ordered pairs $(x,y)$. For instance, $\frac{1}{2}$ and $\frac{2}{4}$ are the same rational number. Consequently, if $h : \mathbb{N} \to \mathbb{Q^+}$, then it is not a bijection as it is not injective.
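For reference, here is a minimal sketch of the "quick programming" mentioned above (my own reconstruction, not the original code; floating-point square roots are fine at this scale):

```python
import math

def d(x):
    # d(x) = ceil((sqrt(8(x-1)+1) - 1)/2 + 1), as in the formula above
    return math.ceil((math.sqrt(8 * (x - 1) + 1) - 1) / 2 + 1)

def n(x):
    # n(x) = (x-1) - (d(x)-1)(d(x)-2)/2
    return (x - 1) - (d(x) - 1) * (d(x) - 2) // 2

assert [d(x) for x in range(2, 12)] == [2, 3, 3, 4, 4, 4, 5, 5, 5, 5]
assert [n(x) for x in range(2, 12)] == [1, 1, 2, 1, 2, 3, 1, 2, 3, 4]

# non-injectivity into Q^+: h(2) = 1/2 and h(6) = 2/4 are the same rational
print((n(2), d(2)), (n(6), d(6)))  # (1, 2) (2, 4)
```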




First of all: Do my functions seem reasonable? Is this in fact a bijection between $\mathbb{N}$ and the ordered pairs? A very open question I guess, but I often fear I've made some silly mistake here or there.



Secondly: Is there any fairly easy way to "save" my function $h$ so that it indeed becomes a bijection between $\mathbb{N}$ and $\mathbb{Q^+}$?



Lastly: I assume there exists some bijection from the set of all ordered pairs to $\mathbb{Q^+}$? How would such a bijection look? And how would a bijection from my subset of ordered pairs to $\mathbb{Q^+}$ look?


Answer



First, your function doesn't even seem to be a bijection between $\mathbb Z^+$ and $\mathbb Z^+\times\mathbb Z^+$: in your sequence the first component is never allowed to be as large as the second. What you need is to ensure every pair appears. This is normally done by walking diagonals in $\mathbb Z^+\times\mathbb Z^+$, i.e. first diagonal: $(1,1)$; then second: $(2,1)$, $(1,2)$; then third: $(3,1)$, $(2,2)$, $(1,3)$; etc.



This map means that you can construct a surjection from $\mathbb Z^+$ to $\mathbb Q^+$.




Related is the Schröder-Bernstein theorem, which says that if there's an injection both ways then there's a bijection (and the proof shows how to actually construct one). This is related since, given a surjection, you can create a reverse injection by (arbitrarily) selecting one of the elements that maps to each target. For example, the surjection above is $q(1)=1/1$, $q(2)=2/1$, $q(3)=1/2$, $q(4)=3/1$, $q(5)=2/2$, $q(6)=1/3$. Going in reverse we can choose $r(1) = 1$ (since $q(1)=1/1$; we could instead have chosen $r(1)=5$ as $q(5)=2/2=1$), and in this way we get an injection $r$ from $\mathbb Q^+$ to $\mathbb N^+$. Of course the identity map is an injection the other way, and consequently there exists a bijection between them.



To be concrete, we can save the day for a bijective map between $\mathbb N^+\times \mathbb N^+$ and $\mathbb Q^+$, but it's probably easier to work with the original intention: mapping between $\mathbb N^+$ and $\mathbb Q^+$ directly. What you do is take an enumeration $q_j$ ($1/1$, $2/1$, $1/2$, $3/1$, $2/2$, $1/3$, ...) covering all positive rational numbers, traverse it, and keep only the rational numbers that have not appeared earlier in the sequence (for example you would skip $2/2$, which has appeared earlier as $1/1$, leaving $1/1$, $2/1$, $1/2$, $3/1$, $1/3$, etc.).
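Here is a minimal Python sketch of that traversal (illustrative, not from the original answer; `Fraction` does the "have we seen this rational" bookkeeping):

```python
import itertools
from fractions import Fraction

def rationals():
    """Walk the diagonals of Z+ x Z+, keeping only rationals not seen before."""
    seen = set()
    s = 2  # numerator + denominator is constant along each diagonal
    while True:
        for num in range(s - 1, 0, -1):  # (s-1)/1, (s-2)/2, ..., 1/(s-1)
            q = Fraction(num, s - num)
            if q not in seen:
                seen.add(q)
                yield q
        s += 1

print(list(itertools.islice(rationals(), 8)))
# 1, 2, 1/2, 3, 1/3, 4, 3/2, 2/3  (2/2 was skipped as a repeat of 1/1)
```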



Another approach would be to use prime factorizations of natural numbers, noting that such factorizations are also possible and unique for rational numbers (given that we're allowed to have negative exponents). That is, for each natural number $n$ we have a unique sequence $n_j$ (of non-negative integers) such that $n = \prod \pi_j^{n_j}$ (where $\pi_j$ is the $j$th prime). And for each rational number $q$ we have a unique sequence $q_j$ of integers such that $q = \prod \pi_j^{q_j}$. Now let



$$q_j(n_j) = \begin{cases} n_j/2 & \mbox{if }n_j\mbox{ is even} \\
-(n_j+1)/2 & \mbox{if }n_j\mbox{ is odd} \end{cases}$$



This would define a bijection between $\mathbb Z^+$ and $\mathbb Q^+$.
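A small sketch of this map in Python (illustrative; `factor` and `to_rational` are my own names, not from the original answer):

```python
from fractions import Fraction

def factor(n):
    """Trial-division prime factorization: returns {prime: exponent}."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def to_rational(n):
    """Send each prime exponent e to e/2 (e even) or -(e+1)/2 (e odd)."""
    q = Fraction(1)
    for p, e in factor(n).items():
        q *= Fraction(p) ** (e // 2 if e % 2 == 0 else -(e + 1) // 2)
    return q

print([to_rational(n) for n in range(1, 11)])
# [1, 1/2, 1/3, 2, 1/5, 1/6, 1/7, 1/4, 3, 1/10]
```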



calculus - What is the limit of the sequence n!/4^n?




I am trying to find the limit of the sequence by using the root test, and I don't understand why the limit is not zero.
(The answer is $\infty$.)


Answer



By the root test:




$$\begin{array}{rcl}
\displaystyle \limsup_{n\to\infty} \sqrt[n]{a_n} &=& \displaystyle \limsup_{n\to\infty} \sqrt[n]{\dfrac{n!}{4^n}} \\
&=& \displaystyle \dfrac14 \limsup_{n\to\infty} \sqrt[n]{n!} \\
&=& \displaystyle \dfrac14 \limsup_{n\to\infty} \sqrt[n]{\exp\left(n \ln n - n\right)} \\
&=& \displaystyle \dfrac14 \limsup_{n\to\infty} \exp\left(\ln n - 1\right) \\
&=& \displaystyle \dfrac1{4e} \limsup_{n\to\infty} n \\
&=& \infty
\end{array}$$




Here the third line uses Stirling's approximation, $n! \sim \exp(n \ln n - n)$ up to factors whose $n$th roots tend to $1$. Hence the sequence diverges to infinity.
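For intuition, a quick numeric sketch (not part of the original answer): the terms shrink while $n < 4$ and then factorial growth takes over.

```python
from math import factorial

for n in (1, 5, 10, 20, 30):
    print(n, factorial(n) / 4**n)
# 1 -> 0.25, 5 -> 0.117..., 10 -> 3.46..., 20 -> roughly 2.2e6, 30 -> roughly 2.3e14
```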


real analysis - Evaluating $\int_0^1 \frac{\log x \log \left(1-x^4 \right)}{1+x^2}dx$



I am trying to prove that



$$\int_0^1 \frac{\log x \log \left(1-x^4 \right)}{1+x^2}dx = \frac{\pi^3}{16}-3G\log 2 \tag{1}$$



where $G$ is Catalan's Constant.




I was able to express it in terms of Euler Sums but it does not seem to be of any use.



$$\int_0^1 \frac{\log x \log \left(1-x^4 \right)}{1+x^2}dx = \frac{1}{16}\sum_{n=1}^\infty \frac{\psi_1 \left(\frac{1}{4}+n \right)-\psi_1 \left(\frac{3}{4}+n \right)}{n} \tag{2}$$



Here $\psi_n(z)$ denotes the polygamma function.



Can you help me solve this problem?


Answer



I tried substitutions and the differentiation-with-respect-to-a-parameter trick like the other posters. Another partial result, or a trail of breadcrumbs to follow, is the following. We try a series expansion,
$$

\frac{\log\left(1-x^4\right)}{1+x^2} = \displaystyle \sum_{k=1}^{\infty} x^{4k}\left(x^{2} -1\right)H_k,
$$
where $H_k$ are the Harmonic numbers. Then
\begin{align}
\int_0^1 \frac{\log x \log \left(1-x^4 \right)}{1+x^2}\ \mathrm{d}x &=\displaystyle \sum_{k=1}^{\infty}\, H_k\int_0^1 x^{4k}\left(x^{2} -1\right)\log x \ \mathrm{d}x \\
&=\displaystyle \sum_{k=1}^{\infty} \, \frac{H_k}{(4k+1)^2}-\displaystyle \sum_{k=1}^{\infty} \, \frac{H_k}{(4k+3)^2}.
\end{align}
These sums look very similar to the ones evaluated in this post, in which they are transformed into alternating sums. Using the same techniques, or perhaps working back from the answers, we can hopefully show that
$$
\displaystyle \sum_{k=1}^{\infty} \, \frac{H_k}{(4k+1)^2} = -G\left(\frac{\pi}{4}+\frac{\log 8}{2} \right) +\frac{7}{4}\zeta(3) +\frac{\pi^3}{32} - \frac{\pi^2}{16}\log 8,

$$
$$
\displaystyle \sum_{k=1}^{\infty} \, \frac{H_k}{(4k+3)^2} = -G\left(\frac{\pi}{4}-\frac{\log 8}{2} \right) +\frac{7}{4}\zeta(3) -\frac{\pi^3}{32} - \frac{\pi^2}{16}\log 8,
$$
Subtracting the second from the first gives us
$$
\frac{\pi^3}{16}-G\log 8.
$$


sequences and series - Is the sum of all natural numbers $-\frac{1}{12}$?




My friend showed me this youtube video in which the speakers present a line of reasoning as to why
$$
\sum_{n=1}^\infty n = -\frac{1}{12}
$$



My reasoning, however, tells me that the previous statement is incorrect:
$$

\sum_{n=1}^\infty n = \lim_{k \to \infty} \sum_{n=1}^k n = \lim_{k \to \infty}\frac{k(k+1)}{2} = \infty
$$



Furthermore, how can it be that the sum of a set of integers is not an integer? Even more, how can the sum of a set of positive numbers be negative? These two ideas lead me to think of inductive proofs as to why the first statement is incorrect.



Which of the two lines of reasoning is correct and why? Are there any proven applications (i.e. non theoretical) which use the first statement?


Answer



It is a matter of definition. We normally say that a series such as $1-1+1-1+\cdots$ does not converge. It has no value in the limit. If you change the definition of convergence by assigning the value $1/2$ to that series, then you can expect to get very odd results. Is it useful to do that? Evidently the answer is yes in some applications.


Why are the elements of a Galois/finite field represented as polynomials?



I'm new to finite fields; I have been watching various lectures and reading about them, but I'm missing a step. I can understand what a group, ring, field and prime field are, no problem.



But when we have a prime extension field, suddenly the elements are no longer numbers, they are polynomials. I'm sure there are some great mathematical tricks which show that we can (or must?) use polynomials to be able to satisfy the rules of a field within a prime extension field, but I haven't been able to find a coherent explanation of this step. People I have asked in person don't seem to know either; it's just assumed that that is the way it is.



So I have two questions:




What is a clear explanation of "why polynomials?".



Has anyone tried using constructs other than polynomials to satisfy the same field rules?



Thanks in advance.


Answer



In any ring, finite or infinite, we have two operations: $+$ and $\cdot$. The idea of a ring extension is this: let $R$ be a ring and $x$ be some element that we want to add to $R$ (maybe $R\subset S$ and $x\in S$, or $x$ is some formal element).





We need $R[x]$ to be closed under addition and multiplication.




This means that any element that can be formed by manipulating elements of the set $R\cup\{x\}$ by $+$ and $\cdot$ must be an element of $R[x]$.




Polynomials are a general way of expressing such manipulations.




An arbitrary polynomial in $x$ is a completely general manipulation of $x$ using only the operations valid in a ring. Moreover, any element of a ring can be expressed as a polynomial in terms of the generators of the ring.




Let's see how this works: start with some element $a\in R$. We can add or multiply by any other element of $R$, but this just gives us some $a'\in R$. Or we can multiply by $x$ any number of times to get $a'x^n$ for some $n$. And given different elements of this form, we can add them together to get a polynomial.



In many rings, because the elements $x$ satisfy non-trivial relations (e.g. in $\mathbb C=\mathbb R[i]$, $i^2+1=0$), there are neater ways to express elements, and we can avoid polynomials, even though they lurk in the background. In finite fields, polynomials happen to be the easiest and most intuitive way to express what's going on.
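To make the polynomial picture concrete, here is a minimal sketch of arithmetic in $GF(4)=GF(2)[x]/(x^2+x+1)$ (the coefficient-pair representation and function names are my own illustrative choices, not from the original answer):

```python
# An element a0 + a1*x of GF(4) is stored as the pair (a0, a1);
# coefficients live in GF(2), and x^2 is reduced via x^2 = x + 1.

def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    # (a0 + a1 x)(b0 + b1 x) = a0b0 + (a0b1 + a1b0) x + a1b1 x^2
    c0 = u[0] * v[0] + u[1] * v[1]                 # x^2 -> x + 1 adds a1b1 to the constant
    c1 = u[0] * v[1] + u[1] * v[0] + u[1] * v[1]   # ... and a1b1 to the x coefficient
    return (c0 % 2, c1 % 2)

x = (0, 1)
print(mul(x, x))               # (1, 1): x^2 = x + 1
print(mul(x, add(x, (1, 0))))  # x * (x + 1) = 1, so x + 1 is x's inverse
```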


real analysis - Lebesgue integral of non-negative measurable sequence of functions (not monotone)



Suppose that $f, (f_n)$ are nonnegative measurable functions, that $f_n \to f$ pointwise, and that $f_n \leq f$ for all n. Prove that:



$\int f = \lim_{n \to \infty} \int f_n$



My attempt




One direction seems fairly obvious.
Since $f_n \leq f$ for all n, then:
$\int f_n \leq \int f$ for all n.



So we should have:
$\lim_{n \to \infty} \int f_n \leq \int f$



In the other direction, use Fatou’s Lemma to see that:



$\int f \leq \liminf_{n \to \infty} \int f_n$




However, it's not actually clear that $\lim_{n \to \infty} \int f_n$ is well-defined, so it doesn't necessarily make sense to get there from the $\liminf$.



As a concept, my idea would then (or maybe instead?) be to create a subsequence from $(f_n)$ that is monotone and then invoke the MCT. But I am not sure how to go about this.


Answer



Fatou's Lemma gives
\begin{align*}
\int f\leq\liminf\int f_{n}.
\end{align*}
From

\begin{align*}
f_{n}\leq f,
\end{align*}
we get
\begin{align*}
\int f_{n}\leq\int f,
\end{align*}
and hence
\begin{align*}
\limsup\int f_{n}\leq\int f.

\end{align*}
We conclude that
\begin{align*}
\int f\leq\liminf\int f_{n}\leq\limsup\int f_{n}\leq\int f,
\end{align*}
so the limit exists and
\begin{align*}
\lim\int f_{n}=\int f.
\end{align*}


calculus - By using the definition of limit only, prove that $\lim_{x\rightarrow 0} \frac1{3x+1} = 1$




By using the definition of a limit only, prove that




$\lim_{x\rightarrow 0} \dfrac{1}{3x+1} = 1$




We need to find $\delta > 0$ such that $$0<\left|x\right|<\delta\quad\implies\quad\left|\dfrac{1}{3x+1}-1\right|<\epsilon.$$
I have simplified $\left|\dfrac{1}{3x+1}-1\right|$ down to $\left|\dfrac{-3x}{3x+1}\right|$



Then, since ${x\rightarrow 0}$, we can assume $-1 < x < 1$. Not sure if the solution is correct.


Answer



If we focus on $-1 < x < 1$, the factor $\frac{1}{3x+1}$ is unbounded near $x=-\frac13$, so this interval is too large to give a bound.


Rather than focusing on $-1 < x < 1$, focus on a smaller interval, for example $|x| < \frac14$. Hence $\delta < \frac14$.



if $-\frac14 < x < \frac14$, $$-\frac34+1 < 3x+1< \frac34+1$$.



$$\frac14 < 3x+1< \frac74$$



$$\left|\frac{1}{3x+1} \right| < 4$$



Hence $$\left|\frac{-3x}{3x+1}\right| < 3|x| \cdot 4 < 12\delta,$$ so it suffices to take $\delta = \min\left(\frac14, \frac{\epsilon}{12}\right)$, which guarantees $12\delta \le \epsilon$.



Saturday 29 June 2013

probability - Expectation value of symmetric random variable



Can someone please help me answer this question?



[The problem statement was given as an image in the original post.]




From the definition of symmetry it follows that $P(X\geq b+x)=P(X\leq b-x)$. What I did:
$$E[X] = \sum_i x_i P(X=x_i) = \sum_i (b+(x_i-b)) P(X=x_i) = \sum_i b\,P(X=x_i) + \sum_i (x_i-b) P(X=x_i) = b\cdot 1 + (a-b)P(X=a) + (b-a)P(X=2b-a) = b,$$ due to the symmetry of the random variable $X$, since it follows that $P(X=2b-a)=P(X=a)$. Is this correct?



Thanks in advance!


Answer



We have $a < b < 2b-a$. Thus $$\mathbb{P}(X=a)=\mathbb{P}(X<b)$$ and
$$\mathbb{P}(X=2b-a)=\mathbb{P}(X>b)$$



By symmetry of $X$ about $b$:




$$\mathbb{P}(X\geq b)=\mathbb{P}(X\leq b)=1-q$$



for some $q\in(0,1)$



Thus $$\begin{align*}\mathbb{P}(X<b)&=\mathbb{P}(X>b)=q\\\mathbb{P}(X=b)&=1-2q\end{align*}$$



It follows that



$$\mathbb{E}[X]=qa+q(2b-a)+(1-2q)b=b.$$


summation - Induction proof concerning a sum of binomial coefficients: $\sum_{j=m}^n\binom{j}{m}=\binom{n+1}{m+1}$

I'm looking for a proof of this identity, but where $j=m$, not $j=0$:



http://www.proofwiki.org/wiki/Sum_of_Binomial_Coefficients_over_Upper_Index



$$\sum_{j=m}^n\binom{j}{m}=\binom{n+1}{m+1}$$
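A quick numeric spot-check of the identity (a sketch, not part of the original question):

```python
from math import comb

# check sum_{j=m}^{n} C(j, m) == C(n+1, m+1) for small m, n
for m in range(0, 6):
    for n in range(m, 12):
        assert sum(comb(j, m) for j in range(m, n + 1)) == comb(n + 1, m + 1)
print("identity holds for all tested m, n")
```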

sequences and series - Converges or diverges $\sum_{n=1}^\infty \frac{\ln n}{\sqrt n}$?



I was trying to find whether the series $\sum_{n=1}^\infty \frac{\ln n}{\sqrt n}$ converges or diverges. First, I tried the ratio test and got the limit $1$. I tried the Limit Comparison Test and only got $0$'s and $\infty$'s. Then I tried using $n\geq \ln n$ for a Direct Comparison Test, but I could not find a result. Can you help me see what I am missing?


Answer



Note that $\sum 1/n^p$ diverges for all $p\leq 1$.



So, for $n \geq 3$ the summand is bounded below by $1/\sqrt{n}$ and is non-negative, so by the comparison test with $\sum 1/\sqrt{n}$ it diverges.


calculus - Prove that f(x) can have any value between A and B



I need to prove the following statement:
Let $f$ be a bounded and continuous function on the interval $(a,b)$. If $\lim_{x\to a+}f(x)=A$ and $\lim_{x\to b-}f(x)=B$ and $A\neq B$, then $f$ attains any value between $A$ and $B$ in the interval $(a,b)$.



What I did:
Let $$g(x) =
\begin{cases}
f(x), & a<x<b \\
A, & x=a \\
B, & x=b
\end{cases}$$
We can see that $g(x)$ is a continuous function on the interval $[a,b]$. Therefore, I can use the Intermediate Value Theorem to show that for any value between $A$ and $B$, there exists $a<c<b$ with $f(c)$ equal to that value.

My Questions:



1) Can I define a function that is divided into 3 cases?



2) I didn't use the fact that $f(x)$ is bounded, and that seems weird. What is wrong with my proof?




3) I am sure there must be a better way to prove it, because the next statement I need to prove is the same but for $A=\infty$ and $B=-\infty$ (so $f(x)$ isn't bounded), and my trick won't work there; that's why I have no idea how to prove it.


Answer



1) Yes, you can, if $A$ and $B$ are finite. You have just extended $f$ to a continuous function on $[a,b]$



2) You have implicitly used it in assuming that $A$ and $B$ are finite. In fact, $|A|, |B|<+\infty$ if and only if $f$ is bounded (why?)



3) If $A=+\infty$, $B = -\infty$, then for any real $\alpha\in \mathbb{R}$ there are $a'>a$ and $b'<b$ such that $f(b') < \alpha < f(a')$. So you apply the intermediate value theorem to $[a',b']$.


Friday 28 June 2013

complex analysis - Is the infinite product of $-1 \times -1 \times -1 \times\dots = -i$?

So I woke up this morning and I was thinking about the infinite product $-1 \times -1 \times -1 \times\dots$, and what it equals. I came to the conclusion that it equals $-i$. Alternatively stated,



$
{\displaystyle \prod_{i}^{\infty} (-1)} = -i
$



Here's how I reached this:



$${\prod_{i}^{\infty} (-1)} = e^{\ln({\displaystyle \prod_{i}^{\infty} (-1)})} = e^{\displaystyle \sum_{i}^{\infty}{\ln(-1)}}= e^{\displaystyle \sum_{i}^{\infty}{i\pi}}=e^{i\pi\displaystyle \sum_{i}^{\infty}{1}}$$




Now, here's where I'm a little hesitant. I want to say that, from $\zeta(0)=-\frac{1}{2}$, we can conclude that



$$e^{i\pi\displaystyle \sum_{i}^{\infty}{1}} = e^{-\frac{1}{2}i\pi} = -i$$.



I have been told before that the sum $\displaystyle \sum_{i}^{\infty}{1}$ is not actually $-\frac{1}{2}$, but I'm not really sure why. It would seem that if this is the case, then my product would in fact not be $-i$. Though, I must say that $-i$ sort of makes sense, because multiplying complex numbers is essentially rotating them, and so rotating by $180$ every time will get you $180+180+180+...$ is the same as $180*(1+1+1+...)$ which is (if my premise is right) $180*(-\frac{1}{2})=-90$. $-90$ degrees on the complex plane turns out to be $-i$.



So my question is, is there a hole in my logic? I know that, not accounting for $\zeta(0)=-\frac{1}{2}$, the sum $1+1+1+...$ is divergent, but taking that into account, can I say with confidence that $-1 \times -1 \times -1 \times\dots = -i$?

radicals - Is $n^{th}$ root of $2$ an irrational number?







Will every $n^{th}$ root of $2$ be an irrational number? If yes, how can I prove that?

Find the limit as n approaches infinity




We have the following function:



$$U_n = \sin \dfrac{1}{3} n \pi$$



What is the limit of this function as n approaches infinity?



I first tried to use my calculator as help; for $n$ I chose some arbitrarily large numbers, such as $100$ and $1000$. Then I just took $n = 10^{50}$ and it gave me an error.



So the correct answer is it doesn't have one, but why? Why does this function have a solution for $n = 10^2, 10^3$ and not for bigger numbers such as $n=10^{50}$?


Answer




As David hinted in his comment, try using the fact that
$$
\sin(x + 2\pi k) = \sin(x)
$$
whenever $k$ is an integer. Or just write out the first few values of $U_n$ (compute these using the unit circle) and you should notice a pattern.
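Following that hint, an illustrative sketch (not part of the original answer): the values repeat with period $6$, so no limit exists.

```python
import math

for n in range(1, 13):
    print(n, round(math.sin(n * math.pi / 3), 6))
# 0.866025, 0.866025, 0.0, -0.866025, -0.866025, -0.0, then the same six again
```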


Thursday 27 June 2013

linear algebra - Are the following two matrices similar?

Suppose that



1) we do not know the theory of lambda-matrix;



2) we cannot see the transition matrix directly (using elementary transformations).



How can we deduce that the following two matrices are similar? (All we know is that they have the same eigenvalues, eigenvectors, rank, and determinant.)




$$ \begin{pmatrix} 1&1&0\\ 0&0&1\\ 0&0&0 \end{pmatrix},\quad \begin{pmatrix} 1&1&1\\ 0&0&1\\ 0&0&0 \end{pmatrix} $$

analysis - Understanding the definition of surjective function




From Wikipedia, I've seen that the definition of surjective function is the following:




If
$$ f \colon X \rightarrow Y$$ then, $f$ is said to be surjective if
$$ \forall y \in Y\, \exists x \in X\, \text{s.t. } f(x) = y \text{.}$$




On the other hand, we can define surjectiveness by saying that $f$ is surjective iff the codomain of $f$ is equal to its image, $f(X)$, i.e. if:

$$ \text{codomain of } f \text{ } = f(X) = \{y \colon y \in Y \wedge (\exists x \in X \colon f(x)= y)\} \text{.} $$



Why are these two definitions the same? And, in general, how can I prove it?


Answer



In general the image of a function $f\colon X \rightarrow Y$ is denoted with $f(X)$, and by definition it is a subset of $Y$, so $f(X) \subseteq Y$. If $f$ is surjective, then $Y \subseteq f(X)$, because for every $y \in Y$ you can find $x \in X$ such that $y = f(x)$, so $y \in f(X)$. To sum up, $f(X) \subseteq Y$ and $Y \subseteq f(X)$, which by double inclusion means $f(X) = Y$.


algebra precalculus - Why does $1+2+3+\cdots+p = (1/2)\cdot p\cdot(p+1)$

I saw this from Project Euler, problem #1:



If we now also note that $ 1+2+3+\cdots+p = {(1/2)} \cdot p\cdot(p+1) $



What is the intuitive explanation for this? How would I go about deriving the latter from the former? It has the summation expression which I am not sure how to deal with, so unfortunately I am not even sure how I would begin to show these are equivalent.
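One standard way to derive it (a worked sketch, not from the original post) is to write the sum twice, once reversed, and add term by term:

$$\begin{align*}
S &= 1 + 2 + \cdots + (p-1) + p\\
S &= p + (p-1) + \cdots + 2 + 1\\
2S &= \underbrace{(p+1) + (p+1) + \cdots + (p+1)}_{p\ \text{terms}} = p(p+1),
\end{align*}$$

so $S = \frac{1}{2}\cdot p\cdot(p+1)$.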

elementary number theory - How to use the Chinese Remainder theorem to solve a system of congruences

In my class, the exact statement of the Chinese Remainder Theorem we learned says that the system
\begin{align*}
x &\equiv a_1 \pmod{n_1}\\
x &\equiv a_2 \pmod{n_2}\\
...\\
x &\equiv a_L \pmod{n_L}.
\end{align*}
has a unique solution modulo the product $n_1n_2...n_L$ if all the $n$'s are pairwise relatively prime.




How does this help us solve the following question? Other sources say to use a product $M$ along with $M_1,M_2,\ldots$, all of which are absent in the statement of my CRT.



Find the smallest positive integer $x$ so that
\begin{align*}
x &\equiv 3 \pmod{7}\\
x &\equiv 12 \pmod{11}\\
x &\equiv 5 \pmod{13}.
\end{align*}
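For what it's worth, here is a minimal Python sketch of the constructive recipe the other sources describe, with $M = n_1 n_2 \cdots n_L$ and $M_i = M/n_i$ (the function name `crt` is my own):

```python
def crt(residues, moduli):
    """Solve x = a_i (mod n_i) for pairwise coprime moduli via
    x = sum of a_i * M_i * (M_i^{-1} mod n_i), reduced mod M."""
    M = 1
    for n in moduli:
        M *= n
    x = 0
    for a, n in zip(residues, moduli):
        Mi = M // n
        x += a * Mi * pow(Mi, -1, n)  # modular inverse (Python 3.8+)
    return x % M

print(crt([3, 12, 5], [7, 11, 13]))  # 122, the smallest positive solution
```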

algebra precalculus - What is the term for a factorial type operation, but with summation instead of products?

(Pardon if this seems a bit beginner, this is my first post in math - trying to improve my knowledge while tackling Project Euler problems)



I'm aware of Sigma notation, but is there a function/name for e.g.




$$ 4 + 3 + 2 + 1 \longrightarrow 10 ,$$



similar to $$4! = 4 \cdot 3 \cdot 2 \cdot 1 ,$$ which uses multiplication?



Edit: I found what I was looking for, but is there a name for this type of summation?

elementary set theory - Question about cardinal arithmetic

Let $|I|\leq \aleph _\alpha$, and let $ \left\{ \beta _i \right\} _{i\in I}$ be cardinals with $\beta _i \leq \aleph _{\alpha}$.



Is it true that $\aleph_{\alpha +1} > \sum _{i\in I}\beta _i $?



Using choice, I think it's possible to write $\aleph _{\alpha+1}=\sum _{i\in I}\aleph_{\alpha +1}$, and I think
$$\sum _{i\in I}\aleph_{\alpha +1} > \sum _{i\in I}\beta _i $$ holds because $|I|\leq \aleph _\alpha$, but I'm not sure whether this is true nor how to prove it.

real analysis - A proof about uniformly continuous function on an open interval and its extension



I try to prove:




If $g$ is a continuous function on $(a,b)$ then $g$ is uniformly continuous on $(a,b)$ if and only if it is possible to define values $g(a)$ and $g(b)$ at the endpoints so that the extended function $g$ is continuous on $[a,b]$.




Please can you check my proof?




If there are $g(a)$ and $g(b)$ such that $g:[a,b]\to \mathbb R$ is continuous then because $g:[a,b]\to \mathbb R$ is uniformly continuous, so is $g:(a,b)\to \mathbb R$.



Let $g$ be uniformly continuous on $(a,b)$ and let $x_n \to a$ and $y_n \to b$. Because $x_n$ and $y_n$ are Cauchy and $g$ is uniformly continuous, $g(x_n)$ and $g(y_n)$ are Cauchy. Define $g(a) = \lim g(x_n)$ and $g(b) = \lim g(y_n)$. Now this extended function $g$ is continuous at $a$ and $b$. It is enough to show it for one of the two points because the proof is similar. Let $\varepsilon >0$. The goal is to find $\delta$ such that $|x-a|<\delta$ implies that $|g(x) -g(a)|<\varepsilon$. By definition, $g(x_n) \to g(a)$ hence there is $N$ such that for $n>N$ it holds that $|g(x_n) - g(a)| < \varepsilon$. Now let $\delta = |a-x_{N+1}|$.






I now know why my proof is false, thanks to the answer, but I can't understand one part of it. The bounty is for an extension of that answer, not for more answers.


Answer



To show the extended function $g$ is continuous at $a$ let $\varepsilon > 0$.




Note that $|x-x_n|\le |x-a| + |a-x_n|$. Let $\delta$ be such that $|x-y|<\delta \implies |g(x)-g(y)|<{\varepsilon \over 2}$.



Let $N$ be such that $n>N$ implies that $|a-x_n|<{\delta \over 2}$ and let $N'$ be such that $n>N'$ implies $|g(x_n)-g(a)|<{\varepsilon \over 2}$.



Now if $|x-a|<{\delta \over 2}$ and $n>\max(N,N')$ then $|x-x_n|\le |x-a| + |a-x_n|<\delta$ and hence



$$ |g(x)-g(a)| \le |g(x)-g(x_n)| + |g(x_n) - g(a)| < \varepsilon$$


Wednesday 26 June 2013

number theory - Positive integer solutions of $a^3 + b^3 = c$




Is there any fast way to solve for positive integer solutions of $$a^3 + b^3 = c$$ knowing $c$?
My current method is checking if $c - a^3$ is a perfect cube for a range of numbers for $a$, but this takes a lot of time for larger numbers. I know $c = (a+b)(a^2-ab+b^2)$, but I can't think of a way this would speed up the process. Factoring numbers can be done fairly quickly but then the values of $a$ and $b$ have to be chosen. This is for a previous question I had about Fibonacci numbers. Any relevant literature is appreciated.


Answer



A very fast way to see that a positive integer $c$ is not the sum of two cubes are modular constraints. The equation $a^3+b^3=c$ has no integer solutions if $c$ satisfies one of the following congruences:



$$
(1) \quad c\equiv 3,4 \mod 7
$$




$$
(2) \quad c\equiv 3,4,5,6 \mod 9
$$



$$
(3) \quad c\equiv 3,4,5,6,10,11,12,13,14,15,17,18,21, 22, 23, 24, 25, 30, 31, 32, 33, 38, 39, 40,
41, 42, 45, 46, 48, 49, 50, 51, 52, 53, 57,
58, 59, 60 \mod 63
$$




On the other hand, there are intrinsic properties of $c$ known that characterize when $c$ is the sum of two cubes:



Theorem (Broughan 2003): Let $c$ be a positive integer. Then the equation $c=a^3+b^3$ has solutions in positive integers if and only if the following conditions are satisfied:



1.) There exists a divisor $d\mid c$ with $c^{1/3}\le d\le 2^{2/3}c^{1/3}$ such that



2.) for some positive integer $l$ we have $d^2-c/d=3l$ such that



3.) the integer $d^2-4l$ is a perfect square.


functions - Proof verification: system of functional equation

A problem in Putnam Competition 1992(?). The question asked:




Prove that the only solution of the system of functional equations with respect to $f:\mathbb Z\to\mathbb Z$:$$
\begin{cases}
f(f(n))=n\\
f(f(n+2)+2)=n\\
f(0)=1
\end{cases}

$$
is $f(n)=1-n$.




I know that the usual way is to construct another solution, and show that the difference between the two solutions is zero. But I want to use equivalent conditions this time (i.e. prove that the system is equivalent to saying that $f(n)=1-n$).



Now, we have
$$f(f(n))=f(f(n+2)+2)$$
Take $f$ on both sides.
$$f(f(f(n)))=f(f(f(n+2)+2))$$

Again by the first equation,
$$f(n)=f(n+2)+2$$
And another useful thing is that,
$$f(1)=f(f(0))=0$$



So, the original system is equivalent to say the recurrent relation
$$
\begin{cases}
f(n)=f(n+2)+2\\
f(0)=1\\

f(1)=0
\end{cases}
$$
The sequence-like function satisfying the above is trivially unique (for every integer we can find a unique image). So our proof is actually done already, as $f(n)=1-n$ is simply a solution. Q.E.D.



In my proof, I used a lot of equivalent conditions. I know that my proof is valid if the conditions are actually equivalent. I think they are myself, but I want peer review also. Thanks in advance.

linear algebra - Let $A$ be a real $n\times n$ matrix such that the diagonal entries are positive, the off-diagonal entries are negative, and the row sums are positive.




Let $A$ be a $n\times n$ matrix over the reals such that the diagonal entries are all positive, the off-diagonal entries are all negative, and the row sums are all positive. Show that $\det A \neq 0$.




To show that $\det A\neq 0$, it would be sufficient to show that the system $AX = 0$ does not have a nontrivial solution. Then I suppose that to show that, one could assume that it has a nontrivial solution and then obtain a contradiction. But I'm stuck on actually doing that part.



Any suggestions? I'm also interested in where I can find questions similar to this one to practice.


Answer




Suppose there exists a vector $x \in \mathbb{R}^n \setminus \{\vec{0}\}$ such that $Ax = \vec{0}$.



Pick an index $k$ such that $|x_k| = \displaystyle\max_{j = 1,\ldots,n}|x_j|$. So we have $|x_j| \le |x_k|$ for all $j = 1,\ldots,n$.



If $x_k = 0$, then we would have $x_j = 0$ for all $j = 1,\ldots,n$, i.e. $x = \vec{0}$, which isn't possible.



If $x_k > 0$, then $x_j \le |x_j| \le |x_k| = x_k$ for all $j$. Then, since $A_{k,k} > 0$, $A_{k,j} < 0$ for all $j \neq k$, and $\displaystyle\sum_{j = 1}^{n}A_{k,j} > 0$, we have $$0 = (Ax)_k = \sum_{j = 1}^{n}A_{k,j}x_j = A_{k,k}x_k + \sum_{j \neq k}A_{k,j}x_j \ge A_{k,k}x_k + \sum_{j \neq k}A_{k,j}x_k = x_k\sum_{j = 1}^{n}A_{k,j} > 0,$$ a contradiction.



Similarly, if $x_k < 0$, then $x_j \ge -|x_j| \ge -|x_k| = x_k$ for all $j$. Then, since $A_{k,k} > 0$, $A_{k,j} < 0$ for all $j \neq k$, and $\displaystyle\sum_{j = 1}^{n}A_{k,j} > 0$, we have $$0 = (Ax)_k = \sum_{j = 1}^{n}A_{k,j}x_j = A_{k,k}x_k + \sum_{j \neq k}A_{k,j}x_j \le A_{k,k}x_k + \sum_{j \neq k}A_{k,j}x_k = x_k\sum_{j = 1}^{n}A_{k,j} < 0,$$ a contradiction.




Therefore, there is no nonzero vector $x$ such that $Ax = \vec{0}$. Hence, $A$ is invertible, and thus, $\det A \neq 0$.


Tuesday 25 June 2013

calculus - $dy\over dx$ is one thing, but why in integration can we treat it as 2 different terms?

When I am learning differentiation, my lecturer tells us that the derivative $dy\over dx$ is one thing; it is not the ratio between $dy$ and $dx$. However, when I learn
about integration, sometimes we need to do substitution, like integrating $\int_{0}^{1}2x\,dx$: when substituting $y=2x$, we can substitute $dy=2\,dx$. But why in this case can it be treated as 2 different terms instead of 1 term?

algebra precalculus - Sum of roots rational but product irrational



Suppose that $x_1,x_2,x_3,x_4$ are the real roots of a polynomial with integer coefficients of degree $4$, and $x_1+x_2$ is rational while $x_1x_2$ is irrational. Is it necessary that $x_1+x_2=x_3+x_4$?



For example, the polynomial $x^4-8x^3+18x^2-8x-7$ has roots $$x_1=1-\sqrt{2},x_2=3+\sqrt{2},x_3=1+\sqrt{2},x_4=3-\sqrt{2}.$$
It holds that $x_1+x_2$ is rational while $x_1x_2$ is irrational, and we have $x_1+x_2=x_3+x_4$.


Answer



Suppose that all the roots are nonzero. Put $x_1+x_2=u$, $a=x_1x_2$, $x_3+x_4=v$, $b=x_3x_4$. We suppose that $u\in\mathbb{Q}$; then $x_3+x_4$ is rational as well, as the sum of all the roots is rational, and $a,b$ are irrational (as the product $ab$ is rational, if $a$ is irrational then so is $b$). The polynomial with roots $x_1,x_2,x_3,x_4$ is
$$P(x)=(x^2-ux+a)(x^2-vx+b)=x^4-(u+v)x^3+(a+uv+b)x^2-(av+bu)x+ab$$




By hypothesis, we get that $a+b+uv$, $av+bu$, and $ab$ are in $\mathbb{Q}$. Writing



$av+bu=(a+b)v-bv+bu$, we see that $b(u-v)$ is rational. As $b$ is not, this implies $u=v$.



If now $x_3x_4=0$, then it is not possible that $x_3=x_4=0$ (in this case $P(x)=x^4-ux^3+ax^2$, impossible as $a\not \in \mathbb{Q}$), and if $x_4=0$ and $x_3\not = 0$, we get that $x_3\in \mathbb{Q}$, which is again not possible, as $P(x)=x(x-x_3)(x^2-ux+a)=x^4+\cdots-ax_3x$ would force $a\in\mathbb{Q}$.


Are there more quadratics with real roots or more with complex roots? Or the same?

Consider all quadratic equations with real coefficients:




$$y=ax^2+bx+c \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,, a,b,c \in \mathbb{R}, \, a≠0 $$



I was wondering whether more of them have real roots, more have complex roots, or there is the same number of each?



I thought about this graphically, and wlog considered only the ones with positive $a$. By categorising the quadratics by their minimum point, there are an equal number of possible minima below the $x$-axis as above, so it seems that there are an equal number of quadratics with real roots as complex ones.



However, I then considered this problem algebraically by using the discriminant:



$$b^2-4ac$$




If $a$ and $c$ have opposite signs, then the discriminant will be positive, and so the quadratic will have real roots. This represents $50\%$ of the quadratic equations possible.



However, if $a$ and $c$ have the same sign, then some of these quadratics have real roots, whilst others have complex roots depending on whether $4ac$ is bigger than $b^2$ or not.



This suggests that there are more quadratics with real roots which contradicts the above answer.



Is the reason I've reached contradictory answers something to do with infinities, and that I can't really compare them in the way I've done above?

Monday 24 June 2013

Complex analysis on real integral



Use complex analysis to compute the real integral



$$\int_{-\infty}^\infty \frac{dx}{(1+x^2)^3}$$





I think I want to consider this as the real part of




$$\int_{-\infty}^\infty \frac{dz}{(1+z^2)^3}$$



and then apply the residue theorem. However, I am not sure in what sense this is the complex form, why the original integral would be its real part, or how to apply the theorem.


Answer



Define a contour that is a semicircle in the upper half plane of radius $R$, plus the segment of the real line from $-R$ to $R$.



Then let R get to be arbitrarily large.



There is one pole at $z = i$ inside the contour




Cauchy integral formula says:



$$f^{(n)}(a) = \frac {n!}{2\pi i}\oint \frac {f(z)}{(z-a)^{n+1}}\ dz$$



$$\oint \frac {\frac {1}{(z+i)^3}}{(z-i)^3}\ dz = \pi i \frac{d^2}{dz^2} \frac {1}{(z+i)^3} \text{ evaluated at } z=i.$$



Next you will need to show that the integral along contour of the semi-cricle goes to $0$ as $R$ gets to be large.



$$z = R e^{it}, \quad dz = iR e^{it}\,dt\\
\displaystyle\int_0^{\pi} \frac {iRe^{it}}{(R^2 e^{2it} + 1)^3} \ dt\\
\left|\frac {iRe^{it}}{(R^2 e^{2it} + 1)^3}\right| \le \frac{R}{(R^2-1)^3} \le 2R^{-5} \text{ for } R \text{ large enough}\\
\left|\int_0^{\pi} \frac {iRe^{it}}{(R^2 e^{2it} + 1)^3} \ dt\right| \le \int_0^{\pi} 2R^{-5}\ dt\\
\lim_{R\to \infty}\int_0^{\pi} 2R^{-5}\ dt = 0$$
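As a cross-check (a sketch assuming sympy is available, not part of the original answer), carrying out the residue computation gives $\frac{3\pi}{8}$, and symbolic integration agrees:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(1 / (1 + x**2)**3, (x, -sp.oo, sp.oo)))  # 3*pi/8
```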


combinatorics - Proof for a certain binomial identity



Let $1\leq m\leq n$ be positive integers. I will appreciate any help proving the following identity
$$
\sum_{k=n}^{n+m}(-1)^k\binom{k-1}{n-1}\binom{n}{k-m}=0
$$

Thanks!


Answer




Here we have Chu-Vandermonde's Identity in disguise.




We obtain for $1\leq m\leq n$
\begin{align*}
\color{blue}{\sum_{k=n}^{n+m}}&\color{blue}{(-1)^k\binom{k-1}{n-1}\binom{n}{k-m}}\\
&=\sum_{k=0}^m(-1)^{k+n}\binom{n+k-1}{n-1}\binom{n}{k+n-m}\tag{1}\\
&=\sum_{k=0}^m(-1)^{k+n}\binom{n+k-1}{k}\binom{n}{m-k}\tag{2}\\
&=(-1)^n\sum_{k=0}^m\binom{-n}{k}\binom{n}{m-k}\tag{3}\\
&=(-1)^n\binom{0}{m}\tag{4}\\

&\,\,\color{blue}{=0}
\end{align*}




Comment:




  • In (1) we shift the index to start with $k=0$.


  • In (2) we apply the binomial identity $\binom{p}{q}=\binom{p}{p-q}$ twice.


  • In (3) we apply the binomial identity $\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$.



  • In (4) we finally apply the Chu-Vandermonde identity.
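A quick numeric confirmation of the identity (a sketch, not part of the original answer):

```python
from math import comb

# check sum_{k=n}^{n+m} (-1)^k C(k-1, n-1) C(n, k-m) == 0 for 1 <= m <= n
for n in range(1, 10):
    for m in range(1, n + 1):
        s = sum((-1)**k * comb(k - 1, n - 1) * comb(n, k - m)
                for k in range(n, n + m + 1))
        assert s == 0
print("identity verified for 1 <= m <= n <= 9")
```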



elementary number theory - Find all positive integers $n$ such that $n+2008$ divides $n^2 + 2008$ and $n+2009$ divides $n^2 + 2009$



I wrote
$$
\begin{align}
n^2 + 2008
&= (n+2008)^2 - 2 \cdot 2008n - 2008^2 + 2008 \\
&= (n+2008)^2 - 2 \cdot 2008(n+2008) + 2008^2 + 2008 \\
&= (n+2008)^2 - 2 \cdot 2008(n+2008) + 2008 \cdot 2009

\end{align}
$$
to deduce that $n+2008$ divides $n^2 + 2008$ if and only if $n+2008$ divides $2008 \cdot 2009$.



Similarly, I found that $n+2009$ divides $n^2 + 2009$ if and only if $n+2009$ divides $2009 \cdot 2010$.



I can't seem to get anywhere from here. I know that $n=1$ is an obvious solution, and I think it's the only one, although I can't prove/disprove this. I know that any two consecutive integers are coprime but I haven't been able to use this fact towards the solution.



Could anyone please give me any hints?




For the record, this is a question from Round 1 of the British Mathematical Olympiad.


Answer



Here is another approach.



Since $(n+2008)|( n^{2} + 2008)$ and $(n+2009) |( n^{2} + 2009)$ then



$(n+2008)k = n^2 + 2008$ and $(n+2009)m = n^2 + 2009$ for some $k,m \in \mathbb{Z}$



$\textbf{Case 1:}$ $k=1$ or $m=1$




$(n+2008) = n^2 + 2008$ or $(n+2009) = n^2 + 2009$



either case leads to $n^2-n=0$, that is, $n=1$ or $n=0$



$\textbf{Case 2:}$ $m>k>1$



$$n^2+2009=(n+2009)m = (n+2008)m+m > (n+2008)k + m > n^2+2009$$ which is impossible



$\textbf{Case 3:}$ $k>m>1$




$$n^2+2008=(n+2009-1)k = (n+2009)k - k \ge (n+2009)(m+1) - k = (n+2009)m +(n+2009-k) \ge n^2+2009 $$ which is also impossible; the last inequality holds because $k< n+2009$. To see why, suppose not:



$k \ge n+2009 >n+2008 \Rightarrow n^2 + 2008 =(n+2008)k > (n+2008)^2 > n^2 + 2008$ a contradiction



$\textbf{Case 4:}$ $k=m>1$



$$(n+2008)k+1=(n+2009)k \Longrightarrow k =1 \Longrightarrow n=1$$



So the solutions are $\boxed{n=1}$ and $\boxed{n=0}$
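A brute-force cross-check of that conclusion (a sketch, not part of the original answer; since the problem asks for positive integers, the intended answer is $n=1$):

```python
sols = [n for n in range(10**6)
        if (n * n + 2008) % (n + 2008) == 0
        and (n * n + 2009) % (n + 2009) == 0]
print(sols)  # [0, 1]
```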


integration - Integral $\small \int_0^\infty \frac{(\pi x - 2\log{x})^3}{\left(\log^2{x} + \frac{\pi^2}{4}\right)(1+x^2)^2} \text{d}x$




Prove $$\int_0^\infty \frac{(\pi x - 2\log{x})^3}{\left(\log^2{x} + \frac{\pi^2}{4}\right)(1+x^2)^2} \text{d}x = 8\pi$$





I tried using a modified version of the integrand and an origin-indented semicircular contour on the positive real half of the complex plane.



Despite my best efforts, I am unable to retrieve the desired integral, as the negative factor from the negative imaginary axis is preventing me from being able to simplify the integrals.



On top of that, the residue doesn't come anywhere close to the exact result.



Are there better methods that I am missing?


Answer



Before we start I should mention that we will use some instances of Schroder's Integral which evaluates Gregory coefficients, namely:

$$\sf (-1)^{n-1}G_n=\int_0^\infty\frac{1}{(\pi^2+\ln^2 t)(1+t)^n}dt$$






But let's adjust things first.
$$\sf \color{orange}{I=\int_0^\infty \frac{(\pi x - 2\log{x})^3}{\left(\log^2{x} + \frac{\pi^2}{4}\right)(1+x^2)^2} dx} \overset{x^2=t}=2\int_0^\infty \frac{(\pi\sqrt t-\ln t)^3}{(\pi^2+\ln^2 t)(1+t)^2}\frac{dt}{\sqrt t}$$
$$\sf =2\pi^3\int_0^\infty \frac{t}{(\pi^2+\ln^2 t)(1+t)^2}dt-6\pi^2 \int_0^\infty \frac{\sqrt t\ln t}{(\pi^2+\ln^2 t)(1+t)^2}dt$$
$$\sf +6\pi \int_0^\infty \frac{\ln^2 t}{(\pi^2+\ln^2 t)(1+t)^2}dt-2\int_0^\infty \frac{\ln^3 t}{(\pi^2+\ln^2 t)(1+t)^2}\frac{dt}{\sqrt t}$$
$$\sf =2\pi^3 I_1-6\pi^2I_2+6\pi I_3-2 I_4$$







Evaluation of $I_1$.



$$\tt \color{blue}{I_1=\int_0^\infty \frac{(1+t)-1}{(\pi^2+\ln^2 t)(1+t)^2}dt=\int_0^\infty \frac{1}{(\pi^2+\ln^2 t)(1+t)}dt-\int_0^\infty \frac{1}{(\pi^2+\ln^2 t)(1+t)^2}dt}$$



$$\tt \color{blue}{=G_1+G_2=\frac12-\frac{1}{12}=\frac{5}{12}}$$







Evaluation of $I_2$.



I have solved this integral here.
$$\tt \color{red}{I_2=\int_0^\infty \frac{\sqrt t\ln t}{(\pi^2+\ln^2 t)(1+t)^2}dt\overset{t=\frac{1}{x}}=-\int_0^\infty \frac{\ln x}{(\pi^2+\ln^2 x)(1+x)^2}\frac{dx}{\sqrt x}=\frac{\pi}{24}}$$






Evaluation of $I_3$.



$$\tt \color{purple}{I_3= \int_0^\infty \frac{(\pi^2 +\ln^2 t)-\pi^2}{(\pi^2+\ln^2 t)(1+t)^2}dt=\int_0^\infty \frac{1}{(1+t)^2}dt-\pi^2\int_0^\infty \frac{1}{(\pi^2+\ln^2 t)(1+t)^2}dt}$$

$$\tt \color{purple}{=1+\pi^2 G_2=1-\frac{\pi^2}{12}}$$






Evaluation of $I_4$.



Write $\ln^3 t= \ln t((\pi^2+\ln^2 t ) -\pi^2 )$ to get:



$$\tt \color{green}{I_4=\int_0^\infty \frac{\ln^3 t}{(\pi^2+\ln^2 t)(1+t)^2}\frac{dt}{\sqrt t}=\int_0^\infty \frac{\ln t}{(1+t)^2}\frac{dt}{\sqrt t}-\pi^2 \int_0^\infty \frac{\ln t}{(\pi^2+\ln^2 t)(1+t)^2}\frac{dt}{\sqrt t}}$$
$$\tt \color{green}{=-\pi+\pi^2 I_2=-\pi +\frac{\pi^3}{24}}$$







$$\require{cancel} \sf \Rightarrow \color{orange}I=\cancel{2\pi^3\color{blue}{\frac{5}{12}}} -\cancel{6\pi^2\color{red}{\frac{\pi}{24}}}+6\pi \left(\color{purple}{1-\cancel{\frac{\pi^2}{12}}}\right)-2\left(\color{green}{-\pi +\cancel{\frac{\pi^3}{24}}}\right)=\color{orange}{8\pi}$$



We actually only used $G_1$ and $G_2$, and I'm sure that we can evaluate them using elementary methods; at least the first one is pretty easy, but for the second one we might have to strive a little.


Proof by induction, induction step

I am trying to prove



$$
\sum_{k=1}^n k2^{k-1} = 1+(n-1)2^n
$$



I proved the base case with $n = 1$. I am having trouble proving the induction step.




I know I need to prove it for $n+1$, so I got



$$
\sum_{k=1}^{n} k2^{k-1}+(n+1)2^{(n+1)-1} = 1+[(n+1)-1]2^{n+1}
$$



suppose $n = i$



$$
1 + [(i+1)-1] 2^{i+1}

$$



I am not sure if I am on the right path and how to simplify from here onwards? Could someone point me in the right direction?

complex analysis - $\int_{0}^{\infty}\frac{dx}{1+x^n}$



My goal is to evaluate $$\int_{0}^{\infty}\frac{dx}{1+x^n}\;\;(n\in\mathbb{N},n\geq2).$$



Here is my approach:



Clearly, the integral converges.




Denote the value of the integral by $I_n$.



Now let $\gamma_R$ describe the section of a circle which goes from the origin to $R$ to $Re^{2\pi i/n}$ and back to the origin.



If we let $C_R$ denote the relevant circular arc, then
$$\left|\int_{C_R}\frac{dz}{1+z^n}\right|\leq \left(\frac{2\pi R}{n}\right)\left(\frac{1}{R^{n}-1}\right)\rightarrow0\;\;\;as\;\;R\rightarrow\infty.$$



Furthermore, $$\int_{[R,Re^{2\pi i/n}]}\frac{dz}{1+z^n}=\int_{R}^{0}\frac{e^{2\pi i/n}dr}{1+r^n}.$$



Hence $$\lim_{R\rightarrow\infty}\int_{\gamma_R}\frac{dz}{1+z^n}=\lim_{R\rightarrow\infty}\int_{[0,R]}\frac{dx}{1+x^n}+\int_{[R,Re^{2\pi i/n}]}\frac{dx}{1+x^n}+\int_{C_R}\frac{dx}{1+x^n}=(1-e^{2\pi i/n})I_n\;\;\;(1).$$




Thus if we can obtain the value of $\int_{\gamma_R}\frac{dz}{1+z^n}$ we can evaluate $I_n$.



Now the zeroes of $1+z^n$ are of the form $z=e^{i\pi/n+2\pi i m/n}\;\;(m\in\mathbb{N})$ from which it is clear that the only zero which lies within the contour occurs at $z=e^{i\pi/n}$ with multiplicity 1.
So all that remains to be done is to evaluate the residue of $\frac{1}{1+z^n}$ at $z=e^{i\pi/n}$.



However, if $z=e^{i\pi/n}u$ and $u\neq1$, we have
$$\frac{z^n+1}{z-e^{i\pi/n}}=\frac{1-u^n}{-e^{i\pi/n}(1-u)}
=-e^{-i\pi/n}\sum_{m=0}^{n-1}u^m\;\;\;(2).$$




In particular, (2) implies $$Res_{z=e^{i\pi/n}}\frac{1}{1+z^n}=-\frac{e^{i\pi/n}}{n}\;\;\;(3).$$



Finally, (1) and (3) imply
$$I_n=\frac{2\pi i (Res_{z=e^{i\pi/n}}\frac{1}{1+z^n})}{1-e^{2\pi i/n}}=\frac{-2\pi ie^{i\pi/n}}{n(1-e^{2\pi i/n})}=\frac{\pi/n}{\sin(\pi/n)}.$$



I have three questions:



One, is my method correct?



Two, is there a simpler/different method to evaluate the integral?




Three, is there an easier way to evaluate the residue of $\frac{1}{1+z^n}$ at $z=e^{i\pi/n}$?


Answer



Here is a different way. Let's more generally find the Mellin transform.



Consider $$I(\alpha,\beta)=\int_{0}^{\infty}\frac{u^{\alpha-1}}{1+u^{\beta}}du=\mathcal{M}\left(\frac{1}{1+u^{\beta}}\right)(\alpha)$$ Let $x=1+u^{\beta}$ so that $u=(x-1)^{\frac{1}{\beta}}$. Then we have $$I(\alpha,\beta)=\frac{1}{\beta}\int_{1}^{\infty}\frac{(x-1)^{\frac{\alpha-1}{\beta}}}{x}(x-1)^{\frac{1}{\beta}-1}dx.$$ Setting $x=\frac{1}{v}$ we obtain $$I(\alpha,\beta)=\frac{1}{\beta}\int_{0}^{1}v^{-\frac{\alpha}{\beta}}(1-v)^{\frac{\alpha}{\beta}-1}dv=\frac{1}{\beta}\text{B}\left(-\frac{\alpha}{\beta}+1,\ \frac{\alpha}{\beta}\right).$$



Using the properties of the Beta and Gamma functions, this equals $$\frac{1}{\beta}\frac{\Gamma\left(1-\frac{\alpha}{\beta}\right)\Gamma\left(\frac{\alpha}{\beta}\right)}{\Gamma(1)}=\frac{\pi}{\beta\sin\left(\frac{\pi\alpha}{\beta}\right)}.$$



Your question is the case where $\alpha =1$.
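A numeric sanity check of the closed form (a sketch assuming mpmath is available, not part of the original answer):

```python
from mpmath import mp, quad, pi, sin

mp.dps = 20
for n in range(2, 7):
    numeric = quad(lambda x, n=n: 1 / (1 + x**n), [0, mp.inf])
    print(n, numeric, (pi / n) / sin(pi / n))  # the two columns agree
```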




Also see Chandru's answer on a different thread. It is another nice solution, along the lines of what you did above. (See this previous question, where both solutions can be found)



Hope that helps,


abstract algebra - Are algebraic numbers analogous to group elements with finite order?




Would you say that the "elements with finite order" in group theory are analogous to "algebraic numbers" in field theory?



I thought this is the case since requiring an algebraic number $\alpha$ to be the root of a polynomial (i.e. requiring a finite combination of terms in $\alpha$ using $+$ and $\times$ to give the identity zero) is like the two-operation equivalent of an element $g$ in group theory having finite order (where there is only one operation $\times$ and we require a term in $g$ which is required to give the identity 1).



However I don't think the analogy is quite complete because in the case of the polynomial we are allowed to also multiply powers of $\alpha$ by other elements of the field to achieve the identity.


Answer



There is a general model-theoretic notion of algebraic elements, see here.



If $L/K$ is a field extension, then $a \in L$ is algebraic over $K$ in the usual sense iff it is algebraic in the sense of model theory (applied to the structure $(L,+,*,0,1)$ and the subset of $K$).




If $G$ is a group, then $a \in G$ is algebraic over $\emptyset$ in the sense of model theory iff there is some $n \in \mathbb{Z}$ with $g^n=1$ and $\{h \in G : h^n=1\}$ is finite. Thus, if $G$ is finite, then $n=0$ works and every element is algebraic. This concept is more interesting when $G$ is infinite. Then every algebraic element is torsion. The converse probably does not hold.


linear algebra - Why does the inverse of the Hilbert matrix have integer entries?




Let $A$ be the $n\times n$ matrix given by



$$A_{ij}=\frac{1}{i + j - 1}$$



Show that $A$ is invertible and that the inverse has integer entries.





I was able to show that $A$ is invertible. How do I show that $A^{-1}$ has integer entries?



This matrix is called the Hilbert matrix. The problem appears as exercise 12 in section 1.6 of Hoffman and Kunze's Linear Algebra (2nd edition).


Answer



Be wise, generalize (c)



I think the nicest way to answer this question is the direct computation of the inverse - however, for a more general matrix including the Hilbert matrix as a special case. The corresponding formulas have very transparent structure and nontrivial further generalizations.







The matrix $A$ is a particular case of the so-called Cauchy matrix with elements
$$A_{ij}=\frac{1}{x_i-y_j},\qquad i,j=1,\ldots, N.$$
Namely, in the Hilbert case we can take
$$x_i=i-\frac{1}{2},\qquad y_i=-i+\frac12.$$
The determinant of $A$ is given in the general case by
$$\mathrm{det}\,A=\frac{\prod_{1\leq i<j\leq N}(x_j-x_i)(y_i-y_j)}{\prod_{1\leq i,j\leq N}(x_i-y_j)}.\tag{1}$$ Up to an easily computable constant prefactor, the structure of (1) follows from the observation that $\mathrm{det}\,A$ vanishes whenever there is a pair of coinciding $x$'s or $y$'s. (In the latter case $A$ contains a pair of coinciding rows/columns.) For our $x$'s and $y$'s the determinant is clearly non-zero, hence $A$ is invertible.




One can also easily find the inverse $A^{-1}$, since the matrix obtained from a Cauchy matrix by deleting one row and one column is also of Cauchy type, with one $x$ and one $y$ less. Taking the ratio of the corresponding two determinants and using (1), most of the factors cancel out and one obtains
\begin{align}
A_{mn}^{-1}=\frac{1}{y_m-x_n}\frac{\prod_{1\leq i\leq N}(x_n-y_i)\cdot\prod_{1\leq i\leq N}(y_m-x_i)}{\prod_{i\neq n}(x_n-x_i)\cdot\prod_{i\neq m}(y_m-y_i)}.\tag{2}
\end{align}



For our particular $x$'s and $y$'s, the formula (2) reduces to
\begin{align}
A_{mn}^{-1}&=\frac{(-1)^{m+n}}{m+n-1}\frac{\frac{(n+N-1)!}{(n-1)!}\cdot
\frac{(m+N-1)!}{(m-1)!}}{(n-1)!(N-n)!\cdot(m-1)!(N-m)!}=\\
&=(-1)^{m+n}(m+n-1){n+N-1 \choose N-m}{m+N-1 \choose N-n}{m+n-2\choose m-1}^2.

\end{align}
The last expression is clearly integer. $\blacksquare$
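A quick experimental check for small $N$ (a sketch assuming sympy is available; the index shift accounts for sympy's 0-based matrices):

```python
import sympy as sp

for N in range(1, 7):
    # 0-based indices, so A[i, j] = 1/(i + j + 1) matches A_{ij} = 1/(i + j - 1)
    A = sp.Matrix(N, N, lambda i, j: sp.Rational(1, i + j + 1))
    assert all(e.is_integer for e in A.inv())
print("the inverse has integer entries for N = 1..6")
```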


Sunday 23 June 2013

number theory - Least non negative residue



Find the least non-negative residue of $17\times 18 \pmod{35}$.




I was under the impression that you'd take $17 \equiv -18$ and $18 \equiv -17 \pmod{35}$, but obviously
$(-17)(-18)$ is greater than $35$, so this cannot be the answer.



Using a calculator, compute the residue of $63433 \times 723239 \pmod{123433437}$.



I changed to the number theory course recently, so I have missed the lectures on this, and the lecture notes are very brief and rather unhelpful for solving these problems. Any help would be greatly appreciated.


Answer




  1. Note that $2\times 18 \equiv 36 \equiv 1 \pmod {35},\,$ so you get

    $17\times 18 \equiv 16\times 18 + 18 \equiv 8(2\times 18) + 18 \equiv 8 + 18 \equiv 26 \pmod {35}$


  2. Compute $a = 63433\times 723239 = 45877219487\;$ and $a/123433437 \approx 371.68\;$ and therefore
    $a \equiv 45877219487-123433437\times 371 \equiv 83414360 \pmod {123433437}.$
    This is a special case of
    $a \bmod b = a -\lfloor a/b \rfloor b$



calculus - Proving that $\lim_{h\to 0} \frac{b^{h}-1}{h} = \ln{b}$



Is there a formal proof of this fact without using L'Hôpital's rule? I was thinking about using a proof
of this fact:

$$
\left.\frac{d(e^{x})}{dx}\right|_{x=x_0} = e^{x_0}\lim_{h\to 0} \frac{e^{h}-1}{h} = e^{x_0}\cdot 1=e^{x_0}
$$
that I have, to help prove:
$$
\lim_{h\to 0} \frac{b^{h}-1}{h} = \ln{b}
$$
Is there a succinct proof of this limit? How does one prove this rigorously?


Answer



It is easy to see that $$\lim_{x\to 0}\frac{\log_a(1+x)}{x}=\log_a e$$ Now set $y=a^x-1$. So $a^x=1+y$ and then $$x=\log_a(1+y)$$ Clearly, when $x\to0$; $y\to 0$ so $$\lim_{x\to 0}\frac{a^x-1}{x}=\lim_{y\to 0}\frac{y}{\log_a(1+y)}=\log_e a$$



real analysis - Proving a function is discontinuous by constructing a sequence

Let $f : [0, 1] \to \mathbb{R}$ be a function that maps rationals to irrationals and irrationals to rationals. I could show that any such $f$ can't be continuous by showing that its range is at most countable and going from there. We also know that for a function $g$ to be continuous, for all sequences $x_n$ that converge to $x$, $g(x_n)$ has to converge to $g(x)$. So since $f$ is discontinuous, there exists a sequence for which this does not hold. Is it possible to construct it without assuming anything further about $f$? In other words, is it possible to prove that $f$ is discontinuous by constructing such a sequence?

calculus - Feynman technique of integration for $\int^\infty_0 \exp\left(\frac{-x^2}{y^2}-y^2\right) dx$

I've been learning a technique that Feynman describes in some of his books to integrate. The source can be found here:



http://ocw.mit.edu/courses/mathematics/18-304-undergraduate-seminar-in-discrete-mathematics-spring-2006/projects/integratnfeynman.pdf



The first few examples are integrals in $x$ and $y$ variables, and I can't see a good way to simplify them using differentiation, particularly the example:




$$\int^\infty_0 \exp\left(\frac{-x^2}{y^2}-y^2\right) dx$$

Collect 'unusual' examples of real-valued functions

I am trying to collect research papers, articles and books which contain 'weird' or 'unusual' examples of real-valued functions.



An example of research paper is



A Counterexample and an Exact Version of Fatou's Lemma in Infinite Dimension



An example of article is



Some counterexamples on the behaviour of real-valued
functions and their derivatives




Examples of books are



Surprises and Counterexamples in Real Function Theory



Analysis in examples and counterexamples. An introduction to the theory
of real functions



Counterexamples in Analysis





Question: Can someone provide me more articles and books which are similar to the above? My aim is to learn as many function construction techniques as possible.


linear algebra - Can two matrices with the same characteristic polynomial have different eigenvalues?



The polynomial is $-\lambda^3+3\lambda -2$



which factorizes into ($\lambda-1$)($\lambda +1$)($\lambda -2$)



A matrix A has the above characteristic polynomial, and so its eigenvalues are 1, -1, and 2.




However, another matrix B, already in diagonal form, has the same characteristic polynomial, but with eigenvalues 1,1,-2, i.e., diagonal entries 1,1,-2.



Is this possible? Or have I gone wrong in my computations?



The problem statement does ask to show that the characteristic polynomials are the same but that the matrices A and B are not similar. So, perhaps I have found exactly what I needed, but it just seems weird...



Thanks,


Answer



$-\lambda^3+3\lambda - 2 = -(\lambda-1)^2(\lambda+2) \neq -(\lambda-1)(\lambda+1)(\lambda-2)$.


functional equations - Finding all continuous functions such that $f^2(x + y) − f^2(x − y) = 4f(x)f(y)$



I've been working on the following homework problem:





Find all continuous functions $f : \mathbb{R} → [0,∞)$ such that $f^2(x + y) − f^2(x − y) = 4f(x)f(y)$ for all real numbers $x, y$.




The first problem I have is I am not sure what is meant by "finding all continuous functions". Can someone show me what the ending statement should look like?



Secondly, I'm not sure what to do with the relations I've derived. The most helpful are:



$f(0) = 0$




$f(x) = f(-x)$



$f(2x) = 2f(x)$



I also know that:



$f(2) = 2f(1)$



$f(3) = \frac{3}{2}f(2)$




$f(4) = 2f(2) = 4f(1)$



This leads me to believe that $\frac{f(a)}{f(b)} = \frac{a}{b}$, but I don't know (a) if this is helpful and (b) how to prove it.



Can someone clarify what exactly the object is, and point me towards the next step?


Answer



The condition $f(x)\geq 0$ seems to kill all non-trivial possibilities: since $f(0)=0$ we let $y=-x:$



$$-f^2(2x)=4f(x)f(-x)$$




However $f(x)\geq 0$, so $f(x)=0$. Are you absolutely sure of the $\mathbb{R}\to [0,\infty)$ bit?


Saturday 22 June 2013

real analysis - Is there a continuous bijection between an interval $[0,1]$ and a square: $[0,1] times [0,1]$?



Is there a continuous bijection from $[0,1]$ onto $[0,1] \times [0,1]$?
That is with $I=[0,1]$ and $S=[0,1] \times [0,1]$, is there a continuous bijection

$$
f: I \to S?
$$



I know there is a continuous bijection $g:C \to I$ from the Cantor set $C$ to $[0,1]$.
The square $S$ is compact so there is a continuous function
$$
h: C \to S.
$$
But this leads nowhere.
Is there a way to construct such an $f$?




I ask because I have a continuous functional $F:S \to \mathbb R$.
For numerical reason, I would like to convert it into the functional
$$
G: I \to \mathbb R, \\
G = F \circ f ,
$$
so that $G$ is continuous.


Answer



No, such a bijection from the unit interval $I$ to the unit square $S$ cannot exist. Since $I$ is compact and $S$ is Hausdorff, a continuous bijection would be a homeomorphism (see here). But in $I$ there are only two non-cut-points, whereas in $S$ each point is a non-cut-point.


Sum of the series $\sin\theta\sin2\theta + \sin2\theta\sin3\theta + \sin3\theta\sin4\theta + \sin4\theta\sin5\theta + \cdots + \sin n\theta\sin(n+1)\theta$ to $n$ terms

The series is given:
$$\sum_{i=1}^n \sin (i\theta) \sin ((i+1)\theta)$$
We have to find the sum to $n$ terms of the given series.
I could take out the $2\sin^2\cos$ terms common in the series, but what to do further? Please guide me.

sequences and series - How to evaluate $\int_0^1\frac{\log^2(1+x)}{x}\,\mathrm dx$?

The definite integral



$$\int_0^1\frac{\log^2(1+x)}x\mathrm dx=\frac{\zeta(3)}4$$



arose in my answer to this question. I couldn't find it treated anywhere online. I eventually found two ways to evaluate the integral, and I'm posting them as answers, but they both seem like a complicated detour for a simple result, so I'm posting this question not only to record my answers but also to ask whether there's a more elegant derivation of the result.



Note that either using the method described in this blog post or substituting the power series for $\log(1+x)$ and using



$$\frac1k\frac1{s-k}=\frac1s\left(\frac1k+\frac1{s-k}\right)\;$$




yields



$$
\int_0^1\frac{\log^2(1+x)}x\mathrm dx=2\sum_{n=1}^\infty\frac{(-1)^{n+1}H_n}{(n+1)^2}\;.
$$



However, since the corresponding identity without the alternating sign is used to obtain the sum by evaluating the integral and not vice versa, I'm not sure that this constitutes progress.
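For anyone wanting a numeric confirmation first (a sketch assuming mpmath is available, not part of the original question):

```python
from mpmath import mp, quad, log, zeta

mp.dps = 20
print(quad(lambda x: log(1 + x)**2 / x, [0, 1]))  # 0.30051422...
print(zeta(3) / 4)                                # matches
```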

calculus - Integrate $\int z\sqrt{9-z^2}\,dz$ by trigonometric substitution



I want to integrate $\int z\sqrt{9-z^2}\,dz$ by trigonometric substitution. Here's what I did:




let $z = 3\sin\theta\implies dz = 3\cos\theta d\theta$



$$\int z\sqrt{9-z^2}dz = \int 3\sin \theta\sqrt{9-3^2\sin^2\theta} \ \ 3\cos\theta \ \ d\theta = 9\int \sin\theta\cos\theta\sqrt{9(1-\sin^2\theta)} \ \ d\theta = \\9\int\sin\theta\cos\theta\ 3|\cos\theta| d\theta = 27\int\sin\theta\cos^2\theta \ d\theta$$
let $u = \cos\theta\implies du = -\sin\theta\, d\theta$



$$27\int\sin\theta\cos^2\theta \ d\theta = -27\int u^2\ du = -27\frac{u^3}{3} = -27\frac{\cos^3\theta}{3}$$
Since $z=3\sin\theta$ then $z^2 = 9\sin^2\theta\implies z^2 = 9(1-\cos^2\theta)\implies z^2 = 9-9\cos^2\theta \implies 9\cos^2\theta = 9-z^2\\\implies \cos^2\theta = \dfrac{9-z^2}{9}$



But how to find $\cos^3(\theta)$ in terms of $z$?




ps: I know this integral is a lot easier to integrate with simple substitution.


Answer



$$I=\int z\sqrt{9-z^2}{\rm d}z$$
Let $z=3\sin\theta$:
$$I=\int 3\sin\theta\cdot 3\cos\theta\cdot 3\cos\theta\,{\rm d}\theta\\=27\int\cos^2\theta\sin\theta\,{\rm d}\theta\\=-27\int\cos^2\theta\,{\rm d}(\cos \theta)\\=-27\frac{\cos^3\theta}3\\=-9(1-(z/3)^2)^{3/2}+c$$
Because $\sin^2\theta+\cos^2\theta=1\implies \cos\theta=\sqrt{1-\sin^2\theta}=(1-(z/3)^2)^{1/2}$
Hope you can cube it now?
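A quick numerical check of this antiderivative (a sketch, not part of the original answer; the test point $z=1.7$ is arbitrary):

F <- function(z) -9 * (1 - (z/3)^2)^(3/2)  # proposed antiderivative, constant dropped
z <- 1.7; h <- 1e-6
(F(z + h) - F(z - h)) / (2 * h)            # symmetric difference quotient at z
z * sqrt(9 - z^2)                          # integrand at z; the two values agree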


Friday 21 June 2013

abstract algebra - Determine the irreducible polynomial for $\alpha=\sqrt{3}+\sqrt{5}$ over $\mathbb{Q}(\sqrt{10})$



I've already found that the irreducible polynomial of $\alpha$ over $\mathbb{Q}$ is $x^4-16x^2+4$. I've also found that $\mathbb{Q}(\sqrt{3}+\sqrt{5})=\mathbb{Q}(\sqrt{3},\sqrt{5})$ and that $\mathbb{Q}(\sqrt{3},\sqrt{5},\sqrt{10})=\mathbb{Q}(\sqrt{2},\sqrt{3},\sqrt{5})$. Since $[\mathbb{Q}(\sqrt{10}):\mathbb{Q}]=2$ and $[{\mathbb{Q}(\sqrt{3},\sqrt{5}):\mathbb{Q}}]=4$, $[\mathbb{Q}(\sqrt{2},\sqrt{3},\sqrt{5}):\mathbb{Q}(\sqrt{3},\sqrt{5})]$ must be either 1 or 2.
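(As a quick numerical sanity check, not part of the original post, one can verify the minimal polynomial in R:)

a <- sqrt(3) + sqrt(5)
a^4 - 16 * a^2 + 4  # ~ 0 up to floating-point error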



I know it's 2 but I'm having a hard time proving that $\mathbb{Q}(\sqrt{3},\sqrt{5})\neq\mathbb{Q}(\sqrt{2},\sqrt{3},\sqrt{5})$. I'm trying to show that $\sqrt{2}\not\in\mathbb{Q}(\sqrt{3},\sqrt{5})$ but I'm not having much luck.




The solution in the link below uses a theorem of Galois theory we haven't covered yet so I don't feel comfortable using it. Here is what we have covered that I suspect is relevant but haven't figured out how to use yet:




Let $K$ and $K^\prime$ be extensions of the same field $F$. An isomorphism $\varphi:K\to K^\prime$ that restricts to the identity on $F$ is an isomorphism of field extensions.



Let $F$ be a field and $\alpha$ and $\beta$ be elements of field extensions $K/F$ and $L/F$. Suppose $\alpha$ an $\beta$ are algebraic over $F$. There is an isomorphism of fields $\sigma:F(\alpha)\to F(\beta)$ that is the identity on $F$ and that sends $\alpha\leadsto\beta$ if and only if the irreducible polynomials for $\alpha$ and $\beta$ over $F$ are equal.



Let $\varphi:K\to K^\prime$ be an isomorphism of field extensions of $F$, and let $f$ be a polynomial with coefficients in $F$. Let $\alpha$ be a root of $f$ in $K$, and let $\alpha^\prime=\varphi(\alpha)$ be its image in $K^\prime$. Then $\alpha^\prime$ is also a root of $f$.





If I start by assuming that $\mathbb{Q}(\sqrt{3},\sqrt{5})=\mathbb{Q}(\sqrt{2},\sqrt{3},\sqrt{5})$ then I suspect the three statements above will lead to a contradiction somewhere. I just don't have a good firm grasp of how to put them into practice yet.



Any help is appreciated. Thanks,


Answer



Suppose by contradiction that



$$\sqrt{2}=a+b\sqrt{3}+c\sqrt{5}+d \sqrt{15} \,.$$



Squaring both sides and using the fact that $1, \sqrt{3}, \sqrt{5}, \sqrt{15}$ are linearly independent over $\mathbb{Q}$, you get




$$2=a^2+3b^2+5c^2+15d^2 \,.$$
$$ab+5cd =0 \,.$$
$$ac+3bd=0 \,.$$
$$ad+bc=0 \,.$$



From the last two equations we get



$$3bd^2=-acd=bc^2 \,.$$



Since $3d^2=c^2$ has no nonzero rational solutions ($\sqrt{3}$ is irrational), either $b=0$ or $c=d=0$; in the latter case the equations reduce to $ab=0$ and $2=a^2+3b^2$, forcing $a=0$ and $b^2=\frac{2}{3}$, which is impossible for rational $b$. So $b=0$.




It follows that



$$2=a^2+5c^2+15d^2 \,.$$
$$5cd =0 \,.$$
$$ac=0 \,.$$
$$ad=0 \,.$$



From the last two equations it follows that at least two of $a,c,d$ must be $0$, and then the first equation gives $x^2 \in \{ 2, \frac{2}{5}, \frac{2}{15} \}$ where $x$ is the remaining one; this contradicts $x \in \mathbb{Q}$.


Symbol for reflecting complex numbers in the imaginary axis

A bar or star symbol is used for reflecting in the real axis (i.e. complex conjugate).



Is there a commonly (or not so commonly) used symbol for inverting the sign of the real part (i.e. reflecting in the Im-axis) ?



I'm thinking of a situation in which reflections in the axes are the main concepts. Let's use $\tilde z$ and $z^*$. In this situation $-z$ wouldn't be a main concept, but if needed it could be written as $-z=\tilde z^*$. Rather than just making up $\tilde z$, I wanted to know if there was already a symbol for this.

definite integrals - Evaluating $\int_{0}^{\infty} e^{-(x^{n}-x)}\,\mathrm{d}x,\quad n>0$

I know from Discussing the Integral of $\exp(-x^n)$ that
$$\int_{0}^{\infty} e^{-x^{n}}\mathrm{d}x=\Gamma(1+1/n),\quad n>0.$$



But how to evaluate
$$\int_{0}^{\infty}e^{-(x^{n}-x)}\,\mathrm{d}x,\quad n>0?$$



The only substitution I found is
$$\text{Let}\quad x=\ln u, \quad \text{then}\quad e^{x}=u, \quad \text{and} \quad \mathrm{d}x=\frac{1}{u}\mathrm{d}u.$$
Then
$$\int_{0}^{\infty}e^{-(x^{n}-x)}\,\mathrm{d}x=\int_{1}^{\infty}e^{-(\ln u)^{n}}\,\mathrm{d}u$$

But after this, I am stuck.
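In the meantime, the substitution itself can be sanity-checked numerically (a sketch in R; the choice $n=2$ is arbitrary):

n <- 2
integrate(function(x) exp(-(x^n - x)), 0, Inf)$value  # original integral
integrate(function(u) exp(-log(u)^n), 1, Inf)$value   # after x = log(u); the values agree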



Thank you!

sequences and series - Geometric progression — Sum of terms, sum of terms' cubes, sum of terms' squares



Consider the infinite geometric progression $a_1, a_2, a_3, \dots$. Given the sum of its terms, as well as the sum of the terms' cubes



$a_1 + a_2 + a_3 + \cdots = 3\\
a_1^3 + a_2^3 + a_3^3 + \cdots = \frac{108}{13}$



find the sum of the terms' squares




$a_1^2 + a_2^2 + a_3^2 + \cdots$






This is what I have so far:



$a_1 + a_2 + a_3 + \cdots = 3\\
\Longrightarrow 1 + q + q^2 + \cdots = \frac{3}{a_1}$



$a_1^3 + a_2^3 + a_3^3 + \cdots = \frac{108}{13}\\
\Longrightarrow 1 + q^3 + q^6 + \cdots = \frac{108}{13a_1^3}$



$a_1^2 + a_2^2 + a_3^2 + \cdots\\
= \frac{a_1^3}{a_1} + \frac{a_2^3}{a_2} + \frac{a_3^3}{a_3} + \cdots\\
= \frac{a_1^3}{a_1} \left(1 + \frac{q^3}{q} + \frac{q^6}{q^2} + \cdots\right)$



Any help will be much appreciated.


Answer



HINT:




Let the first term be $a$ and the common ratio be $r$



For convergence we need $|r|<1$



So, the sum of the first series $\sum_{0\le n<\infty}a\cdot r^n=\frac a{1-r}=3\ \ \ \ (1)$



So, the sum of the second series $\sum_{0\le n<\infty}a^3\cdot (r^3)^n=\frac {a^3}{1-r^3}=\frac{108}{13}\ \ \ \ (2) $



Cube the first & divide by $(2)$ to get $r$ and then get $a$ from $(1)$




We need $$\sum_{0\le n<\infty}a^2\cdot (r^2)^n=\frac{a^2}{1-r^2}$$
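Carrying the hint through (a completion added for clarity, not part of the original hint): cubing $(1)$ and dividing by $(2)$ gives
$$\frac{1-r^3}{(1-r)^3}=\frac{27\cdot 13}{108}=\frac{13}{4},\quad\text{i.e.}\quad \frac{1+r+r^2}{(1-r)^2}=\frac{13}{4},$$
so $3r^2-10r+3=0$, whose roots are $r=3$ and $r=\frac13$; convergence forces $r=\frac13$. Then $(1)$ gives $a=2$, and the sum of squares is $\frac{a^2}{1-r^2}=\frac{4}{8/9}=\frac92$.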


Prime Generating Irrational Number



Does there exist an irrational number such that every time it is multiplied by 100 its integer part gives a prime number?



$$ \phi= 0.a_0a_1a_2a_3\cdots$$
$$ \lfloor 10^{2n}\phi \rfloor \in \mathbb P,\quad \forall n \in \mathbb N$$



Or, more generally, multiply by $10^{p\cdot n}$, where $p$ is a fixed prime number.




For example, let $\phi = 0.1163\cdots$
$$\lfloor 10^2 \phi \rfloor = 11$$
And $$\lfloor 10^4 \phi \rfloor = 1163$$
While 11 and 1163 are primes, 63 by itself is not. So, $a_{2i}a_{2i+1}$ is not necessarily prime for $i \in \mathbb N$


Answer



This is similar to right-truncatable primes, but removing two decimal digits at a time.



Equivalently, we are looking for right-truncatable primes in base $100$. http://oeis.org/A076586 tells us that there are exactly $9823399067$ such primes. Hence there is no infinite example and no such irrational number.



For the more general question of multiplying by $10^{p\cdot n}$, we can note that the natural density of primes is $0$, and that arbitrarily large prime gaps exist, to conclude that the number of right-truncatable primes in any given base is almost certainly finite. However, I don't believe a proof exists. See for example Proof for finite number of truncatable primes.
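A minimal search sketch in R (not from the original answer; it uses nothing beyond base R) that grows right-truncatable primes in base $100$ level by level:

is_prime <- function(m) {
  if (m < 2) return(FALSE)
  if (m < 4) return(TRUE)
  if (m %% 2 == 0) return(FALSE)
  d <- 3
  while (d * d <= m) {
    if (m %% d == 0) return(FALSE)
    d <- d + 2
  }
  TRUE
}
level <- Filter(is_prime, 2:99)   # primes with one base-100 digit
for (k in 2:3) {                  # only a couple of levels; counts grow quickly
  cand <- as.vector(outer(level * 100, 0:99, `+`))
  level <- Filter(is_prime, cand)
  cat("base-100 digits:", k, "count:", length(level), "\n")
}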



calculus - What is the limit of the sequence $\left(\frac{3-4n}{1+n}\right)\left(1+\frac1n\right)^n$?




I am trying to find the limit of the sequence
$$
\left(\frac{3-4n}{1+n}\right)\left(1+\frac1n\right)^n
$$
I am aware that if one sequence converges and another sequence converges, then the product of the two sequences also converges. The limit of the first sequence is $-4$. However, I do not know how to calculate the limit of the second sequence.


Answer



Here is one approach:
$$ \left( 1+\frac{1}{n}\right)^{n}=e^{ n \ln(1+\frac{1}{n}) } = e^{ n (\frac{1}{n}-\frac{1}{2 n^2}+\dots) } = e^{1-\frac{1}{2n}+\dots}\underset{\infty}{\longrightarrow} e $$
Combining this with the first factor's limit $-4$, the whole sequence tends to $-4e$.


Thursday 20 June 2013

elementary number theory - Totient function inequality



I am not quite sure how to approach this problem:




If $a$ and $n$ are natural numbers such that $a$ divides $n$, then $n-\phi(n)\ge a-\phi(a)$.



This is my thought process so far: obviously the fact that $n=ka$ for some natural number $k$ should be used. If $n$ is prime, then $n-\phi(n)=a-\phi(a)=1$, because $a$ could then only be $n$ or $1$. If $n$ is not prime but $a$ is, then $n-\phi(n)>1$ and $a-\phi(a)=1$, so $n-\phi(n)>a-\phi(a)=1$. Yet I am not sure what happens if both $a$ and $n$ are not prime.



Any help is appreciated. Thank you.


Answer



The number $\varphi(m)$ counts the numbers from $1$ to $m$ that are relatively prime to $m$. It follows that $m-\varphi(m)$ counts the numbers from $1$ to $m$ that are not relatively prime to $m$.



Let $n=ka$. If $1\le x\le a$, and $x$ is not relatively prime to $a$, then $x$ is a number between $1$ and $n$ that is not relatively prime to $n$. The result follows.
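A brute-force check of the inequality in R (a sketch, not part of the original answer):

gcd <- function(x, y) if (y == 0) x else gcd(y, x %% y)
phi <- function(m) sum(mapply(gcd, 1:m, m) == 1)  # Euler's totient, by definition
for (n in 1:60) {
  for (a in which(n %% (1:n) == 0)) {             # all divisors a of n
    stopifnot(n - phi(n) >= a - phi(a))           # fails loudly if the claim breaks
  }
}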



real analysis - Solution of Cauchy functional equation which has an antiderivative



Let $f\colon\mathbb R\to\mathbb R$ be a function such that
$$f(x+y)=f(x)+f(y)$$
for any $x,y\in\mathbb R$
i.e., it fulfills Cauchy functional equation.




Additionally, suppose that $F'=f$ for some function $F\colon\mathbb R\to\mathbb R$, i.e., $f$ has a primitive function.



How can I show that every such function must be of the form $f(x)=cx$ for some constant $c\in\mathbb R$?






I have seen an exercise in a book on real analysis where I would be able to use this fact. I could use the argument that every derivative belongs to the first Baire class and consequently is measurable. Every measurable solution of the Cauchy functional equation is a linear function; a nice proof is given, for example, in Herrlich's Axiom of Choice, p. 119.



The fact that derivative is Baire function was mentioned in the book before the chapter with this exercise. But measurability is done in this book only later. For this reason (and also out of curiosity) I wonder whether there is a proof not using measurability of $f$.


Answer




By the functional equation, it suffices to prove that $f$ is continuous at one point.
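To spell out this standard reduction (not in the original answer): if $f$ is continuous at some $x_0$, then for every $x$ we have $f(x+h)-f(x)=f(x_0+h)-f(x_0)\to 0$ as $h\to 0$, so $f$ is continuous everywhere. Additivity gives $f(q)=q\,f(1)$ for all rational $q$, and continuity extends this to $f(x)=f(1)\,x$ for all real $x$.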



The fact that $f$ is of first Baire class is very straightforward:
$$
f(x) = \lim_{n \to \infty} \frac{F(x+1/n)-F(x)}{1/n}
$$
is a pointwise limit of continuous functions.



Now a function of first Baire class has a comeager $G_\delta$-set of points of continuity. Done.







Indeed, enumerate the open intervals with rational endpoints as $\langle I_n \mid n \in \omega\rangle$. Then
$$
f \text{ is discontinuous at }x \iff \exists n\in \omega : x \in f^{-1}[I_n] \setminus \operatorname{int}f^{-1}[I_n]
$$
Since $f$ is of first Baire class, $f^{-1}[I_n]$ is an $F_\sigma$ and so is $f^{-1}[I_n] \setminus \operatorname{int} f^{-1}[I_n]$. Therefore we can write
$$
f^{-1}[I_n] \setminus \operatorname{int} f^{-1}[I_n] = \bigcup_{k \in \omega} F_{k}^{n}
$$

for some sequence $\langle F_{k}^n \mid k \in \omega\rangle$ of closed sets. Observe that $F_{k}^n$ has no interior, so the set of points of discontinuity of $f$ is
$$
\bigcup_{n \in \omega} f^{-1}[I_n] \setminus \operatorname{int}f^{-1}[I_n] = \bigcup_{n\in\omega} \bigcup_{k\in\omega} F_{k}^n,
$$
a countable union of closed and nowhere dense sets.


sequences and series - Closed form for $\sum_{n=1}^\infty\frac{(-1)^n n^a H_n}{2^n}$



Is there a closed form for the sum $$\sum_{n=1}^\infty\frac{(-1)^n n^a H_n}{2^n},$$ where $H_n$ are harmonic numbers: $$H_n=\sum_{k=1}^n\frac{1}{k}=\frac{\Gamma'(n+1)}{n!}+\gamma.$$



This is a generalization of my previous question that was just a special case for $a=4$.


Answer



Replace $n^a$ by

$$n^a=\frac{1}{\Gamma(-a)}\int_0^{\infty}t^{-a-1}e^{-nt}dt,$$
and also use the trick described here to transform your sum into
\begin{align}\frac{1}{\Gamma(-a)}\int_0^{\infty}t^{-a-1}\sum_{k=1}^{\infty}\sum_{n=k}^{\infty}\frac{1}{k}\left(-\frac{e^{-t}}{2}\right)^n dt
&=\frac{1}{\Gamma(-a)}\int_0^{\infty}\frac{t^{-a-1}}{1+\frac12 e^{-t}}\sum_{k=1}^{\infty}\frac{1}{k}\left(-\frac{e^{-t}}{2}\right)^k dt=\\
&=\frac{-1}{\Gamma(-a)}\int_0^{\infty}\frac{t^{-a-1}}{1+\frac12 e^{-t}}\ln\left(1+\frac12 e^{-t}\right) dt.\end{align}
I don't think, however, that this can be simplified further. One can compute this integral for $a$ given by negative integers, and the answer should be given by polylogarithms of increasing order. I can hardly imagine a nice function that would interpolate such values.
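One can verify the reduction numerically, e.g. at $a=-1$, where $\Gamma(-a)=1$ (a quick sketch in R, not part of the original answer):

n <- 1:60
H <- cumsum(1 / n)                    # harmonic numbers H_1, ..., H_60
sum((-1)^n * H / (n * 2^n))           # partial sum of the series at a = -1
-integrate(function(t) log(1 + exp(-t)/2) / (1 + exp(-t)/2),
           0, Inf)$value              # the integral at a = -1; the values agree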



In fact, for negative integer $a$ one can write an explicit general formula for the sum in the form
$$\sum_{n=1}^{\infty}\frac{H_n}{n^{N-1}}x^n=\gamma\, \mathrm{Li}_{N-1}(x)+\left[\frac{\partial}{\partial s}\left\{x\Gamma(1+s)\cdot {}_{N+1}F_{N}\left[\begin{array}{c}1,\ldots,1,1+s\\ 2,\ldots,2\end{array};x\right]\right\}\right]_{s=1},$$
which follows from the series representation for $_pF_q$ and your last formula $H_n=\gamma+\psi(n+1)$.



combinatorics - Probability that a random 13-card hand contains at least 3 cards of every suit?




A random 13-card hand is dealt from a standard deck of cards. What is the probability
that the hand contains at least 3 cards of every suit? (Introduction to Probability, p.36)




My solution:





  • There are $\binom{52}{13}$ possible hands.

  • Because the hand has 13 cards, to obtain at least three cards of every suit we need exactly three cards of each suit plus one additional card of any suit, thus $\binom{13}{3}^4 \cdot 4 \binom{10}{1}$

  • Result: $\frac{40\cdot\binom{13}{3}^4}{\binom{52}{13}} = 0.4214$



However, simulating it in R yields:



deck <- rep(1:4, 13)                          # 52 cards, encoded by suit only
out <- replicate(1e5, {
hand <- sample(deck, size=13, replace=FALSE)  # deal 13 cards
all(table(hand) >= 3)                         # bug: table() silently drops absent suits
})
mean(out)
> 0.14387


Can anybody tell me what is wrong?



EDIT




I'm afraid the correct code should be:



deck <- rep(1:4, 13)
out <- replicate(1e5, {
hand <- sample(deck, size=13, replace=FALSE)
length(table(hand)) == 4 & all(table(hand) >= 3)  # all four suits present, each with >= 3 cards
})
mean(out)
> 0.10639


Answer



We count the "favourables," the 4-3-3-3 hands. The suit in which we have $4$ cards can be chosen in $\binom{4}{1}$ ways. For each of these ways, the actual $4$ cards in that suit can be chosen in $\binom{13}{4}$ ways. For each of these ways, the cards in the other three suits can be chosen in $\binom{13}{3}^3$ ways, for a total of $\binom{4}{1}\binom{13}{4}\binom{13}{3}^3$.



Remark: Your counting procedure results in multiple counting. Think, for example, of the 4-3-3-3 hand that has the K, Q, J, 10 of hearts, and some specific cards for the rest of the hand. Your calculation views, for example, K, Q, J of hearts, and later 10 of hearts, as different from K, J, 10 of hearts, and then Q of hearts.
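For comparison (not part of the original answer), the exact count evaluates numerically to about $0.1054$, in line with the corrected simulation:

4 * choose(13, 4) * choose(13, 3)^3 / choose(52, 13)
# ~ 0.1054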


divisibility - Prime numbers. Show that if $a \mid 42n + 37$ and $a \mid 7n + 4$, for some integer $n$, then $a = 1$ or $a = 13$



Show that if $a \mid 42n + 37$ and $a \mid 7n +4$, for some integer $n$, then $a = 1$ or $a = 13$




I know most of the rules of divisibility and that any integer number can be expressed as the product of primes.



Given $a = \prod_{i=1}^\infty p_i^{\alpha_i}$ and $b = \prod_{i=1}^\infty p_i^{\beta_i}$



$a \mid b$ if and only if $\alpha_i \le \beta_i$ for every $i = 1, 2, 3, \ldots$



Even though I have this information, I cannot prove the statement. Please help me.


Answer



Hint $\bmod a\!:\,\ \color{#0a0}{ 37} \equiv -42n \equiv \overbrace{6(\color{#c00}{-7n}) \equiv (\color{#c00}{4})6}^{\Large \color{#c00}{-7n\ \ \equiv\ \ 4}} \equiv \color{#0a0}{24}\ \Rightarrow\ a\mid 13 = \color{#0a0}{37-24}$




Remark $\ $ If you are familiar with modular fractions then it can be written as



$\quad\ \ \ \ \bmod a\!:\ \ \dfrac{37}{42}\equiv -n \equiv \dfrac{4}{7}\equiv \dfrac{24}{42}\,\Rightarrow\, 37\equiv 24\,\Rightarrow\, 13\equiv 0$



Update $\ $ A downvote occurred. Here is a guess why: I didn't mention why those fractions exist. We prove $\,(7,a)=1,\,$ so $\,7^{-1}\bmod a\,$ exists. If $\,d\mid 7,a\,$ then $\,d\mid a\mid 7n\!+\!4\,\Rightarrow\,d\mid 4\,$ so $\,d\mid (7,4)=1.\,$ Similarly $\,(42,a)=1\,$ by $\,(42,37)=1$.
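A quick empirical check in R (a sketch, not part of the original hint):

gcd <- function(x, y) if (y == 0) x else gcd(y, x %% y)
sapply(0:20, function(n) gcd(42 * n + 37, 7 * n + 4))
# every entry is 1 except n = 5 and n = 18, where it is 13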


Wednesday 19 June 2013

elementary set theory - Bijection between $[\mathbb N\to \{0,1\}]$ and $[\mathcal P(\mathbb N) \to \{0,1\}]$

In ZFC (edit: and other axiomatic systems), does there exist a bijection between $[\mathbb N\to \{0,1\}]$ and $[\mathcal P(\mathbb N) \to \{0,1\}]$?



Extrapolating from https://en.wikipedia.org/wiki/Algebraic_normal_form it seems that there are exactly $2^{2^n}$ functions from a set whose elements are described with $n$ bits to $\{0,1\}$.




So by imagining what the limit would be over all possible integer sizes, it seems that an infinite string of $\{0,1\}$ is exactly the description of one particular function from $\mathbb N$ to $\{0,1\}$.



But then, by the same argument, an infinite string of $\{0,1\}$ would also be the description of exactly one particular function from a subset of $\mathbb N$ (which, in the view expressed here, also happens to be a function from $\mathbb N$ to $\{0,1\}$) to $\{0,1\}$.



Hence I would tend to conclude that $[\mathbb N\to \{0,1\}]$ and $[\mathcal P(\mathbb N) \to \{0,1\}]$ are the same thing.



What is the view of ZFC (and other axiomatic systems) on the matter? Do both sets of functions have the same cardinality (which I take to be equivalent to asking for the existence of a bijection between the two)?



Edit: I would like to extend the question to axiomatic systems other than ZFC.

real analysis - Uniform convergence of $f_n(x) = \left(1 + \frac{x}{n}\right)^n$ when calculating a limit



Calculate
$$
\lim_{n \rightarrow \infty} \int_0^1 \left(1 + \frac{x}{n}\right)^n dx
$$



My attempt - if
$$
f_n(x) = \left(1 + \frac{x}{n}\right)^n
$$

converged uniformly for all $x \in [0,1]$, then I could swap the integral with the limit and solve it:
$$
\lim_{n \rightarrow \infty} \int_0^1 \left(1 + \frac{x}{n}\right)^ndx =
\int_0^1 \lim_{n \rightarrow \infty}\left(1 + \frac{x}{n}\right)^ndx =
\int_0^1 e^x dx = e^x|_{0}^{1} = e - 1
$$



I must then prove that $f_n(x)$ is indeed uniformly convergent. I already know that $f_n(x) \rightarrow e^x$ pointwise. If $f_n(x)$ converges uniformly, then for each $\epsilon > 0$ the following must hold for all sufficiently large $n$:
$$
\sup_{x \in [0,1]} \left|f_n(x) - f(x)\right| < \epsilon
$$




How can I prove this?


Answer



Alternative approach (without uniform convergence): let $t= \frac{x}{n}$ then
$$\begin{align*}\int_0^1 \left(1 + \frac{x}{n}\right)^ndx&=
n\int_0^{1/n} (1+t)^ndt\\
&=n\left[\frac{(1+t)^{n+1}}{n+1}\right]_0^{1/n}
\\&=
\frac{n}{n+1}\left(\left(1+\frac{1}{n}\right)^{n+1}-1\right)\\&\to e-1.
\end{align*}$$
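A numerical check of the limit (a sketch, not part of the original answer):

for (k in 10^(1:4)) {
  val <- integrate(function(x) (1 + x/k)^k, 0, 1)$value
  cat("n =", k, "integral =", val, "\n")   # creeps up towards e - 1
}
exp(1) - 1                                 # ~ 1.718282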


linear algebra - Let $A,B$ be $m \times n$ and $n \times m$ matrices, respectively. Prove that if $m > n$, $AB$ is not invertible



We haven't done anything about rank, dimensions, linear dependence/bases, or determinants. Possible related facts:




  1. A matrix is invertible iff it is bijective as a linear transformation.


  2. An invertible matrix is row-equivalent to the identity matrix.



  3. A matrix has a right inverse iff it has a left inverse.




Also, invertibility is only defined for square matrices.


Answer



Since $A$ is an $m\times n$ matrix and $B$ is an $n\times m$ matrix, the product $AB$ is an $m\times m$ matrix. Suppose $m>n$. Then the operator associated to left multiplication by $B$ is not injective, because there are more columns than rows, so the kernel is always nontrivial (i.e. there are more column vectors than there are entries in those column vectors, so they must be linearly dependent). Said another way: the linear operator $T:\Bbb R^m\to\Bbb R^n$ with $T(v)=Bv$ is not injective because $m>n$, so the domain has higher dimension than the codomain. So there exist vectors $v,w\in\Bbb R^m$ such that $Bv=Bw$ but $v\neq w$.



Spoiler:





Thus, $$Bv=Bw\implies A(Bv)=A(Bw)\implies (AB)v=(AB)w$$ but $v\neq w$. Hence, the operator associated with left multiplication by $AB$ is not injective.
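A small numerical illustration in R (not part of the original answer; the dimensions are an arbitrary choice):

set.seed(1)
m <- 4; n <- 2
A <- matrix(rnorm(m * n), m, n)  # 4 x 2
B <- matrix(rnorm(n * m), n, m)  # 2 x 4
det(A %*% B)                     # ~ 0 up to floating-point error: AB is singular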



abstract algebra - Primitive Element theorem, permutations



Let $K = \mathbb{Q}(\alpha_1,\alpha_2,...\alpha_n)$, where the $\alpha_i$ are the roots of some irreducible polynomial (and hence they are pairwise distinct since the polynomial is separable). Then $K/\mathbb{Q}$ is a finite extension. By the primitive element theorem there exists an $\alpha$ such that $\mathbb{Q}(\alpha) = K$. Galois ("Sur les conditions de resolubilite des equations par radicaux", Lemme II; see here) claimed (without proof) that one can choose $\alpha = u_1 \alpha_1 + \cdots + u_n \alpha_n$ with $u_i \in \mathbb{Q}$ such that all the elements $\sigma(\alpha) := u_1 \alpha_{\sigma(1)} + \cdots + u_n \alpha_{\sigma(n)}$ are distinct for every permutation $\sigma$ of the symmetric group. Distinct in this sense means that $\sigma(\alpha) \neq \tau(\alpha)$ for different $\sigma, \tau \in S_n$. Is this always true, and if so, does somebody have a reference for this?


Answer



Yes, it is true. The following more general fact is true:




Theorem 1. Let $\mathbf{k}$ be an infinite field. Let $V$ be a $\mathbf{k}$-vector space. Let $v_{1},v_{2},\ldots,v_{n}$ be finitely many distinct elements of $V$. Then, there exists some $\left( a_{1},a_{2},\ldots,a_{n}\right) \in\mathbf{k}^{n}$ such that the $n!$ elements $a_{\sigma\left( 1\right)}v_{1}+a_{\sigma\left( 2\right)}v_{2}+\cdots+a_{\sigma\left( n\right)}v_{n}$, for $\sigma$ ranging over the symmetric group $S_{n}$, are pairwise distinct.



[Notice that my notations are different from yours. My $\mathbf{k}$, $V$, $v_{i}$ and $a_{i}$ correspond to your $\mathbb{Q}$, $K$, $\alpha_{i}$ and $u_{i}$, respectively (but of course, my setting is more general).]



The main tool for proving Theorem 1 is the following theorem, which doubles as
a well-known exercise:



Theorem 2. Let $\mathbf{k}$ be an infinite field. Let $V$ be a
finite-dimensional $\mathbf{k}$-vector space. Then, $V$ cannot be written as a
union of finitely many proper subspaces of $V$. (A proper subspace of $V$
means a $\mathbf{k}$-vector subspace of $V$ distinct from $V$.)




Theorem 2 is proven in many places; for example, see
A finite-dimensional vector space cannot be covered by finitely many proper subspaces? or (for a stronger statement)
If a field $F$ is such that $\left|F\right|>n-1$ why is $V$ a vector space over $F$ not equal to the union of $n$ proper subspaces of $V$ or (also for a stronger statement)
A vector space over $R$ is not a countable union of proper subspaces or https://mathoverflow.net/q/26/ .



Proof of Theorem 1. Let $G$ be the set $\left\{ \left( \sigma,\tau\right) \in S_{n}\times S_{n}\ \mid\ \sigma\neq\tau\right\}$. Clearly, the set $G$ is finite (since $S_{n}$ is finite).




The $\mathbf{k}$-vector space $\mathbf{k}^n$ is finite-dimensional.
Hence, Theorem 2 (applied to $\mathbf{k}^n$ instead of $V$)
shows that $\mathbf{k}^n$ cannot be written
as a union of finitely many proper subspaces of $\mathbf{k}^n$.
In other words, any union of finitely many proper subspaces of
$\mathbf{k}^n$ must be a proper subset of $\mathbf{k}^n$.



For every $\sigma\in S_{n}$, we define a map $v_{\sigma}:\mathbf{k}^{n}\rightarrow V$ as follows: for any $\left( a_{1},a_{2},\ldots,a_{n}\right) \in\mathbf{k}^{n}$, we set $v_{\sigma}\left( a_{1},a_{2},\ldots,a_{n}\right) =\sum\limits_{i=1}^{n}a_{\sigma\left( i\right)}v_{i}$. This map $v_{\sigma}$ is $\mathbf{k}$-linear.



Now, let $\left( \sigma,\tau\right) \in G$. We shall show that
$\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right) $ is a proper subspace
of $\mathbf{k}^n$.



Indeed, the map $v_{\sigma}-v_{\tau}$ is $\mathbf{k}$-linear (since $v_{\sigma}$ and $v_{\tau}$ are $\mathbf{k}$-linear), and thus $\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right)$ is a $\mathbf{k}$-vector subspace of $\mathbf{k}^{n}$.



Moreover, $\sigma\neq\tau$ (since $\left( \sigma,\tau\right) \in G$). Assume (for the sake of contradiction) that $\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right) =\mathbf{k}^{n}$. Thus, $v_{\sigma}-v_{\tau}=0$, so that $v_{\sigma}=v_{\tau}$.



Let $g\in\left\{ 1,2,\ldots,n\right\} $.



We shall use the notation $\delta_{u,v}$ for the element $\begin{cases} 1, & \text{if }u=v;\\ 0, & \text{if }u\neq v \end{cases} \in\mathbf{k}$ whenever $u$ and $v$ are two objects. For every permutation $\pi\in S_{n}$, we have

$v_{\pi}\left( \delta_{1,g},\delta_{2,g},\ldots,\delta_{n,g}\right) =\sum\limits_{i=1}^{n}\underbrace{\delta_{\pi\left( i\right),g}}_{=\delta_{i,\pi^{-1}\left( g\right)}}v_{i}$ (by the definition of $v_{\pi}$)

(1) $\;=\sum\limits_{i=1}^{n}\delta_{i,\pi^{-1}\left( g\right)}v_{i}=v_{\pi^{-1}\left( g\right)}$.



Applying (1) to $\pi=\sigma$, we obtain



(2) $v_{\sigma}\left( \delta_{1,g},\delta_{2,g},\ldots,\delta_{n,g}\right) =v_{\sigma^{-1}\left( g\right)}$.

Applying (1) to $\pi=\tau$, we obtain $v_{\tau}\left( \delta_{1,g},\delta_{2,g},\ldots,\delta_{n,g}\right) =v_{\tau^{-1}\left( g\right)}$. Since $v_{\sigma}=v_{\tau}$, this rewrites as $v_{\sigma}\left( \delta_{1,g},\delta_{2,g},\ldots,\delta_{n,g}\right) =v_{\tau^{-1}\left( g\right)}$. Comparing this with (2), we obtain $v_{\sigma^{-1}\left( g\right)}=v_{\tau^{-1}\left( g\right)}$. Since $v_{1},v_{2},\ldots,v_{n}$ are distinct, this shows that $\sigma^{-1}\left( g\right) =\tau^{-1}\left( g\right)$.



Now, let us forget that we fixed $g$. We thus have shown that $\sigma^{-1}\left( g\right) =\tau^{-1}\left( g\right)$ for every $g\in\left\{ 1,2,\ldots,n\right\}$. In other words, $\sigma^{-1}=\tau^{-1}$. In other words, $\sigma=\tau$. This contradicts $\sigma\neq\tau$. This contradiction proves that our assumption (that $\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right) =\mathbf{k}^{n}$) was wrong. Hence, $\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right) \neq\mathbf{k}^{n}$. Since $\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right)$ is a $\mathbf{k}$-vector subspace of $\mathbf{k}^{n}$, this yields that $\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right)$ is a proper subspace of $\mathbf{k}^{n}$.




Now, let us forget that we fixed $\left( \sigma,\tau\right)$. Thus, for every $\left( \sigma,\tau\right) \in G$, the set $\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right)$ is a proper subspace of $\mathbf{k}^{n}$. Hence, $\bigcup_{\left( \sigma,\tau\right) \in G}\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right)$ is a union of finitely many proper subspaces of $\mathbf{k}^n$ (since $G$ is finite). Therefore, $\bigcup_{\left( \sigma,\tau\right) \in G}\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right)$ must be a proper subset of $\mathbf{k}^n$ (since any union of finitely many proper subspaces of $\mathbf{k}^n$ must be a proper subset of $\mathbf{k}^n$).

In other words, there exists some $a\in \mathbf{k}^n$ such that $a\notin\bigcup_{\left( \sigma,\tau\right) \in G}\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right)$. Consider this $a$.



We have $a\notin\bigcup_{\left( \sigma,\tau\right) \in G}\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right)$. In other words,



(3) $a\notin\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right) $ for
every $\left( \sigma,\tau\right) \in G$.




Now, let $\sigma$ and $\tau$ be two distinct elements of $S_{n}$. Thus, $\left( \sigma,\tau\right) \in S_{n}\times S_{n}$ and $\sigma\neq\tau$. In other words, $\left( \sigma,\tau\right) \in G$. Hence, (3) shows that $a\notin\operatorname*{Ker}\left( v_{\sigma}-v_{\tau}\right)$. In other words, $\left( v_{\sigma}-v_{\tau}\right) \left( a\right) \neq0$. Hence, $0\neq\left( v_{\sigma}-v_{\tau}\right) \left( a\right) =v_{\sigma}\left( a\right) -v_{\tau}\left( a\right)$, so that $v_{\sigma}\left( a\right) \neq v_{\tau}\left( a\right)$.



Let us forget that we fixed $\sigma$ and $\tau$. We thus have shown that $v_{\sigma}\left( a\right) \neq v_{\tau}\left( a\right)$ for any two distinct elements $\sigma$ and $\tau$ of $S_{n}$. In other words,



(4) the $n!$ elements $v_{\sigma}\left( a\right) $, for $\sigma$ ranging
over the symmetric group $S_{n}$, are pairwise distinct.



Now, let us write $a$ in the form $\left( a_{1},a_{2},\ldots,a_{n}\right) $.
Then, for every $\sigma\in S_{n}$, we have



$v_{\sigma}\left( a\right) =v_{\sigma}\left( a_{1},a_{2},\ldots,a_{n}\right) =\sum\limits_{i=1}^{n}a_{\sigma\left( i\right)}v_{i}=a_{\sigma\left( 1\right)}v_{1}+a_{\sigma\left( 2\right)}v_{2}+\cdots+a_{\sigma\left( n\right)}v_{n}$.

Hence, the statement (4) rewrites as follows: the $n!$ elements $a_{\sigma\left( 1\right)}v_{1}+a_{\sigma\left( 2\right)}v_{2}+\cdots+a_{\sigma\left( n\right)}v_{n}$, for $\sigma$ ranging over the symmetric group $S_{n}$, are pairwise distinct. This proves Theorem 1. $\blacksquare$
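A toy illustration of Theorem 1 in R (a sketch; the values $v_i$ and the coefficients $a_i$ below are arbitrary choices, not canonical ones):

v <- c(0.3, 1.7, 2.2)                      # three distinct v_i in V = R
a <- c(1, 10, 100)                         # one choice of coefficients
perms <- list(c(1,2,3), c(1,3,2), c(2,1,3),
              c(2,3,1), c(3,1,2), c(3,2,1))
sums <- sapply(perms, function(p) sum(a[p] * v))
length(unique(sums)) == length(perms)      # TRUE: all 3! = 6 sums are distinct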


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hôpital's rule? I know that when I use l'Hôpital I easily get $$ \lim_{h\rightarrow 0}...