Thursday 31 July 2014

sequences and series - How can I evaluate $\sum_{n=1}^{\infty} n^{3}x^{n-1}$

How do I evaluate:



$$\sum_{n=1}^{\infty} n^{3}x^{n-1}$$




The answer is supposed to be (according to Wolfram Alpha):



$$ \frac{x^2+4x+1}{(x-1)^4} $$



I have only learned to do this for simpler geometric sums.
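A standard route, assuming $|x|<1$ so that everything converges, is to differentiate the geometric series repeatedly, multiplying by $x$ between differentiations; a sketch:

$$\sum_{n=0}^{\infty} x^{n}=\frac{1}{1-x} \;\Longrightarrow\; \sum_{n=1}^{\infty} n x^{n-1}=\frac{1}{(1-x)^{2}} \;\Longrightarrow\; \sum_{n=1}^{\infty} n^{2} x^{n-1}=\frac{d}{dx}\frac{x}{(1-x)^{2}}=\frac{1+x}{(1-x)^{3}}.$$

One more multiplication by $x$ followed by differentiation gives $\sum_{n=1}^{\infty} n^{3}x^{n-1}=\dfrac{x^2+4x+1}{(1-x)^{4}}$, which agrees with the quoted answer since $(x-1)^4=(1-x)^4$.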

Axiom of choice in set theory




Just as the title states, what is the main point of the axiom of choice? I do not quite understand what is written in the axiom. The statement that I know is:




Given any collection of non-empty sets, there exists a choice function such that $$f:I \rightarrow \bigcup_i{S_i}\quad f(i)\in S_i \quad\text{for all }\; i\in I.$$



Answer



The point of the Axiom of Choice is that oftentimes a mathematician finds him- or herself at a point where infinitely many specific objects must be gathered up all at once in order to continue a proof/construction.



In the statement as given, we have a family of nonempty sets $\{ A_i : i \in I \}$, and we want to pick a unique representative from each $A_i$; this is our function $f : I \to \bigcup_{i \in I} A_i$ ($f$ "picks" $f(i)$ to be the unique representative from $A_i$). The Axiom of Choice says that this is unproblematic, and we can always do this.




Of course, there are certain specific instances where one can do this without appealing to the Axiom of Choice:




  • if you only have to make finitely many choices; or

  • if these choices can be made in a uniform manner (e.g., if I have infinitely many nonempty sets of natural numbers, I can choose the least one from each set).



More often there is no way to uniformly pick these representatives, and without appealing to some extra-logical hypothesis we cannot make the choices as required.




(There are many statements known to be equivalent to the Axiom of Choice. The most common one you see in mathematics outside of logic/set theory is Zorn's Lemma.)


polynomials - Roots of Equation

Let the roots of the equation
$$
x^3 + px^2 + qx + r = 0
$$
be in arithmetic progression. Show that
$$
p^2 \ge 3q.
$$




Attempt: Let the roots be $\alpha$, $\beta$, and $\gamma$. Then
$$
\sum\alpha=-p, \quad \sum\alpha\beta=q, \quad\text{and}\quad \alpha\beta \gamma=-r.
$$
Since roots are in $AP$, we have $2\beta=\alpha+\gamma$.
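For what it's worth, one way to finish from here (assuming the roots, and hence the common difference $d$, are real): write $\alpha=\beta-d$ and $\gamma=\beta+d$, and use the identity

$$p^{2}-3q=\Big(\sum\alpha\Big)^{2}-3\sum\alpha\beta=\tfrac{1}{2}\left[(\alpha-\beta)^{2}+(\beta-\gamma)^{2}+(\gamma-\alpha)^{2}\right]=\tfrac{1}{2}\left(d^{2}+d^{2}+4d^{2}\right)=3d^{2}\ge 0.$$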

calculus - When am I allowed to use ln(x) when integrating functions?



In Mathematics, we know the following is true:




$$\int \frac{1}{x} \space dx = \ln(x)$$



Not only that, this rule works for constants added to x:
$$\int \frac{1}{x + 1}\space dx = \ln(x + 1) + C$$
$$\int \frac{1}{x + 3}\space dx = \ln(x + 3) + C$$
$$\int \frac{1}{x - 37}\space dx = \ln(x - 37) + C$$
$$\int \frac{1}{x - 42}\space dx = \ln(x - 42) + C$$



So it's pretty safe to say that $$\int \frac{1}{x + a}\space dx = \ln(x + a) + C$$ But the moment I introduce $x^a$ where $a$ is not equal to 1, the model collapses. The integral of $1/x^a$ is not equal to $\ln(x^a)$. The same goes for $\cos(x)$, and $\sin(x)$, and other trig functions.




So when are we allowed or not allowed to use the rule of $\ln(x)$ when integrating functions?


Answer



Generally speaking, "using $\ln (x)$" as a rule or technique is unheard of. When one speaks of techniques, they usually include integration by substitution, integration by parts, trig substitutions, partial fractions, etc. With introductory calculus in mind, $\ln |x|$ is defined as $\int \frac{1}{x} \ dx.$ This can be extended to $\ln |u| = \int \frac{1}{u} \ du.$ Note that there are many more definitions for $\ln (x)$, but I felt this best related particularly to your examples.



For your first couple of examples, when choosing your $u$ to be the denominator, the $du$ is simply equal to $dx.$ This is what 'allows' the integrand to be evaluated to just $\ln |u|$ where $u$ is a linear expression.
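A short worked example of the same point (my own illustration, not from the original answer): when the derivative of the inside function is present up to a constant factor, the substitution still collapses to a logarithm,

$$\int \frac{2x}{x^{2}+1}\,dx=\int\frac{du}{u}=\ln|u|+C=\ln(x^{2}+1)+C,\qquad u=x^{2}+1,\ du=2x\,dx,$$

whereas $\int \frac{1}{x^{2}+1}\,dx$ has no such $du$ available and evaluates to $\arctan x + C$ instead.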



In regards to $\int \frac{1}{x^2 + a} \ dx$, this can be handled using an inverse tangent and would be evaluated to
$$\dfrac{\operatorname{arctan}(\frac{x}{\sqrt{a}})}{\sqrt{a}} + C$$




For integrals of the form
$$\int \frac{1}{x^n + a} \ dx$$



where $n \ge 3$, you will have to revert to partial fractions. For more on partial fractions, see this.


The Formula to this Sequence Series

What is the formula for following sequence?



$$\frac12 + \frac12 \cdot \frac34 + \frac12 \cdot \frac34 \cdot \frac56 + \cdots + \frac12 \cdot \frac34 \cdot \frac56 \cdots \frac{2n - 1}{2n}$$



This is a question from my Calculus class about sequences.

sequences and series - Find $\frac{1}{7}+\frac{1\cdot3}{7\cdot9}+\frac{1\cdot3\cdot5}{7\cdot9\cdot11}+\cdots$ up to 20 terms



Find $S=\frac{1}{7}+\frac{1\cdot3}{7\cdot9}+\frac{1\cdot3\cdot5}{7\cdot9\cdot11}+\cdots$ up to 20 terms



I first multiplied and divided $S$ with $1\cdot3\cdot5$
$$\frac{S}{15}=\frac{1}{1\cdot3\cdot5\cdot7}+\frac{1\cdot3}{1\cdot3\cdot5\cdot7\cdot9}+\frac{1\cdot3\cdot5}{1\cdot3\cdot5\cdot7\cdot9\cdot11}+\cdots$$
Using the expansion of $(2n)!$

$$1\cdot3\cdot5\cdots(2n-1)=\frac{(2n)!}{2^nn!}$$
$$S=15\left[\sum_{r=1}^{20}\frac{\frac{(2r)!}{2^rr!}}{\frac{(2(r+3))!}{2^{r+3}(r+3)!}}\right]$$
$$S=15\cdot8\cdot\left[\sum_{r=1}^{20}\frac{(2r)!}{r!}\cdot\frac{(r+3)!}{(2r+6)!}\right]$$
$$S=15\sum_{r=1}^{20}\frac{1}{(2r+5)(2r+3)(2r+1)}$$



How can I solve the above expression? Or is there a simpler/faster method?


Answer



Hint:



$\frac{1}{(2r+5)(2r+3)(2r+1)}=\frac{1}{4}\left(\frac{1}{(2r+3)(2r+1)}-\frac{1}{(2r+5)(2r+3)}\right)$
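A sketch of how the hint finishes the problem (my own computation): the sum telescopes, so

$$\sum_{r=1}^{20}\frac{1}{(2r+1)(2r+3)(2r+5)}=\frac14\left(\frac{1}{3\cdot5}-\frac{1}{43\cdot45}\right),\qquad S=\frac{15}{4}\left(\frac{1}{15}-\frac{1}{1935}\right)=\frac{32}{129}.$$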



Wednesday 30 July 2014

real analysis - Infinite series and the Riemann zeta function



I have two questions concerning infinite series in the context of the Riemann zeta function.




  1. Given the properties of infinite series, why can't we regroup the terms in $\zeta(0)$ in such a way as to give $\zeta(-1)$? i.e.




$$\zeta(0)=\sum_{n=1}^\infty \frac{1}{n^0}=\sum_{n=1}^\infty 1=1+1+1+\ldots=(1)+(1+1)+(1+1+1)+\ldots=1+2+3+\ldots=\frac{1}{1^{-1}}+\frac{1}{2^{-1}}+\frac{1}{3^{-1}}+\ldots=\sum_{n=1}^\infty n=\sum_{n=1}^\infty \frac{1}{n^{-1}}=\zeta(-1)$$




  2. This one might be a lot simpler to answer: why can we assign a value to $\zeta(-1)=\sum_{n=1}^\infty \frac{1}{n^{-1}}$ when the infinite series on the RHS is clearly divergent, i.e. its $n^{th}$ term is always bigger than its $(n-1)^{th}$ term?


Answer



In short: in a non-absolutely convergent series you can't do things like reorder and group terms because you may get a different answer. In fact, you can reorder the terms in the sum $1/1-1/2+1/3-1/4+...$ (which in this case does converge, but not absolutely) to give you any real result you like!
http://en.wikipedia.org/wiki/Absolute_convergence




$\zeta(-1)$ is something quite different. For numbers with $Re(s) \leq 1$ we don't define $\zeta(s)=\sum_{n=1}^{\infty}n^{-s}$, but instead define it as the function which "smoothly" extends this sum, which is well defined on $Re(s) > 1$, to the numbers with $Re(s) \leq 1$. It is, as was mentioned, a meromorphic continuation.


complex analysis - Roots of a polynomial and its derivative

All roots of a complex polynomial have positive imaginary part. Prove that all roots of its derivative also have positive imaginary part.




It's not homework. This problem was proposed in the materials to prepare for exams.

combinatorics - Proof related to maximum degree of node in a graph




So I'm given this problem: prove that in every graph with 25 vertices, in which every 3-subset of vertices contains at least two connected vertices, there exists a node of degree at least 12. I tried proving this by contradiction, by counting the least number of edges such a graph has to have, given the above condition. I didn't think much, and I said: well, it has to have at least as many edges as there are 3-subsets of the 25 nodes. However, I don't think this is true anymore; at least I can't figure out how to prove it, as some edges may repeat across these subsets. Then I could show that if a 3-subset has only one edge, no other 3-subset will have this edge as its only edge. Is this sufficient to prove what I thought before? I'm pretty confused right now, so any help would be appreciated.


Answer



Hint: Fix two vertices and consider the triples that include them.




Call the vertices $v_1$ and $v_2$, and note that we may choose them to be non-adjacent: if every pair of vertices were adjacent, every vertex would have degree $24$ and we would be done.

Let $u$ be a different vertex. The $3$-subset $\{u, v_1, v_2\}$ must contain an edge, and $v_1v_2$ is not one, so at least one of the edges $uv_1$, $uv_2$ is present. So each of the $23$ vertices not equal to $v_1$ or $v_2$ contributes at least $1$ to $\deg(v_1)+\deg(v_2)$, giving us $\deg(v_1)+\deg(v_2) \geq 23$, and hence $\max(\deg v_1,\deg v_2)\ge\lceil 23/2\rceil=12$.





Hint for an alternative approach: Pick some vertex $v$ of degree $\lt 12$ and consider the subgraph given by the vertices not adjacent to $v$.


Tuesday 29 July 2014

real analysis - How can I prove that the sequence $\sqrt{2} , \sqrt{2\sqrt{2}}, \sqrt{2\sqrt{2\sqrt{2}}}$ converges to 2?




Prove that the sequence $\sqrt{2} , \sqrt{2\sqrt{2}}, \sqrt{2\sqrt{2\sqrt{2}}} \ $ converges to $2$.




My attempt




I proved that the sequence is increasing and bounded by $2$, can anyone help me show that the sequence converges to $2$?
Thanks for your help.


Answer



Another proof:



Notice that:
$a_1 = 2^{1/2},\ a_2 = 2^{3/4},\ a_3 = 2^{7/8}$, and so on, thus,
$$a_n = 2^{(2^n-1)/2^n} = 2^{1-1/2^n}$$



Taking limit as $n$ tends to infinity, we have that

$$\lim_{n \rightarrow \infty} a_n = 2^1 = 2$$
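To complete the asker's own approach (increasing and bounded above by $2$) instead: the sequence satisfies $a_{n+1}=\sqrt{2a_n}$, so the monotone bounded sequence converges, and its limit $L$ must satisfy

$$L=\sqrt{2L}\;\Longrightarrow\;L^{2}=2L\;\Longrightarrow\;L=2,$$

where $L=0$ is excluded because $L\ge a_1=\sqrt2>0$.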


real analysis - $0^0$ -- indeterminate, or $1$?





One of my teachers argued today that $0^0 = 1$. However, WolframAlpha, intuition(?) and various other sources say otherwise... $0^0$ doesn't really "mean" anything...



Can anyone clear this up with a rigorous explanation?


Answer



Short answer: It depends on your convention and how you define exponents.



Long answer: There are a number of ways of defining exponents. Usually these definitions coincide, but this is not so for $0^0$: some definitions yield $0^0=1$ and some don't apply when both numbers are zero (leaving $0^0$ undefined).



For example, given nonnegative whole numbers $m$ and $n$, we can define $m^n$ to be the number of functions $A \to B$, where $A$ is a set of size $n$ and $B$ is a set of size $m$. This definition gives $0^0=1$ because the only set of size $0$ is the empty set $\varnothing$, and the only function $\varnothing \to \varnothing$ is the empty function.




However, an analyst might not want $0^0$ to be defined. Why? Because look at the limits of the following functions:
$$\lim_{x \to 0^+} 0^x = 0, \qquad \lim_{x \to 0} x^0 = 1, \qquad \lim_{t \to 0^+} (e^{-1/t^2})^{-t} = \infty$$
All three limits look like $0^0$, yet they take different values. So when continuity is desired, you might want to leave $0^0$ undefined, so that it's a lack of definition rather than a discontinuity.



Typically this is resolved by:




  • If you're in a discrete setting, e.g. considering sets, graphs, integers, and so on, then you should take $0^0=1$.

  • If you're in a continuous setting, e.g. considering functions on the real line or complex plane, then you should take $0^0$ to be undefined.




Sometimes these situations overlap. For example, usually when you define functions by infinite series
$$f(x) = \sum_{n=0}^{\infty} a_nx^n$$
problems occur when you want to know the value of $f(0)$. It is normal in these cases to take $0^0=1$, so that $f(0)=a_0$; the reason being that we're considering what happens as $x \to 0$, and this corresponds with $\lim_{x \to 0} x^0 = 1$.


abstract algebra - How many irreducible polynomials of degree $n$ exist over $\mathbb{F}_p$?



I know that for every $n\in\mathbb{N}$, $n\ge 1$, there exists $p(x)\in\mathbb{F}_p[x]$ s.t. $\deg p(x)=n$ and $p(x)$ is irreducible over $\mathbb{F}_p$.





I am interested in counting how many such $p(x)$ there exist (that is, given $n\in\mathbb{N}$, $n\ge 1$, how many irreducible polynomials of degree $n$ exist over $\mathbb{F}_p$).




I don't have a counting strategy and I don't expect a closed formula, but maybe we can find something like "there exist $X$ irreducible polynomials of degree $n$ where $X$ is the number of...".



What are your thoughts ?


Answer



Theorem: Let $\mu(n)$ denote the Möbius function. The number of monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$ is the necklace polynomial
$$M_n(q) = \frac{1}{n} \sum_{d | n} \mu(d) q^{n/d}.$$




(To get the number of irreducible polynomials just multiply by $q - 1$.)



Proof. Let $M_n(q)$ denote the number in question. Recall that $x^{q^n} - x$ is the product of all the monic irreducible polynomials of degree dividing $n$. By counting degrees, it follows that
$$q^n = \sum_{d | n} d M_d(q)$$



(since each polynomial of degree $d$ contributes $d$ to the total degree). By Möbius inversion, the result follows.
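For concreteness, here is a small Python sketch (not part of the original answer; the function names are mine) that evaluates the necklace polynomial $M_n(q)$ straight from the formula. The printed values can be checked by hand against the monic irreducibles over $\mathbb{F}_2$.

def mobius(n):
    # Moebius function by trial division: 0 if n has a squared prime factor,
    # otherwise (-1)^(number of distinct prime factors).
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def monic_irreducible_count(n, q):
    # Necklace polynomial M_n(q) = (1/n) * sum over d | n of mu(d) * q^(n/d).
    total = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return total // n

# Sanity checks over F_2: the only monic irreducible quadratic is x^2+x+1,
# and the monic irreducible cubics are x^3+x+1 and x^3+x^2+1.
print(monic_irreducible_count(2, 2))  # 1
print(monic_irreducible_count(3, 2))  # 2
print(monic_irreducible_count(4, 2))  # 3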



As it turns out, $M_n(q)$ has a combinatorial interpretation for all values of $q$: it counts the number of aperiodic necklaces of length $n$ on $q$ letters, where a necklace is a word considered up to cyclic permutation and an aperiodic necklace of length $n$ is a word which is not invariant under a cyclic permutation by $d$ for any $d < n$. More precisely, the cyclic group $\mathbb{Z}/n\mathbb{Z}$ acts by cyclic permutation on the set of functions $[n] \to [q]$, and $M_n(q)$ counts the number of orbits of size $n$ of this group action. This result also follows from Möbius inversion.




One might therefore ask for an explicit bijection between aperiodic necklaces of length $n$ on $q$ letters and monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$ when $q$ is a prime power, or at least I did a few years ago and it turns out to be quite elegant.



Let me also mention that the above closed form immediately leads to the "function field prime number theorem." Let the absolute value of a polynomial of degree $d$ over $\mathbb{F}_q$ be $q^d$. (You can think of this as the size of the quotient $\mathbb{F}_q[x]/f(x)$, so in that sense it is analogous to the norm of an element of the ring of integers of a number field.) Then the above formula shows that the number of monic irreducible polynomials $\pi(n)$ of absolute value less than or equal to $n$ satisfies
$$\pi(n) \sim \frac{n}{\log_q n}.$$


statistics - Probability that P people will have N distinct birthdays



This question is rather difficult to describe clearly, so I will begin with an example. Suppose I have 365 people in a room. The odds are very low that all these people have different birthdays. In fact, ignoring leap years and assuming that every birthday is equally likely, I estimated (using a program) that the most likely scenario is 231 distinct birthdays (with a probability of ~6.69%).



More abstractly, out of P people, what are the odds that there are exactly N distinct birthdays amongst them (ignoring leap years and assuming that every birthday is equally likely)? In my above example I used P=365 and I estimated the answer for N=231. I am curious if there is a general solution to the problem which could give an exact answer for all N and P.




For those who are interested, the program I created tries random combinations of birthdays and counts the number of unused birthdays. With it, I created a table of estimated probabilities for all N for P=365. Here is the most interesting part of the table:



242 - 1.1837375%
241 - 1.5966975%
240 - 2.0933325%
239 - 2.6695062%
238 - 3.3059700%
237 - 3.9824950%
236 - 4.6610425%

235 - 5.2969675%
234 - 5.8606362%
233 - 6.2995763%
232 - 6.5841737%
231 - 6.6932175%
230 - 6.6143263%
229 - 6.3563137%
228 - 5.9308837%
227 - 5.3896263%
226 - 4.7588988%

225 - 4.0879863%
224 - 3.4110800%
223 - 2.7697687%
222 - 2.1889950%
221 - 1.6793675%
220 - 1.2546263%

Answer



There are $\binom{365}{N}$ ways to select the $N$ birthdays. Then there are ${P \brace N}$ ways to partition the $P$ people into $N$ non-empty groups, and $N!$ ways to match these groups with the selected birthdays, so that each of the selected birthdays is indeed the birthday of at least one person.




All in all there are $\binom{365}{N}{P \brace N}N!$ outcomes that give exactly $N$ distinct birthdays.



Hence the probability is:



$$\dfrac{\binom{365}{N}{P \brace N}N!}{365^P}$$






Here $\binom{n}{k}$ is the binomial coefficient and $n\brace k$ is the Stirling number of the second kind.
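As a sanity check (my own sketch, not part of the original answer; the function names are mine), the formula can be evaluated exactly in Python with big integers. For $P=365$, $N=231$ the result should land close to the asker's simulated $\approx 6.69\%$.

from math import comb, factorial

def stirling2(p, n):
    # Stirling numbers of the second kind via S(p, n) = n*S(p-1, n) + S(p-1, n-1).
    table = [[0] * (n + 1) for _ in range(p + 1)]
    table[0][0] = 1
    for i in range(1, p + 1):
        for j in range(1, min(i, n) + 1):
            table[i][j] = j * table[i - 1][j] + table[i - 1][j - 1]
    return table[p][n]

def prob_exact_distinct(P, N, days=365):
    # C(days, N) * S(P, N) * N! / days^P, computed with exact integers before dividing.
    return comb(days, N) * stirling2(P, N) * factorial(N) / days ** P

print(prob_exact_distinct(365, 231))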


Monday 28 July 2014

Partial sum formula of a polynomial series?



I am trying to find the partial sum formula of the following series:




$$
\sum_{y=1}^{\infty} \frac{4y^2-12y+9}{(y+3)(y+2)(y+1)y}
$$



I have tried using Faulhaber's formula without success. I have also tried rewriting the summand using partial fraction decomposition to obtain 4 terms. This didn't solve the issue either.



When I use WolframAlpha (or other computational software) to evaluate the sum, it comes out to $1/2$. Is there some way to derive a partial sum formula for this series?



This is the partial sum formula according to WolframAlpha:

$$
\frac{n^3-2n^2+3n}{2(n+1)(n+2)(n+3)}
$$



Thank you in advance!
J


Answer



Decompose the fraction into partial fractions:



$$\frac{4y^2-12y+9}{(y+3)(y+2)(y+1)y}=\frac{a}{y}+\frac{b}{y+1}+\frac{c}{y+2}+\frac{d}{y+3}$$

and since the series is convergent, we must have $a+b+c+d=0$. Now the partial sum is



$$\sum_{y=1}^n \left(\frac{a}{y}+\frac{b}{y+1}+\frac{c}{y+2}+\frac{d}{y+3}\right)=a\sum_{y=1}^n \frac{1}{y}+b\sum_{y=2}^{n+1} \frac{1}{y}+c\sum_{y=3}^{n+2} \frac{1}{y}+d\sum_{y=4}^{n+3} \frac{1}{y}$$
and the simplification is clear. Can you take it from here?
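For reference (my own computation, easy to re-derive by the cover-up method), the coefficients here are

$$a=\frac{3}{2},\qquad b=-\frac{25}{2},\qquad c=\frac{49}{2},\qquad d=-\frac{27}{2},$$

and indeed $a+b+c+d=0$, so the four harmonic tails cancel and only finitely many terms survive in the partial sum.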


functional equations - Examples of functions where $f(ab)=f(a)+f(b)$



What are some examples of continuous (on a certain interval) real or complex functions where $f(ab)=f(a)+f(b)$ (like $\ln x$?)


Answer



Define $g(x)=f(e^x)$. Then
$$g(x+y)=f(e^{x+y})=f(e^xe^y)=f(e^x)+f(e^y)=g(x)+g(y).$$



If $f$ is continuous, so is $g$, and it's a well-known exercise to show that $g$ must then be of the form $g(x)=cx$ for some constant $c$ (see Cauchy's functional equation).




Thus, $\ln(x)$ and constant multiples of it are the only examples of the kind you seek.


Fractions in Modular Arithmetic




Suppose we have $$\frac{2}{3} \equiv x \bmod 5$$



I understand that the first step that needs to be made is: $$2 \equiv 3x \bmod 5$$



But from there I'm having a hard time understanding the logic of how to solve for $x$. Obviously, with simple numbers like this example, the answer is $4$, but how can I abstract the process to solve for $x$ when the numbers become very large?


Answer



Modular arithmetic generally deals with integers, not fractions. Instead of division, you multiply by the inverse. For instance, you would not have $\frac2 3\equiv x\mod 5$, you would have $2*3^{-1}\equiv x\mod 5$. In this case, $3^{-1}\equiv 2\mod 5$, so you would have $2*2\equiv 4\mod5$. The inverse of a number $a$ in modular arithmetic is the number $a^{-1}$ such that $a*a^{-1}\equiv 1\mod n$.
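To address the "very large numbers" part of the question: the inverse is computed with the extended Euclidean algorithm, which is fast even for huge moduli. A minimal Python sketch (my own illustration; the helper name is mine):

def mod_inverse(a, m):
    # Extended Euclidean algorithm; the invariant is old_r = a*old_s + m*(something),
    # so when the remainder old_r reaches gcd(a, m) = 1, old_s is the inverse of a mod m.
    old_r, r = a, m
    old_s, s = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("inverse does not exist: gcd(a, m) != 1")
    return old_s % m

# The example from the answer: 2/3 mod 5 means 2 * 3^{-1} mod 5 = 2 * 2 mod 5 = 4.
print((2 * mod_inverse(3, 5)) % 5)  # 4
# Python 3.8+ can also do this directly: pow(3, -1, 5) == 2.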


How to prove the linearity property of summations with induction

Recently I have faced with this question:




$ {\sum_{k=1}^{n} (ca_k+ b_k) = c \sum_{k=1}^{n} a_k + \sum_{k=1}^{n} b_k }$



Prove the linearity property of summations for all $n \ge 0$ by using mathematical induction on $n$.



I know that proving by induction basically means checking $P(1)$ and then showing that $P(m)$ implies $P(m+1)$; however, in my previous examples the right-hand side was always a single closed-form expression in $n$. This one is not, so I don't know how to proceed. Could someone explain and solve it, please? I progressed this far and don't know what else to do:



$ {c \sum_{k=1} ^{m} a_k + \sum_{k=1} ^{m} b_k + ca_{m+1}+ b_{m+1}} $
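One way to close the induction step from here: split off the last term, apply the induction hypothesis $P(m)$, and then absorb $ca_{m+1}$ and $b_{m+1}$ back into their respective sums, which is exactly the expression above regrouped:

$$\sum_{k=1}^{m+1} (ca_k+ b_k) = \sum_{k=1}^{m} (ca_k+ b_k) + ca_{m+1}+b_{m+1} = c \sum_{k=1}^{m} a_k + \sum_{k=1}^{m} b_k + ca_{m+1}+ b_{m+1} = c \sum_{k=1}^{m+1} a_k + \sum_{k=1}^{m+1} b_k.$$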

real analysis - The convergence of $\sqrt {2+\sqrt {2+\sqrt {2+\ldots}}}$

I would like to know if this sequence converges $\sqrt {2+\sqrt {2+\sqrt {2+\ldots}}}$.



I know this sequence is monotonically increasing, but I couldn't prove it's bounded.



Thanks
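One standard way to get the missing bound (a sketch, using the recursion $a_{n+1}=\sqrt{2+a_n}$ with $a_1=\sqrt2$): show $a_n<2$ by induction, since

$$a_1=\sqrt2<2,\qquad a_n<2\;\Longrightarrow\;a_{n+1}=\sqrt{2+a_n}<\sqrt{4}=2.$$

A monotone bounded sequence converges, and the limit $L$ then satisfies $L=\sqrt{2+L}$, i.e. $L^2-L-2=(L-2)(L+1)=0$, so $L=2$.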

soft question - What does a "convention" mean in mathematics?




We all know that $0!=1$, the degree of the zero polynomial equals $-\infty$, the interval $[a,a)=(a,a]=(a,a)=\emptyset$, and so on, are conventions in mathematics. So is a convention something that we can't prove with mathematical logic, or is it just intuition, or something that mathematicians agree about?
Are they the same as axioms? What does "convention" mean in mathematics?
And is $i^2 = -1$ a convention? If not how can we prove existence of such number?


Answer



To answer the question in the title, I would say: 'convention' in mathematics means exactly the same as in ordinary English.



As for your examples: $0!:=1$ and $[a,a):=\emptyset$ are definitions. It is a convention not to use a different definition, or to leave it undefined. Of course in this sense, every definition is a convention.




I think that informally, one says a certain definition (such as the two above) is '(just) convention', to mean that they are 'extreme' or 'degenerate' cases, and leaving them undefined would still make the theory go through, but it is more convenient to define them anyway (for example to prevent having to exclude this extreme case in statements of theorems). For example, I think you could get by not defining $[a,a)$, or $[a,b]$ for $b<a$, but then you would constantly have to exclude such degenerate cases in statements of theorems, which could be tiresome.


functional equations - Math question functions help me?

I have to find $f(x,y)$ that satisfies
\begin{align}
f(x+y,x-y) &= xy + y^2 \\
f(x+y, \frac{y}x ) &= x^2 - y^2
\end{align}




So I first thought about substituting $x+y=X$ and $x-y=Y$ in the first one, but then what?
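For the first equation alone, that substitution does finish the job (my own working): since $xy+y^2=y(x+y)$ and $y=\frac{X-Y}{2}$,

$$f(X,Y)=\frac{X-Y}{2}\cdot X=\frac{X(X-Y)}{2},\qquad\text{i.e.}\qquad f(x,y)=\frac{x(x-y)}{2}.$$

The second equation would then have to be checked separately, using its own substitution $X=x+y$, $Y=y/x$.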

sequences and series - If $a_1 + 2a_2, a_2 + 2a_3, a_3 + 2a_4, \dots$ converges, prove $a_1, a_2, \dots$ converges


Let $a_1, a_2, \dots$ be a sequence of real numbers such that the
sequence $a_1 + 2a_2, a_2 + 2a_3, a_3 + 2a_4 \dots$ converges. Prove
that the sequence $a_1, a_2, \dots$ must also converge.





I tried rewriting the $a$ sequence in terms of the first convergent sequence. Letting $$b_1 = a_1 + 2a_2, \ b_2 = a_2 + 2a_3, \ b_3 = a_3 + 2a_4, \dots$$ we have $$a_2 = \frac{b_1 - a_1}{2},$$
$$a_3 = \frac{b_2 - a_2}{2} = \frac{2b_2 - b_1 + a_1}{4}, $$
$$a_4 = \frac{b_3 - a_3}{2} = \frac{4b_3 - 2b_2 + b_1 - a_1}{8}, \\ \dots$$



I have no experience in analysis so I tried using only the definition of convergence and $\lim_{n \to \infty}a_n$. My definition of convergence is for any $\epsilon > 0$ there exists $N$ such that $|b_n - c| < \epsilon$ for $n \ge N$. However even though I can get a magnitude bound on the alternating sum of $b_1, b_2, \dots$, I'm still unsure of how to deal with the finitely many $b_1, \dots, b_{N-1}$ that aren't within $\epsilon$ of $c$. I think Cauchy sequences might help but I have no experience with them either.

Sunday 27 July 2014

trigonometry - What is the exact value of sin 1 + sin 3 + sin 5 ... sin 177 + sin 179 (in degrees)?

My attempt:



First I converted this expression to sum notation:



$\sin(1^\circ) + \sin(3^\circ) + \sin(5^\circ) + ... + \sin(175^\circ) + \sin(177^\circ) + \sin(179^\circ)$ = $\sum_{n=1}^{90}\sin(2n-1)^\circ$




Next, I attempted to use Euler's formula for the sum, since I needed this huge expression to be simplified in exponential form:



$\sum_{n=1}^{90}\sin(2n-1)^\circ$ = $\operatorname{Im}(\sum_{n=1}^{90}cis(2n-1)^\circ)$



$\operatorname{Im}(\sum_{n=1}^{90}cis(2n-1)^\circ)$ = $\operatorname{Im}(\sum_{n=1}^{90}e^{i(2n-1)^\circ})$



$\operatorname{Im}(\sum_{n=1}^{90}e^{i(2n-1)^\circ})$ = $\operatorname{Im}(e^{i} + e^{3i} + e^{5i} + ... + e^{175i} + e^{177i} + e^{179i})$



Next, I used the sum of the finite geometric series formula on this expression:




$\operatorname{Im}(e^{i} + e^{3i} + e^{5i} + ... + e^{175i} + e^{177i} + e^{179i})$ = $\operatorname{Im}(\dfrac{e^i(1-e^{180i})}{1-e^{2i}})$



$\operatorname{Im}(\dfrac{e^i(1-e^{180i})}{1-e^{2i}})$ = $\operatorname{Im}(\dfrac{2e^i}{1-e^{2i}})$



Now I'm stuck here.
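One way to finish from the last expression (my own completion; here $\theta=1^\circ$ and the exponentials are the same degree-valued ones used above):

$$\frac{2e^{i\theta}}{1-e^{2i\theta}}=\frac{2}{e^{-i\theta}-e^{i\theta}}=\frac{2}{-2i\sin\theta}=\frac{i}{\sin\theta},$$

so the imaginary part, and hence the original sum, equals $\dfrac{1}{\sin 1^\circ}$.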

integration - function which is Riemann integrable

Consider $f:[-1,1]\to\mathbb{R}$, $x\mapsto \begin{cases}
1, & \text{if } x=0 \\
0, & \text{else}
\end{cases}$. I want to know why $f$ is Riemann integrable, and I tried something. First of all we had the definition in lecture: $\int_{-1}^{1^*} f(x)\,dx=\inf\{\int_{-1}^1 \phi(x)\,dx;\ \phi:[-1,1]\to\mathbb{R} \;\text{step function},\ f\le\phi \}$ and $\int_{-1^*}^{1} f(x)\,dx=\sup\{\int_{-1}^1 \phi(x)\,dx;\ \phi:[-1,1]\to\mathbb{R} \;\text{step function},\ \phi\le f \}$. Now the bounded function $f$ is Riemann integrable if $\int_{-1}^{1^*} f(x)\,dx=\int_{-1^*}^{1} f(x)\,dx$.



My first try: consider the step function $\phi:[-1,1]\setminus\{0\}\to\mathbb{R},\; x\mapsto 0$. It is $f=\phi$ everywhere on $[-1,1]\setminus \{0\}$. Therefore it is
$\int_{-1}^{1^*} f(x)\,dx=inf\{\int_{-1}^1 \phi(x)\,dx;\phi:[-1,1]\to\mathbb{R} \;\text{step function}, f\le\phi \}=\int_{-1}^{1} \phi(x)\,dx=0$ and $\int_{-1^*}^{1} f(x)\,dx=\int_{-1}^{1} \phi(x)\,dx=0$. Therefore f is Riemann-integrable and $\int_{-1}^{1} f(x)\,dx=0$. Could you help me, to correct my solution?



I'm not sure if it is ok, because my step function isn't defined in $x=0$. But we say that $\phi:[-1,1]\to\mathbb{R}$ is a step function if there is a partition of the interval $[-1,1]$, $x_0=-1<x_1<\dots<x_n=1$, such that $\phi$ is constant on each open subinterval $(x_{i-1},x_i)$. Maybe I have to define $\phi$ in $x=0$ too, and take $x=0$ as one of the $x_i$'s.



I will try next to do this with Riemann Sums. Regards

calculus - Is there a series where the terms tend to zero faster than the harmonic series but it still diverges?



I know that for the harmonic series

$\lim_{n \to \infty} \frac1n = 0$ and
$\sum_{n=1}^{\infty} \frac1n = \infty$.



I was just wondering, is there a sequence ($a_n =\dots$) that converges "faster" (I am not entirely sure what's the exact definition here, but I think you know what I mean...) than $\frac1n$ to $0$ and its series $\sum_{n=1}^{\infty}{a_n}= \infty$?



If not, is there proof of that?


Answer



There is no slowest divergent series. Let me take this to mean that given any sequence $a_n$ of positive numbers converging to zero whose series diverges, there is a sequence $b_n$ that converges to zero faster and the series also diverges, where "faster" means that $\lim b_n/a_n=0$. In fact, given any sequences of positive numbers $(a_{1,n}), (a_{2,n}),\dots$ with each $\sum_n a_{i,n}=\infty$ and $\lim_n a_{i+1,n}/a_{i,n}=0$, there is $(a_n)$ with $\sum a_n=\infty$ and $\lim_n a_n/a_{i,n}=0$ for all $i$.



To see this, given $a_1,a_2,\dots$, first define $b_1=a_1,b_2=a_2,\dots,b_k=a_k$ until $a_1+\dots+a_k>1$. Second, let $b_{k+1}=a_{k+1}/2,b_{k+2}=a_{k+2}/2,\dots,b_n=a_n/2$ until $a_{k+1}+\dots+a_n>2$, etc. That is, we proceed recursively; if we have defined $b_1,\dots,b_m$ and $b_m=a_m/2^r$, and $b_1+\dots+b_m>r+1$, let $b_{m+1}=a_{m+1}/2^{r+1},\dots,b_l=a_l/2^{r+1}$ until $a_{m+1}+\dots+a_l>2^{r+1}$. The outcome is that $\sum b_i=\infty$ and $\lim b_i/a_i=0$.




Similarly, given $(a_{1,n}),(a_{2,n}),\dots$, with each $(a_{k+1,n})$ converging to zero faster than $(a_{k,n})$, and all of them diverging, let $a_i=a_{1,i}$ for $i\le n_1$, where $a_{1,1}+\dots+a_{1,n_1}>1$; then $a_i=a_{2,i}$ for $n_1<i\le n_2$, where $n_2$ is chosen so that $a_{2,n_1+1}+\dots+a_{2,n_2}>1$ and so that for any $k>n_2/2$ we have $a_{2,k}/a_{1,k}<1/2$, etc. That is, if we have defined $n_k$, we let $a_i=a_{k+1,i}$ for $n_k<i\le n_{k+1}$, where $n_{k+1}$ is chosen so that $a_{k+1,n_k+1}+\dots+a_{k+1,n_{k+1}}>1$ and so that for all $l>n_{k+1}/2$ we have $a_{k+1,l}/a_{i,l}<1/2^{k+1}$ for all $i\le k$.

We can modify the above slightly so that given any sequences $(a_{i,n})$ with $\sum_n a_{i,n}<\infty$ and $\lim_n a_{i+1,n}/a_{i,n}=\infty$ for all $i$, we can find $(a_n)$ with $\sum_n a_n<\infty$ and $\lim_n a_n/a_{i,n}=\infty$, so there is no fastest convergent series, and not even considering a sequence of faster and faster convergent series is enough. (In modern terms, there is no $(0,\omega)$-gap.)



What we cannot do in general is, given $(a_{i,n})$, with all $\sum_n a_{i,n}=\infty$, find $(a_n)$ with $\sum a_n=\infty$ and $a_n/a_{i,n}\to0$ for all $i$, if the $a_{i,n}$ are not ordered so that $a_{i+1,n}$ converges to zero faster than $a_{i,n}$. For example, we can have $a_n=1/n$ if $n$ is odd and $a_n=1/n^2$ if $n$ is even, and $b_n=1/n^2$ if $n$ is odd and $b_n=1/n$ if $n$ is even, and if $c_n$ converges to zero faster than both, then $\sum c_n$ converges. (These exceptions can typically be fixed by asking monotonicity of the sequences, which is how these results are usually presented in the literature.)



Note that the argument I gave is completely general, no matter what the series involved. For specific series, of course, nice "formulas" are possible. For example, given $a_n=1/n$, we can let $b_n=1/(n\log n)$ for $n>1$. Or $c_n=1/(n\log n\log\log n)$, for $n\ge 3$. Or ... And we can then "diagonalize" against all these sequences as indicated above.



By the way, the first person to study seriously the boundary between convergence and divergence is Paul du Bois-Reymond. He proved a version of the result I just showed above, that no "decreasing" sequence of divergent series "exhausts" the divergent series in that we can always find one diverging and with terms going to zero faster than the terms of any of them. A nice account of some of his work can be found in the book Orders of Infinity by Hardy. Du Bois-Reymond's work was extended by Hadamard and others. What Hadamard proved is that given $(a_i)$ and $(b_i)$ with $\sum a_i=\infty$, $\sum b_i<\infty$, and $b_i/a_i\to 0$, we can find $(c_i),(d_i)$ with $c_i/a_i\to0$, $b_i/d_i\to 0$, $d_i/c_i\to0$, $\sum c_i=\infty$, $\sum d_i<\infty$. More generally:





If we have two sequences of series, $(a_{1,n}), (a_{2,n}),\dots$ and $(b_{1,n}),(b_{2,n}),\dots$, such that




  • Each $(a_{i+1,n})$ converges to zero faster than the previous one,

  • Each $(b_{i+1,n})$ converges to zero slower than the previous one,

  • Each $(a_{i,n})$ converges to zero slower than all the $(b_{j,n})$,

  • Each $\sum_n a_{i,n}$ diverges, and

  • Each $\sum_n b_{i,n}$ converges,




then we can find sequences $(c_n),(d_n)$, "in between", with one series converging and the other diverging.




In modern language, we say that there are no $(\omega,\omega)$-gaps, and similarly, there are no $(\omega,1)$- or $(1,\omega)$-gaps. This line of research led to some of Hausdorff's deep results in set theory, such as the existence of so-called $(\omega_1,\omega_1)$- or Hausdorff gaps. What Hausdorff proved is that this "interpolation" process, which can be iterated countably many times, cannot in general be carried out $\omega_1$ times, where $\omega_1$ is the first uncountable ordinal.


calculus - continuity at zero implies continuity everywhere


Suppose $f(x)$ is defined on the entire line and continuous at the
origin with the property that $f(\alpha+\beta) = f(\alpha) + f(\beta)$
where $\alpha,\beta \in \mathbb{R}$. Prove that $f(x)$ must be
continuous at every point $x=a$.





Try:



First notice that $f(0) = f(0+0)=f(0)+f(0)=2f(0) \implies f(0)=0$. Since $f$ continuous at zero, then $\lim_{x \to 0} f(x) = f(0) = 0$. Next, let $a $ be arbitary real number. Then,



$$ \lim_{ x \to a} f(x) = \lim_{x \to a} \bigl[f(x-a)+f(a)\bigr] \overset{h = x-a}{=} \lim_{h \to 0} f(h) + f(a) = 0 + f(a)=f(a)$$



So f is continuous everywhere.




Is this a correct solution?

Saturday 26 July 2014

integration - How can I prove that $\int \text{sech}(x) ~ \mathrm{d}{x} = {\sin^{-1}}(\tanh(x)) + c$?



How can I prove that
$$
\int \text{sech}(x) ~ \mathrm{d}{x} = {\sin^{-1}}(\tanh(x)) + c?
$$
I don’t know how to prove this identity. Any help?




I tried to multiply by $ \dfrac{\cosh(x)}{\cosh(x)} $, and everything is okay, but at last I didn’t get the same answer.


Answer



Since you know the result, why not just differentiate it:
$$
\begin{align}
(\arcsin (\tanh x))'&=(\tanh x)'\times \frac{1}{\sqrt{1-(\tanh x)^2}}\\\\
&=({\rm sech}\: x)^2 \times \frac{1}{\sqrt{({\rm sech}\: x)^2 }}\\\\
&={\rm sech}\: x.
\end{align}
$$





I don’t know how to prove this identity. Any help?




This proves that
$$
\int{\rm sech}\: x\:dx = \arcsin (\tanh x)
$$ up to a constant.







Edit: you might want to write
$$
\begin{align}
\int{\rm sech}\: x\:dx&=\int({\rm sech}\: x)^2 \times \frac{1}{\sqrt{({\rm sech}\: x)^2 }}\:dx\\\\
&=\int(\tanh x)'\times \frac{1}{\sqrt{1-(\tanh x)^2}}\:dx\\\\
&=\arcsin (\tanh x)+C.
\end{align}
$$



number theory - Devise a test for divisibility of an integer by 11, in terms of properties of its digits

Using the fact that $10 \equiv-1 \pmod{11}$, devise a test for divisibility of an integer by $11$, in terms of properties of its digits.




Approach:



Let the number with digits $a_0\cdots a_n$ be represented as $f(x)=a_0x^n+\cdots+a_n$, so that the number itself is $f(10)$. Since $10\equiv-1\pmod{11}$, my conjecture is that the number is divisible by $11$ exactly when $f(-1)$, the alternating sum of its digits, is divisible by $11$. Do I have to prove my conjecture, or is it trivial?
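Spelling the test out (a short sketch of the standard argument, with $d_k$ denoting the digit of $10^k$ to avoid clashing with the indexing above): writing $N=\sum_{k} d_k\,10^k$ and using $10\equiv-1\pmod{11}$,

$$N=\sum_{k} d_k\,10^k\equiv\sum_{k} d_k(-1)^k\pmod{11},$$

so $11\mid N$ exactly when the alternating sum of the digits is divisible by $11$. For example, $N=918{,}082$ has alternating sum $2-8+0-8+1-9=-22$, which is divisible by $11$, and indeed $918{,}082=11\cdot 83{,}462$.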

number theory - How to show the congruence involving the order




If $\alpha$ is the least positive integer such that $x^{\alpha}\equiv 1\mod m$, then the order of $x$ modulo $m$ is defined by $\alpha = \textrm{ord}_{m} x$.



Prove that :



$$a \cdot b \equiv 1\mod m \Longrightarrow \textrm{ord}_{m}a = \textrm{ord}_{m}b$$



I have no idea how to start it!!



Thanks in advance!!


Answer




Hint: Suppose $\textrm{ord}_m a = r$ and $\textrm{ord}_m b = s$; then $a\cdot b\equiv 1\mod m$ implies that $a^rb^r\equiv 1\mod m$, so that $b^r\equiv 1\mod m$. What does that tell you about $r$ as it relates to $s=\textrm{ord}_m b$? Now repeat the process, raising both sides to $s$.


number theory - Method to solve quadratic congruence

I learned quadratic congruence by myself and stuck in these problems:





  1. I know that a quadratic congruence $X^2\equiv a\pmod{p}$, with $p$ an odd prime and $\gcd(a,p)=1$, has either no solution or exactly two solutions. So, what theorem/lemma guarantees that a quadratic congruence has solutions?


  2. For linear congruences, we can use the extended Euclidean algorithm to find the solution. So my question: what is the method to find solutions of a quadratic congruence?




I would really appreciate if anyone could help me out here

integration - bounding a sum using a definite integral



Conjecture. Let $1<p<\infty$. Then there exists a constant $C>0$ such that, for every $k\ge1$, \begin{equation}\sum_{n=1}^k(k+1-n)^{-\frac{p}{p+1}}n^{-\frac{1}{p+1}}\leq C.\end{equation}



Discussion. I was thinking of estimating the above sum using a definite integral, like so:




\begin{equation*}\int_1^{k}\frac{dx}{x^{1/(p+1)}(k+1-x)^{p/(p+1)}}\end{equation*}



However, I don't remember any tools from calculus which would enable me to uniformly bound this integral, much less compute it! Anyone have any ideas? I suppose I could plug it into MATLAB to see if the conjecture is possibly true. But even if I do that, I need a rigorous proof.



Thanks guys!



EDIT: Just to be clear, $C$ is allowed to depend on $p$, but it is not allowed to depend on $k$.


Answer



Rephrasing the conjecture: Let $\alpha,\beta>0$ with $\alpha+\beta=1$. Then there exists $C>0$ such that
$$
s_{\alpha,\beta}(k) = \sum_{n=1}^k (k+1-n)^{-\alpha} n^{-\beta} < C
$$
uniformly in $k$.



Split the sum around $n=\frac k2$. For the smaller values of $n$,
$$
\sum_{n=1}^{k/2} (k+1-n)^{-\alpha} n^{-\beta} \le (k/2)^{-\alpha} \sum_{n=1}^{k/2} n^{-\beta} \sim (k/2)^{-\alpha} \frac{(k/2)^{1-\beta}}{1-\beta} < \frac2{1-\beta}.
$$
(The $\sim$ is because $\sum_{n=1}^{k/2} n^{-\beta} \sim \int_{t=1}^{k/2} t^{-\beta}\,dt$.) Similarly, for the larger values of $n$,
$$
\sum_{n=k/2}^k (k+1-n)^{-\alpha} n^{-\beta} \le (k/2)^{-\beta} \sum_{n=k/2}^k (k+1-n)^{-\alpha} = (k/2)^{-\beta} \sum_{m=1}^{k/2} m^{-\alpha} < \frac2{1-\alpha}
$$
by the same computation. Therefore $s_{\alpha,\beta}(k)$ is asymptotically at most $2/(1-\alpha)+2/(1-\beta)$, which implies that there exists $C>2/(1-\alpha)+2/(1-\beta)$ such that $s_{\alpha,\beta}(k) < C$ for every $k$.



[Tangent: if we set $S_{\alpha,\beta}(x) = \sum_{k=1}^\infty s_{\alpha,\beta}(k) x^{k+1}$, then it's easy to check that $S_{\alpha,\beta}(x) = R_\alpha(x)R_\beta(x)$ where $R_\alpha(x) = \sum_{k=1}^\infty k^{-\alpha} x^k$ and similarly for $R_\beta$. (In other words, the sequence $\{s_{\alpha,\beta}(k)\}_{k=1}^\infty$ is naturally the convolution of the sequences $\{k^{-\alpha}\}$ and $\{k^{-\beta}\}$.) Both power series defining $R_\alpha(x)$ and $R_\beta(x)$ have radius of convergence $1$, and both series converge at $x=-1$ by the alternating series test (indeed, they both converge for every complex $z$ with $|z|=1$ except $z=1$).



I would like to then conclude that the power series defining $S_{\alpha,\beta}(x)$ also converges at $x=-1$; this would imply a stronger result, namely that $s_{\alpha,\beta}(k)\to0$ as $k\to\infty$. However, arbitrary rearrangement of power series is only possible when the series converges absolutely, and this is not true for the $R_\alpha(-1)$ and $R_\beta(-1)$ series. So some other justification for the convergence of the $S_{\alpha,\beta}(-1)$ would be necessary.]


summation - Mathematical Induction with Exponents: $1 + \frac12 + \frac14 + \dots + \frac1{2^{n}} = 2 - \frac1{2^{n}}$



Prove $1 + \frac{1}{2} + \frac{1}{4} + ... + \frac{1}{2^{n}} = 2 - \frac{1}{2^{n}}$ for all positive integers $n$.



My approach was to add $\frac{1}{2^{n + 1}}$ to both sides for the induction step. However, I got lost in the algebra and could not figure out the rest of the proof. Any help would be greatly appreciated.



Thank you in advance.


Answer




Hint



if we add $\frac{1}{2^{n+1}}$ to the right side, we get



$$2-\frac{1}{2^n}+\frac{1}{2^{n+1}}$$



$$=2-\frac{2}{2^{n+1}}+\frac{1}{2^{n+1}}$$



$$=2-\frac{1}{2^{n+1}}$$
qed.



functions - Explicit bijection between $[0,1)$ and $(0,1)$




Proving that $[0,1)$ and $(0,1)$ have the same cardinality (without assuming any previous knowledge) can be done easily using Cantor-Bernstein theorem.



However I'm wondering if someone can build an explicit bijection between these sets.



It's easy to build a bijection between $(0,1)$ and $\mathbb R$, so a bijection from $[0,1)$ to $\mathbb R$ will also fit the bill.


Answer



Let us partition $(0,1)$ into a countable number of disjoint subsets of the form $[\frac{1}{n+1},\frac{1}{n})$ for $n=1,2,3,\ldots$.



These half-open intervals may then be positioned in reverse order to form a half-open interval of equal length. Whether this construction is sufficiently explicit is open to question, but it does allow the relocation of any $x\in (0,1)$ to $[0,1)$ to be computed in short order.




A more succinct construction is to define $f:[0,1) \to (0,1)$ by $f(0) = 1/2$, $f(1/n) = 1/(n+1)$ for integer $n \ge 2$, and $f(x) = x$ otherwise.


calculus - Showing that $\int_{-\infty}^{\infty} \exp(-x^2) \,\mathrm{d}x = \sqrt{\pi}$











The primitive of $f(x) = \exp(-x^2)$ has no analytical expression; even so, it is possible to evaluate $\int f(x)\,\mathrm{d}x$ along the whole real line with a few tricks. How can one show that

$$
\int_{-\infty}^{\infty} \exp(-x^2) \,\mathrm{d}x = \sqrt{\pi} \space ?
$$


Answer



Such an integral is called a Gaussian Integral



This link should help you out.
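For completeness, here is the classical polar-coordinates trick in outline (a sketch, not the content of the linked page): write $I=\int_{-\infty}^{\infty} e^{-x^2}\,\mathrm{d}x$ and square it,

$$I^{2}=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^{2}+y^{2})}\,dx\,dy=\int_{0}^{2\pi}\int_{0}^{\infty}e^{-r^{2}}\,r\,dr\,d\theta=2\pi\cdot\frac12=\pi,$$

so $I=\sqrt{\pi}$, the positive root being the right one since the integrand is positive.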


Friday 25 July 2014

limits - How to find $\lim_{n\to \infty} \frac{(n!)^{1/n}}{n}$?




How to find $\lim_{n\to \infty} \dfrac{(n!)^{1/n}}{n}$?
I tried using logarithms to bring the expression into the form of a sum and then tried L'Hospital's Rule, but it's not working. Please help!



This is what Wolfram Alpha is showing, but it's not providing the steps!



BTW if someone can tell me a method without using integration, I'd love to know!


Answer



Note




\begin{align}\frac{(n!)^{1/n}}{n} &= \left[\left(1 - \frac{0}{n}\right)\left(1 - \frac{1}{n}\right)\left(1 - \frac{2}{n}\right)\cdots \left(1 - \frac{n-1}{n}\right)\right]^{1/n}\\
&= \exp\left\{\frac{1}{n}\sum_{k = 0}^{n-1} \log\left(1 - \frac{k}{n}\right)\right\}
\end{align}



and the last expression converges to



$$\exp\left\{\int_0^1\log(1 - x)\, dx\right\} = \exp(-1) = \frac{1}{e}.$$



Alternative: If you want to avoid integration, consider the fact that if $\{a_n\}$ is a sequence of positive real numbers such that $\lim\limits_{n\to \infty} \frac{a_{n+1}}{a_n} = L$, then $\lim\limits_{n\to \infty} a_n^{1/n} = L$.




Now $\frac{(n!)^{1/n}}{n} = a_n^{1/n}$, where $a_n = \frac{n!}{n^n}$. So



$$\frac{a_{n+1}}{a_n} = \frac{(n+1)!}{(n+1)^{n+1}}\cdot \frac{n^n}{n!} = \frac{n+1}{n+1}\cdot\frac{n^n}{(n+1)^n} = \left(\frac{n}{n+1}\right)^n = \left(\frac{1}{1 + \frac{1}{n}}\right)^n = \frac{1}{\left(1 + \frac{1}{n}\right)^n}.$$



Since $\lim\limits_{n\to \infty} (1 + \frac{1}{n})^n = e$, then $$\lim_{n\to \infty} \frac{a_{n+1}}{a_n} = \frac{1}{e}.$$



Therefore $$\lim_{n\to \infty} \frac{(n!)^{1/n}}{n} = \frac{1}{e}.$$


Hint on finding limit without application of L'Hospital's rule (mathematical analysis)




I've been trying to solve this limit without application of L'Hospital's rule for some time now, but with no success, tried a couple of approaches but all end up in dead end.




$$\lim_{x\to0} \frac{\ln(e+x)-e^x}{\cos^2x-e^x}$$



any kind of hint would be appreciated.


Answer



$$\dfrac{\ln(x+e)-e^x}{\cos^2x-e^x}$$



$$=\dfrac{1+\ln(1+x/e)-e^x}{(\cos x+e^{x/2})(\cos x-e^{x/2})}$$



$$=\dfrac{\dfrac1e\cdot\dfrac{\ln(1+x/e)}{x/e}-\dfrac{e^x-1}x}{(\cos x+e^{x/2})\left(-\dfrac12\cdot\dfrac{e^{x/2}-1}{x/2}-\dfrac{1-\cos x}x\right)}$$




Now $\dfrac{1-\cos x}x=\dfrac x{1+\cos x}\cdot\left(\dfrac{\sin x}x\right)^2$
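Using the standard limits $\frac{\ln(1+u)}{u}\to1$, $\frac{e^u-1}{u}\to1$, $\frac{\sin x}{x}\to1$ and $\cos x\to1$, the displayed expression then gives (my own completion of the hint)

$$\lim_{x\to0} \frac{\ln(e+x)-e^x}{\cos^2x-e^x}=\frac{\dfrac1e-1}{2\left(-\dfrac12-0\right)}=\frac{e-1}{e}.$$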


measure theory - A function constant almost everywhere



Q: Suppose $f:\mathbb{R}\rightarrow \mathbb{R}$ is measurable with respect to Lebesgue measure and $f(x)=f(x+1)=f(x+\pi)$ for almost every $x$. Prove that $f$ is constant almost everywhere.



Proof attempt: Since the ratio of $1$ and $\pi$ is irrational, given any real number $r$ and any $\epsilon>0$ there exist $m, n\in \mathbb{Z}$ such that $|r-(m+n \pi )|< \epsilon$, so the set $P=\{m+n\pi : m, n\in \mathbb{Z}\}$ is everywhere dense.




Now by Lusin's theorem take the interval $[a, b]$, so there is a compact set $E\subset [a,b]$ such that $\mu(E) > (b-a-\delta)$ for some arbitrarily small $\delta$ and so that $f|_{E}$ is continuous. Pick a point $x$ like the one in the condition of the problem and any $y\in E$. By continuity of $f$ on $E$ and by the density of $P$ there will be a point such that $|f(x)-f(y)|=|C-f(y)|<\epsilon_{2}$. Taking the limit as $\delta \rightarrow 0$, $f$ is constant almost everywhere on the interval $[a, b]$, and since the interval is arbitrary we can say $f$ is constant almost everywhere.



Does this look alright? Is there another way of doing this without invoking something like Lusin's theorem? Measurable functions can be very ugly, so are there any other results similar to Lusin's theorem that allow us to put some sort of regularity on measurable functions and gain some intuition about them?



Thanks in advance!


Answer



This answer explains my idea from the comment in more detail.



First, you can not truncate $f$ in such a way that the truncated version is in $L^1$, because this will destroy periodicity.




My idea was to consider



$$
g := f \cdot \chi_{|f| \leq n}.
$$



You should verify that $g$ is again "periodic" in the same sense as $f$ is (and bounded).



Now take any approximation of the identity (e.g. $h_\varepsilon = \varepsilon^{-n} \cdot h(x/\varepsilon)$, where $h \in C_c$ with $\int h \, dx = 1$) and consider the family of convoultion products




$$
F_\varepsilon := (g \ast h_\varepsilon) (x) = \int g(x-y) h_\varepsilon (y) \, dy.
$$



It is then not too hard to show that $g \ast h_\varepsilon$ is continuous and is periodic in the classical sense with periods $1,\pi$, i.e. $F_\varepsilon (x) = F_\varepsilon (x+1) = F_\varepsilon (x+\pi)$ for all $x$. This is so, because the integral used in the definition of $F_\varepsilon$ does not "see" the null-set on which this property might fail for $g$ (or $f$).



Now conclude (using a variant of your proof) that $F_\varepsilon \equiv C_\varepsilon$ for some constant $C_\varepsilon$.



By classical theorems about approximation by convolution, you also get $F_\varepsilon (x) \to g(x)$ almost everywhere (at every Lebesgue-point of $g$, see e.g. Why convolution of integrable function $f$ with some sequence tends to $f$ a.e.).




This will allow you to conclude $g \equiv c$ almost everywhere and then also that $f \equiv c'$ almost everywhere for some constant $c'$.


sequences and series - What, if anything, is the sum of all complex numbers?



If this question is ill-defined or is otherwise of poor quality, then I'm sorry.




What, if anything, is the sum of all complex numbers?





If anything at all, it's an uncountable sum, that's for sure.



I'm guessing some version of the Riemann series theorem would mean that there is no such thing as the sum of complex numbers, although - and I hesitate to add this - I would imagine that



$$\sum_{z\in\Bbb C}z=0\tag{$\Sigma$}$$



is, in some sense, what one might call the "principal value" of the sum. For all $w\in\Bbb C$, we have $-w\in\Bbb C$ with $w+(-w)=0$, so, if we proceed naïvely, we could say that we are summing $0$ infinitely many times${}^\dagger$, hence $(\Sigma)$.



We have to make clear what we mean by "sum", though, since, of course, with real numbers, one can define all sorts of different types of infinite sums. I'm at a loss here.




Has this sort of thing been studied before?



I'd be surprised if not.



Please help :)






$\dagger$ I'm well aware that this is a bit too naïve. It's not something I take seriously.



Answer



Traditionally, the sum of a sequence is defined as the limit of the partial sums; that is, for a sequence $\{a_n\}$, $\sum{a_n}$ is that number $S$ so that for every $\epsilon > 0$, there is an $N$ such that whenever $m > N$, $|S - \sum_{n = 0}^ma_n| < \epsilon$. There's no reason we can't define it like that for uncountable sequences as well: let $\mathfrak{c}$ be the cardinality of $\mathbb{C}$, and let $\{a_{\alpha}\}$ be a sequence of complex numbers where the indices are ordinals less than $\mathfrak{c}$. We define $\sum{a_{\alpha}}$ as that value $S$ so that for every $\epsilon > 0$, there is a $\beta < \mathfrak{c}$ so that whenever $\gamma > \beta$, $|S - \sum_{\alpha = 0}^{\gamma}a_{\alpha}| < \epsilon$. Note that this requires us to recursively define transfinite summation, to make sense of that sum up to $\gamma$.



But here's the thing: taking $\epsilon$ to be $1$, then $1/2$, then $1/4$, and so on, we get a sequence of "threshold" $\beta$ corresponding to each one; call $\beta_n$ the $\beta$ corresponding to $\epsilon = 1/2^n$. This is a countable sequence (length strictly less than $\mathfrak{c}$). Inconveniently, $\mathfrak{c}$ has uncountable cofinality: any increasing sequence of ordinals less than $\mathfrak{c}$ of countable length must be bounded strictly below $\mathfrak{c}$. So that means there's some $\beta_{\infty}$ that's below $\mathfrak{c}$ but greater than every $\beta_n$. But by definition, that means that all partial sums past $\beta_{\infty}$ are less than $1/2^n$ away from $S$ for every $n$. So they must be exactly equal to $S$. And that means that we must be only adding $0$ from that point forward.



This is a well-known result that I can't recall a reference for: the only uncountable sequences that have convergent sums are those which consist of countably many nonzero terms followed by nothing but zeroes. In other words, there's no sensible way to sum over all of the complex numbers and get a convergence.


calculus - Why can't $3x^2+y^3+9=-2xy$ be written in explicit form?



For this equation




$$3x^2 + y^3 + 9 = -2xy,$$



I checked on wolfram alpha and it doesn't seem to have an explicit form where $y$ is isolated and equals a function of $x$'s only.



The problem is that I don't understand why it can't be written explicitly, as the graph of the relation is a [non injective] function : it doesn't have more than one value of $y$ for each value of $x$.



So I should be able to write it as any normal function where $y = f(x)$ and its derivative with respect to $x$ should be expressed with only $x$'s instead of $x$'s and $y$'s - its derivative is : $\frac{-2(3x+y)}{3y^2+2x}$.



Please illuminate me, i'm really clueless.


Answer




According to WolframAlpha, there does exist a real explicit form:

[image in the original answer: WolframAlpha's explicit formula for $y$ as a function of $x$]


linear algebra - Some questions about Hilbert matrix



This is Exercise 12 on page 27 of Hoffman and Kunze's book Linear Algebra.





The result of Example $16$ suggests that perhaps the matrix



$$A = \begin{bmatrix} 1 & \frac{1}{2} & \ldots & \frac{1}{n}\\
\frac{1}{2} & \frac{1}{3} & \ldots & \frac{1}{n+1}\\ \vdots & \vdots &
& \vdots \\ \frac{1}{n} &\frac{1}{n+1} &\ldots &\frac{1}{2n-1}
\end{bmatrix}$$



is invertible and $A^{-1}$ has integer entries. Can you prove that?





I will copy their words before that example:




Some people find it less awkward to carry along two sequences of
matrices, one describing the reduction of $A$ to the identity and the
other recording the effect of the same sequence of operations starting
from the identity.





In this post, Adrián Barquero says:




If your linear algebra textbook is Kenneth Hoffmann and Ray Kunze's
Linear Algebra book then I was in the same situation you're right now
a few years ago, but I was adviced at the moment that this particular
exercise wasn't as easy as one might expect at first.



The matrix you have is called the Hilbert matrix and the question you

have was already asked a couple of times in math overflow here and
here. They have excellent answers so I will just point you to them.




My question: Is it possible answer the question made by the authors without using high techniques? I am not allowed to use determinants here.



PS: Those answers in MO don't answer the question above. I apologise if this question is unappropriated or if I was unable to understand their answers.


Answer



Yes, I believe a student at the level of this book is able to answer the question. But this is not easy. I think it would make a reasonable problem for a Mathematical Olympiad.




I suggest you do this proof by induction on the size $n$ of the $n \times n$ Hilbert matrix $A_n.$ Start the induction at $n = 2$. The statement is essentially the following:



$$
A_n = \left(\frac{1}{i + j-1}\right)_{ij} \mbox{ is a Hilbert matrix
}\Rightarrow A_n^{-1} \in M_{n \times n}(\mathbb{Z}).
$$



Step 1. Verify that by inspection for $ n = 2 $.



Step 2. Verify that if $A_n$ is a Hilbert matrix, then

$$
A_{n+1}=
\begin{pmatrix}
A_n & v^{T} \\
v & \frac{1}{2(n+1)-1}
\end{pmatrix}
\quad
$$

is a Hilbert matrix with $v=\left(\frac{1}{n+1},\dots,\frac{1}{2(n+1)-2}\right)$. Now apply matrix inversion in block form, special case 1.
And in this special case in which the blocks are matrices that commute, there is an exercise in the Hoffman book (do not remember which page).




Edit 1.
And in this particular case, this formula can be obtained using the definition of the inverse of a matrix. Just partition the inverse matrix into blocks of the same size and make the product and solve a matrix system. I believe that a student at the Mathematical Olympiad may have this idea, and an average student can understand it.



Edit 2. Done what was said in Edit 1, according to the notation of the link, just to verify that for $k =\frac{1}{2(n+1)-1} - vA_n^{-1}v^T,$ we have that $ \frac{1}{k}A_n^{-1}v^T \in\mathbb{Z} $. But this can be checked again by induction.


group theory - Explanation to Fermat's little theorem proof


Fermat's little theorem
$\forall a \in \mathbb{Z}$ and every prime p.
Then, $a^{p}\equiv a\pmod p$




$a=pm+r $




$\forall\; 0 \leq r < p$

Proof for $r\not\equiv 0:$



Then, $\forall r \in \bar{U}\left ( p \right )$ and $\left | \bar{U}\left ( p \right ) \right |=p-1$



$r^{\left | \bar{U}\left ( p \right ) \right |}=e$ by a certain theorem in Cosets.



But this is really just $r^{p-1} \equiv 1\pmod p$




How does the last equivalence follow?
My knowledge of number theory is almost non-existent.
A verbose explanation would really help.



Thanks in advance.

real analysis - does there exist a smooth function which is nowhere convex/concave?



Consider $g\in {\rm C}^1[0,1]$. We say that $g$ is nowhere convex (concave, resp.) on $[0,1]$ if there is no open interval $I\subseteq [0,1]$ on which $g$ is convex (concave, resp.) Is it possible to find a function $g$ which satisfies:



Q1.(strong verison) $g$ is ${\rm C}^2[0,1]$ and $g$ is nowhere convex and nowhere concave on $[0,1]$?



Q2.(weak version) $g$ is twice differentiable on $(0,1)$ and $g$ is nowhere convex and nowhere concave on $[0,1]$?



I suspect that the answer to Q1 is no, and that the answer to Q2 is yes, but I am not exactly sure how to prove it. The question may be related to questions about sets of zeros of the first (or, equivalently, the second) derivative of $g$. By Whitney's Theorem, we can find ${\rm C}^{\infty}$-function whose first (or equivalently, second) derivative vanishes exactly on any given compact subset of $[0,1]$.




https://mathoverflow.net/questions/179445/non-zero-smooth-functions-vanishing-on-a-cantor-set



There are examples which show that $g'$ can be zero on the set ${\bf Q}\cap [0,1]$,



Set of zeroes of the derivative of a pathological function



so the elementary sufficient condition $g''>0$ for strict convexity does not apply in our case because every open interval contains at least one rational point. This however, in my view, does not discount the possibility that $g$ is (not necessarily strictly) convex in such intervals (g'' can occasionally be zero, and g still can be convex). Also, Darboux property of $g''$:



Is intermediate value property equivalent to Darboux property?




or existence of nowhere monotone continuous function $g'$ may be related to the answer:



Suppose a continuous function $f:\mathbb{R} \rightarrow \mathbb{R}$ is nowhere monotone. Show that there exists a local minimum for each interval.



the point here being that a point of strict inflection of $g$ is exactly the point of strict local extremum of $g'$,



Does there exist a continuous function from [0,1] to R that has uncountably many local maxima?



but I am not sure whether we can use the same argument if we consider general inflection point of $g$ (i.e., non-strict local extremum of $g'$).



Answer



Q1: The answer is no. If $g''(a)>0$ for some $a\in [0,1],$ then $g''>0$ in a neighborhood of $a$ by the continuity of $g''.$ Hence $g$ is strictly convex in that neighborhood. Similarly, if $g''(a)<0$ for some $a\in [0,1],$ then $g$ is strictly concave in that neighborhood. We're left with the case $g''\equiv 0.$ But this implies $g(x) = ax +b$ on $[0,1],$ hence $g$ is both convex and concave everywhere on $[0,1].$






Added later, in answer to the comment: It's actually possible for a $C^2$ function to have uncountably many inflection points. Suppose $K\subset [0,1]$ is uncountable, compact, and has no interior (the Cantor set is an example). Define



$$f(x)=\begin{cases}d(x,K)\sin (1/d(x,K)),&x\notin K\\ 0,& x\in K\end{cases}$$ Then $f$ is continuous, and $f$ takes on positive and negative values on any interval containing a point of $K.$ Define



$$g(x)=\int_0^x\int_0^t f(s)\,ds\,dt.$$




Then $g\in C^2[0,1]$ and $g''=f.$ It follows that every point of $K$ is an inflection point of $g.$


Thursday 24 July 2014

algebra precalculus - How to solve 2 tetrated 0.5 times?

I've been really interested in tetration lately. So I came up with a seemingly simple problem to solve, which is 2 tetrated 0.5 times, which I'll write as the following.



2^^0.5




To make sense of this notation, consider the following where A represents a real number:



A^A = A^^2



A^A^A = A^^3



etc.



Here's where my problem is. The answer I got and the one on Wikipedia are different. I'm assuming the answer on Wikipedia is the correct one, but I would like to know what I did wrong.




So here's how I tried to solve this problem:



First I say 2^^0.5 is the same as the "super square root" of 2 (I don't exactly know how to format this), which is equal to X.



Next I tetrate or "super square" both sides by 2, so the "super square root" of 2 becomes 2, and X becomes X tetrated 2 times, which looks like the following:



X^^2 = 2



Then I rewrite X tetrated 2 times as X to the power of X.




X^X = 2



Finally I graphed Y = X^X and Y = 2 on my calculator and found the intersection point in the first quadrant, which should be the answer of 2^^0.5. And I got the following:



X = 1.559610469 (approximately)



However, the answer to 2^^0.5 on Wikipedia is approximately 1.45933.



Does anyone know what I did wrong when trying to solve this problem? Any answers would be appreciated. Also, if you have any questions of what I did or what I'm asking, feel free to ask.

recreational mathematics - Help understanding proof of generalization of Cauchy-Schwarz Inequality



I'm having trouble with an exercise in the Cauchy Schwarz Master Class by Steele. Exercise 1.3b asks to prove or disprove this generalization of the Cauchy-Schwarz inequality:




[image in the original question: the statement of the generalized inequality from Exercise 1.3b]



The following is the solution at the end of the book:



[image: the book's solution to Exercise 1.3b]



After struggling to understand the solution for a few hours, I still cannot see why the substitution $c_k^2 / (c_1^2 + \ldots + c_n^2)$ would bring the target inequality to the solvable inequality. Neither do I understand what the $n^2 < n^3$ bound has to do with anything or how it allows us to take a "cheap shot".



Thanks!




Edit: I'm also wondering, is there a name for this generalization of Cauchy-Schwarz? Any known results in this direction?


Answer



I have reason to believe the text has a typo; maybe someone can correct me on this point. Because, to my mind, the definition of the $\hat{c}_i$'s would apply to the sum $\sum |a_k b_k c_k^2|$. I suspect it should read



$$\hat{c}_i=\frac{c_i}{\sqrt{c_1^2+c_2^2+\cdots+c_n^2}}.$$






If my hunch is correct, we would argue as follows (using the $\hat{c}_i$'s defined right above):




$$\left|\sum_{k=1}^n a_kb_k\hat{c}_k \right|
\color{Red}\le \sum_{k=1}^n |a_kb_k \hat{c}_k|
\color{Green}\le \sum_{k=1}^n |a_kb_k|
\color{Blue}=\left|\sum_{k=1}^n |a_k|\cdot|b_k|\right|
\color{Purple}\le \left(\sum_{k=1}^n |a_k|^2\right)^{1/2}\left(\sum_{k=1}^n |b_k|^2\right)^{1/2} $$



Above:





  • $\color{Red}\le$: Follows from triangle inequality

  • $\color{Green}\le$: Follows from $|\hat{c}_k|\le1$, $k=1,\cdots,n$.

  • $\color{Blue}=$: Follows because $x=|x|$ for $x\ge0$.

  • $\color{Purple}\le$: Cauchy-Schwarz applied to $|a_k|,|b_k|$.



Now take the far left and far right sides of this chain, square them, and multiply by $c_1^2+c_2^2+\cdots+c_n^2$ (this undoes the normalization in the $\hat{c}_i$, replacing each $\hat{c}_k$ by $c_k$).
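
If it helps to see what is being proved, here is a quick numerical check (a Python sketch; it assumes, based on the solution above, that the exercise asks for $\left|\sum_k a_kb_kc_k\right|\le\left(\sum_k a_k^2\right)^{1/2}\left(\sum_k b_k^2\right)^{1/2}\left(\sum_k c_k^2\right)^{1/2}$, since the images of the statement are not reproduced here):

import random

def norm2(v):
    # Euclidean norm
    return sum(x * x for x in v) ** 0.5

# Test the assumed three-factor inequality on random data.
for _ in range(1000):
    n = random.randint(1, 10)
    a = [random.uniform(-5, 5) for _ in range(n)]
    b = [random.uniform(-5, 5) for _ in range(n)]
    c = [random.uniform(-5, 5) for _ in range(n)]
    lhs = abs(sum(x * y * z for x, y, z in zip(a, b, c)))
    rhs = norm2(a) * norm2(b) * norm2(c)
    assert lhs <= rhs + 1e-9  # small slack for floating point
print("no counterexample found")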


exponentiation - Non-integer powers of negative numbers



Roots behave strangely over complex numbers. Given this, how do non-integer powers behave over negative numbers? More specifically:




  • Can we define fractional powers such as $(-2)^{-1.5}$?

  • Can we define irrational powers $(-2)^\pi$?


Answer




As other posters have indicated, the problem is that the complex logarithm isn't well-defined on $\mathbb{C}$. This is related to my comments in a recent question about the square root not being well-defined (since of course $\sqrt{z} = e^{ \frac{\log z}{2} }$).



One point of view is that the complex exponential $e^z : \mathbb{C} \to \mathbb{C}$ does not really have domain $\mathbb{C}$. Due to periodicity it really has domain $\mathbb{C}/2\pi i \mathbb{Z}$. So one way to define the complex logarithm is not as a function with range $\mathbb{C}$, but as a function with range $\mathbb{C}/2\pi i \mathbb{Z}$. Thus for example $\log 1 = 0, 2 \pi i, - 2 \pi i, ...$ and so forth.



So what are we doing when we don't do this? Well, let us suppose that for the time being we have decided that $\log 1 = 0$. This is how we get other values of the logarithm: using power series, we can define $\log (1 + z)$ for any $z$ with $|z| < 1$. We can now pick any number in this circle and take a power series expansion about that number to get a different power series whose circle of convergence is somewhere else. And by repeatedly changing the center of our power series, we can compute different values of the logarithm. This is called analytic continuation, and typically it proceeds by choosing a (say, smooth) path from $1$ to some other complex number and taking power series around different points in that path.



The problem you quickly run into is that the value of $\log z$ depends on the choice of path from $1$ to $z$. For example, the path $z = e^{2 \pi i t}, 0 \le t \le 1$ is a path from $1$ to $1$, and if you analytically continue the logarithm on it you will get $\log 1 = 2 \pi i$. And that is not what you wanted. (This is essentially the same as the contour integral of $\frac{1}{z}$ along this contour.)



One way around this problem is to arbitrarily choose a ray from the origin and declare that you are not allowed to analytically continue the logarithm through this ray. This is called choosing a branch cut, and it is not canonical, so I don't like it.




There is another way to resolve this situation, which is to consider the Riemann surface $(z, e^z) \subset \mathbb{C}^2$ and to think of the logarithm as the projection to the first coordinate from this surface to $\mathbb{C}$. So all the difficulties we have encountered above have been due to the fact that we have been trying to pretend that this projection has certain properties that it doesn't have. A closed path like $z = e^{2\pi i t}$ in which the logarithm starts and ends with different values corresponds to a path on this surface which starts and ends at different points, so there is no contradiction. This was Riemann's original motivation for defining Riemann surfaces, and it is this particular Riemann surface that powers things like the residue theorem.
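
To tie this back to the original question: if you are willing to fix the (non-canonical) principal branch, with the cut along the negative real axis, then expressions like $(-2)^{-1.5}$ and $(-2)^\pi$ get definite values. A minimal Python sketch, with the caveat that a different branch choice would give different answers:

import cmath

def principal_power(base, exponent):
    # z^w := exp(w * Log z), using the principal branch of log
    # (argument in (-pi, pi]); other branches differ by a factor exp(2*pi*i*k*w).
    return cmath.exp(exponent * cmath.log(base))

print(principal_power(-2, -1.5))       # about 0 + 0.3536j
print(principal_power(-2, cmath.pi))   # about -7.97 - 3.80j
print(complex(-2) ** -1.5)             # Python's ** for complex values agrees with the principal branch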


Wednesday 23 July 2014

real analysis - Does ${sin (nx)}_1^infty$ converge in the $L^1$ norm on $[0,2pi]$?

This is a homework question from a problem set in an undergraduate-level real analysis course (coming from merely an intro to analysis course) about $L^p$ spaces.




Show that $\{\sin (nx)\}_{n=1}^\infty$ converges in the $L^1$ norm on $[0,2\pi]$





I showed that, for $f_n(x)=\sin(nx)$, the sequence of norms $\lVert f_n \rVert$ converges, but apparently I was supposed to show that $\lVert f-f_n\rVert\to0$, which I'm not really sure how to do. I'm probably missing something relatively simple, but I would appreciate the help.

elementary number theory - If the remainder in the division of $a$ by $b$ is $r$ and $c|b$, then...



How should I be proving this?




If the remainder in the division of $a$ by $b$ is $r$ and $c|b$, then the remainder of division of $a$ by $c$ equals the remainder in the division of $r$ by $c$




Would I use the definition of $a \equiv b \pmod r$, which also means that $r\mid(b-a)$? Or should I be using the division algorithm with $a = qb + r$ where $0\le r < |b|$? I also don't understand what to do about $c\mid b$.




I tried using the definition of divisibility so that $c \, m = b$, and substituting $b$ for $cm$ so that $r|(cm-a)$ but have been stuck here. I also tried $a = q cm + r$, so $r = a+(-qm)c$, which would make $a$ the remainder of the division of $r$ by $c$.



I'm honestly just so confused. The next part of the question asks to use $(b,c)$ to compute the remainders in the division of a huge number, so I'm leaning towards using the definition of $a \equiv b$ (mod $r$).


Answer



You just do it.



$a = kb + r$ for some integer $k$ and $0 \le r < b$. That's what having a remainder of $r$ means.



And $b = c*n$ for some integer $n$. That's what $c|b$ means.




so $a = k(c*n) + r$



And if the remainder of $r$ when divided by $c$ is $r'$ then there is an integer $m$ so that $r = c*m + r'$ and $0\le r' < c$



So $a = k(c*n) + c*m + r'$, i.e. $a = c(kn + m) + r'$ with $0\le r' < c$.



So the remainder of $a$ when divided by $c$ is $r'$ which is the remainder of $r$ divided by $c$.



...




Using the notation for equivalence we have



$a \equiv r \mod b$ which means $b|a-r$



And $b \equiv 0 \mod c$ which means $c|b$.



So by transitivity $c|a-r$ so $a \equiv r \mod c$.



I guess you have to prove that $c|b$ and $b|d\implies c|d$, but that comes directly from the definition of "divides". There are $n$ and $k$ so that $b = c*n$ and $a-r = b*k$. So $a-r = c(nk)$, hence $c|a-r$.
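
A quick brute-force check of the statement (a Python sketch; the ranges are arbitrary) can also help convince you before writing the proof:

# If c divides b, then a mod c should equal (a mod b) mod c.
for a in range(200):
    for b in range(1, 50):
        for c in range(1, b + 1):
            if b % c == 0:        # c | b
                r = a % b         # remainder of a on division by b
                assert a % c == r % c
print("property holds for all tested values")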


limits - Computing $limlimits_{x to infty} left( cossqrt{ {2 pi over x}} right)^x$

I have tried to solve the limit below by using the exponential form and then applying l'Hôpital's rule, but the computation becomes hard. Does anyone know an easier way to do it?



$$\lim_{x\rightarrow\infty}\left(\cos\sqrt{\frac{2\pi}{x}}\right)^x$$
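
Not a proof, but a quick numerical check (a Python sketch) suggests what the answer should be, namely $e^{-\pi}\approx 0.0432$:

import math

# Evaluate (cos sqrt(2*pi/x))^x for increasingly large x.
for x in [1e2, 1e4, 1e6, 1e8]:
    print(x, math.cos(math.sqrt(2 * math.pi / x)) ** x)

print("e^(-pi) =", math.exp(-math.pi))  # about 0.043214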

real analysis - Is this:$sum_{n=1}^{infty}{(-1)}^{frac{n(n-1)}{2}}frac{1}{n}$ a convergent series?



Is there someone who can show me how do I evaluate this sum :$$\sum_{n=1}^{\infty}{(-1)}^{\frac{n(n-1)}{2}}\frac{1}{n}$$



Note: Wolfram Alpha shows this result, but at the same time the ratio test is not a convincing way to judge whether the series converges.



Thank you for any help


Answer



The parity of $\frac{n(n-1)}{2}$ is $4$-periodic in $n$. Thus the sequence $(-1)^{\frac{n(n-1)}{2}}$ is:

$$ 1, \, -1, \, -1, \, 1, \, 1, \, -1, \, -1, \, 1, \, 1, \, -1, \, -1, \, 1 , \cdots$$
The partial sum of the original series, truncated at $N$, equals
$$ \sum_{k=0}^{K} \left( \frac{1}{4k+1} - \frac{1}{4k+2} - \frac{1}{4k+3} + \frac{1}{4k+4}\right) + \sum_{i=1}^{N - 4K - 4}\frac{(-1)^{\frac{i(i-1)}{2}}}{4K + 4 + i}$$
where $K = \lfloor \frac{N}{4}\rfloor - 1$.



Each bracketed group equals $\frac{2}{(4k+1)(4k+3)}-\frac{2}{(4k+2)(4k+4)}=O(1/k^2)$, so the first sum converges as $N\to\infty$, while the at most three leftover terms tend to $0$. Hence the partial sums converge, and the series is convergent.
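
For a numerical illustration, here is a small Python sketch of the partial sums (the cutoffs are arbitrary); they settle down near 0.44, consistent with convergence:

# Partial sums of sum_{n>=1} (-1)^(n(n-1)/2) / n.
def partial_sum(N):
    s = 0.0
    for n in range(1, N + 1):
        sign = -1.0 if (n * (n - 1) // 2) % 2 else 1.0
        s += sign / n
    return s

for N in [10, 100, 1000, 10000, 100000]:
    print(N, partial_sum(N))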


real analysis - Show that the sequence $left(frac{2^n}{n!}right)$ has a limit.




Show that the sequence $\left(\frac{2^n}{n!}\right)$ has a limit.



I initially inferred that the question required me to use the definition of the limit of a sequence because a sequence is convergent if it has a limit $$\left|\frac{2^n}{n!} - L \right|{}{}<{}{}\epsilon$$




I've come across approaches that use the squeeze theorem but I'm not sure whether its applicable to my question. While I have found answers on this site to similar questions containing the sequence, they all assume the limit is $0$.



I think I need to show $a_n \geq a_{n+1},\forall n \geq 1$, so



$$a_{n+1} = \frac{2^{n+1}}{(n+1)!}=\frac{2}{n+1}\cdot\frac{2^{n}}{n!}=\frac{2}{n+1}\,a_n\le a_n\quad\text{for all } n\ge 1.$$

A monotonically decreasing sequence that is bounded below is convergent, and this particular sequence is bounded below by zero since its terms are positive. I'm not sure whether or not I need to do more to answer the question.


Answer



It is easy to prove that

$$\sum_{n=1}^{\infty}\frac{2^n}{n!} \lt \infty$$
e.g. with the ratio test you have
$$\frac{a_{n+1}}{a_n}=\frac{n!}{2^n}\cdot\frac{2^{n+1}}{(n+1)!}=\frac{2}{n+1}\longrightarrow 0$$
Then $\lim\limits_{n\rightarrow\infty}\frac{2^n}{n!}$ has to be $0$, since the terms of a convergent series tend to $0$.






If you do not know anything about series, you can instead use the fact that $n!>3^n$ for all $n\gt N_0$, for some fixed $N_0\in\mathbb{N}$ ($N_0=6$ works, by induction). Then for those $n$
$$0<\frac{2^n}{n!} \lt\frac{2^n}{3^n}=\left(\frac{2}{3}\right)^n\longrightarrow 0$$
Thus $$\lim\limits_{n\longrightarrow\infty} \frac{2^n}{n!} =0 $$
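
For intuition (not a substitute for either argument above), a tiny Python sketch shows how quickly the terms die off and how the ratio $\frac{2}{n+1}$ shrinks:

from math import factorial

# a_n = 2^n / n! and the ratio a_{n+1} / a_n = 2 / (n + 1).
for n in [1, 5, 10, 20, 50]:
    print(n, 2 ** n / factorial(n), 2 / (n + 1))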



elementary number theory - Showing $gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$



Given that n is a positive integer show that $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$.




I'm thinking that I should be using the property of gcd that says if a and b are integers then gcd(a,b) = gcd(a+cb,b). So I can do things like decide that $\gcd(n^3 + 1, n^2 + 2) = \gcd((n^3+1) - n(n^2+2),n^2+2) = \gcd(1-2n,n^2+2)$ and then using Bezout's theorem I can get $\gcd(1-2n,n^2+2)= r(1-2n) + s(n^2 +2)$ and I can expand this to $r(1-2n) + s(n^2 +2) = r - 2rn + sn^2 + 2s$ However after some time of chasing this path using various substitutions and factorings I've gotten nowhere.



Can anybody provide a hint as to how I should be looking at this problem?


Answer



As you note, $\gcd(n^3+1,n^2+2) = \gcd(1-2n,n^2+2)$.



Now, continuing in that manner,
$$\begin{align*}
\gcd(1-2n, n^2+2) &= \gcd(2n-1,n^2+2)\\

&= \gcd(2n-1, n^2+2+2n-1)\\
&= \gcd(2n-1,n^2+2n+1)\\
&= \gcd(2n-1,(n+1)^2).
\end{align*}$$



Consider now $\gcd(2n-1,n+1)$. We have:
$$\begin{align*}
\gcd(2n-1,n+1) &= \gcd(n-2,n+1) \\
&= \gcd(n-2,n+1-(n-2))\\
&=\gcd(n-2,3)\\

&= 1\text{ or }3.
\end{align*}$$
So the only prime that can divide both $2n-1$ and $(n+1)^2$ is $3$. Moreover, $27\mid(n+1)^2$ would force $9\mid n+1$, and then $2n-1=2(n+1)-3\equiv-3\pmod 9$ would not be divisible by $9$; so $27$ cannot divide both. Therefore, the gcd of $2n-1$ and $(n+1)^2$ is either $1$, $3$, or $9$. Hence the same is true of the original gcd.
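
A quick Python check over a range of $n$ (a sketch; the range is arbitrary) confirms that only the values $1$, $3$ and $9$ occur, and that $9$ really does occur (e.g. at $n=5$):

from math import gcd

values = {gcd(n ** 3 + 1, n ** 2 + 2) for n in range(1, 100000)}
print(sorted(values))                # [1, 3, 9]
print(gcd(5 ** 3 + 1, 5 ** 2 + 2))   # 9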


complex analysis - Bounded real part on the disk implies bounded imaginary part



If the real part of a holomorphic function on the unit disk is bounded, then the Borel-Caratheodory theorem implies that the function is bounded, thereby implying the imaginary part is in fact bounded. Is there a direct way to show that the imaginary part is bounded using the standard theory of maximum modulus principle and conformal maps?


Answer



The mapping $w$, defined by $w(z)=i\log\dfrac{1+z}{1-z}$, maps the unit disk onto the strip $|{\bf Re}\,w|<\dfrac{\pi}{2}$. In particular, its real part is bounded on the disk while its imaginary part, $\log\left|\dfrac{1+z}{1-z}\right|$, is unbounded there.


Tuesday 22 July 2014

real analysis - Continuous Functions and Metric Spaces


Let $(X, d_x)$ and $(Y, d_y)$ be metric spaces. Let $f: X \rightarrow Y$ be a function.



Show that $f$ is continuous (w.r.t. the metrics $d_x, d_y$) if and only if the preimage under $f$ of every closed set in $Y$ is closed in $X$.




I know that a continuous function maps compact sets to compact sets. Would the proof for this have something to do with that? Since compact sets are closed and bounded, it makes sense that a continuous function has to map closed sets to closed sets (for the compact sets to compact sets fact to hold).



I'm a bit stuck on the closed sets to closed sets $\implies$ continuous function part. Every point in a closed set is an accumulation point; and, for a function to be continuous, whenever a real sequence $(a_n)$ converges to $p$, then $f(a_n)$ must converge to $f(p)$ as well. How do I make the connection between the RHS and LHS?




Any help would be greatly appreciated. Thank you.

abstract algebra - Primitve roots and congruences?



Let $p$ be an odd prime. Show that the congruence $x^4\equiv -1\pmod{p}$ has a solution if and only if $p$ is of the form $8k+1$.



Here is what I did



Suppose that $x^4\equiv -1\pmod{p}$ and let $y=\operatorname{ind}_r x$. Then $-x$ is also a solution, and $\operatorname{ind}_r(-x)\equiv \operatorname{ind}_r(-1)+\operatorname{ind}_r(x)\equiv (p-1)/2+y\pmod{p-1}$. Without loss of generality we may take $0<y\le (p-1)/2$. Then $4y\equiv \operatorname{ind}_r(-1)\equiv (p-1)/2\pmod{p-1}$. So $4y=(p-1)/2+m(p-1)$ for some $m\ge 0$. But $4y\le 2(p-1)$, so either $4y=(p-1)/2$, and then $p=8y+1$, or $4y=3(p-1)/2$; in this case $3$ must divide $y$, so we have $p=8(y/3)+1$. In either case, $p$ is of the required form. Conversely, suppose $p=8k+1$ and let $r$ be a primitive root of $p$. Take $x=r^k$. Then $x^4\equiv r^{4k}\equiv r^{(p-1)/2}\equiv -1\pmod{p}$. This $x$ is a solution.



We just learned some of this topic. My TA said do not worry about it yet. Is this correct?



Is there another way to do this? If so, please show me.


Answer




There is a simpler way:



If $x^4\equiv-1\pmod p$, then $x^8=(x^4)^2\equiv(-1)^2\equiv1\pmod p$.



So $\operatorname{ord}_p x$ must divide $8$.



But it cannot divide $4$, since $x^4\equiv-1\not\equiv1\pmod p$ (as $p$ is odd). Hence $\operatorname{ord}_p x=8$.



Since $\operatorname{ord}_p x$ divides $\phi(p)$, this gives that $8$ divides $\phi(p)=p-1$, i.e. $p$ is of the form $8k+1$.




The converse direction in your approach seems to be fine.
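
If it helps to build confidence in the statement, here is a small brute-force Python sketch (standard library only) checking, for odd primes below 200, that $x^4\equiv-1\pmod p$ is solvable exactly when $p\equiv1\pmod 8$:

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in (n for n in range(3, 200, 2) if is_prime(n)):
    solvable = any(pow(x, 4, p) == p - 1 for x in range(1, p))   # x^4 = -1 (mod p)
    assert solvable == (p % 8 == 1)
print("claim verified for all odd primes below 200")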


linear algebra - Find the eigen values of $Q$


Find all eigen values of the following matrix $Q$:




Here $n=pq,p

$$Q=\begin{bmatrix}{(n-1)I}_{l\times l}&&&&&& J_{l\times n-l} \\J^T_{(n-l)\times l}&&&&&& A_{(n-l)\times (n-l) }\end{bmatrix}$$




$A$ is a diagonal matrix of the form



$$A=\begin{bmatrix}
C_{(q-1)\times (q-1)} & 0 \\
0 & D_{(p-1)\times (p-1)}
\end{bmatrix}$$



where $$C= \begin{bmatrix}
p(q-1) & 1 & 1 &\ldots & 1\\1 & p(q-1) & 1 & \ldots & 1\\1 & 1 & p(q-1) &\ldots & 1 \\ \ldots &\ldots& \ldots & \ldots & \ldots \\ \ldots & \ldots & \ldots& \ldots & \ldots \\ \ldots& \ldots& \ldots & \ldots &\ldots
\\1 &1 &1 &\ldots & p(q-1)

\end{bmatrix}$$



and $$D=
\begin{bmatrix}
q(p-1) & 1 & 1 &\ldots & 1\\1 & q(p-1) & 1 & \ldots & 1\\1 & 1 & q(p-1) &\ldots & 1 \\ \ldots &\ldots& \ldots & \ldots & \ldots \\ \ldots & \ldots & \ldots& \ldots & \ldots \\ \ldots& \ldots& \ldots & \ldots &\ldots
\\1 &1 &1 &\ldots & q(p-1)
\end{bmatrix}$$



where $J$ is the all $1$ matrix.

set theory - Martin's Axiom and Choice principles



My main questions are:



(1) Does MA have any relation to choice principles such as AC, PIT/UL, DC, AD, etc.?
(2) In particular, does MA($\kappa$) imply AC for collections of cardinality $\kappa$?




Somewhat related...



As we all know, Martin's Axiom generalizes BCT2 (Baire category theorem for cpt. Haus. spaces). Is there an axiom which generalizes the other version of BCT, for complete metric spaces (BCT1)?



Since it is currently unknown in ZF whether BCT2 implies BCT1 (and I assume many fine attempts have been made), might it be interesting to investigate the relationship between MA and a similarly generalized version of BCT1?


Answer



The first question itself is quite easily answerable with no.
$\newcommand{\ax}[1]{\mathsf{#1}}\newcommand{\MA}{\ax{MA}}\newcommand{\DC}{\ax{DC}}\newcommand{\AD}{\ax{AD}}\newcommand{\ZF}{\ax{ZF}}\newcommand{\BPI}{\ax{BPI}}\MA$
is a very "local" axiom. It merely states that a class of partial orders whose cardinality is less than the continuum satisfies a certain property.




Let $M$ be any model of $\ax{ZFC}+\MA$, with the continuum as large as you want, and we can extend it to a model of $\ZF$, $N$, in which all the choice principles mentioned fail and $\Bbb R^M=\Bbb R^N$. In particular this tells us that $\MA$ holds in $N$.



In the other direction, $\AD$ implies $\ax{CH}$ (in the sense that every uncountable set of real numbers has size continuum), so it's vacuously true that $\MA$ holds. We may want to ask whether or not $\MA(\aleph_1)$ holds, but I'm not sure how to answer that.



For the second question, we need to add the requirement that $\kappa<2^{\aleph_0}$ (even if the real numbers cannot be well-ordered), but even then the above shows that we may have to require the sets to be sets of real numbers. That is, every family of $\kappa$ non-empty sets of real numbers has a choice function. The proof is almost obvious, using the poset of finite choice functions ordered by $\supseteq$. If this poset is provably c.c.c., then $\MA(\kappa)$ will prove the existence of a choice function, but at this time I don't know how to prove that it is c.c.c. (and I'm not 100% sure it's even true).



Here is a paper which may be of interest. Shannon, G. P. Provable forms of Martin's Axiom, Notre Dame J. Formal Logic 31, Number 3 (1990), 382-388.


irrational numbers - How can I prove that no limit exists for this function over this particular interval?



I was given the following function: $$ f(x) = \begin{cases} \frac{1}{q} &\text{if }x = \frac{p}{q} \text{ is rational, reduced to lowest terms} \\ 0 &\text{if }x \text{ is irrational}\end{cases}$$
And was asked to attempt to prove or disprove whether or not, for some $j$ on the interval $(0,1)$, $\lim \limits_{x \to j}{}f(x)$ exists.
Intuitively, I know that I can find an irrational number between any two rationals, which would obviously result in some discontinuity. I also considered whether the limit might exist by looking at the infinitely many rationals in the interval that share a common denominator, but that obviously would not work. However, I cannot formulate any proper line of reasoning to prove that the limit does not exist.
How can I prove this, as formally as possible?


Answer



Recall the definition of $\lim_{x \to j}f(x)=a$:




$$\mbox{Given}\ \varepsilon > 0\ \mbox{there exists}\ \delta > 0\\ \mbox{such that}\ \left| f(x) - a\right|<\varepsilon\\ \mbox{whenever}\ 0<\left| x-j \right| <\delta$$



Now for any $j$, given $\varepsilon>0$, if we choose $\delta$ small enough we can ensure that the denominator $q$ of any fraction $\frac{p}{q}$ (in its lowest terms) in the interval $\left(j-\delta,j+\delta\right)$ satisfies $q>\frac{1}{\varepsilon}$, except possibly for $j$ itself, if it is rational: there are only finitely many rationals in $(0,1)$ with denominator at most $\frac{1}{\varepsilon}$, so we can take $\delta$ smaller than the distance from $j$ to the nearest of them (other than $j$). It follows that $0\le\left| f(x)\right|<\varepsilon$ whenever $0<\left|x-j\right|<\delta$.



Plugging this into the definition of a limit we see:



$$\lim_{x\to j}f(x) = 0\ \mbox{for all}\ j\in\left(0,1\right)$$



Now a function $g$ is continuous at $j$ iff $\lim_{x\to j}g(x)=g(j)$. It follows that $f$ is discontinuous at rational points and continuous at irrational points.
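
To make the choice of $\delta$ concrete: for a given $\varepsilon$ there are only finitely many rationals in $(0,1)$ with denominator at most $1/\varepsilon$, so their minimum distance to $j$ (excluding $j$ itself) is a valid $\delta$. A small Python sketch of that computation, with arbitrary choices of $j$ and $\varepsilon$:

from fractions import Fraction
from math import sqrt

j = sqrt(2) / 2      # an arbitrary point of (0, 1)
eps = 1 / 20         # target bound; f(p/q) = 1/q >= eps exactly when q <= 20
qmax = 20

# All rationals p/q in (0,1) whose f-value is at least eps.
bad = [Fraction(p, q) for q in range(2, qmax + 1) for p in range(1, q)]
delta = min(abs(float(r) - j) for r in bad)
print(delta)         # then |f(x)| < eps whenever 0 < |x - j| < delta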


Monday 21 July 2014

real analysis - Advanced Sum: Compute $sum_{n=1}^inftyfrac{H_{2n}H_n^{(2)}}{(2n+1)^2}$

How to prove





$$\sum_{n=1}^\infty\frac{H_{2n}H_n^{(2)}}{(2n+1)^2}=
\\ \small{\frac43\ln^32\zeta(2)-\frac72\ln^22\zeta(3)-\frac{21}{16}\zeta(2)\zeta(3)+\frac{713}{64}\zeta(5)-\frac4{15}\ln^52-8\ln2\operatorname{Li}_4\left(\frac12\right)-8\operatorname{Li}_5\left(\frac12\right)}$$




where $H_n^{(q)}=\sum_{k=1}^n\frac{1}{k^q}$ is the generalized harmonic number, $\operatorname{Li}_r(x)=\sum_{n=1}^\infty\frac{x^n}{n^r}$ is the polylogarithm function and $\zeta$ is the Riemann zeta function.



This problem is proposed by Cornel with no solution submitted.
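
Before hunting for a proof, the claim can at least be checked numerically. A sketch using mpmath (the partial sum converges slowly, with a tail roughly of size $\log N/N$, so only a few digits of agreement should be expected at this cutoff, and the loop takes a little while):

from mpmath import mp, mpf, zeta, log, polylog

mp.dps = 30
N = 200000

# Left side: partial sum of H_{2n} * H_n^{(2)} / (2n+1)^2 with running harmonic numbers.
H, H2, s = mpf(0), mpf(0), mpf(0)
for n in range(1, N + 1):
    H += mpf(1) / (2 * n - 1) + mpf(1) / (2 * n)   # H_{2n}
    H2 += mpf(1) / n ** 2                          # H_n^{(2)}
    s += H * H2 / (2 * n + 1) ** 2

# Right side: the claimed closed form.
L2 = log(2)
rhs = (mpf(4) / 3 * L2 ** 3 * zeta(2) - mpf(7) / 2 * L2 ** 2 * zeta(3)
       - mpf(21) / 16 * zeta(2) * zeta(3) + mpf(713) / 64 * zeta(5)
       - mpf(4) / 15 * L2 ** 5 - 8 * L2 * polylog(4, mpf(1) / 2)
       - 8 * polylog(5, mpf(1) / 2))

print(s)
print(rhs)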







My trial



By applying integration by parts we have



$$\int_0^1 x^{2n}(\operatorname{Li}_2(x)-\zeta(2))\ dx=-\frac{H_{2n}}{(2n+1)^2}-\frac{1}{(2n+1)^3}$$



now multiply both sides by $H_n^{(2)}$ then sum both sides from $n=1$ to $\infty$ we get




$$\int_0^1(\operatorname{Li}_2(x)-\zeta(2))\sum_{n=1}^\infty H_n^{(2)}x^{2n}\ dx=-\sum_{n=1}^\infty\frac{H_{2n}H_n^{(2)}}{(2n+1)^2}-\sum_{n=1}^\infty\frac{H_n^{(2)}}{(2n+1)^3}$$



$$\int_0^1\frac{(\operatorname{Li}_2(x)-\zeta(2))\operatorname{Li}_2(x^2)}{1-x^2}\ dx=-\sum_{n=1}^\infty\frac{H_{2n}H_n^{(2)}}{(2n+1)^2}-\color{blue}{\sum_{n=1}^\infty\frac{H_n^{(2)}}{(2n+1)^3}}$$



I managed to find the blue sum using Abel's summation. As for the integral, I tried integration by parts, but it is still resistant.




QUESTION



Any idea how to crack the integral or a different approach to find the target sum?





Thanks.

real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...