Sunday, 30 November 2014

elementary set theory - Prove that [0,1] is equivalent to (0,1) and give an explicit description of a 1-1 function from [0,1] onto (0,1)



The problem is stated as follows:




Show that there is a one-to-one correspondence between the points of the closed interval $[0,1]$ and the points of the open interval $(0,1)$. Give an explicit description of such a correspondence.




Now, I think I can prove the first part of the problem by demonstrating the following:




Define $f: (0,1) \to \mathbb{R}$ as follows.



For $n \in \mathbb{N}$, $n \geq 2$, $f(\frac{1}{n}) = \frac{1}{n-1}$,
and for all other $x \in (0,1)$, $f(x) = x$.




  1. Prove that $f$ is a $1-1$ function from $(0,1)$ onto $(0,1]$


  2. Slightly modify the above function to prove that $[0,1)$ is equivalent to $[0,1]$


  3. Prove that $[0,1)$ is equivalent to $(0,1]$





Since the "equivalent to" relation is both symmetric and transitive, it should follow that $[0,1]$ is equivalent to $(0,1)$. Hence, there does exist a one-to-one correspondence between $[0,1]$ and $(0,1)$.



I have no trouble with the above. My problem is in "finding an explicit description of such a correspondence." Can I modify the above function, or will that not suffice?


Answer



Steps 2 and 3 are not necessary. The function $g:(0,1] \to [0,1]$ defined by $g(1) = 0$ and $g(x) = f(x)$ if $x \neq 1$ is a bijection. This shows that $(0,1]$ is equivalent to $[0,1]$ and, by transitivity, that $(0,1)$ is equivalent to $[0,1]$. Furthermore, the function $g \circ f$ is a one-to-one correspondence between $(0,1)$ and $[0,1]$ that you can describe explicitly.
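If it helps to see the composition concretely, here is a hedged sketch in Python using exact rationals (restricted to rational inputs, which is enough to exercise the special points $1/n$; the function names follow $f$, $g$, $h=g\circ f$ from the answer):

```python
from fractions import Fraction

def f(x):
    # f : (0,1) -> (0,1], sends 1/n to 1/(n-1) and fixes every other point
    if x.numerator == 1 and x.denominator >= 2:
        return Fraction(1, x.denominator - 1)
    return x

def g(y):
    # g : (0,1] -> [0,1], sends 1 to 0 and agrees with f elsewhere
    return Fraction(0) if y == 1 else f(y)

def h(x):
    # h = g o f : an explicit bijection from (0,1) onto [0,1]
    return g(f(x))
```

For instance $h(1/2)=0$, $h(1/3)=1$, $h(1/4)=1/2$, and $h$ fixes every point that is not a unit fraction.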


sequences and series - What value does $\sum_{n=1}^{\infty} \dfrac{1}{4n^2+16n+7}$ converge to?



What value does



$$\sum_{n=1}^{\infty} \dfrac{1}{4n^2+16n+7}$$
converge to?




Ok so I've tried changing the sum to:



$$\sum_{n=1}^{\infty} \left(\dfrac{1}{6(2n+1)}-\dfrac{1}{6(2n+7)}\right)$$



and then writing out some values:
$$\frac16\cdot\left(\frac13+\frac15+\frac17+\dots+\frac1{2N+1}\right)-\frac16\cdot\left(\frac19+\frac1{11}+\frac1{13}+\dots+\frac1{2N+7}\right)$$



but I don't know what else I can do to finish it! Any hint or solution?


Answer




Hint: Let's look at the $100$th partial sum. It's good to get some concreteness.



$$\frac{1}{6}\left(\frac{1}{3}+\frac{1}{5}+\cdots+\frac{1}{201}\right)-\frac{1}{6}\left(\frac{1}{9}+\cdots+\frac{1}{205}+\frac{1}{207}\right).$$



We have a bunch of terms that are repeated: $\frac{1}{9}+\cdots+\frac{1}{201}$ exists in each bracketed portion, so we can simply cancel all of them out to get



$$\frac{1}{6}\left(\frac{1}{3}+\frac{1}{5}+\frac{1}{7}-\frac{1}{203}-\frac{1}{205}-\frac{1}{207}\right).$$



Can you see how to use this line of reasoning to get the answer?
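The telescoping can be confirmed in exact arithmetic; this sketch compares direct partial sums against the surviving head and tail terms (from which the limit $\frac16\left(\frac13+\frac15+\frac17\right)=\frac{71}{630}$ follows):

```python
from fractions import Fraction

def partial_sum(N):
    # direct partial sum of 1/(4n^2+16n+7) = 1/((2n+1)(2n+7))
    return sum(Fraction(1, 4*n*n + 16*n + 7) for n in range(1, N + 1))

def telescoped(N):
    # after cancellation, only three head and three tail terms survive
    head = Fraction(1, 3) + Fraction(1, 5) + Fraction(1, 7)
    tail = Fraction(1, 2*N + 3) + Fraction(1, 2*N + 5) + Fraction(1, 2*N + 7)
    return (head - tail) / 6
```

The tail terms vanish as $N\to\infty$, so the series converges to $\frac{71}{630}$.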


Do I Need To Find All Functions Satisfying A Given Equation In Such Cases?



I am trying to do an exercise from Venkatachala's book on functional equations (specifically exercise 2.5), which is the following:




Let $f: \mathbb{N} \rightarrow \mathbb{N}$ be a function such that





  • $f(m) < f(n)$ whenever $m < n$;

  • $f(2n) = f(n) + n$ for all $n \in \mathbb{N}$; and

  • $n$ is a prime number whenever $f(n)$ is a prime number.



Find $f(2001)$.




We can see by sheer inspection that $f(n) = n$ works. Suppose that I manage to prove via induction that $f(n)=n$ works. But is that enough to conclude that $f(2001) = 2001$ (the induction didn't prove that it is the only such function)? Is it necessary to find all functions that satisfy this (I don't think there are more functions that satisfy it, by the way)?



Answer



As TrostAft noted, the formulation of the problem is slightly unclear. It is not totally trivial that only one function may satisfy the conditions, so it is not clear that there is only one possible value for $f(2001)$. So asking for it as if it were a unique value is strange.



That said, it is pretty easy to find one function that satisfies it (as you did), so the question would be over if you assume that "Since it asks for one value, it can only be one value, so it must be 2001." So let's do the full work and find all values, which (as is to be expected) comes to finding all functions that satisfy the conditions.



Then you use the following idea, that is in some sense what TrostAft wrote:



For any $n\ge 1$, consider the $n+1$ function values $f(n), f(n+1),\ldots,f(2n-1),f(2n)$. We know $f(n) < f(n+1) <\ldots < f(2n-1) < f(2n) = f(n) + n$ from the first and second conditions.



So those $n+1$ function values (positive integers) come from a set of exactly $n+1$ values ($f(n),f(n)+1,\ldots,f(n)+n$), so they must be exactly those integers, in that order: $f(n+1)=f(n)+1, f(n+2)=f(n)+2, \ldots, f(2n-1)=f(n)+n-1, f(2n)=f(n)+n$.




So, for each $n$ we get $f(n+1)=f(n)+1$. In other words, any possible $f$ must be of the form $f(n)=n+c$ for some constant $c$.



Now we need to use that strange third condition to find which values of $c$ are possible. It's easy to see that this is only possible if $c=0$.



Otherwise, for each prime $p > c$, the number $p-c$ would also have to be prime (because $f(p-c)=p$ and the third condition), and that is impossible, as the gaps between consecutive primes can be arbitrarily large (none of the $k-1$ numbers $k!+2, k!+3,\ldots,k!+k$ is prime).



So only $c=0$ is possible and $f(n)=n$ is the only function satisfying the 3 conditions.
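The prime-gap step can be illustrated by brute force: for any shift $c \ge 1$, a counterexample to the third condition shows up quickly (the helper names below are mine, not from the source):

```python
def is_prime(m):
    # trial division; fine for the small ranges used here
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def violates_condition3(c, limit=10_000):
    # for f(n) = n + c, look for an n with f(n) prime but n composite
    for n in range(2, limit):
        if is_prime(n + c) and not is_prime(n):
            return n
    return None
```

For $c=0$ no violation can exist (then $f(n)=n$), while every $c\ge1$ fails quickly; e.g. $c=1$ fails at $n=4$, since $f(4)=5$ is prime but $4$ is not.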


elementary number theory - Prove that $\gcd(a+b, a-b) = \gcd(2a, a-b) = \gcd(a+b, 2b)$



Question:




Prove that $\gcd(a+b, a-b) = \gcd(2a, a-b) = \gcd(a+b, 2b)$



My attempt:



First we prove $\gcd(a+b, a-b) = \gcd(2a, a-b)$.



Let $ d = \gcd(a+b, a-b)$ and $ \ e = \gcd(2a, a-b)$



$ d|a+b$ and $ \ d|a-b \implies \exists m,n \in \mathbb{Z}$ such that $ \ a+b = dm$ and $\ a-b = dn \implies a + a -dn = dm \implies 2a = dm + dn \implies 2a = d(m+n) \implies d|2a$




So, $ \ d|a-b$ and $ \ d |2a \implies d\le e$



$e|2a$ and $ \ e|a-b \implies \exists m,n \in \mathbb{Z}$ such that $ \ 2a = em$ and $\ a-b = en \implies 2a - (a-b) = a + b = em-en = e(m-n) \implies e|(a+b)$



So, $ \ e|(a-b)$ and $ \ e |(a+b) \implies e\le d$



Hence $ e =d$



Similarly, if I prove that $\gcd(a+b, a-b) = \gcd(a+b, 2b)$, will the proof be complete?




I am not quite sure if this is the correct way to prove the problem. My professor used a similar approach to prove that $ \ \gcd(a,b) = \gcd( a- kb, b)$.


Answer



Your approach is correct; however, a few steps can be shortened.



Let $d=\gcd(a+b,a-b)$, then $d | a+b$ and $d | a-b$, thus $d$ divides all linear combinations of $a+b$ and $a-b$, in particular $d|(a+b)-(a-b)=2b$ and $d|(a+b)+(a-b)=2a$. Thus $d$ divides both $2a$ and $2b$.



Now you can take it from here.
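A quick spot check of the three-way identity (using Python's `math.gcd`, which treats negative arguments by absolute value):

```python
from math import gcd

def identity_holds(a, b):
    # gcd(a+b, a-b) = gcd(2a, a-b) = gcd(a+b, 2b)
    return gcd(a + b, a - b) == gcd(2 * a, a - b) == gcd(a + b, 2 * b)
```

Running this over a small grid of integers, including negatives and zero, turns up no counterexample, as the proof predicts.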


calculus - What is the maximum volume of an equilateral triangular prism inscribed in a sphere of radius 2?

What is the maximum volume of an equilateral triangular prism inscribed in a sphere of radius 2?

Since the volume of an equilateral triangular prism is $\frac{\sqrt3}{4}a^2h$, where $a$ is the side length of the base triangle and $h$ is the height of the prism, how can I express this volume in terms of the radius of the sphere, so that I can differentiate it and set the derivative equal to zero? Thanks.
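The question is unanswered in the post; one hedged way to set it up (assuming the prism sits symmetrically, with its axis through the centre): the circumradius of the equilateral base is $a/\sqrt3$, so $R^2 = a^2/3 + h^2/4$, giving $a^2 = 3\left(R^2-\frac{h^2}{4}\right)$ and $V(h)=\frac{3\sqrt3}{4}\left(R^2-\frac{h^2}{4}\right)h$. A numeric maximisation with $R=2$:

```python
import math

R = 2.0  # sphere radius from the problem

def volume(h):
    # a^2 = 3(R^2 - h^2/4), from the circumradius relation a/sqrt(3)
    a_sq = 3.0 * (R * R - h * h / 4.0)
    return math.sqrt(3.0) / 4.0 * a_sq * h

# fine grid search over 0 < h < 2R
best_h = max((i * (2.0 * R) / 100_000 for i in range(1, 100_000)), key=volume)
```

The search lands at $h = 4/\sqrt3 \approx 2.309$ with $V = 8$, which matches setting $V'(h)=0$ by hand.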

complex analysis - Can we use analytic continuation to obtain $\sum_{n=1}^\infty n = b$, $b\neq -\frac{1}{12}$

Intuitive question




It is a popular math fact that the sum definition of the Riemann zeta function:
$$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} $$
can be extended to the whole complex plane (except $s=1$) to obtain $\zeta(-1)=-\frac{1}{12}$. The right-hand side of the above equation at $s=-1$ becomes the sum of the natural numbers, so in some sense we have obtained a value for it. My question is: does this value depend on the choice of the Riemann zeta function as the function to be analytically continued, or do we always get $-\frac{1}{12}$?



Proper formulation



let $(f_n:D\subset \mathbb{C}\rightarrow \mathbb{C})_{n\in \mathbb{N}}$ be a sequence of functions and $a \in \mathbb{C}$ with $\forall n\in \mathbb{N}: f_n(a) = n$ and
$$f(z):=\sum_{n=0}^\infty f_n(z)$$
convergent on a part of the complex plane, such that it can be analytically continued to a part of the plane that contains $a$. Does it then follow that, under this continuation, $f(a)=-\frac{1}{12}$ and why (or can you give a counterexample)?




Examples




  • The case of the Riemann zeta function is the case where $\forall n
    \in \mathbb{N}: f_n(s) = \frac{1}{n^s}$ and $a=-1$

  • The case where $\forall n \in \mathbb{N}: f_n(z) = \frac{n}{z^n}$ and $a=1$ does yield the sum of all natural numbers, but its continuation $\frac{z}{(z-1)^2}$ has a pole at $a$.
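The second example is easy to sanity-check numerically away from the pole, e.g. at $z=2$, where $\sum_{n\ge1} n/2^n = 2$ agrees with $\frac{z}{(z-1)^2}$ (a hedged sketch, not from the source):

```python
def partial(z, N):
    # partial sums of sum_{n>=1} n / z^n
    return sum(n / z**n for n in range(1, N + 1))

def closed(z):
    # the analytic continuation z / (z - 1)^2
    return z / (z - 1.0) ** 2
```

The tail decays geometrically, so even modest $N$ matches the closed form to machine precision for $|z|\ge 2$.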

real analysis - Bijection from $\mathbb{N}$ to $\mathbb{N}$ such that $f(n) \geq n \text{ for large } n$ or $f(n) \leq n \text{ for large } n$ holds.



Let us consider an arbitrary bijective map $f: \mathbb{N} \to \mathbb{N}$. Then which one of the following is correct? \begin{align} & f(n) \geq n \text{ for large } n \\ & f(n) \leq n \text{ for large } n \end{align} I know that $f(n)=n$ is a bijection. But if $f$ is any bijection other than the identity, which of the above must hold? It may happen that each holds for a different $f$. Please give me some examples, or a proof, of the above.


Answer



None! We can define $f$ in a way that it "swaps" every pair of numbers: $f(1)=2$, $f(2)=1$, $f(3)=4$, $f(4)=3$, ... This is a bijection, but you have $f(n)=n+1>n$ for $n$ odd, whereas $f(n)=n-1<n$ for $n$ even.



Hope this helps!
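The swapping bijection is easy to check programmatically on an initial segment:

```python
def swap(n):
    # pairs (1,2), (3,4), (5,6), ...: odd n maps up, even n maps down
    return n + 1 if n % 2 == 1 else n - 1
```

It is its own inverse, hits every positive integer exactly once, and exceeds $n$ on the odds while falling short on the evens, so neither of the two alternatives holds eventually.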


elementary number theory - Notation for "the highest power of $p$ that divides $n$"




If $p$ is a prime and $n$ an integer, is there a standard or commonly used notation for "the highest power of $p$ that divides $n$"?



It's a concept that is often used repeatedly in number-theoretic proofs (see for example this answer), and a convenient notation could make such proofs much more concise. This answer uses the notation $\{n,p\}$, which is convenient but seems not to be widely used.



Edit: Prompted by Thomas Kildetoft's comment below, by a convenient notation I mean one which facilitates not only simple statements such as:




  • $m$ is the highest power of $p$ that divides $n$.




but also more complex statements such as:




  • $m$ = (The highest power of $p$ that divides $n$) + 1


Answer



Yes, there is a standard notation, namely $p^e\mid\mid n$, which says that $p^e$ is the largest power of $p$ that divides $n$ (i.e. $e$ is the exact exponent of $p$ in $n$).



Reference: Martin Aigner, Number Theory.




Edit: For more advanced purposes, like $p$-adic numbers etc., a common notation is also $\nu_p(n)$, which also then appears in more elementary context. For elementary number theory I have seen $p^e\mid\mid n$ more often, though.
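Either notation is straightforward to compute; a small sketch of $\nu_p(n)$, i.e. the $e$ with $p^e\mid\mid n$:

```python
def nu(p, n):
    # the exponent e with p^e || n  (requires n != 0)
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e
```

For example $48 = 2^4 \cdot 3$, so $\nu_2(48)=4$ and $\nu_3(48)=1$, and the "more complex statements" from the question become plain arithmetic on $\nu_p(n)$.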


calculus - Compute $\sum_{n=1}^\infty\frac{H_n^2H_n^{(2)}}{n^3}$

How to prove




$$\sum_{n=1}^\infty\frac{H_n^2H_n^{(2)}}{n^3}=\frac{19}{2}\zeta(3)\zeta(4)-2\zeta(2)\zeta(5)-7\zeta(7)\ ?$$
where $H_n^{(p)}=1+\frac1{2^p}+\cdots+\frac1{n^p}$ is the $n$th generalized harmonic number of order $p$.





This series is very advanced and can be found evaluated in the book (Almost) Impossible Integrals, Sums and Series page 300 using only series manipulations, but luckily I was able to evaluate it using only integration, some harmonic identities and results of easy Euler sums.



Can we prove the equality above in different methods besides series manipulation and the idea of my solution below? All approaches are highly appreciated.



Solution is posted in the answer section.



Thanks

Saturday, 29 November 2014

limits - $\mathop {\lim }\limits_{x \to {0^ + }} \left( {\frac{1}{x} - \frac{1}{{\sqrt x }}} \right)$?

$\mathop {\lim }\limits_{x \to {0^ + }} \left( {\frac{1}{x} - \frac{1}{{\sqrt x }}} \right) = \mathop {\lim }\limits_{x \to {0^ + }} \frac{{1/\sqrt x - 1}}{{\sqrt x }} = \mathop {\lim }\limits_{x \to {0^ + }} \frac{{ - \frac{1}{2}{x^{ - 3/2}}}}{{\frac{1}{2}{x^{ - 1/2}}}} = \mathop {\lim }\limits_{x \to {0^ + }} \left( { - \frac{1}{x}} \right) = - \infty $. However, the answer is $\infty$. Can you help me spot my error? Thanks!
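A hedged numeric look agrees with the stated answer $+\infty$, and points at the error: as $x\to0^+$ the rewritten quotient $\frac{1/\sqrt x - 1}{\sqrt x}$ has the form $\infty/0^+$, which is not an indeterminate form, so L'Hopital's rule does not apply at that step:

```python
def g(x):
    # the original expression, 1/x - 1/sqrt(x), for x > 0
    return 1.0 / x - 1.0 / x ** 0.5
```

The values grow without bound as $x$ shrinks, consistent with the limit being $+\infty$.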

derivatives - Differentiate $\frac{x^3}{{(x-1)}^2}$





Find $\frac{d}{dx}\frac{x^3}{{(x-1)}^2}$




I start by finding the derivative of the denominator, since I have to use the chain rule.



Thus, I make $u=x-1$ and $g=u^{2}$. I find that $u'=1$ and $g'=2u$. I then multiply the two together and substitute $u$ in to get:



$$\frac{d}{dx}(x-1)^{2}=2(x-1)$$



After having found the derivative of the denominator I find the derivative of the numerator, which is $3x^2$. With the two derivatives found I apply the quotient rule, which states that




$$\frac{d}{dx}\left(\frac{u(x)}{v(x)}\right)=\frac{u'v-uv'}{v^2}$$



and substitute in the numbers



$$\frac{d}{dx}\frac{x^3}{(x-1)^2}=\frac{3x^2(x-1)^2-2x^3(x-1)}{(x-1)^4}$$



Can I simplify this any further? Is the derivation correct?


Answer



You're mixing the product rule and the quotient rule. You can apply each of them, but not simultaneously.





  • by the product rule: remember $\dfrac{\mathrm d}{\mathrm dx}\Bigl(\dfrac1{x^n}\Bigr)=-\dfrac n{x^{n+1}}$, so
    \begin{align}
    \dfrac{\mathrm d}{\mathrm dx}\biggl(\dfrac{x^3}{(1-x)^2}\biggr)&=\dfrac{\mathrm d}{\mathrm dx}(x^3)\cdot\dfrac1{(1-x)^2}+x^3\dfrac{\mathrm d}{\mathrm dx}\dfrac1{(1-x)^2} \\
    &= \frac{3x^2}{(1-x)^2}+\frac{2x^3}{(1-x)^3} =\frac{x^2\bigl(3(1-x)+2x\bigr)}{(1-x)^3}\\&
    =\frac{x^2(3-x)}{(1-x)^3}.
    \end{align}

  • by the quotient rule:
    $$\dfrac{\mathrm d}{\mathrm dx}\biggl(\dfrac{x^3}{(1-x)^2}\biggr)=\frac{3x^2(1-x)^2+x^3\cdot2(1-x)}{(1-x)^4}=\frac{x^2\color{red}{(\not1-\not x)}\bigl(3(1-x)+2x\bigr)}{( 1-x)^{\color{red}{\not4}\,3}}=\dots$$


  • Since there are exponents, logarithmic differentiation may help make it shorter. It's simpler to use it with Lagrange's notations: set $f(x)=\dfrac{x^3}{(1-x)^2}$. Then
    $$\frac{f'(x)}{f(x)}=\frac 3x+\frac2{1-x}=\frac{3-x}{x(1-x)}, \quad\text{so }\;f'(x)=\frac{f'(x)}{f(x)}\cdot f(x)=\dotsm $$
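All three routes should give the same function; a central-difference spot check of the simplified result (note the answer works with $(1-x)^2$, whose square agrees with the question's $(x-1)^2$):

```python
def f(x):
    # the function being differentiated
    return x**3 / (1.0 - x) ** 2

def fprime(x):
    # the simplified derivative x^2 (3 - x) / (1 - x)^3
    return x * x * (3.0 - x) / (1.0 - x) ** 3

def central_diff(func, x, h=1e-6):
    # symmetric finite difference, O(h^2) accurate
    return (func(x + h) - func(x - h)) / (2.0 * h)
```

The numeric derivative matches `fprime` at sample points on both sides of the singularity at $x=1$.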


Friday, 28 November 2014

integration - How to prove this: $\lim_{n\to\infty}\int_{0}^{\frac{\pi}{2}}\sin{x^n}\,dx=0$

show that

$$\lim_{n\to\infty}\int_{0}^{\dfrac{\pi}{2}}\sin{x^n}dx=0$$



I have see this similar problem
$$\lim_{n\to\infty}\int_{0}^{\dfrac{\pi}{2}}\sin^n{x}dx=0$$
Proof:
For every $\xi>0$, pick $0<\delta<\xi/2$; then there is $N$ such that $0<\sin^n(\pi/2-\delta)<\xi/\pi$ for $n\ge N$.
then we have
$$\int_{0}^{\pi/2}\sin^n{x}dx=\left(\int_{0}^{\pi/2-\delta}+\int_{\pi/2-\delta}^{\pi/2}\right)\sin^n{x}dx=I_{1}+I_{2}$$
then
$$|I_{1}|\le\left(\sin(\pi/2-\delta)\right)^n(\pi/2-\delta)<\xi/\pi\cdot\pi/2=\xi/2$$

and
$$|I_{2}|\le\left(\pi/2-(\pi/2-\delta)\right)=\delta<\xi/2$$
and This problem have many other methods,



But for this $$\lim_{n\to\infty}\int_{0}^{\dfrac{\pi}{2}}\sin{x^n}dx=0$$
I can't prove it. Thank you.

combinatorics - Combinatorial identity: summation of stars and bars

I noticed that the following identity for a summation of stars and bars held for specific $k$ but I was wondering if someone could provide a general proof via combinatorics or algebraic manipulation. I wouldn't be surprised if this is a known result; it looks very similar to the Hockey Stick identity.



$$\sum_{i=0}^k {d+i-1 \choose d-1} = {d+k \choose k}$$



The left can be immediately rewritten as $\sum_{i=0}^k {d+i-1 \choose i}$ if it helps inspire intuition.
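The identity is quick to spot-check with exact binomials:

```python
from math import comb

def lhs(d, k):
    # sum of the "stars and bars" counts C(d+i-1, d-1) for i = 0..k
    return sum(comb(d + i - 1, d - 1) for i in range(k + 1))
```

Equality with $\binom{d+k}{k}$ holds on the whole grid tested, consistent with the hockey-stick-style telescoping.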

limits - Avoid L'Hopital's rule




$$\lim_{x\to 0} {\ln(\cos x)\over \sin^2x} = ?$$



I can solve this by using L'Hopital's rule but how would I do this without this?


Answer



$$\frac{\log\left(\cos\left(x\right)\right)}{\sin^{2}\left(x\right)}=\frac{1}{2}\frac{\log\left(1-\sin^{2}\left(x\right)\right)}{\sin^{2}\left(x\right)}=-\frac{\sin^{2}\left(x\right)+O\left(\sin^{4}\left(x\right)\right)}{2\sin^{2}\left(x\right)}\stackrel{x\rightarrow0}{\rightarrow}-\frac{1}{2}.$$
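A numeric spot check of the limit (a hedged sketch; the expansion above predicts the ratio approaches $-\frac12$ from below):

```python
import math

def ratio(x):
    # log(cos x) / sin^2 x, defined for 0 < x < pi/2
    return math.log(math.cos(x)) / math.sin(x) ** 2
```

Evaluating at shrinking $x$ shows the ratio settling at $-\frac12$, matching the series argument.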


Thursday, 27 November 2014

logic - Mathematical induction question: why can we "assume $P(k)$ holds"?



So I see that the process for proof by induction is the following
(using the following statement for sake of example:
$P(n)$ is the formula for the sum of natural numbers $\leq n$: $0 + 1 + \cdots +n = (n(n+1))/2$ )





  1. Show that the base case is trivially true: $P(0)$: let $n = 0$. Then $0 = (0(0+1))/2$, which is true.

  2. Show that if $P(k)$ holds then $P(k+1)$ also holds. Assume $P(k)$ holds for some unspecified value of $k$. It must be then shown that $P(k+1)$ holds.



This is the part I don't get: 'Assume $P(k)$ holds'



We are just assuming something is true and then 'proving' that something else is true based on that assumption. But we never proved the assumption itself, so the argument doesn't seem to hold up to me. Obviously proof by induction works, so am I viewing the process incorrectly?


Answer



The "inductive step" is a proof of an implication:

$$\mathbf{if}\ P(k),\ \mathbf{then}\ P(k+1).$$
So we are trying to prove an implication.



When proving an implication, instead of proving the implication, we usually assume the antecedent (the clause after "if" and before "then"), and then use that to prove the consequent (the clause after "then"). There are several reasons why this is reasonable, and one reason why it is valid.



Reasonable:




  1. An implication, $P\to Q$, is false only in the case where $P$ is true but $Q$ is false. In any other combination of "truth values", the implication is true. So in order to show that $P\to Q$ is valid (always true), it is enough to consider the case when $P$ is already true: if $P$ is false, then the implication will be true regardless of what $Q$ is.


  2. More informally: in proving $P\to Q$, we can say: "if $P$ is false, then it doesn't matter what happens to $Q$, and we are done; if $P$ is true, then..." and proceed from there.





Why is it a valid method of proof?



There is something called the Deduction Theorem. What it says is that if, from the assumption that $P$ is true, you can produce a valid proof that shows that $Q$ is true, then there is a recipe that will take that proof and transform it into a valid proof that $P\to Q$ is true. And, conversely, if you can produce a valid proof that $P\to Q$ is true, then from the assumption that $P$ is true you can produce a proof that shows that $Q$ is true.



The real interesting part of the Deduction Theorem is the first part, though: that if you can produce a proof of $Q$ from the assumption that $P$ is true, then you can produce a proof of $P\to Q$ without assuming anything about $P$ (or about $Q$). It justifies the informal argument given above.



That's why, in mathematics, whenever we are trying to prove an implication, we always assume the antecedent is already true: the Deduction Theorem tells us that this is a valid method of proving the implication.


calculus - How to find this sum: $\lim_{n\to\infty}\sum_{k=1}^{n}\left(\frac{1}{2^k}\sum_{i=0}^{2^{k-1}-1}\ln{\left(\frac{2^k+2+2i}{2^k+1+2i}\right)}\right)$

Find the sum of the limit
$$\lim_{n\to\infty}\sum_{k=1}^{n}\left(\frac{1}{2^k}\sum_{i=0}^{2^{k-1}-1}\ln{\left(\frac{2^k+2+2i}{2^k+1+2i}\right)}\right)$$



My try: since
$$\sum_{i=0}^{2^{k-1}-1}\ln{\left(\dfrac{2^k+2+2i}{2^k+1+2i}\right)}=\sum_{i=0}^{2^{k-1}-1}\left(\ln{(2^k+2+2i)}-\ln{(2^k+1+2i)}\right)$$




My friend tells me this sum has an analytical solution. But
I can't find it. Thank you.

sequences and series - Closed form for the finite sum $\sum_{r=1}^n \frac{1}{r^2}$


What will be the value of
$$\sum_{r=1}^n \frac{1}{r^2}$$

in terms of finite $n$?




I tried to solve it using the $V_n$ method but couldn't figure out how to convert this series into a telescoping series.



Please anyone help me to solve it.
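For what it's worth, the partial sums are the generalized harmonic numbers $H_n^{(2)}$, which have no elementary telescoping closed form; they increase towards $\pi^2/6$. A quick numeric illustration (a hedged sketch, not from the post):

```python
import math

def H2(n):
    # generalized harmonic number of order 2, i.e. the partial sum itself
    return sum(1.0 / (r * r) for r in range(1, n + 1))
```

The tail beyond $n$ is of size roughly $1/n$, so large $n$ gets close to $\pi^2/6 \approx 1.6449$.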

calculus - Obtain magnitude of square-rooted complex number



I would like to obtain the magnitude of a complex number of this form:



$$z = \frac{1}{\sqrt{\alpha + i \beta}}$$



By a simple test on WolframAlpha it should be




$$\left| z \right| = \frac{1}{\sqrt[4]{\alpha^2 + \beta^2}}$$



The fact is that if I try to cancel the root in the denominator I still have a troublesome expression at the numerator:



$$z = \frac{\sqrt{\alpha + i \beta}}{\alpha + i \beta}$$



And this alternative form does not seem useful either:



$$z = \left( \alpha + i \beta \right)^{-\frac{1}{2}}$$




If WolframAlpha gave the correct result, how to prove it?


Answer



If you convert the number to the complex exponential form, the solution is easy.



Let $s = \alpha + \beta i = r e^{\theta i}$, then $z = s^{-\frac{1}{2}} = r^{-\frac{1}{2}} e^{-\frac{\theta}{2}i}$. The conjugate (written with an overbar) of a complex exponential $re^{\theta i}$ is just $re^{-\theta i}$, so calculating $z\bar{z}$ leads to the exponential terms cancelling and leaves $z\bar{z} = r^{-1}$. Now $r = \sqrt{\alpha^2 + \beta^2}$ and you need $|z| = \sqrt{z\bar{z}}$.
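A numeric confirmation of $|z| = (\alpha^2+\beta^2)^{-1/4}$ using the principal square root:

```python
import cmath

def magnitude(alpha, beta):
    # |1 / sqrt(alpha + i beta)| via the principal complex square root
    return abs(1.0 / cmath.sqrt(alpha + 1j * beta))
```

Since $|\sqrt{s}| = |s|^{1/2} = (\alpha^2+\beta^2)^{1/4}$ regardless of which branch is taken, the reciprocal has modulus $(\alpha^2+\beta^2)^{-1/4}$, exactly as WolframAlpha reported.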


calculus - Separation of variables justification?

I haven't found a similar question on Math SE, but I may not have looked enough because I find it hard to believe someone hasn't already asked this. Anyways, here goes:



I'm studying mathematics, but one of the courses is a course on physics. So, since my university chooses not to give courses on differential equations until we have a solid knowledge of Algebra, Geometry, Analysis, Topology, etc., the physics course includes a small supplement on ODE's. To my dismay though, one of the first things we learned was that we could solve $$\frac{dy}{dx}=f(y)g(x)$$
By multiplying by $dx$ on both sides, dividing by $f(y)$, and integrating on the left with respect to $y$ and on the right with respect to $x$. I have no clue how this even makes sense, as $dy/dx$ and $dx$ or $dy$ in an integral are just notations. Could someone elaborate a justification for this process? As a side note, is there any way to discuss these things intrinsically? Or is it like calculus where we always talk about $f(x)$ and use the canonical basis?

Wednesday, 26 November 2014

complex numbers - Find all $z\in\Bbb C$ such that $|z+1|+ |z-1|=4$

I'd like to find all points of the complex plane which satisfy



$$|z+1| + |z-1| = 4. $$



I know this is an ellipse with foci $1$ and $-1$, and I know that the answer is




$$3 x^2+4 y^2 \leq 12,$$



but I can't find a correct way of getting there.



First, I write $z$ as $x + i y$ and square both sides of the equation, then divide by 2 and get



$$x^2+y^2+1+\sqrt{(x-1)^2+y^2} \sqrt{(x+1)^2+y^2} =8.$$



Pass $x^2+y^2+1$ to the RHS (right hand side), then




$$\sqrt{(x-1)^2+y^2} \sqrt{(x+1)^2+y^2} =7-x^2-y^2.\tag{1} $$



Now, I would have to square both sides of the equation like this,



$$((x-1)^2+y^2)((x+1)^2+y^2) = (7-x^2-y^2)^2, \tag{2}$$



but the problem is that I cannot ensure that the RHS is nonnegative, so there could be a value of $z$ such that (2) is satisfied but not (1), i.e. there could exist $z=x + i y$ which satisfies



$$\sqrt{(x-1)^2+y^2} \sqrt{(x+1)^2+y^2} =-(7-x^2-y^2)\tag{3} $$




in which case it satisfies (2) but not (1)!



So I would get an incorrect solution.
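For reassurance while chasing the algebra: points on the ellipse $\frac{x^2}{4}+\frac{y^2}{3}=1$ (foci $\pm1$, major axis $4$) do satisfy the original equation exactly. A parametric spot check:

```python
import math

def dist_sum(x, y):
    # |z + 1| + |z - 1| for z = x + iy
    return math.hypot(x + 1.0, y) + math.hypot(x - 1.0, y)

# points on 3x^2 + 4y^2 = 12: x = 2 cos t, y = sqrt(3) sin t
```

Every sampled parameter value returns a distance sum of $4$, consistent with the defining property $2a=4$ of this ellipse.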

To test the convergence of series 1



To test the convergence of the following series:




  1. $\displaystyle \frac{2}{3\cdot4}+\frac{2\cdot4}{3\cdot5\cdot6}+\frac{2\cdot4\cdot6}{3\cdot5\cdot7\cdot8}+...\infty $


  2. $\displaystyle 1+ \frac{1^2\cdot2^2}{1\cdot3\cdot5}+\frac{1^2\cdot2^2\cdot3^2}{1\cdot3\cdot5\cdot7\cdot9}+ ...\infty $



  3. $\displaystyle \frac{4}{18}+\frac{4\cdot12}{18\cdot27}+\frac{4\cdot12\cdot20}{18\cdot27\cdot36} ...\infty $




I cannot figure out the general term $u_n$ for these series (before I do any comparison/ratio test).



Any hints for these?


Answer




  I cannot figure out the general term $u_n$ for these series (before I do any comparison/ratio test).





For the first series, one can start from the fact that, for every $n\geqslant1$, $$u_n=\frac{2\cdot4\cdots (2n)}{3\cdot5\cdots(2n+1)}\cdot\frac1{2n+2}=\frac{(2\cdot4\cdots (2n))^2}{2\cdot3\cdot4\cdot5\cdots(2n)\cdot(2n+1)}\cdot\frac1{2n+2},$$ that is, $$u_n=\frac{(2^n\,n!)^2}{(2n+1)!}\cdot\frac1{2n+2}=\frac{4^n\,(n!)^2}{(2n+2)!}.$$ Similar approaches yield the two other cases.
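The closed form can be verified in exact arithmetic against the defining product:

```python
from fractions import Fraction
from math import factorial

def u_product(n):
    # (2·4···2n)/(3·5···(2n+1)) · 1/(2n+2), as in the answer
    r = Fraction(1)
    for j in range(1, n + 1):
        r *= Fraction(2 * j, 2 * j + 1)
    return r / (2 * n + 2)

def u_closed(n):
    # the claimed closed form 4^n (n!)^2 / (2n+2)!
    return Fraction(4**n * factorial(n) ** 2, factorial(2 * n + 2))
```

The two agree term by term, starting with $u_1 = \frac{2}{3\cdot4} = \frac16$.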


Non-standard analysis - infinitesimals and archimedean property

I got a question about infinitesimals in non-standard analysis. If I understand correctly, they are defined to be the number that is closest to zero.



However, at the same time, they satisfy all the properties of real numbers - so for example, let's call such an infinitesimal $\epsilon$. Then $2 \epsilon$ and $3 \epsilon$ are greater than $\epsilon$. And most importantly, if they satisfy the rules known from real numbers, then they should also satisfy the archimedean property that states that it's always possible to give a number closer to zero than given number.



I've heard non-standard analysis simplifies some proofs. Erm, how can we prove theorems about real numbers in a system that includes objects that don't satisfy the properties of real numbers (infinitesimals in this case, because they are defined to be the number nearest to zero, which in fact doesn't exist)?

calculus - Evaluating the integral $int_0^infty frac{x sin rx }{a^2+x^2} dx$ using only real analysis




Calculate the integral$$ \int_0^\infty \frac{x \sin rx }{a^2+x^2} dx=\frac{1}{2}\int_{-\infty}^\infty \frac{x \sin rx }{a^2+x^2} dx,\quad a,r \in \mathbb{R}. $$
Edit: I was able to solve the integral using complex analysis, and now I want to try and solve it using only real analysis techniques.


Answer



It looks like I'm too late but still I wanna join the party. :D



Consider the standard result (for $a, r > 0$)
$$
\int_0^\infty \frac{\cos rx}{x^2+a^2}\ dx=\frac{\pi e^{-ar}}{2a}.
$$


Differentiating both sides of the equation above with respect to $r$ yields
$$
\begin{align}
\int_0^\infty \frac{d}{dr}\left(\frac{\cos rx}{x^2+a^2}\right)\ dx&=\frac{d}{dr}\left(\frac{\pi e^{-ar}}{2a}\right)\\
-\int_0^\infty \frac{x\sin rx}{x^2+a^2}\ dx&=(-a)\frac{\pi e^{-ar}}{2a}\\
\Large\int_0^\infty \frac{x\sin rx}{x^2+a^2}\ dx&=\Large\frac{\pi e^{-ar}}{2}.
\end{align}
$$
Done! :)
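A hedged numeric check of the seed integral $\int_0^\infty \frac{\cos rx}{x^2+a^2}\,dx = \frac{\pi e^{-ar}}{2a}$ for $a, r > 0$ (plain trapezoid rule with truncation at $X$; the truncated tail is small since the integrand decays like $1/x^2$):

```python
import math

def cos_integral(a, r, X=1000.0, n=1_000_000):
    # trapezoid approximation of the truncated integral over [0, X]
    h = X / n
    s = 0.5 * (1.0 / (a * a) + math.cos(r * X) / (X * X + a * a))
    for i in range(1, n):
        x = i * h
        s += math.cos(r * x) / (x * x + a * a)
    return s * h
```

The numbers match $\frac{\pi e^{-ar}}{2a}$ to well within the truncation error for a couple of sample parameter choices.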


Why is it legitimate to solve the differential equation $\frac{dy}{dx}=\frac{y}{x}$ by taking $\int \frac{1}{y}\, dy=\int \frac{1}{x}\, dx$?




Answers to this question Homogeneous differential equation $\frac{dy}{dx} = \frac{y}{x}$ solution? assert that to find a solution to the differential equation $$\dfrac{dy}{dx} = \dfrac{y}{x}$$ we may rearrange and integrate $$\int \frac{1}{y}\ dy=\int \frac{1}{x}\ dx.$$ If we perform the integration we get $\log y=\log x+c$ or $$y=kx$$ for constants $c,k \in \mathbb{R}$. I've seen others use methods like this before too, but I'm unsure why it works.



Question: Why is it legitimate to solve the differential equation in this way?


Answer



You start with
$$
y'=\frac{y}{x}\implies \frac{y'}{y}=\frac{1}{x}\implies\int\frac{y'dx}{y}=\int \frac{dx}{x},
$$
and you make the change of variables in the first integral, which results in what you've written

$$
\int\frac{dy}{y}=\int \frac{dx}{x}
$$


Solving the Integral $\int_0^{\infty} \, \mathrm{arcsinh} \left(\frac{a}{\sqrt{x^2+y^2}} \right) \, \mathrm{cos}(b \, x) \,\mathrm{d}x$

Is there any possibily to solve the following integral



$$\int_0^{\infty} \, \mathrm{arcsinh} \left(\frac{a}{\sqrt{x^2+y^2}} \right) \, \mathrm{cos}(b \, x) \,\mathrm{d}x$$



with $a>0$, $y>0$ and $-\pi/2<\mathrm{arg}(b)<0$.



I assume the result is connected to Bessel and Struve functions. Thank you.



Edit:
Using integration by parts with




$$\int\mathrm{cos}(b \, x)\,\mathrm{d}x=\frac{\mathrm{sin}(b \, x)}{b}$$



$$\frac{\mathrm{d}}{\mathrm{d}x}( \mathrm{arcsinh} \left(\frac{a}{\sqrt{x^2+y^2}} \right))= - \frac{a \,x}{(x^2+y^2) \, \sqrt{x^2+y^2+a^2}}$$



and the limits



$$ \lim_{x\to0} \frac{\mathrm{sin}(b \, x)}{b} \, \mathrm{arcsinh} \left(\frac{a}{\sqrt{x^2+y^2}} \right) = 0$$
$$ \lim_{x\to\infty} \frac{\mathrm{sin}(b \, x)}{b} \, \mathrm{arcsinh} \left(\frac{a}{\sqrt{x^2+y^2}} \right) = 0 $$




Gives the integral



$$\int_0^{\infty} \, \frac{a \,x}{(x^2+y^2) \, \sqrt{x^2+y^2+a^2}} \, \frac{\mathrm{sin}(b \, x)}{b} \,\mathrm{d}x$$



if that makes anything simpler...



Edit2:



Mathematica tells me that the last integrand can be represented as the product of three G-functions. In the linked reference it is said that the integral of the product of three G-functions can be computed under certain restrictions. Sadly, it is not mentioned which restrictions. Does anybody know anything about this?




It would be:



$$\frac{1}{\sqrt{x^2+y^2+a^2}} = \frac{1}{\sqrt{\pi} \, \sqrt{y^2+a^2}} \,\mathrm{MeijerG}\left[\left\{\{\tfrac{1}{2} \},\{ \} \right\},\left\{\{0 \},\{ \} \right\},\tfrac{x^2}{y^2+a^2}\right]$$



$$\frac{1}{x^2+y^2} = \frac{1}{y^2} \,\mathrm{MeijerG}\left[\left\{\{0 \},\{ \} \right\},\left\{\{0 \},\{ \} \right\},\tfrac{x^2}{y^2}\right]$$



$$\mathrm{sin}(b\,x)= \sqrt{\pi} \, \mathrm{MeijerG}\left[\left\{\{ \},\{ \} \right\},\left\{\{\tfrac{1}{2} \},\{ 0\} \right\},\tfrac{x^2 \, b^2}{4}\right]$$



which finally results in




$\frac{a}{y^2 \, \sqrt{y^2+a^2}} \, \int_0^{\infty} \, x \, \mathrm{MeijerG}\left[\left\{\{\tfrac{1}{2} \},\{ \} \right\},\left\{\{0 \},\{ \} \right\},\tfrac{x^2}{y^2+a^2}\right] \, \mathrm{MeijerG}\left[\left\{\{0 \},\{ \} \right\},\left\{\{0 \},\{ \} \right\},\tfrac{x^2}{y^2}\right] \, \mathrm{MeijerG}\left[\left\{\{ \},\{ \} \right\},\left\{\{\tfrac{1}{2} \},\{0 \} \right\},\tfrac{x^2 \, b^2}{4}\right] \, \mathrm{d}x$



The $\mathrm{MeijerG}$ are defined according to Mathematica syntax. Or (I hope I converted this correctly)



$$\frac{a}{y^2 \, \sqrt{y^2+a^2}} \, \int_0^{\infty} \, x \, G^{1,1}_{1,1}\left(\begin{array}{c|c}\begin{matrix}\frac{1}{2}\\ 0 \end{matrix}&\frac{x^2}{y^2+a^2}\end{array}\right) \, G^{1,1}_{1,1}\left(\begin{array}{c|c}\begin{matrix}0\\ 0 \end{matrix}&\frac{x^2}{y^2}\end{array}\right)\, G^{1,0}_{0,2}\left(\begin{array}{c|c}\begin{matrix}-\\\frac{1}{2},\, 0\end{matrix}&\frac{x^2 \,b^2}{4}\end{array}\right) \, \mathrm{d}x$$

Tuesday, 25 November 2014

Pseudo-finite field vs Nonstandard finite field



Let $\mathbb{N}^*$ be a countable non-standard model of Peano arithmetic (PA) and let $\mathbb{Z}^*$ be the integers extended from $\mathbb{N}^*$. A non-standard finite field would be a ring $\mathbb{Z}^* /n^* \mathbb{Z}^*$ where $n^*$ is a non-standard prime number larger than any standard natural number.




How is a nonstandard finite field different from a pseudo-finite field? Can there be a mapping from one to the other?


Answer



The answer is negative. Not every pseudo-finite field, i.e. a model of the theory of finite fields, is of the form "nonstandard integers modulo a nonstandard prime". Every field of that form has characteristic zero. By contrast, there are pseudo-finite fields of positive characteristic:



http://www.logique.jussieu.fr/~zoe/papiers/Helsinki.pdf



(see page 17, example 5.1)


inequality - Prove $\ln(n) \lt n$ using only $\log(x^y) = y\log(x)$




Prove $$\ln(n) \lt n \;\;(n\in \mathbb N) $$
Using only the rule $$\log(x^y) = y\log(x) $$
I tried using any of the known inequalities (AM-GM, power mean, Titu's lemma, Hölder's, ...), but it seems I can't get anywhere near what is required.
Any hint?



$\mathbf {Edit} $
It was simpler than I thought, it's done with Bernoulli's Inequality.
$$e^n\gt 2^n = (1+1)^n \ge 1+n \gt n$$
$$\ln(e^n) \gt \ln(n)$$
$$\ln(n) \lt n$$


Answer




Your inequality could be generalized as $$\ln x \leq x-1, \quad x>0.$$ To prove it, you perhaps have to use calculus, not just elementary mathematics. Notice that, for $f(x)=\ln x-x+1$ $(x>0)$, $$f'(x)=\frac{1}{x}-1.$$ Thus $f'(x)>0$ when $0<x<1$, and $f'(x)<0$ when $x>1$. For this reason, $f(x)$ reaches its maximum value $f(1)=0$ at $x=1$; namely, $f(x) \leq 0$.



But if you have known Bernoulli's inequality, there indeed exists an elementary proof.



Since




$$(1+x)^n \geq 1+nx,(x \geq -1, n\in \mathbb{N_+}).$$





Let $x=e-1$. Then $$e^n \geq 1+n(e-1)=1+ne-n>n.$$



It follows that $$n>\ln n.$$
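Both the Bernoulli chain and the generalized inequality are easy to spot-check numerically (keeping $n$ small enough that $e^n$ stays in floating-point range):

```python
import math

def bernoulli_chain(n):
    # e^n > 2^n = (1+1)^n >= 1 + n > n, hence ln(n) < n
    return math.exp(n) > 2**n >= 1 + n > n
```

Every tested $n$ satisfies the chain, and $\ln n < n$ holds across a long initial segment, as the argument predicts.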


complex analysis - Integral of $\int_{-\infty}^{\infty} \frac {dk}{ik+1}$



I came across this integral today in the context of inverse fourier transforms:



$$ R(x)={1 \over 2\pi}\int_{-\infty}^{\infty} \frac {e^{ik(x-1)}}{ik+1}dk$$




I know the solution is supposed to be



$$ R(x) = \theta(x-1)e^{-(x-1)} $$



Where $\theta(x)$ is the Heaviside step function.



I have worked out the integral with contour integration and the residue theorem for $x \gt 1$ and $x \lt 1$, which work out as $e^{-(x-1)}$ and $0$ respectively.



My problem is at $x=1$, where I expect $R(1)={1 \over 2}$. The integral would then be:




$$R(1)={1 \over 2\pi}\int_{-\infty}^{\infty} \frac {dk}{ik+1}$$
which I don't know how to calculate. Wolfram Alpha tells me that the integral is in fact ${1\over2}$.



My first instinct was to multiply and divide the integrand by $e^{ik}$ and then solve the integral by closing the contour and using the residue theorem; but the residue is $-i$, so that would give $R(1) = 1$.



I know there are different definitions of the Heaviside function, and in some $\theta(0) =1$, but we used $\theta(0) ={1 \over 2}$ the whole course so I find it improbable my professor would use it differently here. Also, Wolfram seems to agree that it should be ${1 \over 2}$.



First time posting, so I hope I'm following all the rules.


Answer




By completing the contour, you have
$$ \int_{-\infty}^\infty \frac{dk}{ik+1} = \lim_{R\to \infty} \left(\oint_{[-R,R] \;\cup\;\Gamma_R} \frac{dz}{iz+1}- \int_{\Gamma_R} \frac{dz}{iz+1} \right), $$
where $\Gamma_R = \{Re^{it}, 0 \leq t \leq \pi\}.$



The first integral converges to $2\pi i\cdot (-i) = 2\pi$, as you have found yourself by the Residue theorem.



However, the second integral does not converge to $0$ as you assumed for some reason. We have
\begin{align}
\int_{\Gamma_R} \frac{dz}{iz+1} &= \int_0^\pi \frac{iRe^{it}}{iRe^{it}+1} \, dt
= \int_{0}^\pi \left(1-\frac{1}{iRe^{it}+1}\right) dt \to \pi-0 = \pi, \text{ as } R\to\infty.
\end{align}



Hence,
\begin{align}
\frac{1}{2\pi}\int_{-\infty}^\infty \frac{dk}{ik+1} &= \frac{1}{2\pi}\lim_{R\to \infty} \left(\oint_{[-R,R] \;\cup\;\Gamma_R} \frac{dz}{iz+1}- \int_{\Gamma_R} \frac{dz}{iz+1} \right)\\
&= \frac{1}{2\pi} (2\pi - \pi) = \frac{1}{2},
\end{align}
as wanted.
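As a quick numerical cross-check (my own addition, not part of the answer), note that $\frac{1}{ik+1}=\frac{1-ik}{1+k^2}$: the odd imaginary part cancels in the principal value, and the real part integrates to $\pi$. A midpoint rule confirms this:

```python
import math

def pv_integral(R: float, n: int = 200_000) -> complex:
    """Midpoint-rule principal value of (1/2pi) * integral of dk/(ik+1) over [-R, R]."""
    h = 2 * R / n
    total = 0j
    for j in range(n):
        k = -R + (j + 0.5) * h
        total += h / (1j * k + 1)
    return total / (2 * math.pi)

val = pv_integral(10_000.0)
# Real part: (1/2pi) * integral of dk/(1+k^2) -> 1/2; imaginary part is odd, PV -> 0.
assert abs(val.real - 0.5) < 1e-3
assert abs(val.imag) < 1e-6
```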


real analysis - Is there any geometric way to characterize $e$?



Let me explain it better: after this question, I've been looking for a way to place famous constants on the real line in a geometrical way -- just for fun. Placing $\sqrt2$ is really easy: constructing a $45^\circ$-$45^\circ$-$90^\circ$ triangle with unitary legs gives me an idea of what $\sqrt2$ is. Extending this to $\sqrt5$, $\sqrt{13}$, and other algebraic numbers is easy using trigonometry; however, it turned out to be difficult to work with some transcendental constants. Constructing $\pi$ is easy using circumferences; but I couldn't figure out how I should work with $e$. Looking at the area under the hyperbola $y=1/x$



made me realize that $e$ is the point $\omega$ such that $\displaystyle\int_1^{\omega}\frac{1}{x}dx = 1$. However, I don't have any other ideas. And I keep asking myself:



Is there any way to "see" $e$ geometrically? And more: is it true that one can build any real number geometrically? Any help will be appreciated. Thanks.


Answer




For a certain definition of "geometrically," the answer is that this is an open problem. You can construct $\pi$ geometrically in terms of the circumference of the unit circle. This is a certain integral of a "nice" function over a "nice" domain; formalizing this idea leads to the notion of a period in algebraic geometry. $\pi$, as well as any algebraic number, is a period.



It is an open problem whether $e$ is a period. According to Wikipedia, the answer is expected to be no.



In general, for a reasonable definition of "geometrically" you should only be able to construct computable numbers, of which there are countably many. Since the reals are uncountable, most real numbers cannot be constructed "geometrically."


analysis - Let $f$ be continuous real-valued function on $[0,1]$. Then, $F(x)=max{f(t):0leq tleq x}$ is continuous



Let $f$ be continuous real-valued function on $[0,1]$ and



\begin{align} F(x)=\max\{f(t):0\leq t\leq x\}. \end{align}
I want to show that $F(x)$ is also continuous on $[0,1]$.




MY WORK



Let $\epsilon> 0$ be given and $x_0\in [0,1].$ Since $f$ is continuous at $x_0\in [0,1],$ there exists $\delta>0$ such that for all $x\in [0,1]$ with $|x-x_0|<\delta$ we have $|f(x)-f(x_0)|<\epsilon.$



Also, \begin{align} |f(t)-f(x_0)|<\epsilon, \text{whenever}\; |t-x_0|<\delta,\;\forall\; t\in[0,x]\end{align}



Taking max over $t\in[0,x]$, we have
\begin{align} \max|f(t)-f(x_0)|<\epsilon, \text{whenever}\; |t-x_0|<\delta,\;\forall\; t\in[0,x]\end{align}
\begin{align} |\max f(t)-\max f(x_0)|<\epsilon, \text{whenever}\; \max|t-x_0|<\delta,\;\forall\; t\in[0,x]\end{align}

\begin{align} |F(x)-f(x_0)|<\epsilon, \text{whenever}\; |x-x_0|<\delta\end{align}
which implies that $F(x)$ is continuous on $[0,1].$



I am very skeptical about this proof of mine. Please, is this proof correct? If no, a better proof is desired. Thanks!


Answer



Your proof is wrong because you cannot take the maximum over $t \in [0,x]$ in an inequality which is valid only for $|t-x_0| <\delta$. Here are some hints for a correct proof. Verify that $|F(x)-F(y)| \leq \max \{|f(t)-f(s)|:x\leq t \leq y,\ x\leq s \leq y\}$ for $x \leq y$, and then use the uniform continuity of $f$ on $[0,1]$ to conclude.
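The key inequality can be illustrated numerically: on a grid, the running maximum $F$ (computed with `itertools.accumulate`) never jumps more between consecutive points than $f$ does (a sketch with a sample $f$ of my own choosing):

```python
import math
from itertools import accumulate

# Sample f on a grid of [0,1]; F(x) = max of f on [0,x] is a running maximum.
xs = [i / 1000 for i in range(1001)]
f = [math.sin(7 * x) * math.exp(-x) for x in xs]
F = list(accumulate(f, max))

# |F(x)-F(y)| <= max{|f(t)-f(s)| : t,s between x and y} implies F's jumps
# between consecutive grid points are bounded by f's jumps.
df = max(abs(a - b) for a, b in zip(f, f[1:]))
dF = max(abs(a - b) for a, b in zip(F, F[1:]))
assert dF <= df
```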

Monday, 24 November 2014

real analysis - Limit of $b_n$ when $b_n=frac{a_n}{n}$ and $limlimits_{ntoinfty}(a_{n+1}-a_n)=l$




I have the following problem.



I have to find the limit of $b_n = \frac{a_n}{n}$, where $\lim\limits_{n\to\infty}(a_{n+1}-a_n)=l$



My approach:



I express $a_n$ in terms of $b_n$, i.e.
$a_n=nb_n$ and $a_{n+1}=(n+1)b_{n+1}$




We look at the difference: $a_{n+1}-a_n=(n+1)b_{n+1}-nb_n$



Assuming that $b_n$ converges to a real number m, we see that:



$l=(n+1)m-nm$, from where I conclude that $m=l$.



What I'm left with is proving that $b_n$ is convergent which I'm not sure how to do.



Thanks in advance!



Answer



As @ParamanandSingh suggested, from Stolz–Cesàro theorem
$$a_{n+1}-a_n=\frac{a_{n+1}-a_n}{(n+1)-n} \rightarrow l, n \rightarrow \infty$$
where $\{n\}_{n \in \mathbb{N}}$ is monotone and divergent, then
$$\frac{a_n}{n} \rightarrow l, n \rightarrow \infty$$
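A quick numerical illustration of the theorem, with a sample sequence of my own choosing:

```python
import math

# If a_{n+1} - a_n -> l, then a_n / n -> l (Stolz–Cesàro).
# Example: a_n = 3n + sqrt(n), so a_{n+1} - a_n -> 3.
def a(n: int) -> float:
    return 3 * n + math.sqrt(n)

n = 10**8
assert abs((a(n + 1) - a(n)) - 3) < 1e-3   # successive differences near l = 3
assert abs(a(n) / n - 3) < 1e-3            # and so is a_n / n
```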


Sunday, 23 November 2014

calculus - How to solve non-fractional limits like $lim_{x to infty} x^{(ln5) div (1+ln x)}$?



I have the limit $\lim_{x \to \infty} x^{(\ln5) \div (1+\ln x)}$. I am trying to figure out how to solve this, but I only know how to handle limits when they can be made into fractions. Is there some way I can do that here, or should I try something else?



This is a homework problem, and I don't need a complete answer, but I'd appreciate some advice.


Answer



We are asked to evaluate $\lim \limits_{x \rightarrow \infty} x^{\frac{\ln(5)}{1+\ln(x)}}$, which can also be written as $\lim \limits_{x \rightarrow \infty} e^{\ln(x){\frac{\ln(5)}{1+\ln(x)}}}$. Using a few other rules, you can get to the following expression

$$ e^{\ln(5) \cdot \lim \limits_{x \rightarrow \infty} \frac{\ln(x)}{1+\ln(x)} } $$



Now I will leave the details to you but as a hint, remind yourself of L'hospital's rule.



And finally, you can show:



$$ \lim \limits_{x \rightarrow \infty} x^{\frac{\ln(5)}{1+\ln(x)}} = 5 $$
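A numerical check of the limit (computing through logarithms to avoid overflow for huge $x$; my own addition):

```python
import math

def f(log_x: float) -> float:
    """x^(ln 5 / (1 + ln x)) evaluated via exp/log to avoid overflow."""
    return math.exp(math.log(5) * log_x / (1 + log_x))

# As ln x grows, ln(x)/(1 + ln x) -> 1, so the expression tends to 5^1 = 5.
assert abs(f(1e6) - 5) < 1e-4
assert f(1e2) < f(1e4) < 5   # monotone approach from below
```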


calculus - How to prove that the series $sum_{n=1}^{infty} (1+n!)/(1+n)!$ diverges?



Wolfram Alpha told me to use the comparison test, so I am trying to compare it with the series $\sum_{n=1}^{\infty} n!/(1+n)!$. Am I on the right track? And if it is the right way, how can I show that $\sum_{n=1}^{\infty} n!/(1+n)!$ diverges?


Answer



Yes, you are doing great! Notice that each term of your series is greater than $\dfrac {n!}{(n+1)!}= \dfrac{n!}{n!(n+1)}=\dfrac {1}{n+1}$, which gives essentially the harmonic series, which famously diverges.


calculus - Antiderivative of sec(x)


Possible Duplicates:
Evaluating $\int P(\sin x, \cos x) \text{d}x$
Ways to evaluate $\int \sec \theta \, \mathrm d \theta$







Using Mathematica to get the antiderivative for sec(x), I get $$-\log(\cos\frac{x}{2}-\sin\frac{x}{2})+\log(\cos\frac{x}{2}+\sin\frac{x}{2}).$$



This doesn't look familiar, so, I'm thinking there's probably some identity or other way to transform this...



Any insight would be appreciated.

integration - Approximating a circle vs a diagonal.

Situation 1: A regular $n$-gon is inscribed in a circle. As $n$ increases without bound, the area of the $n$-gon approaches the area of the circle and the perimeter of the $n$-gon approaches the circumference of the circle.



Situation 2:
Consider a $1$ by $1$ square with its sides labeled South, North, East and West, as on a map.



A path is constructed from the Southwest corner to the Northeast corner.




The path runs east along the south side for a distance $\frac{1}{2^n}$, then goes north for the same distance, then east again for distance $\frac{1}{2^n}$, and so on. The total length of the path is $2$.



As $n$ increases without bound the area under the path and above the south side of the square approaches the area under the diagonal, but the length of the path remains $2$ and does not approach the length of the diagonal.



Why is there a difference?

integration - Closed form of :$inttan ( e^{-x²}) d x$ Over reals

I would be surprised if the integral $\int\tan ( e^{-x^2})\ d x$ had a closed form, since $\int_{-\infty}^{+\infty }\tan ( e^{-x^2})\ d x$ is treated as a constant by Wolfram Alpha, and the inverse symbolic calculator didn't give me anything; its value is $2.27591\ldots$. Is there any way to show whether it has a closed form, or is it just a constant?

real analysis - Prove that the reciprocal of a polynomial function $f(x)$ is uniformly continuous on $R$.

Prove that the reciprocal of a polynomial function $f(x)$ is uniformly continuous on $R$.



(It is provided that the reciprocal of the function exists. In other words, $f(x)$ is never zero for any value of $x$.)



I go by the way:



Let $g(x) = \frac{1}{f(x)}$, where $f(x) = a_0 + a_1x + a_2x^2 + . . . + a_nx^n.$




Now we have to show that $g(x)$ is uniformly contnuous on $R.$



Then



$|g(x)-g(x_0)| = \left|\frac{1}{f(x)} - \frac{1}{f(x_0)}\right| = \frac{|f(x_0)-f(x)|}{|f(x)f(x_0)|}$
$=|x - x_0|\left|\frac{a_1 + a_2(x + x_0) + \cdots + a_n(x^{n-1} + x^{n-2}x_0 + \cdots + x_0^{n-1})}{f(x)f(x_0)}\right|$



Then how to proceed??

elementary number theory - Proving property of congruence - help needed





Let $c,d,m,k ∈ \mathbb{Z}$ such that $m ≥ 2$ and $k$ is not zero. Let
$f = \gcd(k,m)$. If $c \equiv d \pmod m $ and $k$ divides
both $c$ and $d$, then $$ \frac{c}{k} \equiv \frac{d}{k}
\left({\bmod} \frac{m}{f}\right)$$




My lecturer asked me to prove this statement, as an exercise.



To prove this, I started by considering the two cases:





  1. Suppose $k$ and $m$ are relatively prime, so that $f=1$. Then by the Congruence and Division Cancellation Law, we know that $ \frac{c}{k} \equiv \frac{d}{k} \pmod m$. For this case, $ \frac{c}{k} \equiv \frac{d}{k} \big({\bmod} \frac{m}{f}\big)$ must be true since $\frac{m}{f} = \frac {m}{1} = m$

  2. Now, it remains to prove the other case, where $k$ and $m$ are not relatively prime. By the definition of divisibility, we know that $c \equiv d \pmod m $ is equivalent to saying $c = d+mj$ for some integer $j$. Dividing both sides by the common divisor $k$ gives us $\frac{c}{k} = \frac{d}{k} + \frac{mj}{k}$. Now, we consider $f = \gcd(k,m)$. This implies that there must exist an integer $l$ such that $k=lf$. Thus, $\frac{c}{k} = \frac{d}{k} + \frac{mj}{k} = \frac{d}{k} + \frac{mj}{lf}$. (Is this true? -- since $l$ does not divide $m$ and the fraction has to be an integer, $l$ must divide $j$)




I have no idea where to continue. Am I on the right track? Any hints to finish the proof?


Answer



Since $(k,m)=f$, we get $(k/f,m)=1$ so there is $x$ such that $(k/f)\cdot x\equiv 1\pmod m$ holds. Especially, we get $k\cdot(x/f)\equiv 1\pmod {m/f}$.



From $cx\equiv dx\pmod m$ we can derive $c\cdot (x/f)\equiv d\cdot (x/f)\pmod {m/f}$ and you can check that $c\cdot(x/f)\equiv c/k\pmod {m/f}$.
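A quick randomized check of the statement (my own sketch): sample pairs of multiples of $k$ that are congruent mod $m$, and verify the quotients are congruent mod $m/f$.

```python
import math
import random

random.seed(1)
checked = 0
for _ in range(20000):
    m = random.randint(2, 40)
    k = random.randint(1, 40)
    c = k * random.randint(-30, 30)
    d = k * random.randint(-30, 30)
    if (c - d) % m == 0:                      # c ≡ d (mod m), with k | c and k | d
        f = math.gcd(k, m)
        assert (c // k - d // k) % (m // f) == 0
        checked += 1
assert checked > 100                          # enough cases actually exercised
```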


probability - Let $X$ be a continuous random variable with density function $f_X$. What is $Y=aX+b$?



Let $X$ be a continuous random variable with density function $f_X$ and let $a,b>0$.



What is the density of $Y=aX+b$?



I need some help with this one. And I am quite sure it is not $af_X+b$.


Answer



We have that
$$
\Pr\{aX+b\le y\}=\Pr\{X\le(y-b)/a\}=\int_{-\infty}^{(y-b)/a}f_X(x)\mathrm dx.
$$
Using the substitution $x=(t-b)/a$, we obtain that
$$
\Pr\{Y\le y\}=\int_{-\infty}^{(y-b)/a}f_X(x)\mathrm dx=\int_{-\infty}^y\frac1af_X((t-b)/a)\mathrm dt
$$
and the density function $f_Y(y)=a^{-1}f_X((y-b)/a)$.



In general, if $Y=g(X)$ with a monotone function $g$, we have that
$$
f_Y(y) = \left| \frac{\mathrm d}{\mathrm dy} (g^{-1}(y)) \right| \cdot f_X(g^{-1}(y)),
$$
where $g^{-1}$ denotes the inverse function (see here for more details). In this particular case $g(x)=ax+b$ for $x\in\mathbb R$.


Saturday, 22 November 2014

algebra precalculus - Prove $sumlimits_{i=1}^{n}frac{a_{i}^{2}}{b_{i}} geq frac{(sumlimits_{i=1}^{n}a_i)^2}{sumlimits_{i=1}^{n}b_i}$




So I have the following problem, which I'm having trouble solving:



Let $a_1$ , $a_2$ , ... , $a_n$ be real numbers. Let $b_1$ , $b_2$ , ... , $b_n$ be positive real numbers. Prove



$$ \frac{a_{1}^{2}}{b_{1}} + \frac{a_{2}^{2}}{b_{2}} + \cdot \cdot \cdot +\frac{a_{n}^{2}}{b_{n}} \geq \frac{(a_{1}+a_{2}+\cdot \cdot \cdot+a_{n})^2}{b_{1}+b_{2}+\cdot \cdot \cdot+b_{n}} $$



I was thinking that I somehow could use the Cauchy–Schwarz inequality, but with no success.



Any help would be very appreciated


Answer




You can simply use the Cauchy–Schwarz inequality on the sets $$\left\{\frac{a_1}{\sqrt {b_1}},\frac{a_2}{\sqrt {b_2}},\dots,\frac{a_n}{\sqrt {b_n}}\right\}\text{ and }\left\{\sqrt {b_1},\sqrt {b_2},\dots,\sqrt {b_n}\right\}.$$



What you will get, is $$\left(\frac{a_{1}^{2}}{b_{1}} + \frac{a_{2}^{2}}{b_{2}} + \cdot \cdot \cdot +\frac{a_{n}^{2}}{b_{n}}\right)(b_{1}+b_{2}+\dots+b_{n}) \geq (a_{1}+a_{2}+\dots+a_{n})^2.$$
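A quick randomized check of the resulting inequality (often called the Engel form, or Titu's lemma); the sampling ranges are my own:

```python
import random

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    a = [random.uniform(-10, 10) for _ in range(n)]   # real a_i
    b = [random.uniform(0.1, 10) for _ in range(n)]   # positive b_i
    lhs = sum(ai * ai / bi for ai, bi in zip(a, b))
    rhs = sum(a) ** 2 / sum(b)
    assert lhs >= rhs - 1e-9                          # tolerance for float error
```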


Friday, 21 November 2014

paradoxes - How do you explain this probability paradox?

Imagine there are two bags of money, and you are allowed to choose one. The probability that one of them contains $10^{n-1}$ dollars and the other contains $10^{n}$ dollars is $1/2^n$, $n\in\{1,2,3...\}$.



That is to say, there is $1/2$ probability that one of the two bags contains $\$1$ and the other contains $\$10$; $1/4$ probability that one of the two bags contains $\$10$ and the other contains $\$100$ , etc.



What's interesting is that, no matter which one you choose, you'll find that the other one is better. For example, if you open one bag, and find there are $\$10$ in there, then the probability of the other bag contains $\$1$ is $2/3$ and the probability of the other bag contains $\$100$ is $1/3$, and the expectation of that is $\$34$, which is better than $\$10$.



If the other one is definitely better regardless of how much you'll find in whichever one you choose, why isn't choosing the other one in the first place a better choice?

elementary number theory - Understanding how to compute $5^{15}pmod 7$




Compute $5^{15} \pmod 7$.





Can someone help me understand how to solve this? I know there is a trick, but my professor did not completely explain it in class and I'm stuck.


Answer



You know $7$ is prime and $7$ does not divide $5$, so you can use Fermat's Little Theorem to get $5^6\equiv1 \pmod 7$ $\Rightarrow$ $5^{15} \equiv 5^3 \pmod 7$.
Then $5^3 = (25)(5)\equiv (4)(-2) \equiv -8 \equiv 6 \pmod 7$, hence $5^{15}$ modulo $7$ is $6$.
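In Python, three-argument `pow` does fast modular exponentiation and confirms the computation directly:

```python
# pow(base, exp, mod) computes base**exp % mod efficiently.
assert pow(5, 15, 7) == 6
assert pow(5, 6, 7) == 1      # Fermat: 5^6 ≡ 1 (mod 7)
assert pow(5, 3, 7) == 6      # so 5^15 ≡ 5^3 ≡ 6 (mod 7)
```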


elementary number theory - Showing that these two definitions of $gcd(a,b)$ are equivalent



So far I have encountered with two definitions of the GCD of $a$ and $b$.



The first definition is:





$\gcd(a,b)$ is the integer $d$ with the following properties:




  1. $d>0$


  2. $d\mid a$ and $d\mid b$


  3. any common divisor $u$ of $a$ and $b$ also divides $d$





The second definition I saw is:





The greatest common divisor of two integers $a$ and $b$ (not both zero) is the largest integer which
divides both of them.




Can someone please show me the equivalence of these two definitions without using any theorems? Thanks!


Answer



For clarity, let's record these lemmas:





By definition, $x\mid y\iff y=kx$ for some integer $k$.




  • If $y>0$, then it is impossible to have $0\mid y$.

  • If $y>0$ and $x<0$ then certainly $y\geq x$.

  • If $y>0$ and $x>0$, then we must have $k>0$, hence $k\geq 1$ (because there are no integers between $0$ and $1$), so that $y=kx \geq x$.



Thus, if $x$ and $y$ are integers such that $x\mid y$ and $y>0$, then $y\geq x$.









Suppose that $x\mid z$ and $y\mid z$. Then by definition $z$ is a common multiple of $x$ and $y$, hence $|z|$ is a common multiple of $x$ and $y$, so that in fact the integers
$$|z|,\qquad |z|-\mathrm{lcm}(x,y),\qquad |z|-2\mathrm{lcm}(x,y),\qquad \ldots$$
are all common multiples of $x$ and $y$. On this strictly decreasing list of integers, starting with the positive integer $|z|$, there must be a smallest positive entry; that entry can't be smaller than $\mathrm{lcm}(x,y)$ because that would contradict the definition of $\mathrm{lcm}$, and it can't be larger than $\mathrm{lcm}(x,y)$ because then $\mathrm{lcm}(x,y)$ would be a positive entry on the list smaller than it. Therefore $|z|-k\mathrm{lcm}(x,y)=\mathrm{lcm}(x,y)$ for some integer $k$, i.e. $z=\pm(k+1)\mathrm{lcm}(x,y)$ for some integer $k$.



Thus, we have shown that if $x\mid z$ and $y\mid z$, then $\mathrm{lcm}(x,y)\mid z$.





Now, let $a$ and $b$ be integers, and let $d$ be an integer such that $d\mid a$ and $d\mid b$.



Suppose that $d>0$ and that $d$ satisfies $u\mid d$ for any other integer $u$ with $u\mid a$ and $u\mid b$. Then by our first lemma, $d$ satisfies $d\geq u$ for any integer $u$ with $u\mid a$ and $u\mid b$.



Conversely, suppose that $d$ satisfies $d\geq u$ for any integer $u$ with $u\mid a$ and $u\mid b$. Then in particular $d\geq -d$ which means that $d>0$. If $u$ is any integer with $u\mid a$ and $u\mid b$, then each of $a$ and $b$ are a common multiple of $u$ and $d$, so that $\mathrm{lcm}(u,d)\mid a$ and $\mathrm{lcm}(u,d)\mid b$ by our second lemma. Therefore, by our assumption about $d$, we have that $d\geq \mathrm{lcm}(u,d)$ for any $u$ with $u\mid a$ and $u\mid b$, which implies that $u\mid d$ for any $u$ with $u\mid a$ and $u\mid b$.


complex numbers - Can square roots be negative?

1955 AHSME Problem 20 asks when $\sqrt{25 - t^2} + 5 =0.$



I know square root of real numbers cannot be negative. So t cannot be real.




But I don't know whether imaginary numbers' square root can be negative or not. I think square roots can never be negative. Also, I don't think we can classify imaginary numbers as positive or negative.



Can square roots of imaginary numbers be negative?

Thursday, 20 November 2014

abstract algebra - Integral domain with two elements that do not have a gcd



I have the following example of an integral domain with two elements that do not have a gcd from wikipedia:




$R = \mathbb{Z}\left[\sqrt{-3}\,\,\right],\quad a = 4 = 2\cdot 2 = \left(1+\sqrt{-3}\,\,\right)\left(1-\sqrt{-3}\,\,\right),\quad b = \left(1+\sqrt{-3}\,\,\right)\cdot 2.$



The elements $2$ and $1 + \sqrt{−3}$ are two "maximal common divisors" (i.e. any common divisor which is a multiple of $2$ is associated to $2$, the same holds for $1 + \sqrt{−3}$), but they are not associated, so there is no greatest common divisor of a and b.



I can understand it, but how can I prove or strictly explain that $2$ and $1 + \sqrt{−3}$ are two "maximal common divisors" and they are not associated?


Answer



A simple but general way to deduce that this gcd fails to exist is by failure of Euclid's Lemma.



Lemma $\rm\ \ (a,b) = (ac,bc)/c\ \ $ if $\rm\ (ac,bc)\ $ exists $\rm\quad$ [GCD distributive law]




Proof $\rm\quad d\ |\ a,b\iff dc\ |\ ac,bc\iff dc\ |\ (ac,bc)\iff d\mid (ac,bc)/c$



But generally $\rm\ (ac,bc)\ $ need not exist, which is most insightfully viewed as the failure of



Euclid's Lemma $\rm\quad a\ |\ bc\ $ and $\rm\ (a,b)=1\ \Rightarrow\ a\ |\ c\ \ $ if $\rm\ (ac,bc)\ $ exists.



Proof $\ \ $ If $\rm\ (ac,bc)\ $ exists then $\rm\ a\ |\ ac,bc\ \Rightarrow\ a\ |\ (ac,bc) = (a,b)\,c = c\ $ by the Lemma.



Hence if $\rm\, a,b,c\, $ fail to satisfy the implication in Euclid's Lemma,
namely if $\rm\ a\ |\ bc\ $ and $\rm\ (a,b) = 1\ $ but $\rm\ a\nmid c\,$, then one immediately deduces that the gcd $\rm\ (ac,bc)\ $ fails to exist.$\,$ For the special case that $\rm\,a\,$ is an atom (i.e. irreducible), the implication reduces to: atom $\Rightarrow$ prime. So it suffices to find a nonprime atom
in order to exhibit a pair of elements whose gcd fails to exist. This task is a bit simpler, e.g. for $\rm\ \omega = 1 + \sqrt{-3}\ \in\ \mathbb Z[\sqrt{-3}]\ $ we have that the atom $\rm\, 2\ |\ \omega' \omega = 4,\,$ but $\rm\ 2\nmid \omega',\omega,\,$ so $\rm\,2\,$ is not prime. Therefore your gcd $\rm\, (2\omega,\, \omega'\omega) = (2+2\sqrt{-3},\,4)\ $ fails to exist in $\rm\ \mathbb Z[\sqrt{-3}]\,$.



Note that if the gcd $\rm\, (ac,bc)\ $ fails to exist then this implies that the ideal $\rm\ (ac,bc)\ $ is not principal. Therefore we've constructively deduced that the failure of Euclid's lemma immediately yields both a nonexistent gcd and a nonprincipal ideal.



That the $\Rightarrow$ in Euclid's lemma implies that Atoms are Prime $\rm(:= AP)$ is denoted $\rm\ D\ \Rightarrow AP\ $ in the list of domains closely related to GCD domains in my post here. There you will find links to further literature on domains closely related to GCD domains. See especially the referenced comprehensive survey by D.D. Anderson: GCD domains, Gauss' lemma, and contents of polynomials, 2000.



See also my post here for the general universal definitions of $\rm GCD,\, LCM$ and for further remarks on how such $\iff$ definitions enable slick proofs, and see here for another simple example of such.
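To make the nonprime atom concrete, here is a small computational sketch (my own illustration, not part of the answer above) of divisibility in $\mathbb Z[\sqrt{-3}]$, representing $a+b\sqrt{-3}$ as a pair $(a,b)$:

```python
# (a + b√-3)(x + y√-3) = (ax - 3by) + (ay + bx)√-3
def divides(d, z):
    """Does d divide z in Z[√-3]?  Solve d * w = z for an integer w."""
    a, b = d
    c, e = z
    n = a * a + 3 * b * b                  # norm of d; zero only for d = 0
    # Cramer's rule for x, y in: a x - 3 b y = c,  b x + a y = e
    x_num, y_num = a * c + 3 * b * e, a * e - b * c
    return n != 0 and x_num % n == 0 and y_num % n == 0

w, wbar = (1, 1), (1, -1)                  # ω = 1 + √-3 and its conjugate ω'
assert divides((2, 0), (4, 0))             # 2 | ω'ω = 4
assert not divides((2, 0), w)              # but 2 ∤ ω
assert not divides((2, 0), wbar)           # and 2 ∤ ω', so 2 is not prime
```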


functional analysis - Counterexample around Dini's Theorem




"Give an example of an increasing sequence $(f_n)$ of bounded continuous functions from $(0,1]$ to $\mathbb{R}$ which converge pointwise but not uniformly to a bounded continuous function $f$ and explain why Dini's Theorem does not apply in this case"





So clearly Dini's Theorem does not apply, as $(0,1]$ is not a closed interval (or compact metric space), but I can't figure out an example.



My first thought is $f_n(x)=\frac{1}{x^n}$, but this does not converge pointwise to a bounded continuous function, as $x=1$ is in the interval



My second thought is $f_n(x)=x^\frac{1}{n}$. This is clearly an increasing sequence of bounded continuous functions (I think?). I believe this converges pointwise to $f(x)=1$ for all $x\in (0,1]$, but I'm struggling to then show why this doesn't converge uniformly to $f(x)=1$



How would I do this? Or is then an easier/better example I could use?


Answer




Take $f_n(x)=e^{-\frac 1 {nx}}$ and $f=1$.



Note that $\sup_x |f_n(x)-f(x)| \geq |f_n(\frac 1 n)-1|=1-\frac 1 e $.
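A small numerical illustration (my own addition) of why the convergence is pointwise but not uniform:

```python
import math

def f_n(n: int, x: float) -> float:
    return math.exp(-1 / (n * x))

# Pointwise f_n -> 1 on (0,1], but sup|f_n - 1| >= 1 - 1/e for every n,
# witnessed at x = 1/n, so the convergence is not uniform.
for n in [1, 10, 100, 1000]:
    assert abs(f_n(n, 1.0) - 1) < 3 / n                     # pointwise at x = 1
    assert abs(f_n(n, 1 / n) - 1) >= 1 - 1 / math.e - 1e-12  # stuck sup
```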


Wednesday, 19 November 2014

trigonometry - Maxima and minima of $operatorname{sinc}$ function



The function $\operatorname{sinc}{\pi x}$ has maxima and minima given by the function's intersections with $\cos \pi x$, or alternatively by $\frac {d}{dx}\operatorname{sinc}{\pi x}=0$.



Mathematica tells me that




$$\frac {d}{dx}\operatorname{sinc}{\pi x}=\pi \Bigl(\frac {\cos \pi x}{\pi x}-\frac {\sin \pi x}{\pi^2 x^2}\Bigr)$$



So question 1, how do I prove this?



And question 2, how do I derive an equation for all maxima and minima?


Answer



Set derivative equal to 0.



You will, after some manipulation (multiplying by $(\pi x)^2$ and dividing by $\pi$), get $$\pi x \cos(\pi x) = \sin(\pi x)$$ and equivalently, by dividing both sides by $\pi x$,

$$\cos(\pi x) = \frac{\sin(\pi x)}{\pi x}$$
Now the right-hand side is $\operatorname{sinc}(\pi x)$ and the left-hand side is the function it should intersect.



So we are done showing where the extrema are.






Now to show which are max and which are min.



Sinc as a function is a multiplication between $\frac{1}{\pi x}$ and $\sin(\pi x)$




On $\mathbb R^+$ the first of these is monotonically decreasing and positive, while $\sin$ is periodic, alternating between $+1$ and $-1$. Both functions are continuous. We can now use the intermediate value theorem to argue that the extrema alternate between maxima and minima, with as many maxima as minima.
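Numerically, the critical points can be located by bisection on $g(x)=\pi x\cos(\pi x)-\sin(\pi x)$ (a sketch, my own addition; the bracketing interval is chosen by inspection):

```python
import math

def g(x: float) -> float:
    """Zero exactly at the critical points of sinc(pi x)."""
    return math.pi * x * math.cos(math.pi * x) - math.sin(math.pi * x)

def bisect(lo: float, hi: float, tol: float = 1e-12) -> float:
    """Simple bisection; assumes g changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x0 = bisect(1.3, 1.5)                       # first extremum right of the origin
sinc = lambda x: math.sin(math.pi * x) / (math.pi * x)
h = 1e-6
deriv = (sinc(x0 + h) - sinc(x0 - h)) / (2 * h)
assert abs(deriv) < 1e-6                    # the derivative vanishes there
assert abs(sinc(x0) - math.cos(math.pi * x0)) < 1e-9   # and sinc meets cos
```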


Computing $gcd$ of very large numbers with powers

How to calculate $\gcd$ of $5^{2^{303} - 1} - 1$ and $5^{2^{309} - 1} - 1$?



I stumbled upon this interesting problem and tried elementary algebraic simplification and manipulation. But found no success.
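For what it's worth, a standard route (not stated in the question, so treat this as a suggestion) goes through the identity $\gcd(a^m-1,\,a^n-1)=a^{\gcd(m,n)}-1$. The sketch below spot-checks the identity on small cases and applies it to the inner exponents:

```python
import math

# Spot-check gcd(a^m - 1, a^n - 1) = a^gcd(m, n) - 1 on small cases.
for a in [2, 3, 5, 10]:
    for m in range(1, 12):
        for n in range(1, 12):
            assert math.gcd(a**m - 1, a**n - 1) == a**math.gcd(m, n) - 1

# Applied to the inner exponents: gcd(2^303 - 1, 2^309 - 1) = 2^gcd(303,309) - 1
# = 2^3 - 1 = 7, which would reduce the original gcd to 5^7 - 1.
assert math.gcd(2**303 - 1, 2**309 - 1) == 2**math.gcd(303, 309) - 1 == 7
```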

calculus - Elegant way to make a bijection from the set of the complex numbers to the set of the real numbers




Make a bijection that shows $|\mathbb C| = |\mathbb R| $



First I thought of splitting each complex number into its real and imaginary parts and then defining a formula that maps those parts to the real numbers. But I didn't get anywhere with that; by this I mean the resulting map was not well-defined enough to be a bijection.




Can someone give me a hint on how to do this?





Maybe I need to read more about complex numbers.


Answer



You can represent every complex number as $z=a+ib$, so let us denote this complex number as $(a,b) , ~ a,b \in \mathbb R$. Hence we have cardinality of complex numbers equal to $\mathbb R^2$.



So finally, we need a bijection in $\mathbb R$ and $\mathbb R^2$.



This can be shown using the argument used here.





Note that since there is a bijection from $[0,1]\to\Bbb R$ (see appendix), it is enough to find a bijection from the unit square $[0,1]^2$ to the unit interval $[0,1]$. By constructions in the appendix, it does not really matter whether we consider $[0,1]$, $(0,1]$, or $(0,1)$, since there are easy bijections between all of these.




Mapping the unit square to the unit interval




There are a number of ways to proceed in finding a bijection from the unit square to the unit interval. One approach is to fix up the "interleaving" technique I mentioned in the comments, writing $\langle 0.a_1a_2a_3\ldots, 0.b_1b_2b_3\ldots\rangle$ to $0.a_1b_1a_2b_2a_3b_3\ldots$. This doesn't quite work, as I noted in the comments, because there is a question of whether to represent $\frac12$ as $0.5000\ldots$ or as $0.4999\ldots$. We can't use both, since then $\left\langle\frac12,0\right\rangle$ goes to both $\frac12 = 0.5000\ldots$ and to $\frac9{22} = 0.40909\ldots$ and we don't even have a function, much less a bijection. But if we arbitrarily choose the second representation, then there is no element of $[0,1]^2$ that is mapped to $\frac12$, and if we choose the first there is no element that is mapped to $\frac9{22}$, so either way we fail to have a bijection.



This problem can be fixed.




First, we will deal with $(0,1]$ rather than with $[0,1]$; bijections between these two sets are well-known, or see the appendix. For real numbers with two decimal expansions, such as $\frac12$, we will agree to choose the one that ends with nines rather than with zeroes. So for example we represent $\frac12$ as $0.4999\ldots$.



Now instead of interleaving single digits, we will break each input number into chunks, where each chunk consists of some number of zeroes (possibly none) followed by a single non-zero digit. For example, $\frac1{200} = 0.00499\ldots$ is broken up as $004\ 9\ 9\ 9\ldots$, and $0.01003430901111\ldots$ is broken up as $01\ 003\ 4\ 3\ 09\ 01\ 1\ 1\ldots$.



This is well-defined since we are ignoring representations that contain infinite sequences of zeroes.



Now instead of interleaving digits, we interleave chunks. To interleave $0.004999\ldots$ and $0.01003430901111\ldots$, we get $0.004\ 01\ 9\ 003\ 9\ 4\ 9\ldots$. This is obviously reversible. It can never produce a result that ends with an infinite sequence of zeroes, and similarly the reverse mapping can never produce a number with an infinite sequence of trailing zeroes, so we win. A problem example similar to the one from a few paragraphs ago is resolved as follows: $\frac12 = 0.4999\ldots$ is the unique image of $\langle 0.4999\ldots, 0.999\ldots\rangle$ and $\frac9{22} = 0.40909\ldots$ is the unique image of $\langle 0.40909\ldots, 0.0909\ldots\rangle$.



modular arithmetic - Solution to rational Diophantine equations in fixed point

I'm trying to solve the following system of equations for $p$ and $q$, given fixed integers $x$, $y$ and $c$:




$$r = {{c x + p} \over {c y + q}} \ , \, \, \, r \in \mathbb{Z}$$



where



$$\{x, y, p, q, c\} \in \mathbb{Z}$$



$$0 \leq y \lt x$$



$$0 \leq \{p, q\} \lt c$$




$$c = 2^b$$



i.e. this is a fixed point fraction $x / y$, shifted by $b$ bits, with fractional adjustment terms $p$ and $q$ on the numerator and denominator. I'm trying to solve for the values of $p$ and $q$ that make the denominator evenly divide the numerator, or I need to know when there is no solution.



There will always be a solution for some integer values $p$ and $q$, just not necessarily within the range $\left[0, c\right)$.



Often there is more than one solution. I am much more interested in knowing when there are no solutions than when there are one or more solutions, so if there is no easy way to find or enumerate solutions, but there's a way to quickly (ideally in $\mathrm O(1)$) determine when there are no solutions, it would solve my problem.



I have a feeling this is not a "fullblown" Diophantine equation problem, and that there's some nice simple trick involving modulo arithmetic or remainders, but I haven't been able to find it.
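Absent a closed-form criterion, the search can at least be brute-forced for small bit widths (a sketch with function names of my own; the enumeration is $O(c^2)$, so only practical for small $b$):

```python
def solutions(x: int, y: int, b: int):
    """All (p, q) in [0, c) with (c*y + q) | (c*x + p), where c = 2**b."""
    c = 1 << b
    return [(p, q)
            for q in range(c)
            for p in range(c)
            if (c * y + q) != 0 and (c * x + p) % (c * y + q) == 0]

# Example with b = 4 (c = 16), x = 7, y = 2: enumerate the fractional
# adjustments that make the denominator divide the numerator exactly.
sols = solutions(7, 2, 4)
assert len(sols) > 0
assert all((16 * 7 + p) % (16 * 2 + q) == 0 for p, q in sols)
```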

Tuesday, 18 November 2014

real analysis - Confusion in definition of limit and continuity




I was reading from this website,



enter image description here



But as far as I know is not it the same as definition of continuity. For example Kenneth Ross's book has the same definition for continuity



Let $f$ be a real valued function whose domain is a subset of $\mathbb{R}$. Then $f$ is continuous at $x_0\in dom(f)$ iff for each $\epsilon>0$ there exist $\delta>0$ such that $x\in dom(f)$ and $|x-x_0|<\delta$ imply $|f(x)-f(x_0)|<\epsilon$



I am confused.




https://www.math24.net/definition-limit-function/


Answer



The only difference there is that for the existence of the limit $f(a)$ does not need to be defined.



For continuity $f(a)$ must be defined (and must be equal to the limit).


integration - Showing that a line integral along a given curve is in independent of one the curve components





Let $\gamma(t)=(\gamma_1(t),\gamma_2(t),\gamma_3(t))$, $t \in [0, 2 \pi]$, be a smooth curve in $\Bbb R^3$ where
$$\gamma_1(t)=\cos(t),\,\gamma_2(t)=\sin(t),\,\gamma_3(t)>0$$
Let $F$ be the vector field $F(x,y,z)=(2y^2,\,x^2,\,3z^2)$.
Prove that the line integral $\int_{\gamma} F \cdot dl$ is independent of $\gamma_3(t)$.




Let me emphasize that according to this problem, the curve is not necessarily closed; in particular $\gamma_3(0)$ need not equal $\gamma_3(2\pi)$.



I've tried to go be the definition of $\int_{\gamma} F \cdot dl$ and got:




\begin{align}
\int_{\gamma} F \cdot dl & = \int_0^{2 \pi} (\,2\sin^2(t), \, \cos^2(t),\, 3\gamma_3^2(t) \,) \cdot (\,-\sin(t),\,\cos(t),\,\gamma_3^{'}(t)\,) \, dt \\
& =\int_0^{2 \pi} (-2\sin^3(t)+\cos^3(t)+3\gamma_3^2(t) \gamma_3^{'}(t)) \,dt \\
& =\int_0^{2 \pi}3\gamma_3^2(t) \gamma_3^{'}(t) \,dt \\ & =\gamma_3^3(t)\Big|_0^{2 \pi}
\end{align}
(the $\sin^3$ and $\cos^3$ terms integrate to zero over a full period)



I don't know if the author of this problem forgot to mention that $\gamma_3(t)$ is a closed curve or not, because if $\gamma_3(t)$ is indeed closed the argument above solves the problem.



Do you have an idea of how to solve this problem without this information?



Answer



I think that the fact that the integral is independent of $\gamma_3$ means, that if we have another curve through the same end points, i.e. if we have $\overline \gamma_3$ such that $\gamma_3(0)=\overline\gamma_3(0)$ and $\gamma_3(2\pi)=\overline\gamma_3(2\pi)$, then the integrals are equal if $\overline\gamma_3$ replaces $\gamma_3$. And your computation reveals that this is the case for your integral, since $$\int _\gamma F\cdot dl=\gamma_3^3(2\pi)-\gamma_3^3(0)
=\overline\gamma_3^3(2\pi)-\overline\gamma_3^3(0) =\int _{\overline \gamma} F\cdot dl$$
where $\overline \gamma=(\gamma_1,\gamma_2,\overline\gamma_3)$.
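A numerical sanity check of this conclusion (the curve and the field are from the problem; the particular non-closed $\gamma_3(t)=1+t/(2\pi)$ is an assumption chosen just for illustration): the line integral computed by quadrature should equal $\gamma_3^3(2\pi)-\gamma_3^3(0)=2^3-1^3=7$.

```python
import math

# Hypothetical, non-closed third component chosen for illustration only
def g3(t):  return 1.0 + t / (2 * math.pi)      # g3(0) = 1, g3(2*pi) = 2
def dg3(t): return 1.0 / (2 * math.pi)

def integrand(t):
    # F(x, y, z) = (2y^2, x^2, 3z^2) dotted with gamma'(t)
    x, y, z = math.cos(t), math.sin(t), g3(t)
    return 2 * y * y * (-math.sin(t)) + x * x * math.cos(t) + 3 * z * z * dg3(t)

# composite trapezoidal rule on [0, 2*pi]
N = 20000
h = 2 * math.pi / N
I = h * (0.5 * (integrand(0.0) + integrand(2 * math.pi))
         + sum(integrand(i * h) for i in range(1, N)))

expected = g3(2 * math.pi) ** 3 - g3(0.0) ** 3   # = 7
```

The trigonometric part of the integrand ($-2\cos^2 t\sin t+\cos t\sin^2 t$) integrates to zero over a full period, so only the $\gamma_3$ endpoints matter, as the computation above shows.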


Monday, 17 November 2014

complex analysis - Sum over all inverse zeta nontrivial zeros



Starting from the Hadamard product for the Riemann Zeta Function (assuming the product is taken over matching pairs of zeros)
$$\zeta(s)=\frac{e^{(\log(2\pi)-1-\gamma/2)s}}{2(s-1)\Gamma(1+s/2)}\prod_{\rho}\left(1-\frac{s}{\rho} \right)e^{s/\rho}$$
can one derive the exact value of $\sum_{\rho} \frac{1}{\rho}$
to be
$$\sum_{\rho} \frac{1}{\rho} = -\log(2\sqrt{\pi})+1+\gamma/2$$
What implications does this have?



Answer



Let's start with the logarithm of the Hadamard product and take the derivative relatively to $s$ (remembering that $\;\psi(z):=\dfrac d{dz}\log\Gamma(z)\,$ with $\psi$ the digamma function) :
\begin{align}
\log\zeta(s)&=(\log(2\pi)-1-\gamma/2)s-\log\left(2(s-1)\Gamma(1+s/2)\right)+\sum_{\rho}\left(\log\left(1-\frac{s}{\rho} \right)+\frac s{\rho}\right)\\
\frac{\zeta'(s)}{\zeta(s)}&=\log(2\pi)-1-\gamma/2-\left(\frac 1{s-1}+\frac 12\psi\left(\frac s2+1\right)\right)+\sum_{\rho}\left(\frac 1{s-\rho}+\frac 1{\rho}\right)\\
\frac{\zeta'(s)}{\zeta(s)}+\frac 1{s-1}&=\log(2\pi)-1-\gamma/2-\frac 12\psi\left(\frac s2+1\right)+\sum_{\rho}\left(\frac 1{s-\rho}+\frac 1{\rho}\right)\\
\end{align}
taking the limit as $s\to 1$ and remembering that $\;\displaystyle \zeta(s)-\frac 1{s-1}=\gamma+O(s-1)\;$ and $\psi\left(\frac 32\right)=\psi\left(\frac 12\right)+2=-\gamma-\log(4)+2\;$ we get :
\begin{align}
\gamma&=\log(2\pi)-1-\gamma/2+\left(\gamma/2+\log(2)-1\right)+\sum_{\rho}\left(\frac 1{1-\rho}+\frac 1{\rho}\right)\\
\sum_{\rho}\left(\frac 1{1-\rho}+\frac 1{\rho}\right)&=-\log(4\pi)+2+\gamma
\end{align}
This answer is twice your series (as it should be, since if $\rho$ is a root then $1-\rho$ is also one).



I considered here only a formal derivation (the series is only conditionally convergent: the roots must be sorted with increasing absolute imaginary parts, and taking the limit inside the series should be justified); for detailed proofs see for example Edwards' book, p. 67.



For the sum of integer powers of the roots see Mathworld starting at $(5)$.



Other references were proposed in this answer, such as this MO thread, which may be more useful for your question concerning 'implications' (see the answer and comments by Micah Milinovich).


Why a decimal fraction is not expressing exactly what a rational number is in base 2?



I am currently using rational numbers to express currency and math operations with currency. While dealing with rational numbers has been a great convenience in overcoming the limitations of scaling integers into uint64, I have a few concerns about decimal fractions vs. rational numbers and the quotients they express:




Given the following rational number:



$5764607523034235/576460752303423488$



this supposedly represents 0.01 'exactly', according to the 'quotToFloat' method found here: http://golang.org/src/pkg/math/big/rat.go



but given the following decimal fraction:



$1/100$




this supposedly does not represent 0.01 'exactly'.



My concern lies in holding 2 different rational numbers that represent the same quotient but not being able to find equality between them without converting them and comparing.



Why is it that $1/100$ does not express 0.01 exactly? Am I butting up against base2 limitations?


Answer



Yes, you are up against base 2 limitations. It is the same as in base 10: you cannot represent $\frac 13=0.\overline 3$ exactly because its denominator has a prime factor that does not divide $10$. Likewise $0.1_{10}=0.00011\overline{0011}_2$ is a repeating "decimal" in base 2 because of the factor $5$ in the denominator. Why are you using quotToFloat? It looks like your package has exact rationals, so you can just subtract them and check whether the difference is zero for equality.
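A concrete way to see this (a sketch in Python, whose `fractions.Fraction` plays the role of Go's `big.Rat`): the float literal `0.01` is exactly the base-2 rational quoted in the question, while the true $1/100$ is a different rational, and exact arithmetic lets you compare without ever converting to float.

```python
from fractions import Fraction

exact = Fraction(1, 100)     # the true rational 1/100
as_float = Fraction(0.01)    # the rational actually stored by the float 0.01

# The float 0.01 is this base-2 rational (denominator 2**59),
# the same number quoted from quotToFloat in the question:
go_rat = Fraction(5764607523034235, 576460752303423488)

same_as_go = (as_float == go_rat)          # the float equals the big rational
exactly_hundredth = (as_float == exact)    # ...but it is not exactly 1/100
```

So the two "0.01"s really are two different rationals: one is $1/100$, the other is the closest fraction with a power-of-two denominator.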


Some questions about differential forms


  • If $A$ is a differential one-form then $A\wedge A = 0$ (and hence any higher wedge power vanishes). How then does $A\wedge A \wedge A$ make sense in the Chern-Simons form, $Tr(A\wedge dA + \frac{2}{3} A \wedge A \wedge A)$ ?




    I guess this anti-commutative nature of wedge product does not work for Lie algebra valued one-forms since tensoring two vectors is not commutative.


  • If the vector space in which the vector valued differential form is taking values is $V$ with a chosen basis $\{v_i\}$ then books use the notation $A = \Sigma _{i} A_i v_i$ where the sum is over the dimension of the vector space and $A_i$ are ordinary forms of the same rank.




I would like to know whether this $A_i v_i$ is just a notation for $A_i \otimes v_i$ ?




  • Similarly say $B$ is a vector valued differential form taking values in $W$ with a chosen basis $\{w_i\}$ and in the same notation, $B = \Sigma _j B_j w_j$. Then the notation used is that, $A \wedge B = \Sigma _{i,j} A_i \wedge B_j v_i \otimes w_j$




    I wonder if in the above $A_i \wedge B_j v_i \otimes w_j$ is just a notation for $A_i \wedge B_j \otimes v_i \otimes w_j$ ?


  • If $A$ and $B$ are vector bundle valued differential forms (like, say, the connection $1$-form $\omega$ or the curvature $2$-form $\Omega$), how is $Tr(A)$ defined, and why is $d(Tr A) = Tr(d A)$ and $Tr(A \wedge B) = Tr(B \wedge A)$ ?


  • Is $A \wedge A \wedge A \wedge A = 0$ ? for $A$ being a vector bundle valued $1$-form or is only $Tr(A \wedge A \wedge A \wedge A) = 0$ ?


  • If A and B are two vector bundle valued $k$ and $l$ form respectively then one defines $[A,B]$ as , $[A,B] (X_1,..,X_{k+l}) = \frac{1}{(k+l)!} \Sigma _{\sigma \in S_n} (sgn \sigma) [A (X_{\sigma(1)},X_{\sigma(2)},..,X_{\sigma(k)}) , B (X_{\sigma(k+1)},X_{\sigma(k+2)},..,X_{\sigma(k+l)})]$



    This means that if say $k=1$ then $[A,A] (X,Y) = [A(X),A(Y)]$ and $[A,A] = 2A \wedge A$.
    The Cartan structure equation states that $d\Omega = \Omega \wedge \omega - \omega \wedge \Omega$.



    But some people write this as $d\Omega = [\Omega,\omega]$.




    This is not clear to me. Because if the above were to be taken as a definition of the $[,]$ then clearly $[A,A]=0$ contradicting what was earlier derived.


Prove inequality: When $n > 2$, $n! < {left(frac{n+2}{sqrt{6}}right)}^n$



Prove: When $n > 2$,

$$n! < {\left(\frac{n+2}{\sqrt{6}}\right)}^n$$



PS: please do not use mathematical induction method.



EDIT: sorry, I forget another constraint, this problem should be solved by
algebraic mean inequality.



Thanks.


Answer



This used to be one of my favourite high-school problems. This is one approach: consider $y=\ln x$ and say that you want to integrate it between $1$ and $n$.




[Figure: trapezoids inscribed under the curve $y=\ln x$ between $x=1$ and $x=n$.]



obviously the sum of the areas of the trapezoids is less than $\int_1^n\ln x\,\mathrm{d}x$. From this inequality, you get another inequality:
$$
n!<\left(\frac{n^{n+\frac{1}{2}}}{e^{n-1}}\right)
$$
Then just show the following inequality and you are done:
$$
\left(\frac{n^{n+\frac{1}{2}}}{e^{n-1}}\right)<{\left(\frac{n+2}{\sqrt{6}}\right)}^n

$$
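As a numerical sanity check of the chain $n! < n^{n+1/2}/e^{n-1} < \left(\frac{n+2}{\sqrt 6}\right)^n$ (a check, not a proof), compare logarithms over a range of $n$:

```python
import math

def log_factorial(n):       # log(n!)
    return math.lgamma(n + 1)

def log_middle(n):          # log of n^(n + 1/2) / e^(n - 1)
    return (n + 0.5) * math.log(n) - (n - 1)

def log_rhs(n):             # log of ((n + 2) / sqrt(6))^n
    return n * (math.log(n + 2) - 0.5 * math.log(6))

# the two-step chain from the answer, for n = 3, 4, ..., 199
chain_holds = all(log_factorial(n) < log_middle(n) < log_rhs(n)
                  for n in range(3, 200))
```

Working in logarithms avoids overflow and makes both inequalities in the chain visible separately.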


probability - Dice: Expected highest value with a tricky condition

I know how to calculate the expected value "E" of a roll of n k-sided dice if we are supposed to keep the highest number rolled. If I am not wrong, the formula is:



$$E = k - \frac{1^n + 2^n + \cdots + (k-1)^n}{k^n}$$



But what I would like to know is how would affect to that expected value the following condition: every 1 that is rolled will cancel the highest remaining number rolled. If the number of 1s is at least half of the total number of dice, then the value of the roll is 0.



For example, rolling 6 10-sided die:





  • Roll #1: 7, 4, 1, 10, 1, 9 ----> The 1 cancels the 10, the second 1 cancels the 9, value of the roll is 7


  • Roll #2: 1, 9, 6, 1, 1, 3 ----> The first 1 cancels the 9, the second 1 cancels the 6, the third 1 cancels the 3, there are no more dice, the value of the roll is 0


  • Roll #3: 8, 1, 1, 1, 6, 1 ----> The first 1 cancels the 8, the second 1 cancels the 6, the only remaining dice are 1s which count as fails, so the value of the roll is 0.




As I said, I know how to calculate the expected value of the highest number in a roll of n dice, but I do not know where to start to add the condition of the 1s canceling out the highest numbers, so I would appreciate some help.



Thanks a lot.
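No closed form offered here, but the rule can at least be pinned down by brute-force enumeration for small cases, which is useful for checking any formula you derive. The `roll_value` helper below is my reading of the cancellation rule from the three examples; the exact expectation simply averages it over all $k^n$ equally likely outcomes.

```python
from itertools import product
from fractions import Fraction

def roll_value(dice):
    """Each 1 cancels the highest remaining die; if the 1s are at
    least half of the dice, the whole roll is worth 0."""
    ones = sum(1 for d in dice if d == 1)
    if 2 * ones >= len(dice):
        return 0
    rest = sorted((d for d in dice if d != 1), reverse=True)
    return rest[ones]        # drop the `ones` highest, keep the next highest

def exact_expectation(n, k):
    """Average of roll_value over all k**n outcomes (n dice, k sides)."""
    total = sum(roll_value(d) for d in product(range(1, k + 1), repeat=n))
    return Fraction(total, k ** n)

def max_expectation(n, k):
    """E[max] without cancellation: k - (1^n + ... + (k-1)^n) / k^n."""
    return k - Fraction(sum(i ** n for i in range(1, k)), k ** n)
```

For instance, `roll_value((7, 4, 1, 10, 1, 9))` reproduces the value $7$ from Roll #1, and `exact_expectation` gives an exact rational answer for small $n$ and $k$.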

polynomials - Prove $x^n-1$ is divisible by $x-1$ by induction




Prove that for all natural number $x$ and $n$, $x^n - 1$ is divisible by $x-1$.



So here's my thoughts:

It is true for $n=1$; assuming it is true for $n-1$, I want to prove that it is also true for $n$.



then I use long division, I get:



$x^n -1 = x(x^{n-1} -1 ) + (x-1)$



so the left side is divisible by $x-1$ by hypothesis; what about the right side?


Answer



So first, you can't assume that the left-hand side is divisible by $x-1$. For the right-hand side, we have that $x-1$ divides $x-1$, and by the induction hypothesis $x-1$ divides $x^{n-1}-1$; so what can you conclude about the left-hand side?
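A quick computational check of both the long-division identity and the divisibility claim (illustration only, for small $x$ and $n$):

```python
# x^n - 1 = x * (x^(n-1) - 1) + (x - 1), and (x - 1) divides (x^n - 1)
identity_ok = all(
    x ** n - 1 == x * (x ** (n - 1) - 1) + (x - 1)
    for x in range(2, 30) for n in range(1, 12)
)
divisible_ok = all(
    (x ** n - 1) % (x - 1) == 0
    for x in range(2, 30) for n in range(1, 12)
)
```

(The case $x=1$ is skipped since $x-1=0$; divisibility by $0$ is not meaningful there, though $x^n-1=0$ too.)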


Sunday, 16 November 2014

calculus - Evaluate $lim_{xrightarrow 0} frac{sin x}{x + tan x} $ without L'Hopital



I need help finding the the following limit:




$$\lim_{x\rightarrow 0} \frac{\sin x}{x + \tan x} $$



I tried to simplify to:



$$ \lim_{x\rightarrow 0} \frac{\sin x \cos x}{x\cos x+\sin x} $$



but I don't know where to go from there. I think, at some point, you have to use the fact that $\lim_{x\rightarrow 0} \frac{\sin x}{x} = 1$. Any help would be appreciated.



Thanks!


Answer




$$
\frac{\sin x}{x + \tan x} = \frac{1}{\frac{x}{\sin x}+\frac{\tan x}{\sin x}} \to 1/2
$$
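Numerically the same conclusion can be sanity-checked (the function is from the question; the sample points are arbitrary):

```python
import math

def f(x):
    return math.sin(x) / (x + math.tan(x))

# values should approach 1/2 as x -> 0 from either side
samples = [f(t) for t in (0.1, 0.01, 1e-4, 1e-6, -1e-6)]
```

Each term in the denominator contributes a factor tending to $1$ ($x/\sin x \to 1$ and $\tan x/\sin x \to 1$), which is why the sum tends to $2$ and the whole expression to $1/2$.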


calculus - When can you treat a limit like an equation?




Lately, I've been very confused about the weird properties of limits. For example, I was very surprised to find out that $\lim_{n \to \infty} (3^n+4^n)^{\large \frac 1n}=4$, because if you treat this as an equation, you can raise both sides to the $n$th power, subtract, and reach the wrong conclusion that $\lim_{n \to \infty} 3^n=0$. I've asked this question before over here, and the answer was that $\infty-\infty$ is not well defined. I also found out here that you cannot raise both sides of a limit to a power unless the limit is strictly less than $1$.

However, there are also many examples where limits are treated as equations. For example, taking the logarithm of each side is standard procedure. Substitutions such as using $\lim_{x \to 0} \frac {\sin x}{x}=1$ work (although other substitutions sometimes don't work). So when can a limit be treated as an equation? Can you take, for example, the sine or tangent of each side like you can take the log? My guess is that you can treat it as an equation at least whenever nothing is approaching $0$ or $\infty$, but I'm not sure. Thanks.



P.S. Please keep the answers at a Calculus 1 level, and I have not learned the epsilon-delta definition of a limit.


Answer



The $n$ in the limit has no meaning outside of the limit. Therefore you cannot raise both sides to the $n$th power and then "bring in" the $n$ from the outside into the limit (to be combined with the $1/n$ exponent, as you seem to be doing). In an expression such as $$\lim_{n \to \infty} (3^n+4^n)^{1/n}$$ $n$ is a "dummy" variable. It simply tells you which variable in the inner expression we are taking the limit with respect to. The value of the limit is not a function of $n$, and therefore "raising both sides to the $n$th power" is not a meaningful operation.



It is, however, okay to square both sides (or cube, or raise to any fixed power) of the equation, or take the log, sin, tan, or any other function. An equation involving a limit is still an equation and you can always do any operation to both sides of an equation. However, you still have to be careful. Suppose you have an equation like $$\lim_{x \to 0} f(x) = L.$$ Now you take the log of both sides. You get $$\log \lim_{x \to 0} f(x) = \log L.$$ Notice I have not yet brought the $\log$ "into" the limit, which is usually the next step you want to take. In order for that step to be valid, you need to know that $\log$ is continuous at the values of $f(x)$ near $x=0$, and then you would get $$ \lim_{x \to 0} \log f(x) = \log L.$$


multivariable calculus - Partial Derivative and Differentiability

I need help for the following question.



Do the partial derivatives of the function $f(x, y)=\min(|x|,|y|)$ exist and what are they?



Also, I don't think the function is differentiable at $(0,0)$ but how should I prove this from definition?



Many thanks!

solid geometry - Volume and surface area of a sphere by polyhedral approximation



Exposition:




In two dimensions, there is a (are many) straightforward explanation(s) of the fact that the perimeter (i.e. circumference) and area of a circle relate to the radius by $2\pi r$ and $\pi r^2$ respectively. One argument proceeds by approximating these quantities using regular cyclic polygons (equilateral, equiangular, on the circle of radius $r$), noting that such a polygon with $n$ sides can be decomposed into $n$ isosceles triangles with peak angle $\frac{2\pi}{n}$, base length $~2r\sin\frac{\pi}{n}$, and altitude $~r \cos \frac{\pi}{n}$ . Then, associating the circle with the limiting such polygon, we have,
$$
P = \lim_{n\to\infty} n \cdot \text{base length } = \lim_{n\to\infty}2r \cdot \pi \frac{n}{\pi} \sin \frac{\pi}{n} = 2\pi r ~~,
$$
and similarly, (via trig identity)
$$
A = \lim_{n\to\infty} n\left(\frac{1}{2} \text{ base } \times \text{ altitude }\right) = \lim_{n\to\infty}\frac{r^2\cdot 2\pi}{2} \frac{n}{2\pi} \sin \frac{2\pi}{n} = \pi r^2 ~~.
$$
Question:




Could someone offer intuition, formulas, and/or solutions for performing a similarly flavored construction for the surface area and volume of a sphere?



Images and the spatial reasoning involved are crucial here, as there are only so many platonic solids, so I am not seeing immediately the pattern in which the tetrahedra (analogous to the 2D triangles) will be arranged for arbitrarily large numbers of faces. Thus far my best result has been a mostly-rigorous construction relying on this formula (I can write up this proof on request). What I'd like to get out of this is a better understanding of how the solid angle of a vertex in a polyhedron relates to the edge-edge and dihedral angles involved, and perhaps a "dimension-free" notion for the ideas used in this problem to eliminate the need to translate between solid (2 degrees of freedom) and planar (1 degree) angles.


Answer



Alright, I've come up with a proof in what I think is the right flavor.



Take a sphere with radius $r$, and consider the upper hemisphere. For each $n$, we will construct a solid out of stacks of pyramidal frustums with regular $n$-gon bases. The stack will be formed by placing $n$ of the $n$-gons perpendicular to the vertical axis of symmetry of the sphere, centered on this axis, inscribed in the appropriate circular slice of the sphere, at the heights $\frac{0}{n}r, \frac{1}{n}r, \ldots,\frac{n-1}{n}r $ . Fixing some $n$, we denote by $r_\ell$ the radius of the circle which the regular $n$-gon is inscribed in at height $\frac{\ell}{n}r$ . Geometric considerations yield $r_\ell = \frac{r}{n}\sqrt{n^2-\ell^2}$ .



As noted in the question, the area of this polygonal base will be $\frac{n}{2}r_\ell^2 \sin\frac{2\pi}{n}$ for each $\ell$ . I am not sure why (formally speaking) it is reasonable to assume, but it appears visually (and appealing to the 2D case) that the sum of the volumes of these frustums should approach the volume of the hemisphere.




So, for each $\ell = 1,2,\ldots,n-1$, the term $V_\ell$ we seek is $\frac{1}{3}B_1 h_1 - \frac{1}{3}B_2 h_2 $, the volume of some pyramid minus its top. Using similarity of triangles and everything introduced above, we can deduce that
$$
B_1 = \frac{n}{2}r_{\ell-1}^2 \sin\frac{2\pi}{n}~,~B_2 = \frac{n}{2}r_\ell^2 \sin\frac{2\pi}{n} ~,~h_1 = \frac{r}{n}\frac{r_{\ell-1}}{r_{\ell-1}-r_{\ell}}~,~h_2=\frac{r}{n}\frac{r_{\ell}}{r_{\ell-1}-r_{\ell}} ~~.
$$
So, our expression for $V_\ell$ is
$$
\frac{r}{6} \sin\frac{2\pi}{n} \left\{ \frac{r_{\ell-1}^3}{r_{\ell-1}-r_{\ell}} - \frac{r_{\ell}^3}{r_{\ell-1}-r_{\ell}} \right\} = \frac{\pi r}{3n} \frac{\sin\frac{2\pi}{n}}{2\pi/n} \left\{ r_{\ell-1}^2 + r_\ell^2 + r_{\ell-1}r_\ell \right\}
$$ $$
= \frac{\pi r^3}{3n^3} \frac{\sin\frac{2\pi}{n}}{2\pi/n} \left\{ (n^2 - (\ell-1)^2) + (n^2-\ell^2) + \sqrt{(n^2-\ell^2)(n^2-(\ell-1)^2)} \right\} ~~.

$$
So, we consider $ \lim\limits_{n\to\infty} \sum_{\ell=1}^{n-1} V_\ell$ . The second factor involving sine goes to 1, and we notice that each of the three terms in the sum is quadratic in $\ell$, and so the sum over them should intuitively have magnitude $n^3$. Hence, we pass the $\frac{1}{n^3}$ into the sum and evaluate each sum and limit individually, obtaining 2/3, 2/3, and 2/3 respectively (the first two are straightforward, while the third comes from the analysis in this answer).



Thus, we arrive at $\frac{\pi r^3}{3} (2/3+2/3+2/3) = \frac{2}{3}\pi r^3$ as the volume of a hemisphere, as desired.



So was this too excessive or perhaps worth it? I'll leave that to all of you. :)
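The construction can also be checked numerically: summing the frustum volumes $V_\ell$ (a direct transcription of the formula derived above, with $r=1$) approaches $\frac{2}{3}\pi$ as $n$ grows.

```python
import math

def hemisphere_volume_approx(n, r=1.0):
    """Sum of the n-gon frustum volumes V_1 + ... + V_{n-1}."""
    sinc = math.sin(2 * math.pi / n) / (2 * math.pi / n)
    total = 0.0
    for l in range(1, n):
        a = n * n - (l - 1) ** 2    # n^2 - (l-1)^2
        b = n * n - l * l           # n^2 - l^2
        total += a + b + math.sqrt(a * b)
    return (math.pi * r ** 3 / (3 * n ** 3)) * sinc * total

approx = hemisphere_volume_approx(4000)
exact = 2 * math.pi / 3             # volume of the unit hemisphere
```

Each of the three sums contributes $\tfrac23 n^3$ in the limit, matching the $2/3 + 2/3 + 2/3$ evaluation in the answer.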


commutative algebra - Can you construct a field with 4 elements?



Can you construct a field with $4$ elements? Can you help me think of any examples?


Answer



Hint: Two of the elements have to be $0$ and $1$; call the others $a$ and $b$. We know the multiplicative group is cyclic of order $3$ (there is only one group to choose from), so $a*a=b, b*b=a, a*b=1$. Now you just have to fill in the addition table: how many choices are there for $1+a$? Then see what satisfies distributivity and you are there.



Added: it is probably easier to think about the choices for $1+1$
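Following the hint, here is the field written out concretely as $\mathbb{F}_2[x]/(x^2+x+1)$: encode $0,1,a,b$ as the integers $0,1,2,3$, the two bits being the coefficients of $1$ and $x$. Addition in characteristic $2$ is bitwise XOR, which in particular settles the added question: $1+1=0$, and $1+a=b$.

```python
# GF(4) as F_2[x]/(x^2 + x + 1): 0 -> 0, 1 -> 1, a -> 2 (= x), b -> 3 (= x + 1)
def gf4_add(u, v):
    return u ^ v                      # characteristic 2: addition is XOR

def gf4_mul(u, v):
    p = 0
    for i in range(2):                # schoolbook polynomial multiplication
        if (v >> i) & 1:
            p ^= u << i
    for i in (3, 2):                  # reduce modulo x^2 + x + 1, i.e. x^2 = x + 1
        if (p >> i) & 1:
            p ^= 0b111 << (i - 2)
    return p
```

With these tables, $a*a=b$, $b*b=a$, $a*b=1$ exactly as the hint prescribes, and distributivity holds for all triples.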


calculus - $limlimits_{xtoinfty}left(cosleft(frac{1}{x}right)right)^x$

I want to find $\lim\limits_{x\to\infty}\left(\cos\left(\frac{1}{x}\right)\right)^{\!x}$.
I'd like to use the fact that
$$\left(\cos\left(\tfrac{1}{x}\right)\right)^{\!x} = e^{\ln\left(\cos(1/x)^x\right)} = e^{x\ln\cos(1/x)},$$
but I am not sure what to do after this.
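Not a full answer, but a numeric sketch points at the value: $\ln\left(\cos(1/x)^x\right) = x\ln\cos(1/x) \approx x\cdot\left(-\frac{1}{2x^2}\right) = -\frac{1}{2x} \to 0$, so the limit should be $e^0=1$, and the numbers agree:

```python
import math

def g(x):
    return math.cos(1.0 / x) ** x

values = [g(10.0 ** k) for k in range(1, 7)]     # x = 10, 100, ..., 10**6
log_tail = 1e6 * math.log(math.cos(1e-6))        # ~ -1/(2 * 10**6)
```

The values climb monotonically toward $1$, and the logarithm shrinks like $-1/(2x)$, consistent with the expansion above.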

modular arithmetic - Solving Linear Congruence



Ok, I found a lot of questions asking about solving $a \equiv b \pmod c$ where you could divide $a$ and $b$ by some $x$ with $\gcd(x, c) = 1$. How do you solve it when this is not the case?




Suppose I have $10 x \equiv 5 \pmod{15}$. How do I solve this? How can you solve to get a linear equation in $x$?



On inspection (and trying out values), I see that $x = 3n + 2$ is what I'm looking for. How can I get this mathematically?



And yes, this is homework, but I changed the numbers so that I could practice on the actual problem ;)


Answer



$\begin{eqnarray}\rm{\bf Hint}\ &&\rm mod\ mc\!:\ ac\,x\equiv bc&\iff&\rm mod\ m\!:\ ax\equiv b\quad for\ \ c\ne0\\
\rm &&\rm by\ \ \ \ mc\, \:|\: \ ac\,x-bc&\iff&\rm m\ |\ ax-b\\
\rm &&\rm because\ \ \ \dfrac{ac\,x-bc}{mc}&\ \ =\ \ &\rm \dfrac{ax-b}{m}
\end{eqnarray}$
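The hint turned into code (a sketch, assuming Python 3.8+ for `pow(a, -1, m)`): divide $a$, $b$, and $m$ by $g=\gcd(a,m)$ (a solution exists only when $g\mid b$), then multiply by the inverse of the reduced $a$. For $10x\equiv 5\pmod{15}$ this reduces to $2x\equiv 1\pmod 3$, giving $x\equiv 2\pmod 3$, i.e. the asker's $x = 3n + 2$.

```python
from math import gcd

def solve_congruence(a, b, m):
    """Solve a*x = b (mod m); return (x0, m') meaning x = x0 (mod m'), or None."""
    g = gcd(a, m)
    if b % g != 0:
        return None                      # no solution exists
    a, b, m = a // g, b // g, m // g     # divide the whole congruence by g
    x0 = (b * pow(a, -1, m)) % m         # now gcd(a, m) = 1, so a is invertible
    return x0, m

sol = solve_congruence(10, 5, 15)        # expect x = 2 (mod 3)
```

Note the solution family lives modulo $m/g$, not $m$: the original congruence has $g$ solutions mod $m$.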



real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...