Friday 31 October 2014

calculus - How to represent this partial sum?

I'm trying to find a closed form for the partial sums of $\sum_{n=1}^\infty n^2$. I know that every partial sum of this series can be written out; for example, when $n=5$ it is $5^2+4^2+3^2+2^2+1^2$. I know that $1+2+3+4+\cdots$ has the partial-sum formula $\frac{n(n+1)}{2}$, but I can't figure out how to adapt this to the squared terms.
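(For what it's worth, the squared analogue of $\frac{n(n+1)}{2}$ is the standard identity $\sum_{k=1}^n k^2=\frac{n(n+1)(2n+1)}{6}$; a quick script makes it easy to check a candidate closed form against the partial sums.)

# sanity check: compare partial sums of squares against n(n+1)(2n+1)/6
for n in range(1, 10):
    partial = sum(k * k for k in range(1, n + 1))
    closed = n * (n + 1) * (2 * n + 1) // 6
    assert partial == closed
print("closed form matches for n = 1..9")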

Summation with factorial

I want to understand how this step is performed.

Can you tell me how this value of $P_o$ is obtained from the first equation?



$$P_o=\frac{1}{\sum_{k=0}^\infty\left(\frac{\alpha}{u}\right)^k\frac{1}{k!}}$$



$$P_o=e^{-\frac{\alpha}{u}}$$
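(For what it's worth, the step is just the Maclaurin series of the exponential function: $$\sum_{k=0}^\infty\left(\frac{\alpha}{u}\right)^k\frac{1}{k!}=e^{\alpha/u},$$ so the reciprocal in the first equation is $P_o=e^{-\alpha/u}$.)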

elementary number theory - Solving simple congruence



I'm trying to solve a simple congruence like $23d\equiv 1 \pmod{40}$, and so I'm using the method I found on Wikipedia (http://en.wikipedia.org/wiki/Linear_congruence_theorem). So finally I arrive at the conclusion that $d=40k+7$ for some integer $k$, and here's where I get stuck: how can I find all the solutions (here I know there's only one since $\gcd(23, 40) = 1$, but I'm also asking about different problems) if not by taking some random values and plugging in everything I think of incrementally?



I mean: I know the solution is $k=0$ and so $d=7$ only thanks to the wild guess of "let's try 0 first" and hitting the jackpot, but how do I derive it rather than finding it by common sense?



Answer



You're just a step away: $$23 d \equiv 1 \pmod {40} \implies d = 40k + 7 \iff d - 7 = 40 k $$ $$\iff d\equiv 7 \pmod {40}$$



So every solution $d$ satisfying your congruence equation is expressible, as you've shown, by $$d = 40k + 7 \iff 40\mid (d - 7)$$. There are infinitely many values of $k$ for which this is true.



Each integer $k \in \mathbb{Z}$ yields a value $d = 40k + 7$ satisfying $d \equiv 7 \pmod {40}$. In other words, the values of $d$ which satisfy your congruence equation are all $d$ for which $40 \mid (d - 7)$. There are infinitely many $d$ congruent to $7 \bmod 40$.



By convention, however, it is standard to choose $k$ so that $0 \leq d \lt m$, where $m$ is the modulus. So in this case we choose $d = 0 \cdot 40 + 7$ to represent the equivalence class of all solutions, and write the solution as $d \equiv 7 \pmod {40}$, because $d = 7$ is the only solution with $0 \leq d \lt 40$.
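For what it's worth, a small script can replace the guessing; the helper names below are mine, but the method (the extended Euclidean algorithm) is the standard mechanical route to the inverse of $23$ modulo $40$.

def ext_gcd(a, b):
    # returns (g, s, t) with s*a + t*b == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def solve_congruence(a, c, m):
    # solve a*d = c (mod m), assuming gcd(a, m) == 1
    g, s, _ = ext_gcd(a, m)
    assert g == 1
    return (s * c) % m

print(solve_congruence(23, 1, 40))  # 7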


prime factorization - Power of 2 & 5 in product of consecutive numbers



Is there any way to calculate the powers of 2 and 5 in a product of consecutive numbers, given $l$ and $r$?




i.e. suppose $x = l \times (l+1) \times (l+2) \times \cdots \times (r-1) \times r$. Then if $x = 2^{p_1}\cdot3^{p_2}\cdot5^{p_3}\cdot7^{p_4}\cdots$, I am interested in a direct way of calculating $p_1$ and $p_3$ from just $l$ and $r$.



I am just curious about it. I just want $2$ and $5$, but if there is a general solution for any prime number, it would be great if you mentioned that.


Answer



Let us define the product of the first $n$ positive integers to be $n!$.



Now, the highest power of a prime $p$ contained in $n!$ is given by



$$p_{\text{highest power}}=\left\lfloor\frac{n}{p^1}\right\rfloor+\left\lfloor\frac{n}{p^2}\right\rfloor+\left\lfloor\frac{n}{p^3}\right\rfloor+\cdots+\left\lfloor\frac{n}{p^k}\right\rfloor$$ where $k$ is the largest integer with $p^k\leq n$, and $\lfloor x\rfloor$ is the greatest integer function.




For more information, see De Polignac's Formula.
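To connect this with the question as asked: the exponent of a prime $p$ in $l(l+1)\cdots r$ is $\nu_p(r!)-\nu_p((l-1)!)$, since the product equals $r!/(l-1)!$. A minimal sketch in code (the function names are mine):

def legendre(n, p):
    # exponent of the prime p in n!, by de Polignac's (Legendre's) formula
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def prime_power_in_product(l, r, p):
    # exponent of p in l*(l+1)*...*r = r!/(l-1)!
    return legendre(r, p) - legendre(l - 1, p)

print(prime_power_in_product(4, 10, 2))  # 7, the power of 2 in 4*5*...*10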


Thursday 30 October 2014

divisibility - power of ten modulo prime

In a mathematical quiz (which I solved by computational means), I came across the problem of finding powers $k$ of ten with a given residue modulo a given prime,
$$10^k \equiv q \pmod{p}$$
e.g.
$$10^k \equiv 46 \pmod{47},$$
and I wonder if there is a generic approach to this problem.
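This is an instance of the discrete logarithm problem, so the generic approaches are the discrete-log algorithms (brute-force stepping, baby-step giant-step, Pohlig-Hellman). A minimal brute-force sketch, assuming such a $k$ exists:

def dlog(base, target, p):
    # smallest k >= 1 with base^k = target (mod p), by stepping through powers
    x = base % p
    for k in range(1, p):
        if x == target % p:
            return k
        x = x * base % p
    return None  # target is not in the subgroup generated by base

print(dlog(10, 46, 47))  # 23, since 10^23 = 46 (mod 47)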

sequences and series - Easy rational approximations of base-2 logarithms

I often find myself in need of a quick approximation to a base-2 logarithm of an integer, e.g. $\log_2 3$ or $\log_2 5$. While I can always reach for a calculator (or computer), I'd quite like to be able to derive one quickly using pen and paper, whenever I need it.




Ideally, such an approximation would be in the form of a sequence of rational numbers that approaches the target value, and which is easy to calculate by hand. Does anyone know of such a sequence?



To give an idea of what I mean: if I ever find myself needing to approximate the Golden ratio, I can simply write down the Fibonacci sequence, pick two consecutive terms and take the ratio between them. This gives successively better rational approximations to $\phi$ and I can easily calculate it even without a pen, since it requires only addition. While I imagine there isn't quite such an easy way to approximate $\log_2 3$ (or $\log_2 5$ or $\log_2 n$), I'm looking for something as close to that as possible.



(If such a thing exists only for natural logarithms it would still be helpful, but base 2 is greatly preferred.)
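(As a concrete illustration of the Fibonacci trick described above, a minimal sketch:)

# ratios of consecutive Fibonacci numbers approach the golden ratio
a, b = 1, 1
for _ in range(15):
    a, b = b, a + b
print(b / a, (1 + 5 ** 0.5) / 2)  # both approximately 1.618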

Square roots of a complex number



My book says that given the complex number $z$ with modulus $r$, its square roots are $\pm\sqrt{r}\,e^{i\theta/2}$, where $\theta$ is the principal value of $\arg z$. My question is: why must we consider the principal value of its argument?


Answer



We consider the principal value by convention. But also, if we take the other value of the half-argument, $\frac{\theta}{2}+\pi$, we find the same two roots, because $e^{i(\frac{\theta}{2}+\pi)}=-e^{i\frac{\theta}{2}}$.


Wednesday 29 October 2014

soft question - What would be an effective way to learn group theory on my own?



I've read the basics of this branch and I found it extremely interesting, and I would really love to learn more about it.



I want to study as much as I can on my own, as my course doesn't have group theory, unfortunately.




What would be a good (ground-up level) introductory book, and then a mid-level group theory book?



Thanks so much in advance.



E: I'm taking a number theory course at the moment. I haven't taken any other math courses yet, although I've studied some calculus, real analysis and linear algebra on my own.


Answer



In my opinion, Artin's is a pretty decent introductory text; I learned the very basics of abstract algebra predominantly from that book. The first chapter focuses on matrices, which was actually very good for me because my linear algebra course had focused on the abstract picture of linear algebra and spent very little time with matrices. Artin does have a few linguistic oddities; off the top of my head: the phrase "law of composition" for a binary operation, and "group operations" instead of "group actions". Overall, though, I'd say it's a good comprehensive text to get you started.



For a text that has more meat to it, but is still well grounded for the beginner, I'd go for Dummit and Foote, if you're feeling ambitious.


divisibility - Number theory,GCD, coprime integers



I am sorry for the bad title but I really can't think of a better one.
So I was learning about the Euclidean algorithm, and I came across a statement that is hard for me to understand. The book that I was reading says that if $X$ and $Y$ are coprime integers, then $X-Y$ and $Y$ are also coprime. It says that this is really easy to prove, so the proof is omitted. Can anyone explain or write down the proof of this?



Answer



Suppose, to the contrary, that $X-Y$ and $Y$ are not co-prime. So there exists an integer $d \ge 2$ and integers $m,n$ such that $X-Y=md$ and $Y = nd$.



But then $X = (m+n)d$, so $d$ divides both $X$ and $Y$, contradicting the fact that $X$ and $Y$ are co-prime.


linear algebra - What is the most efficient way to determine if a matrix is invertible?




I'm learning Linear Algebra using MIT's Open Courseware Course 18.06



Quite often, the professor says "... assuming that the matrix is invertible ...".



Somewhere in the lecture he says that using a determinant on an $n \times n$ matrix is on the order of $O(n!)$ operations, where an operation is a multiplication and a subtraction.



Is there a more efficient way? If the aim is to get the inverse, rather than just determine invertibility, what is the most efficient way to do this?


Answer



Gauss-Jordan elimination can be used to determine when a matrix is invertible, and it runs in polynomial (in fact, cubic) time. The same method (applying the same row operations to the identity matrix) computes the inverse in polynomial time as well.
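A minimal sketch of the invertibility test via elimination (pure Python, with partial pivoting; the tolerance is an arbitrary choice):

def is_invertible(a, eps=1e-12):
    # Gaussian elimination on a copy of a (a list of lists);
    # invertible iff every column produces a usable (nonzero) pivot
    m = [row[:] for row in a]
    n = len(m)
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < eps:
            return False
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= f * m[col][c]
    return True

print(is_invertible([[1.0, 2.0], [2.0, 4.0]]))  # False (rank 1)
print(is_invertible([[1.0, 2.0], [3.0, 4.0]]))  # True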


calculus - $dx$ being a desginator (with respect to $x$) or being a term?

I am confused as to what $dx$ truly is. I am doing some u-substitution problems and this is what I came across:



$$\int 2x(x-1)^{1/2}\,dx$$




$u=x-1$ and therefore $du=dx$



when we substitute we get:
$$2 \int (u^{3/2}+u^{1/2}) \, du$$
(here the du simply replaces the dx because our variable changed)



In another example:
$$\int 4x^5(x^3+1)^{1/3} \, dx$$
$$u=x^3+1$$

$$du=3x^2\,dx$$
therefore it becomes:
$$\int 4(u-1)\,u^{1/3}\,\frac{du}{3}$$
(here my teacher didn't put a $dx$ at the end; she just left it off)
So my question is this: why is it that sometimes $dx$ and $du$ are treated as values that can be multiplied to other terms in the integrand and sometimes they are simply treated as a command (do "blank" with respect to $x$, or $u$ or whatever is used)?

abstract algebra - Linear independence over $\mathbb Q$ and $\mathbb R$ of subsets of $2^{\mathbb N}$



I have the following doubt:




Suppose $f_1,\ldots,f_n\in 2^{\mathbb N}$ are such that $\{f_1,\ldots,f_n\}$ is linearly independent in the $\mathbb Q$-vector space $\mathbb{Q^N}$. Does this set remain linearly independent in the $\mathbb R$-vector space $\mathbb{R^N}$?





Here $2=\{0,1\}$. I would like hints, not full answers.



Thanks






Edit: I have shown that if there is some $I\subseteq\Bbb N$ such that $f_1\upharpoonright I,\ldots,f_n\upharpoonright I$ is linearly independent over $\Bbb Q$ with $|I|\geq n$,then we are done, however I can't see why such $I$ should exist.


Answer




Suppose $\lambda_1 f_1 + \cdots + \lambda_n f_n = 0$, where $\lambda_1,\dots,\lambda_n\in\mathbb{R}$. Try picking a basis for the $\mathbb{Q}$-vector space spanned by $\lambda_1,\dots,\lambda_n$.


Tuesday 28 October 2014

calculus - Tough integrals that can be easily beaten by using simple techniques

This question is just idle curiosity. Today I found that an integral problem can be easily evaluated by using simple techniques, as in my answer evaluating



\begin{equation}

\int_0^{\pi/2}\frac{\cos{x}}{2-\sin{2x}}dx
\end{equation}



I'm even shocked (and impressed, too) by user @Tunk-Fey's answer and user @David H's answer, where they use simple techniques to beat hands down the following tough integrals:



\begin{equation}
\int_0^\infty\frac{x-1}{\sqrt{2^x-1}\ \ln\left(2^x-1\right)}\ dx
\end{equation}



and




\begin{equation}
\int_{0}^{\infty}\frac{\ln x}{\sqrt{x}\,\sqrt{x+1}\,\sqrt{2x+1}}\ dx
\end{equation}
So, I'm wondering about super tough definite integrals that can easily be beaten by using simple techniques with only a few lines of work. Can one provide such an integral, including its evaluation?



To avoid too many possible answers and to narrow the answer set, I'm interested in knowing tough integrals that can easily be beaten by only using clever substitutions, simple algebraic manipulations, trigonometric identities, or the following property:



\begin{equation}
\int_b^af(x)\ dx=\int_b^af(a+b-x)\ dx

\end{equation}



I'd request to avoid using contour/ residue integrals, special functions (except gamma, beta, and Riemann zeta function), or complicated theorems. I'd also request to avoid standard integrals like



\begin{align}
\int_{-1}^1\frac{\cos x}{1+e^{1/x}}\ dx&=\sin 1\tag1\\[10pt]
\int_0^\infty\frac{\log ax}{b^2+c^2x^2}\ dx&=\frac{\pi\log\left(\!\frac{ab}{c}\!\right)}{2bc}\tag2\\[10pt]
\int_0^1\frac{1}{(ax+b(1-x))^2}\ dx&=\frac{1}{ab}\tag3\\[10pt]
\int_0^{\pi/2}\frac{\sin^kx}{\sin^kx+\cos^kx}\ dx&=\frac{\pi}{4}\tag4\\[10pt]
\int_0^\infty\frac{e^{-ax}-e^{-bx}}{x}\ dx&=\log\left(\!\frac{b}{a}\!\right)\tag5

\end{align}
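(Aside: identities like these are easy to sanity-check numerically. A minimal sketch for property $(4)$, assuming SciPy is available; the exponent $k=3$ is an arbitrary choice.)

from math import sin, cos, pi
from scipy.integrate import quad

k = 3  # arbitrary; identity (4) claims the value is pi/4 for every k
value, _ = quad(lambda x: sin(x) ** k / (sin(x) ** k + cos(x) ** k), 0, pi / 2)
print(value, pi / 4)  # both approximately 0.785398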

elementary set theory - What is the practical benefit of a function being injective? surjective?



I have learned that an injective function is one such that no two elements of the domain map to the same value in the codomain.



Example: The function $x \mapsto x^2$ is injective when the domain is positive numbers but is not injective when the domain is all numbers because both $(-2)^2$ and $2^2$ map to the same value, $4$.



Is there an advantage of an injective function over a non-injective function? What is the practical benefit of injective functions? Or perhaps there is an advantage to a function not being injective?



I have also learned about surjective functions: a surjective function is one such that for each element in the codomain there is at least one element in the domain that maps to it.




What is the benefit of a function being surjective? Is there a danger to using functions that are not surjective?



I'd like to have a deeper understanding of injective and surjective than simply be able to parrot back their definitions. Please help.


Answer



Ed and Aaron, thank you very much for explaining the practical benefits of injectivity and surjectivity.



I have taken what I learned from you and cast it into my own thoughts and experiences. Please let me know of any errors.







Motivation



When I go hiking, I want to be able to retrace my steps and get back to my starting point.



When I go to a store, I want to be able to return home.



When I swim out from the shore, I want to be able to get back to the shore.



Going somewhere and then coming back to where one started is important in life.




And it is important in mathematics.



And it is important in functional programming.



Domain, Codomain, and Inverse



If a function maps a set of elements (the domain) to a set of values (the codomain) then it is often useful to have another function that can take the elements in the codomain and send them back to their original domain values. The latter is called the inverse function.



In order for a function to have an inverse function, it must possess two important properties, which I explain now.




The Injective Property



Let the domain be the set of days of the week. In Haskell one can create the set using a data type definition such as this:



data Day = Monday | Tuesday | Wednesday | Thursday | Friday | Saturday | Sunday


Let the codomain be the set of breakfasts. One can create this set using a data type definition such as this:




data Breakfast = Eggs | Cereal | Toast | Oatmeal | Pastry | Ham | Grits | Sausage


Now I create a function that maps each element of the domain to a value in the codomain.



Here is one such function. The first line is the function signature and the following lines is the function definition:



f :: Day -> Breakfast
f Monday    = Eggs
f Tuesday   = Cereal
f Wednesday = Toast
f Thursday  = Oatmeal
f Friday    = Pastry
f Saturday  = Ham
f Sunday    = Grits


An important thing to observe about the function is that no two elements in the domain map to the same codomain value. This function is called an injective function.



[Definition] An injective function is one such that no two elements in the domain map to the same value in the codomain.




Contrast with the following function, where two elements from the domain -- Monday and Tuesday -- both map to the same codomain value -- Eggs.



g :: Day -> Breakfast
g Monday    = Eggs
g Tuesday   = Eggs
g Wednesday = Toast
g Thursday  = Oatmeal
g Friday    = Pastry
g Saturday  = Ham
g Sunday    = Grits


The function is not injective.



Can you see a problem with creating an inverse function for g :: Day -> Breakfast?



Specifically, what would an inverse function do with Eggs? Map it to Monday? Or map it to Tuesday? That is a problem.



[Important] If a function does not have the injective property then it cannot have an inverse function.




In other words, I can't find my way back home.



The Surjective Property



There is a second property that a function must possess in order for it to have an inverse function. I explain that next.



Did you notice in the codomain that there are 8 values:



data Breakfast = Eggs | Cereal | Toast | Oatmeal | Pastry | Ham | Grits | Sausage



So there are more values in the codomain than in the domain.



In the function f :: Day -> Breakfast there is no domain element that maps to the codomain value Sausage.



So what would an inverse function do with Sausage? Map it to Monday? Tuesday? What?



The function is not surjective.




[Definition] A surjective function is one such that for each element in the codomain there is at least one element in the domain that maps to it.



[Important] If a function does not have the surjective property, then it does not have an inverse function.



[Important] In order for a function to have an inverse function, it must be both injective and surjective.



Injective + Surjective = Bijective



One final piece of terminology: a function that is both injective and surjective is said to be bijective. So, in order for a function to have an inverse function, it must be bijective.




Recap



If you want to be able to come back home after your function has taken you somewhere, then design your function to possess the properties of injectivity and surjectivity.


Monday 27 October 2014

calculus - Evaluate $\int \frac{1}{\sin x\cos x}\, dx$



Question: How to evaluate $\displaystyle \int \frac{1}{\sin x\cos x} dx $



I know that the correct answer can be obtained by doing:
$\displaystyle\frac{1}{\sin x\cos x} = \frac{\sin^2(x)}{\sin x\cos x}+\frac{\cos^2(x)}{\sin x\cos x} = \tan(x) + \cot(x)$ and integrating.




However, doing the following gets a completely different answer:
\begin{eqnarray*}
\int \frac{1}{\sin x\cos x} dx
&=&\int \frac{\sin x}{\sin^2(x)\cos x} dx\\
&=&\int \frac{\sin x}{(1-\cos^2(x))\cos x} dx.
\end{eqnarray*}
let $u=\cos x, du=-\sin x dx$; then
\begin{eqnarray*}
\int \frac{1}{\sin x\cos x} dx
&=&\int \frac{-1}{(1-u^2)u} du\\
&=&\int \frac{-1}{(1+u)(1-u)u}du\\
&=&\int \left(\frac{-1}{u} - \frac{1}{2(1-u)} + \frac{1}{2(1+u)}\right) du\\
&=&-\ln|\cos x|+\frac{1}{2}\ln|1-\cos x|+\frac{1}{2}\ln|1+\cos x|+C

\end{eqnarray*}



I tested both results in Mathematica, and the first method gets the correct answer, but the second method doesn't. Is there any reason why this second method doesn't work?


Answer



If I take the derivative of your second answer (call it $g(x)$), I get:
\begin{eqnarray*}
\frac{dg}{dx}
& = & -\frac{-\sin x}{\cos x} + \frac{\sin x}{2(1-\cos x)} + \frac{-\sin x}{2(1+\cos x)}\\
& = & \frac{\sin x\left(1-\cos^2 x + \frac{1}{2}\cos x(1+\cos x) - \frac{1}{2}\cos x(1-\cos x)\right)}{\cos x(1-\cos x)(1+\cos x)}\\
& = & \frac{\sin x\left( 1- \cos^2 x + \frac{1}{2}\cos x + \frac{1}{2}\cos^2 x - \frac{1}{2}\cos x + \frac{1}{2}\cos^2 x\right)}{\cos x(1-\cos^2 x)}\\

& = & \frac{\sin x}{\cos x\>\sin^2 x} = \frac{1}{\cos x\sin x}.
\end{eqnarray*}
So I'm not sure why Mathematica says the second method is not "the right answer".
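For what it's worth, the two antiderivatives are in fact identical, not merely equal up to a constant: $$\tfrac{1}{2}\ln|1-\cos x|+\tfrac{1}{2}\ln|1+\cos x|=\tfrac{1}{2}\ln|1-\cos^2 x|=\ln|\sin x|,$$ so the second answer simplifies to $-\ln|\cos x|+\ln|\sin x|=\ln|\tan x|$, which is exactly what integrating $\tan x+\cot x$ gives.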


elementary number theory - question on 'divides'



Let $a,b,c>0$ be natural numbers. Consider the following statments:



i) if $a\nmid b$ and $b |c$ then $a\nmid c$



ii) if $a |b$ and $b |c$ then $ab |bc$



iii) if $a |c$ and $b |c$ then $ab |c$




iv) If $a |b$ and $b |c$ and $c |a$ then $ac |b^2$



Question: Determine whether each statement is true or false.



$q_1,q_2,q_3$ are natural numbers



So for (i): $a$ is not a factor of $b$, and $b$ divides $c$, say $c=q_1b$; in the case when $a$ is a factor of $q_1$ we get $a\mid c$, so the statement is false.



For (ii): $a$ divides $b$ implies $b=aq_1$, and $b$ divides $c$, so $c=aq_1q_2$; then $ab = a\cdot aq_1$ divides $bc = aq_1\cdot aq_1q_2$, which is true.




For (iii): $a$ divides $c$ implies $c=aq_1$, and $b$ divides $c$ implies $c=bq_2$; so $ab$ need not divide $c$ when $a$ is not a factor of $q_2$ or $b$ is not a factor of $q_1$, so this is false.



For (iv): $a$ divides $b$, so $b=aq_1$; $b$ divides $c$, so $c=aq_1q_2$; and $c$ divides $a$, i.e. $aq_1q_2 \mid a$, which implies $q_1$ and $q_2$ are both $1$. So this means that $a$, $b$ and $c$ must be equal, and the statement is always true.



This seems like a really long way to do this. Is it right, and is there a nicer way to get this done?



Thanks


Answer



You have the right idea for all of them. However, to show that (i) and (iii) are not true, you must give specific examples of $a,b,c.$
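(For concreteness, one possible pair of counterexamples: $a=4$, $b=2$, $c=4$ disproves (i), since $4\nmid 2$ and $2\mid 4$ yet $4\mid 4$; and $a=2$, $b=4$, $c=4$ disproves (iii), since $2\mid 4$ and $4\mid 4$ but $8\nmid 4$.)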




Your proof of (iv) looks optimal, though your proof of (ii) could be improved a bit. Once you get to $c=aq_1q_2,$ it follows directly that $$bc=baq_1q_2=abq_1q_2,$$ so that $ab\mid bc,$ as desired.


algebra precalculus - Proving $1^3+ 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$ using induction



How can I prove that





$$1^3+ 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$$




for all $n \in \mathbb{N}$? I am looking for a proof using mathematical induction.



Thanks


Answer



You are trying to prove something of the form, $$A=B.$$ Well, both $A$ and $B$ depend on $n$, so I should write, $$A(n)=B(n).$$ First step is to verify that $$A(1)=B(1).$$ Can you do that? OK, then you want to deduce $$A(n+1)=B(n+1)$$ from $A(n)=B(n)$, so write out $A(n+1)=B(n+1)$. Now you're trying to get there from $A(n)=B(n)$, so what do you have to do to $A(n)$ to turn it into $A(n+1)$, that is (in this case) what do you have to add to $A(n)$ to get $A(n+1)$? OK, well, you can add anything you like to one side of an equation, so long as you add the same thing to the other side of the equation. So now on the right side of the equation, you have $B(n)+{\rm something}$, and what you want to have on the right side is $B(n+1)$. Can you show that $B(n)+{\rm something}$ is $B(n+1)$?
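For this particular identity the "something" is $(n+1)^3$, and the algebra on the right side comes down to $$\left(\frac{n(n+1)}{2}\right)^2+(n+1)^3=\frac{(n+1)^2\left(n^2+4n+4\right)}{4}=\left(\frac{(n+1)(n+2)}{2}\right)^2.$$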


integration - Help to prove that: $\int_{0}^{1}\int_{0}^{1}{1-x\over 1-xy}\cdot{x\over \ln{(xy)}}\,dx\,dy={1\over 2}\ln{1\over 2}$




I was using a double integral to check for some constants.



I came across this one.



How can we show that



$$\int_{0}^{1}\int_{0}^{1}{1-x\over 1-xy}\cdot{x\over \ln{(xy)}}dxdy={1\over 2}\ln{1\over 2}$$



My try:




$$\int_{0}^{1}\int_{0}^{1}\left[{x(1-xy)^{-1}\over \ln{(xy)}}-{x^2(1-xy)^{-1}\over \ln{(xy)}}\right]dxdy$$



Apply binomial series:
$$\int_{0}^{1}\int_{0}^{1}\left[{x-x^2y+x^3y^2-x^4y^3+\cdots\over \ln{(xy)}}-{x^2-x^3y+x^4y^2-x^5y^3+\cdots\over \ln{(xy)}}\right]dxdy$$



I wonder if we can apply the Frullani theorem at this point?


Answer



Let $xy=u$ and $x=v$, so that your integral becomes, with $dx\,dy=\frac{1}{v}\,du\,dv$, $$\int_0^1\int_0^v\frac{1-v}{(1-u)\ln u}\,du\,dv,$$ which you can solve by changing the order of integration: since $\int_u^1(1-v)\,dv=\frac{(1-u)^2}{2}$, $$\int_0^1\int_u^1\frac{1-v}{(1-u)\ln u}\,dv\,du=\frac12\int_0^1\frac{1-u}{\ln u}\,du=\frac12\ln\frac12,$$ where the last step uses the standard evaluation $\int_0^1\frac{u^a-u^b}{\ln u}\,du=\ln\frac{a+1}{b+1}$ with $a=0$, $b=1$.


properties of complex number multiplication




Could you please explain to me why this simple equation is true



$$-i \cdot i = 1$$



or



$$-\sqrt{-1}\cdot\sqrt{-1}=1$$



I know basic properties of roots: $$\sqrt{a} \cdot \sqrt{b} = \sqrt{a\cdot b}$$




but what I get is $$-\sqrt{-1} \cdot \sqrt{-1} = -\sqrt{(-1)\cdot(-1)}= -\sqrt{1} = -1$$


Answer



The equality $-i\cdot i=1$ is equivalent to $i\cdot i=-1$.



But $i^2=-1$ is a property of $i$, so you're done.



$\sqrt{-1}$ is not one number, but two numbers:
$\sqrt{-1}=a\iff a^2=-1$, there are two such $a$: $a=i$ and $a=-i$.
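(The underlying issue in the computation in the question is that the identity $\sqrt{a}\cdot\sqrt{b}=\sqrt{ab}$ is only guaranteed for $a,b\ge 0$; applying it with $a=b=-1$ is what produces the spurious $-1$.)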


GCD induction proof

I apologize if this is a duplicate question (believe me, I've searched). This question is a part of an ungraded class warm-up exercise.



Question:
Using induction, prove that for all positive integers $x$ and $y$, that $g(x,y)$ computes the GCD of $x$ and $y$:



$$g(x,y) = \begin{cases}x & \text{if } x=y\\ g(x-y,\,y) & \text{if } x>y\\ g(x,\,y-x) & \text{if } x<y\end{cases}$$

Edit:
Since this is a piecewise function, the base case confuses me. Am I supposed to do three separate base cases here? Also, the fact that this is recursive is not helping.
The first thing I would do is the base case where $x$ and $y$ both equal $1$: $g(1,1) = 1$ since $x=y$.
From there I don't really know where to go.




Apparently, knowing how to do this problem is a prerequisite for the class, which is stressing me out because I have never encountered an induction proof like this, and I really don't want to drop the class if I don't have to.
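For reference, here is $g$ transcribed directly (a sketch; a natural induction is on $x+y$, which strictly decreases with each recursive call):

def g(x, y):
    # the recursive function from the question: a subtractive Euclidean algorithm
    if x == y:
        return x
    if x > y:
        return g(x - y, y)
    return g(x, y - x)

print(g(12, 18))  # 6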

abstract algebra - Extension of automorphism to finite algebraic extension




Let $L_2/L_1/K$ be a tower of field extensions where $L_2/K$ is finite algebraic [so $L_1/K$ is also finite]. Prove or disprove: For every $\sigma_1\in Aut_K(L_1)$ there is a $\sigma_2\in Aut_K(L_2)$ such that $\sigma_2|L_1=\sigma_1$.




The claim is clearly true if $L_2/K$ is normal, since we can extend $\sigma_1$ to a homomorphism into an algebraic closure and then use normality. Other than that, I'm completely lost. Do I need to induct on the degree of the extension? (Hints are appreciated.)


Answer



It is false. Take for example




$$\Bbb Q\subset\Bbb Q(\sqrt2)\subset\Bbb Q(\sqrt[4]2)$$



We have the automorphism



$$\sigma_1\in\text{Aut}_{\Bbb Q}(\Bbb Q(\sqrt2))\;,\;\;\sigma_1(a+b\sqrt2):=a-b\sqrt2\;,\;\;a,b\in\Bbb Q$$



Now, the only elements in $\;\text{Aut}_{\Bbb Q}(\Bbb Q(\sqrt[4]2))\;$ are the ones defined by (i.e., the real ones)



$$\text{Id.}:\begin{cases}1&\mapsto&1\\{}\\\sqrt[4]2&\mapsto&\sqrt[4]2\end{cases}\;\;\;\;,\;\;\;\;\tau:\begin{cases}&1&\mapsto&1\\{}\\&\sqrt[4]2&\mapsto&-\sqrt[4]2\end{cases}$$




yet observe that in both cases $\;\left(\sqrt[4]2\right)^2=\sqrt2\mapsto\sqrt2\;$


Sunday 26 October 2014

Solve the functional equation $2f(x)=f(ax)$ for some $a$.




I am trying to solve the following functional equation, and could use some help.$$ 2f(x)=f(ax)$$



For some $a\in\mathbb{R}$. By repeatedly applying the equation we notice that $$2^nf(x)=f(a^nx).$$



Also $$2^mf(a^{-m}x)=f(x)\Rightarrow \frac{1}{2^m}f(x)=f(a^{-m}x).$$



Putting it all together I have, for all $m,n\in\mathbb{N}$, $$2^{n-m}f(x)=\frac{2^n}{2^m}f(x)=f(a^{n-m}x)$$



I am unsure of how to proceed from here.



Answer



I give only some hints for the case $a>0$, $a\not =1$, and $f$ defined on $I=]0,+\infty[$.



Put $\displaystyle g(x)=\exp(-x\log 2)f(\exp(x\log a))$. Then show that $f(au)=2f(u)$ for all $u>0$ is equivalent to $g(x+1)=g(x)$ for all $x\in \mathbb{R}$. Hence we get that the solutions are the functions $f$ of the form $\displaystyle f(x)=g\left(\frac{\log x}{\log a}\right)\exp\left(\frac{\log x\,\log2}{\log a}\right)$ for any $g$ periodic of period $1$ on $\mathbb{R}$.


algebra precalculus - Difference of the square roots of consecutive integers is equal to the reciprocal of the sum of their square roots

The difference of the square roots of consecutive integers is equal to the reciprocal of the sum of the square roots of the same integers, as shown below.




$$\sqrt{x+1}-\sqrt{x}= \frac{1}{\sqrt{x+1}+\sqrt{x}}$$




However, I'm interested in seeing the proof of the above, but I am unable to find one online or in a book. Can anyone show me?



Thank you very much
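For reference, the identity follows by multiplying and dividing by the conjugate: $$\sqrt{x+1}-\sqrt{x}=\frac{(\sqrt{x+1}-\sqrt{x})(\sqrt{x+1}+\sqrt{x})}{\sqrt{x+1}+\sqrt{x}}=\frac{(x+1)-x}{\sqrt{x+1}+\sqrt{x}}=\frac{1}{\sqrt{x+1}+\sqrt{x}}.$$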

calculus - Find the limit $\lim_{x \to \infty} \frac{e^{x}}{(1+\frac{1}{x})^{x^{2}}}$



Find the limit
$$
\lim_{x \to \infty} \frac{e^{x}}{(1+\frac{1}{x})^{x^{2}}}.
$$




I know that the limit is of indeterminate form type $\frac{\infty}{\infty}$, but it seems using L'Hopital's rule directly does not help here. Do I somehow use the fact that $\lim_{x \to \infty} (1+\frac{1}{x})^{x}=e$?


Answer



For any $x>1$,
$$ x^2\log\left(1+\frac{1}{x}\right) = x -\frac{1}{2}+O\left(\frac{1}{x}\right),$$
so:
$$ x-x^2\log\left(1+\frac{1}{x}\right) = \frac{1}{2}+O\left(\frac{1}{x}\right). $$
By exponentiating this identity, you get that the limit is $e^{1/2}=\color{red}{\sqrt{e}}$.


calculus - Show that for all $\lambda \geq 1$, $\frac{\lambda^n}{e^\lambda} < \frac{C}{\lambda^2}$




Show that for any $n \in \mathbb N$ there exists $C_n > 0$ such that for all $\lambda \geq 1$




$$ \frac{\lambda^n}{e^\lambda} < \frac{C_n}{\lambda^2}$$




I can see that both sides of the inequality have a limit of $0$ as $\lambda \rightarrow \infty$ since, on the LHS, repeated application of L'Hôpital's rule will render the $\lambda^n$ term as a constant eventually, while the $e^{\lambda}$ term will remain, and the RHS is obvious.



I can also see that the denominator of the LHS will become large faster than the RHS denominator, but I can't seem to think of anything that will show that the inequality is true for all the smaller intermediate values.


Answer



HINT: The inequality $$\frac{\lambda^n}{e^\lambda} < \frac{C}{\lambda^2}$$ is equivalent to the inequality $\lambda^{n+2}e^{-\lambda} < C$. The left-hand side is continuous on $[1,\infty)$ and tends to $0$ as $\lambda\to\infty$, so it is bounded there; any upper bound works as $C_n$.

linear algebra - To find eigenvalues of matrix with all same element




How many distinct eigenvalues does the matrix below have?



$$

\begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 \\
\end{bmatrix}
$$



I was wondering: are there any specific eigenvalues for matrices like this?




I hope I won't have to find the determinant of this 4×4 matrix.


Answer



Note that $$\left(\begin{matrix}1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\end{matrix}\right)\left(\begin{matrix} a\\ b\\ c\\ d\end{matrix}\right)=\left(\begin{matrix} a+b+c+d\\ a+b+c+d\\ a+b+c+d\\ a+b+c+d\end{matrix}\right)$$
If $a=b=c=d$, then $$\left(\begin{matrix}1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\end{matrix}\right)\left(\begin{matrix} a\\ a\\ a\\ a\end{matrix}\right)=4\left(\begin{matrix} a\\ a\\ a\\ a\end{matrix}\right)$$
Hence $4$ is an eigenvalue.



Also see that since all the columns of the matrix are the same, the rank of the matrix is $1$. So $4$ is the only non-zero eigenvalue; $0$ is the other eigenvalue, with, for example, the eigenvector $(a~-a~a~-a)^t$.
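(A quick numerical check, assuming NumPy is available:)

import numpy as np

m = np.ones((4, 4))
print(np.linalg.eigvalsh(m))  # approximately [0, 0, 0, 4]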


contest math - Putnam Series Question

I'm studying for the Putnam Exam and am a bit confused about how to go about solving this problem.





Sum the series
$$
\sum_{m = 1}^{\infty} \sum_{n = 1}^{\infty} \frac{m^2n}{3^m(n3^m + m3^n)}.
$$




I've tried "splitting" the expression to see if a geometric sum pops up but that didn't get me anywhere. I've also tried examining the first few terms of the series for the first few values of $m$ to see if an inductive pattern emerged but no luck there either.

Saturday 25 October 2014

real analysis - If $f'(x)$ has a limit as $xto x_0$, then the function $f$ is differentiable at $x_0$



I've got a question about mathematical analysis of one-variable functions. Assume that we have a function defined for $x \neq x_0$ as a composition/sum/product of differentiable functions, and also $f(x_0) = a \in \mathbb{R}$. The task is to determine the differentiability of $f$ on the whole domain (here: $\mathbb{R}$).



It is obvious for $x \neq x_0$. For $x_0$ we could check continuity at $x_0$ and compute one-sided derivatives of $f$ at $x_0$ by definition (provided $f$ is continuous, obviously). If the left-side derivative equals the right-side derivative, then $f$ is differentiable at $x_0$ as well.



Today I've heard about an "extension" of this method: how to determine differentiability without computing one-sided derivatives at $x_0$. Assume $f'$ is the derivative function computed for $x \neq x_0$. If $\lim_{x \to x_0^-} f'(x) = \lim_{x \to x_0^+} f'(x) = g \in \mathbb{R}$, then $f'(x_0)$ exists and $f'(x_0) = g$. Could anyone sketch me a proof of this?



If we know that $f'$ is continuous at $x_0$, we can obviously state that $f'(x_0)$ exists and equals the limit of $f'$ at $x_0$, but the condition above is not a condition of continuity (we don't check whether $f'(x_0) = \lim_{x \to x_0} f'(x)$).


Answer




Assuming that $f$ is continuous at $x_0$ (otherwise this is false), you can use the mean value theorem to get, for $h > 0$, $(f(x_0 + h) - f(x_0))/h = f'(y_h)$ for some $x_0 < y_h < x_0 + h$. Similarly for the left-hand difference quotient.


Friday 24 October 2014

discrete mathematics - How to express this multi-player card probability mathematically?

I am having trouble conceptualizing how this would be expressed mathematically and could use your help! I have access to Maple and MATLAB so feel free to post examples.



The Problem:



Bill and John are playing a game where they each draw cards from a deck of 52. If they draw an Ace of Hearts, they win 100 dollars. If they draw a King of Hearts, they win 50 dollars. If they draw any other card, they lose 1 dollar. They can draw as few or as many cards as they want (essentially drawing without replacement).



The Ask:




What is the probability of Bill selecting the Ace of Hearts among his next 5 cards, given that he has already lost 12 times and there are 30 cards left (John has also played and lost)? Remember, as each player draws more cards, the probability of the next card being the Ace of Hearts increases.



Optional Ask:



Calculate Bill's expected value for playing this game, given that John has already played 10 cards and lost. Right now I am getting stuck on this calculation when scaling to multiple prizes; here is what I have so far:



$$\displaystyle{a}=\frac{{\Gamma{\left({M}+{1}\right)}\cdot\Gamma{\left({N}-{M}+{1}\right)}}}{{\Gamma{\left({M}-{K}+{1}\right)}\cdot\Gamma{\left({N}-{M}-{P}+{K}+{1}\right)}}}$$



which equals the number of arrangements for each prize. K is the number of prizes to select, P is the number of prizes in the game, M is the number of cards Bill chooses, and N is the number of cards left in the game. (Getting a feeling of hypergeometric distribution here)




and
$$\displaystyle{c}=\frac{{\Gamma{\left({N}+{1}\right)}}}{{\Gamma{\left({N}-{P}+{1}\right)}}}$$
for the number of total cases.



How do I add the prizes into the mix to calculate expected value?



Notes:



The formula(s) must be scalable to more than 2 players, more than 2 prizes, and more than 52 cards.




Thanks!

For function composition, can we take a subset of the codomain of the inner function as the domain of the outer function?

Let $f:A \rightarrow B$ and $g:B \rightarrow C$ be functions. Suppose that $f$ is not surjective, and I want to construct the composition $g\circ f$. Because there is at least one $b \in B$ with $b\neq f(a)$ for every $a \in A$, the question arises whether $g$ really must be defined for every $b \in B$ before we can construct the composition




$$g \circ f : A \rightarrow C \mathrm {\, \, \, \, defined \, \,by \, \, \, \, } (g\circ f)(x)= g (f (x)) $$



However, is it permissible to take the image $f (A) \subset B $ as the domain of $g $? That is, $g: f (A) \rightarrow C $. Then we are guaranteed that every $b=f (a) \in f (A) $ is mapped by $g $ to a unique element in $C $, that is, $g\circ f $ is well-defined.



Intuitively, this makes sense, but I am not sure if it is permissible. I hope it is clear what I am asking.

real analysis - Evaluate: $2 \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n+1}\left( 1 + \frac12 +\cdots + \frac 1n\right)$



How to evaluate the series:

$$ 2 \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n+1}\left( 1 + \frac12 + \cdots + \frac 1n\right) $$



According to Mathematica, this converges to $ (\log 2)^2 $.


Answer



Recall that, formally,



$$
\left(\sum_{n=1}^{\infty} a_n\right)\left(\sum_{n=1}^{\infty} b_n\right) = \sum_{n=1}^{\infty} c_{n+1},$$



where




$$
c_n = \sum_{k=1}^{n-1} a_k b_{n-k}.
$$



If the series $\sum c_{n+1}$ converges, then the above equality is actually true. You seem to know how to show this, so I'll just demonstrate the formal aspect of the problem.



Let $a_n = b_n = \frac{(-1)^{n}}{n}$. Then



$$

a_k b_{n-k} = \frac{(-1)^n}{k(n-k)} = \frac{(-1)^n}{n}\left(\frac{1}{k}+\frac{1}{n-k}\right),
$$



so that



$$
\begin{align*}
c_n &= \frac{(-1)^n}{n} \sum_{k=1}^{n-1} \left(\frac{1}{k}+\frac{1}{n-k}\right) \\
&= 2\frac{(-1)^n}{n} \sum_{k=1}^{n-1} \frac{1}{k}.
\end{align*}

$$



We therefore have



$$
2 \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n+1} \sum_{k=1}^{n} \frac{1}{k} = \left(\sum_{n=1}^{\infty} \frac{(-1)^n}{n}\right)^2 = (-\log 2)^2 = (\log 2)^2.
$$


real analysis - Using the Law of the Mean (MVT) to prove the inequality $\log(1+x) < x$

If $x \gt0$, then $\log(1+x) \lt x$.



My attempt at the proof thus far...



Let $f(x) = x-\log(1+x)$; then $f'(x) = 1-\frac{1}{1+x}$, and the MVT will supply some $\mu \in (0,x)$ (since $x>0$).



The MVT gives us $f(x) = f(a) + f'(\mu)(x-a)$.




So plugging in our values we get:



$$x-\log(1+x) = 0+(1-\frac{1}{1+\mu})(x-0)$$



which we can reduce to



$$\log(1+x)=\frac{x}{1+\mu}$$



Now, for $x\leq1$ we have $0\lt\mu\lt1$.




So $0\lt \frac{1}{1+\mu}\lt 1$, thus $\log(1+x)\lt x$.



If $x>1$, then....



So I can see clearly that if $x>1$ is plugged into $\frac{x}{1+\mu}$, then $\log(1+x) \lt x$ should still follow, but I can't see how to bound $\mu$ in that case.

I would appreciate tips hints or proof completions.
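(One way to finish: the case split is unnecessary, since $\mu\in(0,x)$ gives $\mu>0$ for every $x>0$, hence $\frac{1}{1+\mu}<1$ and $\log(1+x)=\frac{x}{1+\mu}<x$ directly.)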

integration - How to prove $\int^{\infty}_{0}\frac{\sin x}{x}\,\mathrm{d}x=\frac{\pi}{2}$








How to prove $$\int^{\infty}_{0}\frac{\sin x}{x}\mathrm{d}x=\frac{\pi}{2}$$
Do you have a simple method ?

algebra precalculus - Summation of series of first $2n$ natural numbers and first $3n$ natural numbers

I am having an issue understanding the concept of the sum of the first $2n$ and $3n$ natural numbers.



So as I have gathered, the sum of the first $2n$ natural numbers is $n(2n+1)$, and the sum of the first $3n$ natural numbers is $(3n(3n+1))/2$.



But this is what is confusing me: does $2n$ imply that the sum is $2+4+6+\ldots$, or does it mean $2n$ of the natural numbers? It is really the wording that is confusing me.
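(For what it's worth, "the first $2n$ natural numbers" is normally read as $1+2+\cdots+2n$, not as the even numbers: substituting $2n$ for $n$ in $\frac{n(n+1)}{2}$ gives $\frac{2n(2n+1)}{2}=n(2n+1)$, which matches the quoted formula.)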

Thursday 23 October 2014

limits - Avoid L'hopital's rule




$$\lim_{x\to 0} {\ln(\cos x)\over \sin^2x} = ?$$



I can solve this by using L'Hopital's rule but how would I do this without this?


Answer



$$\frac{\log\left(\cos\left(x\right)\right)}{\sin^{2}\left(x\right)}=\frac{1}{2}\frac{\log\left(1-\sin^{2}\left(x\right)\right)}{\sin^{2}\left(x\right)}=-\frac{\sin^{2}\left(x\right)+O\left(\sin^{4}\left(x\right)\right)}{2\sin^{2}\left(x\right)}\stackrel{x\rightarrow0}{\rightarrow}-\frac{1}{2}.$$


divisibility - The problem of ten divided by three







I was thinking about this the other day...




if 10/3 = 3.33333... (repeating),

why doesn't 3.333... * 3 = 10? It can never be 10; it's always almost 10, or 9.9999... (repeating).

I have read about this, but no one seems to have an answer, yet we all accept the fact that it is true even though the statement seems fundamentally false.

calculus - How can I sum the infinite series $\frac{1}{5} - \frac{1\cdot4}{5\cdot10} + \frac{1\cdot4\cdot7}{5\cdot10\cdot15} - \cdots$



How can I find the sum of the infinite series
$$\frac{1}{5} - \frac{1\cdot4}{5\cdot10} + \frac{1\cdot4\cdot7}{5\cdot10\cdot15} - \cdots\qquad ?$$



My attempt at a solution - I saw that I could rewrite it as
$$\frac{1}{5}\left(1 - \frac{4}{10} \left( 1 - \frac{7}{15} \left(\cdots \left(1 - \frac{3k - 2}{5k}\left( \cdots \right)\right)\right.\right.\right.$$
and that $\frac{3k - 2}{5k} \to \frac{3}{5}$ as $k$ grows larger. Using this I thought it might converge to $\frac{1}{8}$, but I was wrong; the initial terms deviate significantly from $\frac{3}{5}$.




According to Wolfram Alpha it converges to $1-\frac{\sqrt[3]{5}}{2}$. How can I get that ?


Answer



\begin{align*}(-1)^{n-1}\frac{1\cdot 4 \dots (3n-2)}{5\cdot 10 \dots 5n}&=
(-1)^{n-1}\frac{3^n}{5^n}(-1)^n\frac{(-\frac{1}{3})\cdot (-\frac{4}{3}) \dots (-\frac{3n-2}{3})}{1\cdot 2 \dots n}\\ &= -(3/5)^n\binom{-1/3}{n}\end{align*}



Therefore, you can obtain



$$\sum_{n=1}^{\infty} -(3/5)^n\binom{-1/3}{n} = 1 - \sum_{n=0}^{\infty} (3/5)^n\binom{-1/3}{n} = 1- (1+\frac35)^{-1/3} = 1- \sqrt[3]{5/8} =1-\frac{\sqrt[3]5}{2}$$


How to evaluate the following series

Determine the sum of



$$\sum_{k=1}^\infty \frac{k}{3^k}$$



Can someone teach me how to solve this, please? Thanks.
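(One standard route, sketched: differentiate the geometric series to get $\sum_{k=1}^\infty kx^k=\frac{x}{(1-x)^2}$ for $|x|<1$, then set $x=\frac13$ to obtain $\frac{1/3}{(2/3)^2}=\frac34$.)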

calculus - Evaluate $\lim\limits_{n\to\infty} \frac{3+\sqrt{3}+\sqrt[3]{3}+\dots+\sqrt[n]{3}-n}{\ln n}$




Evaluate $$\lim\limits_{n\to\infty} \frac{3+\sqrt{3}+\sqrt[3]{3}+\dots+\sqrt[n]{3}-n}{\ln n}.$$




We haven't been taught how to do asymptotics; I think this problem should be solved using Stolz-Cesaro: $$\lim\limits_{n\to\infty} \frac{a_n}{b_n}=\lim\limits_{n\to\infty} \frac{a_{n+1}-a_n}{b_{n+1}-b_n}$$
Note that this is my second question on this site. It's the first year I have learned about limits, so don't expect this problem to be solved in a hard way.


Answer



Yes, you are correct, we can use Stolz-Cesaro theorem:

$$\lim_{n\to\infty} \frac{\sum_{k=1}^n(\sqrt[k]{3}-1)}{\ln(n)}\stackrel{SC}{=}\lim_{n\to\infty} \frac{\sqrt[n+1]{3}-1}{\ln (n+1)-\ln(n)}=
\lim_{n\to\infty} \frac{e^{\left(\frac{\ln(3)}{n+1}\right)}-1}{\ln\left(1+\frac{1}{n}\right)}.$$
Now note that (see Arnaud Mortier's comment below)
$$\lim_{t\to 0}\frac{e^{t}-1}{t}=\lim_{t\to 0}\frac{e^{t}-e^0}{t-0}=(e^t)'_{t=0}=e^0=1$$
and
$$
\lim_{t\to 0}\frac{\ln(1+t)}{t}=1=\lim_{t\to 0}\frac{\ln(1+t)-\ln(1+0)}{t-0}=(\ln(1+t))'_{t=0}=\frac{1}{1+0}=1.$$
Can you take it from here?


calculus - Function whose integral over $[0, infty)$ is finite, but the limit of the function is $>0$

Let $t \in \mathbb{R},f:\mathbb{R} \rightarrow \mathbb{R}:t \mapsto f(t)$.




$f$ is required to have:




  1. $\displaystyle\lim_{t \rightarrow \infty} f(t) = L$, where $L>0$ (could be $\infty$, so the limit exists, only it could be $\infty$)


  2. $\displaystyle\int_{0}^{\infty} f(t) dt < \infty$




Is it possible to construct such an $f$?



Note:




$f$ can be any function continuous or discontinuous.



I found a discussion here which involves a "tent" function, but in that example the limit does not exist.



Note 2: I have edited the title. Thank you for your feedback!



Thank you for any answers or comments.

problem solving - Divisibility Rule Proof about Special Numbers




A positive integer is known as a special number if its digits form a strictly increasing sequence. For example, 5659 isn't a special number; the same goes for the number 5669. Show that there aren't any 5-digit special numbers that are divisible by 11.



To show that this, I focused on the last digits of the numbers and identified a cycle. My proof:



"To prove that there are no five-digit special numbers divisible by 11, we first need to find the first five-digit number divisible by 11.
11*910 = 10010
As seen above, the first five-digit number divisible by 11 is 10010. Now, we need to list all the possible last two digits of all the five-digit numbers divisible by 11 in order to find a repeating cycle of the last two digits:
[Image: a table listing the last two digits of the five-digit multiples of 11, with the repeating cycles highlighted in yellow, pink, purple and green.]



Key: The yellow highlight represents the last two digits of the numbers. Sequences of digits that are circled in pink are numbers where the last two digits are increasing, but the digit before causes the number to not be increasing. Sequences of digits that are circled in purple are numbers where the last two digits cause the number to not be increasing as it repeats. Sequences of digits that are circled in green colour are there to show that the last two digits return to the beginning of the cycle and to show where all the possibilities of the last two digits terminate. Digits that are written in dark green are the third digits of the five-digit numbers divisible by 11. Sequences of digits that are left plain are numbers where the last two digits are not increasing, resulting in the entire number not being increasing.




In the picture above, it is seen that plain sequences of digits (ones without circles) prevent the five-digit number divisible by 11 from being a special number as the last two digits are not increasing. Sequences of digits that are circled in pink prevent the five-digit number divisible by 11 from being a special number as the third digit of the five digit divisible by 11 (written in green) is larger than the following digits. Sequences of digits that are circled in purple prevent the five-digit number divisible by 11 from being a special number as the last two digits repeat. Therefore, it has been proven that all possible last two digits of all the five-digit divisible by 11 prevent the overall number from being a special number. (Read text circled in orange). This, in turn, proves that no five-digit number divisible by 11 is a special number."



However, I don't know if this is convincing enough as it doesn't use the divisibility rule of 11 in the proof. Any help on finding another proof using the divisibility rule of 11 would be extremely appreciated.



Thanks :)


Answer



Suppose $abcde$ is special and divisible by $11$. Then the number $$a+c+e-b-d$$ must be divisible by $11$. The number $a+c+e-b-d$ must be positive because $e>d$ and $c>b$.



The maximum possible sum $a+c+e$ is attained at $a=5$, $c=7$, $e=9$, so it is smaller than $22$.




Hence, we must have $$e=(b-a)+(d-c)+11.$$ This is obviously impossible, because $b>a$ and $d>c$ would force $e\geq 13>9$.


Wednesday 22 October 2014

logarithms - Logs - changing the base to evaluate



Just a bit confused about how to evaluate the following





$$\log_3 8\times \log_5 9\times \log_2 5$$




What I have done so far:



I have used the change of base rule to change each log to base $3$, so I ended up with this after cancelling:
$$\log_3 8\cdot \log_3 9 \cdot \frac{1}{\log_32}$$



First of all, I'm not even sure that I made the right decision in changing to base 3; does it matter what base you change to? Second of all, I'm just unsure where to go from here.


Answer




To systematically attack such questions, use $\log_x y = \frac{\log y}{ \log x}$ i.e.
$$\log_3 8 = \frac{\ln 8}{\ln 3}, \quad \log_5 9 = \frac{\ln 9}{\ln 5}, \quad \log_2 5 = \frac{\ln 5}{\ln 2}$$
multiply and simplify
$$\log_3 8 \times \log_5 9 \times \log_2 5 =
\frac{\ln 8}{\ln 3} \times \frac{\ln 9}{\ln 5} \times \frac{\ln 5}{\ln 2}\\
=\frac{\ln 8 \times \ln 9}{\ln 3 \times \ln 2}
=\frac{\ln 8}{\ln 2} \times \frac{\ln 9}{\ln 3} =
3\times2 = 6$$


abstract algebra - Showing numbers are irrational.



I was playing around and thought of the following question:




Given $a \in \mathbb{R}^{+}$ with $a \not=1$, prove that there are infinitely many integers $n$ such that $\sqrt[n]a$ is irrational.








I have proven a very simple case:



If $1 < m \in\mathbb{N}$, then there are infinitely many positive integers $n$ such that $\sqrt[n]m$ is irrational. To show this, I argue by noting that the polynomial $$ p(x) = x^n -m$$ is irreducible in $\mathbb{Z}[x]$ for $n > m$, hence it is irreducible in $\mathbb{Q}[x]$ as it is also primitive. So there are no rational solutions to $x^n = m$, and $\sqrt[n]m$ is irrational.






How do I prove the general case? As it does seem quite intuitive.


Answer




The “simple” case is all you need.



Suppose $a>0$ is irrational; then $\sqrt[n]{a}$ is irrational as well, otherwise $a=\bigl(\sqrt[n]{a}\bigr)^n$ would be rational.



For a general rational $a=p/q$, consider that
$$
\sqrt[n]{a}=\frac{\sqrt[n]{pq^{n-1}}}{q}
$$
Therefore $\sqrt[n]{a}$ is rational if and only if $\sqrt[n]{pq^{n-1}}$ is rational.


calculus - Suppose that $(s_n)$ converges to s. Prove that $(s_n^2)$ converges to $s^2$



I am really struggling in my proofs class; I don't really understand how to prove a statement like this, or what the epsilon stands for.




Suppose that $(s_n)$ converges to $s$. Prove that $(s_n^2)$ converges to $s^2$ directly, without using the fact that $\lim(s_nt_n)=st$.



Suppose that $(s_n)$ converges to $s$. Then, since $(s_n)$ is convergent, it is bounded, so there exists an $M_1$ such that $|s_n| \le M_1$ for all $n$. Let $M = M_1 + |s|$, so that $|s_n + s| \le M$ for all $n$. Given $\varepsilon>0$, there exists $N$ such that $n>N$ implies that $|s_n-s|<\varepsilon/M$. Thus for $n>N$, we have $|s_n^2-s^2|=|s_n-s|\cdot|s_n+s|<(\varepsilon/M)(M)=\varepsilon$. Hence, $s_n^2$ converges to $s^2$.


Answer



Case (i): $s \neq 0$. Since $s_n \to s$, there exists $M$ such that for all $n>M$, we have $\vert s_n - s \vert < |s|/2$, i.e., $|s_n| < 3|s|/2$. Again, since $s_n \to s$, given an $\epsilon > 0$, there exists $N$ such that $\vert s_n - s \vert < \dfrac{2\epsilon}{5|s|}$ for $n > N$. Hence, choosing $K = \max(M,N)$, we get that for $n > K$,
$$\vert s_n^2 - s^2 \vert = \vert s_n-s \vert \vert s_n + s \vert < \dfrac{2\epsilon}{5|s|} \left(\dfrac{3|s|}2 + |s|\right) = \epsilon$$



Case (ii): $s=0$. Since $s_n \to 0$, given any $\epsilon>0$, there exists $M$ such that $\vert s_n \vert < \min(\epsilon,1)$ for all $n>M$. Hence, $$\vert s_n^2 \vert < \min(\epsilon,1)^2 \leq \epsilon$$


Tuesday 21 October 2014

combinatorics - Probability distribution in the coupon collector's problem



I'm trying to solve the well-known Coupon Collector's Problem by explicitly finding the probability distribution (so far, all the methods I have read involve using some sort of trick). However, I'm not having much luck getting anywhere, as combinatorics is not something I'm particularly good at.



The Coupon Collector's Problem is stated as:




There are $m$ different kinds of coupons to be collected from boxes. Assuming each type of coupon is equally likely to be found per box, what's the expected amount of boxes one has to buy to collect all types of coupons?



What I'm attempting:



Let $N$ be the random variable associated with the number of boxes one has to buy to find all coupons. Then $P(N=n)=\frac{|A_n|}{|\Omega _n|}$, where $A_n$ is the set of all outcomes such that all types of coupons are observed in $n$ buys, and $\Omega _n$ is the set of all the possible outcomes in $n$ buys. I think $|\Omega _n| = m^n$, but I'm not even sure about that anymore, as all my attempts so far led to garbage probabilities that either diverged or didn't sum up to 1.


Answer



As Henry pointed out in a comment, the probability has been determined elsewhere as



$$
\def\stir#1#2{\left\{#1\atop#2\right\}}

P(N=n)=\frac{m!}{m^n}\stir{n-1}{m-1}\;,
$$



where



$$\stir nk=\frac1{k!}\sum_{j=0}^k(-1)^{k-j}\binom kjj^n$$



is a Stirling number of the second kind and counts the number of partitions of a set of $n$ labeled objects into $k$ non-empty unlabeled subsets.



To get the expected value, it's slightly more convenient to work with the probability




$$
P(N\gt n)=1-\frac{m!}{m^n}\stir nm\;,
$$



which can be derived in much the same manner: There are $m^n$ sequences of length $n$; choose one of $\stir nm$ partitions into $m$ non-empty subsets and one of $m!$ assignments of the coupons types to the subsets.



Then



\begin{align}

E[N]={}&\sum_{n=0}^\infty P(N\gt n)\\
={}&\sum_{n=0}^\infty\left(1-\frac{m!}{m^n}\stir nm\right)\\
={}&\sum_{n=0}^\infty\left(1-\frac{m!}{m^n}\frac1{m!}\sum_{j=0}^m(-1)^{m-j}\binom mjj^n\right)\\
={}&\sum_{n=0}^\infty\frac1{m^n}\sum_{j=0}^{m-1}(-1)^{m-j+1}\binom mjj^n\\
={}&\sum_{j=0}^{m-1}(-1)^{m-j+1}\binom mj\sum_{n=0}^\infty\frac{j^n}{m^n}\\
={}&\sum_{j=1}^m(-1)^{j+1}\binom mj\sum_{n=0}^\infty\frac{(m-j)^n}{m^n}\\
={}&\sum_{j=1}^m(-1)^{j+1}\binom mj\frac mj\\
={}&-m\sum_{j=1}^m\int_0^{-1}\mathrm dq'\binom mjq'^{j-1}\\
={}&-m\int_0^{-1}\mathrm dq'\sum_{j=1}^m\binom mjq'^{j-1}\\
={}&-m\int_0^{-1}\mathrm dq'\frac{(1+q')^m-1}{q'}\\

={}&-m\int_0^{-1}\mathrm dq'\frac{(1+q')^m-1}{(1+q')-1}\\
={}&-m\int_0^{-1}\mathrm dq'\sum_{j=0}^{m-1}(-q')^j\\
={}&-m\sum_{j=0}^{m-1}\int_0^{-1}\mathrm dq'(-q')^j\\
={}&m\sum_{j=1}^m\frac1j\;.
\end{align}



I'll leave it to you to decide whether this counts as "using some sort of trick". :-)
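If a sanity check helps, the expectation $m\sum_{j=1}^m\frac1j$ is easy to confirm by simulation (a minimal sketch):

import random

def boxes_needed(m):
    # simulate one collector: draw boxes until all m coupon types have been seen
    seen, draws = set(), 0
    while len(seen) < m:
        seen.add(random.randrange(m))
        draws += 1
    return draws

m, trials = 10, 100_000
estimate = sum(boxes_needed(m) for _ in range(trials)) / trials
exact = m * sum(1 / j for j in range(1, m + 1))
print(estimate, exact)  # both near 29.29 for m = 10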


sequences and series - The sum of all the odd numbers to infinity

We have this sequence:




S1: 1+2+3+4+5+6.. (to infinity)



It has been demonstrated that S1 = -1/12.



Now, what happens if I multiply by a factor of 2?



S2: 2+4+6+8+10+12.... (to infinity).



I have 2S1, which is equal to -1/6




On this, we can create a equation for the odd numbers:



S3: 1+3+5+7+9+11... (to infinity)



We know that every term in S3 is just the corresponding term in S2 minus 1.



Or: the sum of the even numbers, plus the sum of infinitely many (-1)s.



So S3 = -1/6 - ∞




However, we also know that the odd numbers + the even numbers = The natural numbers.



So let's try it.



-1/6 - (-1/6 -∞)



We have -1/6 + 1/6 + ∞



Which is just ∞




So, there we have it: a paradox. S1 cannot be both -1/12 and ∞.

linear algebra - The arithmetic-geometric mean for symmetric positive definite matrices

A while back, I wanted to see if the notion of the arithmetic-geometric mean could be extended to a pair of symmetric positive definite matrices. (I considered positive definite matrices only since the notion of the matrix square root is a bit intricate for other kinds of matrices.)



I expected that some complications would arise since, unlike scalar multiplication, matrix multiplication is noncommutative. Another complication would be that the product of two symmetric matrices need not be symmetric (though the positive definiteness is retained, so one can still speak of a principal matrix square root).



By analogy with the scalar AGM, I considered the iteration



$$\mathbf A_0=\mathbf A \; ; \; \mathbf B_0=\mathbf B$$ $$\mathbf A_{i+1}=\frac12(\mathbf A_i+\mathbf B_i) \; ; \; \mathbf B_{i+1}=\sqrt{\mathbf A_i \mathbf B_i}$$



I cranked up a short Mathematica routine:




matAGM[u_, v_] := First[FixedPoint[
{Apply[Plus, #]/2, MatrixPower[Apply[Dot, #], 1/2]} &, {u, v}]] /;
MatrixQ[u, InexactNumberQ] && MatrixQ[v, InexactNumberQ]


and decided to try it out on randomly generated SPD matrices.



(A numerical note: Mathematica uses the numerically stable Schur decomposition in computing matrix functions like the matrix square root.)



I found that for all of the randomly generated pairs of SPD matrices I tried, the process was convergent (though the rate of convergence is apparently not as fast as the scalar AGM). As expected, the order matters: matAGM[A, B] and matAGM[B, A] are usually not equal (and both results are unsymmetric) unless A and B commute (for the special case of diagonal A and B, the result is the diagonal matrix whose entries are the arithmetic-geometric means of the corresponding entries of the pair.)




I now have three questions:




  1. How do I prove or disprove that this process converges for any pair of SPD matrices? If it is convergent, what is the rate of convergence?


  2. Is there any relationship between matAGM[A, B] and matAGM[B, A] if the two matrices A and B do not commute?


  3. Is there any relationship between this matrix arithmetic-geometric mean and the usual scalar arithmetic-geometric mean? Would, say, arithmetic-geometric means of the eigenvalues of the two matrices have anything to do with this?








(added 8/12/2011)



More digging around has me convinced that I should indeed be considering the formulation of the geometric mean by Pusz and Woronowicz:



$$\mathbf A\#\mathbf B=\mathbf A^{1/2}(\mathbf A^{-1/2}\mathbf B\mathbf A^{-1/2})^{1/2}\mathbf A^{1/2}$$



as more natural; the proof of convergence is then simplified, as shown in the article Willie linked to. However, I'm still wondering why the original "unnatural" formulation seems to be convergent (or else, I'd like to see a pair of SPD matrices that cause trouble for the unnatural iteration). I am also interested in how elliptic integrals might crop up in here, just as they did for the scalar version of the AGM.
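For experimentation outside Mathematica, here is a minimal numerical sketch of the Pusz-Woronowicz mean and the corresponding iteration (assuming SciPy's sqrtm; the function names are mine, and tiny imaginary round-off from sqrtm is discarded):

import numpy as np
from scipy.linalg import sqrtm

def geo_mean(a, b):
    # Pusz-Woronowicz geometric mean A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)
    ra = sqrtm(a)
    ra_inv = np.linalg.inv(ra)
    return np.real(ra @ sqrtm(ra_inv @ b @ ra_inv) @ ra)

def mat_agm(a, b, iters=60):
    # arithmetic-geometric mean iteration using the symmetric geometric mean
    for _ in range(iters):
        a, b = (a + b) / 2, geo_mean(a, b)
    return a

a = np.array([[2.0, 0.0], [0.0, 8.0]])
b = np.array([[4.0, 0.0], [0.0, 2.0]])
print(mat_agm(a, b))  # diagonal of scalar AGMs: AGM(2,4) and AGM(8,2)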

computability - Existence of a basis in constructive vector spaces

As I was trying to review forgotten knowledge on vector spaces in Wikipedia, I read that the existence of a basis follows from Zorn's lemma, hence equivalently from the axiom of choice. Actually, the answer to another question shows that all three are equivalent.



However, my current (amateur) interest in mathematics is oriented towards constructive mathematics (though I could hardly say I have much competence in it). The axiom of choice is not constructive, though I understand that weaker versions of it, such as those proposed in intuitionistic theories, are constructive. So I assume the same holds for other equivalent statements.



So my question is: what are constructive versions of the existence of a
basis for Vector Spaces?



To make my question more precise, following the first comments, there
could be constraints that are specifically related to the fact that
everything must be computable anyway in a constructive context.



The fact that no one has yet found a basis for the vector space of

continuous functions from $[0,1]\rightarrow \mathbb R$ does not worry
me too much, and I would not mind if existence of a basis for such a
vector space were not considered by the replacement axiom.



But I am more concerned if you consider the same question restricted to continuous computable functions over computable reals, from $[0,1]_C\rightarrow \mathbb R_C$ (the subscript $C$ is intended to indicate that only computable numbers are to be considered), though I am afraid the situation may be as bad in the computable case. Computable reals are denumerable, and, as I understand it, computable mathematics will have to deal only with denumerable sets.

elementary number theory - West German Mathematics Olympiad 1981



If $n=2^k$ then from a set of $2n-1$ elements we can select $n$ numbers such that their sum is divisible by $n$.




I divided it into two cases: first, if any $n$ of them have the same remainder mod $n$, then we are done; in the other case I am having difficulty.



What if $n$ is any natural number?


Answer



Induction on $k$:



The claims is clear for $n=2^0$.



Let $k>0$, assume the claim holds for $n'=2^{k-1}$ and let $S$ be a set of $2n-1$ integers.

By induction hypothesis, we can pick $n'$ numbers from the first $2n'-1$ elements of $S$, such that their sum is a multiple of $n'$.
After removing these from $S$, we still have $2n-1-n'=3n'-1$ elements, so from the first $2n'-1$ of these, we can again pick $n'$ such that their sum is a multiple of $n'$. After removing these as well, we are left with $2n-1-2n'=2n'-1$ numbers, and again we can pick $n'$ elements such that their sum is a multiple of $n'$.



So now we have three disjoint subsets of $S$ of size $n'$ each and such that each sums to a multiple of $n'$. Note that a multiple of $n'$ is either a multiple of $n$ or an odd multiple of $n'$; one of these two types must occur at least twice among our three subsets. By combining two such like subsets, we arrive at a set of size $n$ with sum a multiple of $n$.
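For the general question at the end (arbitrary $n$), the statement is the Erdős-Ginzburg-Ziv theorem, which holds for every natural number $n$. A brute-force sanity check, as a small Python sketch (standard library only; the helper name has_good_subset is mine):

from itertools import combinations
import random

def has_good_subset(nums, n):
    # does some n-element subset have a sum divisible by n?
    return any(sum(c) % n == 0 for c in combinations(nums, n))

for n in (2, 3, 4, 8):
    ok = all(has_good_subset([random.randrange(1000) for _ in range(2*n - 1)], n)
             for _ in range(100))
    print(n, ok)  # expect True every time, per Erdos-Ginzburg-Ziv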


number theory - How to find integer linear combination

The question said:



Use the Euclidean Algorithm to find gcd $(1207,569)$ and write $(1207,569)$ as an integer linear combination of $1207$ and $569$



I proceeded as follows:



$$ 1207 = 569(2) +69$$




$$569 = 69(8) +17$$



$$69 = 17(4) +1$$



$$17 = 1(17) + 0$$



Thus the gcd = 1



The part I am having problems with is how to calculate and write it as a linear combination. Can someone help?
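One standard way to finish is to back-substitute through the divisions above, rewriting each remainder in terms of the two numbers one step earlier:

$$1 = 69 - 4\cdot 17 = 69 - 4\,(569 - 8\cdot 69) = 33\cdot 69 - 4\cdot 569$$
$$= 33\,(1207 - 2\cdot 569) - 4\cdot 569 = 33\cdot 1207 - 70\cdot 569.$$

Indeed $33\cdot 1207 - 70\cdot 569 = 39831 - 39830 = 1$.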

Monday 20 October 2014

calculus - Improper integral of a function involving square root and absolute value.

$$\int_{-2}^{8}\dfrac{dx}{\sqrt{|2x|}}$$



I understand that you have to split this into two integrals because at $x=0$, the function is not defined. The example showed that they split up the integral like this:



$$\lim_{b\to0^{-}}\int_{-2}^{b}\dfrac{dx}{\sqrt{-2x}}+\lim_{c\to0^{+}}\int_{c}^{8}\dfrac{dx}{\sqrt{2x}}$$



I understand how they split up the integral but why is the denominator different in each one? I figured that the first integral has negative limits of integration so a negative has to be put in the square root to make it positive. The integral on the right hand side remains positive because the limits are never negative. Is this the right assumption?




If my assumption is correct, is there a general rule for dealing with absolute values in problems like these? I appreciate anyone that would take the time to explain this to me. Thank you in advance!
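For what it is worth, that is exactly the general rule: $|u| = -u$ when $u < 0$ and $|u| = u$ when $u \ge 0$, applied here to $u = 2x$. Carrying both pieces through (a worked check):

$$\lim_{b\to0^{-}}\Big[-\sqrt{-2x}\,\Big]_{-2}^{b}+\lim_{c\to0^{+}}\Big[\sqrt{2x}\,\Big]_{c}^{8}=(0-(-2))+(4-0)=6.$$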

polynomials - Factor $x^8-x$ in $\Bbb Z[x]$ and in $\Bbb Z_2[x]$




Factor $x^8-x$ in $\Bbb Z[x]$ and in $\Bbb Z_2[x]$



Here is what I get: $x^8-x=x(x^7-1)=x(x-1)(1+x+x^2+\cdots+x^6)$; now what next? Help in both cases, in $\Bbb Z[x]$ and in $\Bbb Z_2[x]$.



Edit: I think $(1+x+x^2+\cdots+x^6)$ is the cyclotomic polynomial for $p=7$, so it is irreducible over $\Bbb Z$. Now the problem remains for $\Bbb Z_2[x]$.


Answer



After my edit I finally got the answer; it was under my nose: the polynomial $1+\cdots+x^6$ has no zeros at $0$ or $1$, so it has no linear factor over $\Bbb Z_2$.



Edit: But as Lubin mentioned in the comment $1+\cdots +x^6=(1+x+x^3)(1+x^2+x^3)$
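As a quick computer-algebra check (a sketch, assuming SymPy is installed), both factorizations can be confirmed:

from sympy import symbols, factor

x = symbols('x')
print(factor(x**8 - x))             # over ZZ: x*(x - 1)*(x**6 + x**5 + x**4 + x**3 + x**2 + x + 1)
print(factor(x**8 - x, modulus=2))  # over GF(2): x*(x + 1)*(x**3 + x + 1)*(x**3 + x**2 + 1)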



Solve the radical equation $x\sqrt{x^2+5} + (2x+1)\sqrt{4x^2+4x+6}=0$.




Solve the following equation:
$$ x\sqrt{x^2+5} + (2x+1)\sqrt{4x^2+4x+6}=0.$$




I wanted to solve this equation. First I tried to change the equations under the roots to the complete square to simplify them out, but it just became more complicated.




Can someone help me with this equation, please?


Answer



Assuming you want real roots, you can use this trick . . .


Let $f\colon \mathbb{R} \to \mathbb{R}$ be given by $f(t) = t\sqrt{t^2+5}$.


Then $f$ is an odd function.


Also, $f$ is strictly increasing, hence $f$ is one-to-one.



Then, letting $u=2x+1$,
\begin{align*}
&x\sqrt{x^2+5}+(2x+1)\sqrt{4x^2 + 4x + 6}=0
\qquad\qquad\qquad\qquad\;\;
\\[4pt]
\iff\;&x\sqrt{x^2+5}+u\sqrt{u^2 + 5}=0\\[4pt]
\iff\;&f(x) + f(u)=0\\[4pt]
\iff\;&f(x) = -f(u)\\[4pt]
\iff\;&f(x) = f(-u)\qquad\text{[since $f$ is odd]}\\[4pt]
\iff\;&x = -u\qquad\qquad\;\;\text{[since $f$ is one-to-one]}\\[4pt]

\iff\;&x = -(2x+1)\\[4pt]
\iff\;&3x+1 = 0\\[4pt]
\iff\;&x = -{\small{\frac{1}{3}}}\\[4pt]
\end{align*}
As an alternative, using the same trick, you can get a factored form:
\begin{align*}
&x\sqrt{x^2+5}+u\sqrt{u^2 + 5}=0\\[4pt]
\implies\;&x\sqrt{x^2+5}=-u\sqrt{u^2 + 5}\\[4pt]
\implies\;&x^2(x^2+5)=u^2(u^2 + 5)\\[4pt]
\implies\;&x^4+5x^2=u^4+5u^2\\[4pt]

\implies\;&(u^4-x^4)+5(u^2-x^2)=0\\[4pt]
\implies\;&(u^2-x^2)(u^2+x^2)+5(u^2-x^2)=0\\[4pt]
\implies\;&(u^2-x^2)(u^2+x^2+5)=0\\[4pt]
\implies\;&(u-x)(u+x)(u^2+x^2+5)=0\\[4pt]
\implies\;&(x+1)(3x+1)(5x^2+4x+6)=0\qquad\text{[replacing $u$ by $2x+1$]}
\end{align*}
The two candidate real roots, $x=-1,\;x=-{\large{\frac{1}{3}}}\;$need to be verified against the original equation since, when squaring both sides, extraneous real roots were potentially introduced. In this case, as it turns out, the candidate root $x=-{\large{\frac{1}{3}}}$ is ok, but the candidate root $x=-1$ fails, so is not an actual root.
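A quick numeric check of the two candidates (a minimal Python sketch):

import math

def f(x):
    return x*math.sqrt(x**2 + 5) + (2*x + 1)*math.sqrt(4*x**2 + 4*x + 6)

print(f(-1/3))  # ~0 up to floating-point rounding: an actual root
print(f(-1))    # -2*sqrt(6), not zero: the extraneous candidate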


continuity - To show $f$ is continuous




Let $f:[0,1]\rightarrow \mathbb{R}$ be such that for every sequence $x_n\in [0,1]$, whenever both $x_n$ and $f(x_n)$ converge, we have $$\lim_{n\rightarrow\infty} f(x_n)=f(\lim_{n\rightarrow\infty}x_n);$$ we need to prove $f$ is continuous.



Well, I take $x_n$ and $y_n$ in $[0,1]$ such that $|x_n-y_n|\rightarrow 0$ and the given condition holds. Now it is enough to show $|f(x_n)-f(y_n)|\rightarrow 0$.



I construct a new sequence $$z_1=x_1$$ $$z_2=y_1$$ $$\dots$$ $$z_{2n-1}=x_n$$ and $$ z_{2n}=y_n$$



We see that each convergent subsequence of $f(z_n)$ must converge to the same limit. Am I on the right path? Please help.


Answer



I will prove a different claim, because I have pointed out by a counter-example that the claim as stated here is wrong.




Let $f:[0,1]\to\mathbb{R}$ be such that for every sequence $x_n\in[0,1]$, whenever $(x_n)$ converges, we have $\lim\limits_{n\to\infty}f(x_n)=f\left(\lim\limits_{n\to\infty}x_n\right)$; then $f$ is continuous on $[0,1]$.



I think the best way is to use proof by contradiction. Assume $f$ is not continuous at $c \in [0,1]$; then there exists $\epsilon_{0} > 0$ such that for all $n \in \mathbb{N}$ there exists $x_{n} \in (c-1/n,c+1/n) \cap [0,1]$ such that $|f(x_n)-f(c)| \geq \epsilon_{0}>0$.



Obviously $(x_n)$ converges to $c$ but $(f(x_n))$ does not converge to $f(c)$ (note that all the terms of $(f(x_n))$ are a positive distance away from $f(c)$), which is a contradiction with the given property of the function.



Since our choice of $c$ was arbitrary, we have that $f$ is continuous on $[0,1]$


asymptotics - Behaviour and Limit of Cumulative Distribution as Variance Grows to Infinity

Please suggest:





What are some reasonable assumptions regarding the limit of the cumulative distribution as the variance grows to infinity?



$$
\lim_{\sigma\rightarrow\infty} F\left(t,\sigma\right) = \text{??}

$$





Also, is it a reasonable assumption to expect that the cumulative distribution will decrease in value with growing variance, as shown below?



$$\frac{\partial F(t,\sigma)}{\partial\sigma}\leq0$$



Here, $F(t,\sigma)$ is a family of distributions parametrized by $\sigma$, the variance. If helpful, we can make another simplifying assumption that $t\geq0$.




Please note, the questions are not specific to any particular distribution; they concern general distributions, valid forms of this limit, and the behaviour of the distribution as the variance increases.



Please point out any specific references on this topic and also list any additional assumptions made to arrive at any results.



Please let me know if the question is not clear or if you need any further information.

Sunday 19 October 2014

calculus - Getting conflicting answers with the first derivative test...

I'm working with critical points and the 1st derivative test and am having a bit of confusion.



The original function



$$f(x) = x(2x-3)^{1/4}$$



has a derivative of




$$f'(x) = \frac{5x-6}{2(2x-3)^{(3/4)}}$$



Setting the numerator to 0 gives me a critical point of 6/5, while x = 3/2 makes the denominator zero, so the derivative is undefined there. But the graph can't exist at 6/5 = 1.2, because x must be at least 1.5.



The red line is the original function, the purple curve is the derivative:
Graph of this function and its derivative



Furthermore, the derivative never equals 0, so it seems like there can be no critical points except for where f'(x) is undefined, namely, at 3/2. So there is only a single C.P. here.



When I go to make a line graph, all values to the left of 3/2 come up undefined, but all those to the right show that f(x) is increasing. So how can 6/5 be a critical point of the numerator in this case?




Doesn't this mean that there is no place where f(x) is decreasing? Perhaps there is no conflict here but I'm still not sure why the results are so weird. All the examples and practice problems using the line graph have always had either positive or negative values next to the critical point, instead of being undefined to one side.



Line graph with test values greater and less than 3/2:



DNE DNE ...   ++++++++++++
<---|----------|--------|---->
    0         3/2      2

real analysis - If $f$ is absolutely continuous, $\sqrt{f}$ may not be

The problem reads:



Prove that if $ f:[0,1]\rightarrow(0,\infty) $ is absolutely continuous $ \sqrt{f} $ may not be.




I am having trouble figuring out how to show this. I found that $x^2\sin\left(\frac{1}{x^2}\right)$ is not absolutely continuous, but then I need to show that $\left[x^2\sin\left(\frac{1}{x^2}\right)\right]^2$ is absolutely continuous and I don't think that it is. Is there a more general way to show this or is there a counterexample that works?

Saturday 18 October 2014

An example for a calculation where imaginary numbers are used but don't occur in the question or the solution.




In a presentation I will have to give an account of Hilbert's concept of real and ideal mathematics. Hilbert wrote in his treatise "Über das Unendliche" (page 14, second paragraph. Here is an English version - look for the paragraph starting with "Let us remember that we are mathematicians") that this concept can be compared with (some of) the use(s) of imaginary numbers.



He probably thought of a calculation where the setting and the final solution have nothing to do with imaginary numbers, but where there is an easy proof using imaginary numbers.
I remember once seeing such an example but cannot find one, so:



Does anyone know about a good an easily explicable example of this phenomenon?



("Easily" means that enigneers and biologists can also understand it well.)


Answer



One nice example is the sum




$$ \cos x + \cos 2x + \cos 3x + \cdots + \cos nx $$



This can be worked out using trigonometric identities, but it turns out to be surprisingly simple with this neat trick:



$$ \sum_{k=1}^n \cos(kx) = \sum_{k=1}^n\mathscr{Re}\{e^{ikx} \}
= \mathscr{Re} \sum_{k=1}^n e^{ikx} = \mathscr{Re}\left\{ \frac{e^{i(n+1)x} - e^{ix}}{e^{ix} - 1} \right\}
$$



because the sum turns into a geometric series. (computing the real part to get an answer in terms of trigonometric functions is not difficult, but is a little tedious)
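For completeness, carrying that real-part computation through yields the standard closed form, valid whenever $x$ is not a multiple of $2\pi$:

$$\sum_{k=1}^n \cos(kx)=\frac{\sin\left(\frac{nx}{2}\right)\cos\left(\frac{(n+1)x}{2}\right)}{\sin\left(\frac{x}{2}\right)}.$$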



algebra precalculus - Can you use matrices to solve simultaneous equations with powers?



I'm working on problems of finding absolute minima and maxima of functions of two variables. The process of finding critical points by setting the partial derivatives to zero takes too long. I have mostly been trying to do it analytically but that tends to be an error-prone process (at least for me). I noticed that these equations are essentially slightly complicated simultaneous equations. I remember using matrices to solve simultaneous equations back in high school. It would be nice if I could use it here but I'm wondering if it can be done when the variables have powers. I've tried to find tutorials on how to do this but they all involve equations without powers.


Answer



Generally speaking the matrix methods you have learned are only appropriate for linear equations. The matrix is multiplied by a column vector of variables and set equal to the constant terms of the equations. It is a handy way of representing the equations. If one of your variables, say $x$, appears squared it would seem natural to have entries in the column vector for $x$ and $x^2$. You could then represent the equations nicely. The problem is that this does not capture the relationship between $x$ and $x^2-$ it might as well be $x$ and $y$.




If only one variable appears to a higher power, you can do the matrix approach. You will have $n$ equations in $n+1$ unknowns because your unknowns will include both $x$ and $x^2$. Solve it with $x$ as a parameter and you will get a single equation that is $x^2=f(x)$. Now use the quadratic formula for $x$.
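To make that concrete, here is a hypothetical toy example (a sketch, assuming SymPy): the system $x^2+y=3$, $x+y=1$ becomes linear in $(w,y)$ once we write $w$ for $x^2$, and the relation $w=x^2$ is imposed at the end:

from sympy import symbols, solve, Eq

x, y, w = symbols('x y w')
lin = solve([Eq(w + y, 3), Eq(x + y, 1)], [w, y])  # linear in (w, y); x stays a parameter
print(lin)                          # {w: x + 2, y: 1 - x}
print(solve(Eq(lin[w], x**2), x))   # impose w = x**2: returns [-1, 2]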


calculus - Evaluate $\int_0^\infty \frac{dx}{x^2+2ax+b}$



For $a^2 < b$, evaluate:



$$\int_0^\infty \frac{dx}{x^2+2ax+b}$$




What about:




$$\int_0^\infty \frac{dx}{(x^2+2ax+b)^2}$$





My attempt is using partial fractions and completing the square, but I still failed to obtain a nice result.


Answer



Let the first integral be $I$ and the second one be $J$, then by putting $x=y-a$ and $y=z\sqrt{b-a^2}$ we have
\begin{align}
I(a,b)&=\int_0^{\infty}\frac{dx}{(x+a)^2+b-a^2}\\[10pt]
&= \int_a^{\infty}\frac{dy}{y^2+b-a^2}\\[10pt]
&=\frac{1}{\sqrt{b-a^2}}\int_{\large\frac{a}{\sqrt{b-a^2}}}^{\infty}\frac{dz}{z^2+1}\\[10pt]
&=\frac{1}{\sqrt{b-a^2}}\left(\frac{\pi}{2}-\arctan\left(\frac{a}{\sqrt{b-a^2}}\right)\!\right)\\[15pt]
\end{align}

and the second integral is just the first derivative of $I$ with respect to $b$, times $-1$:
\begin{equation}
\\[15pt]J(a,b)=-\frac{\partial I}{\partial b}=\frac{\partial }{\partial b}\left[\frac{1}{\sqrt{b-a^2}}\left(\arctan\left(\frac{a}{\sqrt{b-a^2}}\right)-\frac{\pi}{2}\right)\right]\\[10pt]
\end{equation}
Can you take it from here?
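As a sanity check of the closed form for $I(a,b)$, here is a minimal numerical sketch (assuming SciPy; the test values $a=1$, $b=5$ are arbitrary, subject to $b>a^2$):

import numpy as np
from scipy.integrate import quad

a, b = 1.0, 5.0  # any values with b > a**2
num, _ = quad(lambda x: 1.0/(x**2 + 2*a*x + b), 0, np.inf)
closed = (np.pi/2 - np.arctan(a/np.sqrt(b - a**2))) / np.sqrt(b - a**2)
print(num, closed)  # both approximately 0.5536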


calculus - How to find this sum: $\lim_{n\to\infty}\sum_{k=1}^{n}\left(\frac{1}{2^k}\sum_{i=0}^{2^{k-1}-1}\ln{\left(\frac{2^k+2+2i}{2^k+1+2i}\right)}\right)$

Find the sum of the limit
$$\lim_{n\to\infty}\sum_{k=1}^{n}\left(\frac{1}{2^k}\sum_{i=0}^{2^{k-1}-1}\ln{\left(\frac{2^k+2+2i}{2^k+1+2i}\right)}\right)$$



My try: since
$$\sum_{i=0}^{2^{k-1}-1}\ln{\left(\dfrac{2^k+2+2i}{2^k+1+2i}\right)}=\sum_{i=0}^{2^{k-1}-1}\left(\ln{(2^k+2+2i)}-\ln{(2^k+1+2i)}\right)$$



My friend tells me this sum has an analytical solution. But
I can't find it. Thank you.

infinity - There's a real between any two rationals, a rational between any two reals, but more reals than rationals?

The following statements are all true:




  • Between any two rational numbers, there is a real number (for example, their average).

  • Between any two real numbers, there is a rational number (see this proof of that fact, for example).

  • There are strictly more real numbers than rational numbers.




While I accept each of these as true, the third statement seems troubling in light of the first two. It seems like there should be some way to find a bijection between reals and rationals given the first two properties.



I understand that in-between each pair of rationals there are infinitely many reals (in fact, I think there's $2^{\aleph_0}$ of them), but given that this is true it seems like there should also be in turn a large number of rationals between all of those reals.



Is there a good conceptual or mathematical justification for why the third statement is true given that the first two are as well?



Thanks! This has been bothering me for quite some time.

Friday 17 October 2014

real analysis - Definition of signed measure (assumes at most one of the values $\pm\infty$)




In Real Analysis (Folland), the definition of a signed measure is the following:



A signed measure on a measurable space $(X,\mathcal M)$ is a function $\nu:\mathcal M\to[-\infty,\infty]$ such that (a) $\nu(\varnothing)=0$; (b) $\nu$ assumes at most one of the values $\pm\infty$; (c) if $\{E_j\}$ is a sequence of disjoint sets in $\mathcal M$, then $\nu\big(\bigcup_{1}^{\infty}E_j\big)=\sum_{1}^{\infty}\nu(E_j)$, where the latter sum converges absolutely if $\nu\big(\bigcup_{1}^{\infty}E_j\big)$ is finite.



I am confused about the second one. My professor said that if there were no second requirement, then countable additivity would not mean anything, since we could rearrange the sum to get whatever we want.

I have no idea what he was talking about. Could anyone provide a concrete example illustrating this?


Answer



As ForgotALot said, the issue stems from dealing with unions of a set of infinite positive measure and infinite negative measure. While we can handle things like $3+\infty = \infty$, allowing $-\infty$ at the same time is an issue: is $\infty-\infty = 0$ or maybe it is $(3+ \infty)-\infty = 3$?




Here is a concrete example: for subsets of $\mathbb Z$, define $\nu(A)=\sum_{a\in A} \operatorname{sign} (a)$. Since
$$
\mathbb{Z} = \{0\} \cup\bigcup_{n=1}^\infty \{n,-n\}
$$
where each set on the right has zero measure, it follows that $\nu(\mathbb{Z})=0$. But we can also write
$$
\mathbb{Z} = \{0 , 1 \} \cup\bigcup_{n=1}^\infty \{n+1,-n\}
$$
and conclude that $\nu(\mathbb{Z})=1$. So we can't have nice things like countable additivity if both $\pm \infty$ are allowed.



calculus - Why is $f(x,y)$ said to be discontinuous at $(0,0)$?



Why is
$f(x,y)=\begin{cases}
\frac{x^2y}{x^4+y^2}, & \text{if $(x,y)\neq (0,0)$}\\[2ex]
0, & \text{if $(x,y)=(0,0)$}
\end{cases}$ said to be discontinuous at $(0,0)$?




I am supposed to show that this function is not continuous at (0,0), but as $(x,y)$ approaches $(0,0)$, $f(x,y)$ approaches $0=f(0,0)$. So what did I miss here?


Answer



Let $y=x^{2}$ and consider $f(x,x^{2})=\frac{x^{4}}{2x^{4}}=\frac{1}{2}$. So $f$ is not continuous at $(0,0)$. (In fact the limit does not even exist: compare with the path $y=0$, along which $f\equiv 0$.)

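For contrast, along every straight line $y=mx$ through the origin the limit is $0$, which is exactly why a curved path is needed here:

$$f(x,mx)=\frac{m x^{3}}{x^{4}+m^{2}x^{2}}=\frac{m x}{x^{2}+m^{2}}\;\longrightarrow\;0 \quad (x\to 0),$$

and along the axes $f$ vanishes identically.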

elementary number theory - Prove that $(1+p)^{p^{n-1}} \equiv 1 \pmod{p^n}$ but $(1+p)^{p^{n-2}} \not\equiv 1\pmod{p^n}$, deduce $\operatorname{ord}_{p^n}(p+1)=p^{n-1}$



I need some hints for this problem from Dummit and Foote. (edit: added the full question verbatim for context)



Let $p$ be an odd prime and let $n$ be a positive integer. Use the binomial theorem to show that $(1+p)^{p^{n-1}} \equiv 1 \pmod{p^n}$ but $(1+p)^{p^{n-2}} \not\equiv 1\pmod{p^n}$. Deduce that $1+p$ is an element of order $p^{n-1}$ in the multiplicative group $(\mathbb{Z}/p^n\mathbb{Z})^{\times}$.



For the first part I just need to show that $$p^n \;\left|\; \sum\limits_{k=1}^{p^{n-1}}\binom{p^{n-1}}{k}p^k\right.$$ and it seems obvious enough that it should, since I think I can factor a $p^n$ out of each term, but I'm having trouble actually proving this.




For the second part I am unsure what to do.



For reference it is problem 21. on page 60 of the third edition. (section 2.3).



Edit: OK, I have managed to show the first part. From the lemma in the post Bill Dubuque linked to, which I could prove easily: if $z\equiv 1$ mod $p^n$ (in particular for $z = p+1$) then $z^p \equiv 1$ mod $p^{n+1}$, since $z^p-1 = (z-1)(1+z+\dots+z^{p-1})$ and $(1+z+\dots+z^{p-1})\equiv 1+1+\dots+1 = p \equiv 0$ mod $p$.



So it follows by induction that $(p+1)^{p^{n-1}} \equiv 1$ mod $p^n$.



My problem here is that what I've proven really only shows that $\operatorname{ord}_p(z^p-1) \geq \operatorname{ord}_p(z-1)+1$ as far as I can tell, so I'm still not sure how I can show that $(1+p)^{p^{n-2}} \not\equiv 1\pmod{p^n}$. I think I'm missing something here.




Edit II Ok, I snuck a peek at Pete's notes and I see what I was missing. Really the crucial step was writing $z = 1 + xp$, then I could see how to do it.


Answer



Hint: use the binomial formula to establish the following fact.



Let $p$ be an odd prime number and $z$ an integer which is $1$ mod $p$. Then



$\operatorname{ord}_p(z^p − 1) = \operatorname{ord}_p(z − 1) + 1.$



Here, for a nonzero integer $N$, $\operatorname{ord}_p(N)$ is the largest power of $p$ which divides $N$.




(This is actually taken from a result in some notes of mine from an elementary number theory course, but since the OP asked for a hint only, I wish to obey by not posting the link.)



Note: the hint should serve to solve both parts of the question.



Added: Okay, for more help see Lemma 3 of these notes.
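A small numeric check of the conclusion (plain Python, no libraries; mult_order is my own helper name):

def mult_order(a, m):
    # order of a in the multiplicative group mod m (assumes gcd(a, m) == 1)
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

p, n = 3, 4
print(mult_order(1 + p, p**n), p**(n - 1))  # prints: 27 27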


Thursday 16 October 2014

real analysis - Uniform limit of functions with intermediate value property has intermediate value property? Where is the error in this proof?



This is the question that I posed myself and set out to solve before I came to MSE:




Suppose that $\{f_n\}$ is a sequence of real-valued functions defined on $[a,b]$ such that each function has the intermediate value property. Suppose there is another function $f:[a,b]\rightarrow\mathbb{R}$ such that the sequence $\{f_n\}$ approaches $f$ uniformly.



Does it follow that $f$ has the intermediate value property?





I then came to MSE and found this question and answer which is easily adapted to answer my problem in the negative---$f$ need not have the IVP.



However, before I came here I thought I came up with a proof for the opposite; that is, I thought I proved that $f$ must have the IVP. But, I cannot find the error in my reasoning. Could someone please find it?




Suppose that $x$ and $y$ are two elements of $[a,b]$ such that $f(x) < c < f(y)$ for some real number $c$.

Now choose $N$ from $\mathbb{N}$ so large that $f_n(x) < c < f_n(y)$ for every $n\geq N$.

Since every function in the sequence $\{f_n\}$ has the IVP, there is a sequence $\{c_n\}$ that begins its indexing at $N$ such that

$$x < c_n < y \quad\text{and}\quad f_n(c_n) = c$$ for every $n\geq N$.



By Bolzano-Weierstrass there is a subsequence $\{c_{n_k}\}$ and an element $z$ of $[x,y]$ such that $c_{n_k}\rightarrow z$ as $k$ tends to infinity.



Clearly, the subsequence $\{f_{n_k}(c_{n_k})\}$ approaches $c$. However, the subsequence $\{f_{n_k}(c_{n_k})\}$ also approaches $f(z)$ since the convergence is uniform.



Therefore $f(z)=c$.



Answer




You seem to be invoking a property different from uniform convergence, one that also happens to imply that $f$ is continuous.



Since none of the functions are assumed or otherwise known to be continuous, for all we know we could just as well have $f_n(z)=f(y)+1>f(y)$ for all $n \in \mathbb{N}$. In this hypothetical scenario, we would of course have that $f(z)=f(y)+1 \neq c$ (since $f_n \to f$ uniformly).






I can't think of a counterexample right now but I will post in here if I come up with one.



So every counterexample that I have tried to cook up somehow fails. If there is a counterexample, then I want to say it is not obvious. The statement could even be undecidable; that is not to say I have seriously tried to prove the claim you mention, I am merely throwing this out as a possibility.



convergence divergence - Interesting Power Series

The series is $\sum_{n=1}^{\infty} r(n)x^n$, where $r(n)$ is the divisor function. The question is: what is the radius of convergence of this power series?
Maybe it is not that interesting, but I am stuck on this one, since we have not covered this function in our lectures and it came up on the latest worksheet.

How to find this limit: $\lim_{x\to+\infty}\frac{e^{-2x}(\cos{x}+2\sin x)+e^{-x^2}(\sin{x})^2}{e^{-x}(\cos{x}+\sin{x})}$



Find this limit.



$$\lim_{x\to+\infty}\dfrac{e^{-2x}(\cos{x}+2\sin x)+e^{-x^2}(\sin{x})^2}{e^{-x}(\cos{x}+\sin{x})}$$




WolframAlpha returns no result: link



Maybe this limit does not exist? If so, how can one prove that?
Thank you


Answer



There are arbitrarily large $x$ at which our function is not even defined. And by taking $x$ close enough to $2n\pi+\frac{3\pi}{4}$, we can make our function arbitrarily large positive or negative.


Wednesday 15 October 2014

linear algebra - a question how to compute the eigenvalues of a matrix

I have a question.
Suppose I have an $n\times n$ matrix
$$
\begin{bmatrix}
1 & 1 & \cdots & 1 \\
1 & 1 & \cdots & 1 \\
\vdots & \vdots & \ddots & \vdots \\
1 & 1 & \cdots & 1
\end{bmatrix};
$$
is there an easy way to compute the eigenvalues of this matrix?



How can I compute the eigenvalues of this matrix?
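A quick numerical experiment (assuming NumPy) suggests the answer one should try to prove: the matrix has rank one, so the eigenvalues are $n$ (once, with eigenvector $(1,\dots,1)^T$) and $0$ (with multiplicity $n-1$):

import numpy as np

n = 5
J = np.ones((n, n))
print(np.linalg.eigvalsh(J))  # n-1 zeros (up to rounding) and a single eigenvalue n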

calculus - Function grows slower than $\ln(x)$



What function grows slower than $\ln(x)$ as $x \rightarrow\infty$? How am I supposed to find one, besides just trying limits of all the functions I know?


Answer



$$f(x)=\sqrt{\ln x}$$



should do, for $x\ge 1$.
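Indeed,

$$\lim_{x\to\infty}\frac{\sqrt{\ln x}}{\ln x}=\lim_{x\to\infty}\frac{1}{\sqrt{\ln x}}=0,$$

so $\sqrt{\ln x}$ grows strictly slower than $\ln x$.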


Tuesday 14 October 2014

set theory - About a paper of Zermelo



This is about the famous article



Zermelo, E., Beweis, daß jede Menge wohlgeordnet werden kann, Math. Ann. 59 (4), 514–516 (1904),




available here. Edit: Springer link to the original (OCR'ed, may be behind a paywall)



An English translation can be found in the book Collected Works/Gesammelte Werke by Ernst Zermelo. Alternative source: the book From Frege to Gödel: a source book in mathematical logic, 1879-1931, by Jean Van Heijenoort.



[See also this interesting text by Dan Grayson.]



I don't understand the paragraph on the last page whose English translation is




Accordingly, to every covering $\gamma$ there corresponds a definite well-ordering of the set $M$, even if the well-orderings that correspond to two distinct coverings are not always themselves distinct. There must at any rate exist at least one such well-ordering, and every set for which the totality of subsets, and so on, is meaningful may be regarded as well-ordered and its cardinality as an "aleph". It therefore follows that, for every transfinite cardinality,

$$\mathfrak m=2\mathfrak m=\aleph_0\,\mathfrak m=\mathfrak m^2,\mbox{and so forth;}$$
and any two sets are "comparable"; that is, one of them can always be mapped one-to-one onto the other or one of its parts.




It seems to me Zermelo says that the fact that any set can be well-ordered immediately implies that any infinite cardinal equals its square. Is this interpretation correct? If it is, what is the argument?



Side question: What is, in a nutshell, the history of the statement that any infinite cardinal equals its square? Where was it stated for the first time? Where was it proved for the first time? (In a similar vein: where was the comparability of any two cardinal numbers proved for the first time?)







Edit: German original of the passage in question:




Somit entspricht jeder Belegung $\gamma$ eine ganz bestimmte Wohlordnung der Menge $M$, wenn auch nicht zwei verschiedenen Belegungen immer verschiedene. Jedenfalls muß es mindestens eine solche Wohlordnung geben, und jede Menge, für welche die Gesamtheit der Teilmengen usw. einen Sinn hat, darf als eine wohlgeordnete, ihre Mächtigkeit als ein „Alef“ betrachtet werden. So folgt also für jede transfinite Mächtigkeit
$$\mathfrak m=2\mathfrak m=\aleph_0\,\mathfrak m=\mathfrak m^2\text{ usw.,}$$
und je zwei Mengen sind miteinander „vergleichbar“, d. h. es ist immer die eine ein-eindeutig abbildbar auf die andere oder einen ihrer Teile.



Answer



The following is a theorem of $ZF$: 




The axiom of choice holds if and only if for every infinite set $A$, there exists a bijection of $A$ with $A\times A$. (i.e. $|A|=|A|^2$) 






Let us overview the theorem of Zermelo, namely if the axiom of choice holds then $\kappa=\kappa^2$ for every infinite $\kappa$.



This is fairly simple, by the canonical well ordering of pairs.



Consider $\alpha\times\beta$; this can be well ordered as ordinal multiplication (that is, $\beta$ copies of $\alpha$, i.e. the lexicographical ordering), or it can be ordered as follows:




$$(x,y)<(w,z)\iff\begin{cases} \max\{x,y\}<\max\{w,z\}, & \text{or}\\ \max\{x,y\}=\max\{w,z\} \text{ and } x<w, & \text{or}\\ \max\{x,y\}=\max\{w,z\},\ x=w \text{ and } y<z.\end{cases}$$

This is a well-ordering (can you see why?). Now we will prove that $\kappa\times\kappa$ has the same order type as $\kappa$, this is a proof that the two sets have the same cardinality, since similar order types induce a bijection.



Firstly, it is obvious that $\kappa$ is at most the order type of $\kappa\times\kappa$, since $\alpha\mapsto (\alpha,\alpha)$ embeds $\kappa$ order-preservingly into $\kappa\times\kappa$. For the other direction we prove by induction on $\alpha$ that for the initial ordinal $\omega_\alpha$ it is true: $\omega_\alpha=\omega_\alpha\times\omega_\alpha$ (as order types).



Fact: If $\delta<\omega_\alpha$ (where $\omega_\alpha$ is the $\alpha$-th initial ordinal) then $|\delta|<\aleph_\alpha$.



The claim is true for $\omega_0=\omega$ since for any $k$ the set $\{(n,m)\mid (n,m)<(k,k)\}$ is finite. Therefore the order type of $\omega\times\omega$ is the supremum of $\{k_n\mid n\in\omega\}$ and $k_n$ are finite. Simply put, the order type is $\omega$.




Now assume (by contradiction) $\alpha$ was the least ordinal such that $\omega_\alpha$ was a counterexample to this claim, i.e. $\omega_\alpha$ is strictly less than the order type of $\omega_\alpha\times\omega_\alpha$.



Let $(\gamma,\beta)<\omega_\alpha\times\omega_\alpha$ be the pair of ordinals such that the order type of $\{(\xi,\zeta)\mid (\xi,\zeta)<(\gamma,\beta)\}$ is $\omega_\alpha$.



Take $\delta$ such that $\omega_\alpha>\delta>\max\{\gamma,\beta\}$ then $(\gamma,\beta)<(\delta,\delta)$ and in particular $\{(\xi,\zeta)\mid (\xi,\zeta)<(\delta,\delta)\}$ has cardinality of at least $\omega_\alpha$, as it extends a well order of the type $\omega_\alpha$.



However, $\delta<\omega_\alpha$ by the fact above it is of smaller cardinality, and thus that set has the cardinality $|\delta|\times |\delta|=|\delta|<\omega_\alpha$ by our induction assumption. Hence, a contradiction.







The other direction, also known as Tarski's theorem (I managed to find that it was published around 1923, but I could not find a proper reference.) is as follows:



Suppose that for all infinite $A$, there exists a bijection of $A$ with $A\times A$ then the axiom of choice holds.



The proof (which I will not bring here, as it would require a few more notations and definitions - I did give it here) uses the concept of Hartogs number (the least ordinal which cannot be injected into $A$). The proof in its essence is:



If $\aleph(A)$ is the Hartog of $A$,
$$A+\aleph(A)=(A+\aleph(A))^2=A^2+2\cdot A\cdot\aleph(A)+\aleph(A)^2\ge A\cdot\aleph(A)\ge A+\aleph(A)$$



We then use (or prove) a theorem that if $A+\aleph(A)=A\cdot\aleph(A)$ then $A$ can be well ordered.




Historically, when Tarski tried to publish this theorem it was rejected at first. The Polish-American mathematician Jan Mycielski relates the story in his article A System of Axioms of Set Theory for the Rationalists, Notices AMS, February 2006, p. 209:




Tarski told me the following story. He tried to publish his theorem (stated above) in the Comptes Rendus Acad. Sci. Paris but Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest. And Tarski said that after this misadventure he never tried to publish in the Comptes Rendus.




Found via Wikipedia article on the axiom of choice.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How can I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...