Wednesday 31 July 2013

complex analysis - Evaluate $\sum_{n=1}^\infty \frac{(-1)^{n+1}n^2}{n^4+1}$




Evaluate
$$\sum_{n=1}^\infty \frac{(-1)^{n+1}n^2}{n^4+1}$$





Does anyone have any smart ideas how to evaluate such a sum? I know one solution using complex numbers and complex analysis, but I'm looking for smarter or more sophisticated methods.


Answer



I would not say that it is elegant, but:



The form $n^4+1$ in the denominator suggests that one should be able to get this series by expanding a combination of a hyperbolic and trigonometric function in a Fourier series.



Indeed, after some trial and error, the following function seems to work:



$$

\begin{gathered}
\left(\cos \left(\frac{\pi }{\sqrt{2}}\right) \sinh \left(\frac{\pi }{\sqrt{2}}\right)-\sin
\left(\frac{\pi }{\sqrt{2}}\right) \cosh \left(\frac{\pi }{\sqrt{2}}\right)\right)\cos \left(\frac{x}{\sqrt{2}}\right) \cosh \left(\frac{x}{\sqrt{2}}\right) \\
+ \left(\cos \left(\frac{\pi }{\sqrt{2}}\right) \sinh \left(\frac{\pi }{\sqrt{2}}\right)+\sin \left(\frac{\pi }{\sqrt{2}}\right) \cosh
\left(\frac{\pi }{\sqrt{2}}\right)\right)\sin \left(\frac{x}{\sqrt{2}}\right) \sinh
\left(\frac{x}{\sqrt{2}}\right)
\end{gathered}
$$



It is even, and its cosine coefficients are

$$
\frac{\sqrt{2}\bigl(\cos(\sqrt{2}\pi)-\cosh(\sqrt{2}\pi)\bigr)(-1)^{n+1} n^2}{\pi(1+n^4)},\quad n\geq 1.
$$
(The zeroth coefficient is also zero.) Evaluating at $x=0$ (the series converges pointwise there) gives
$$
\sum_{n=1}^{+\infty}\frac{(-1)^{n+1}n^2}{1+n^4}=
\frac{\pi\left(\sin
\left(\frac{\pi }{\sqrt{2}}\right) \cosh \left(\frac{\pi }{\sqrt{2}}\right)-\cos \left(\frac{\pi }{\sqrt{2}}\right) \sinh \left(\frac{\pi }{\sqrt{2}}\right)\right)}{\sqrt{2}\bigl(\cosh(\sqrt{2}\pi)-\cos(\sqrt{2}\pi)\bigr)}\approx 0.336.
$$
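As a quick sanity check (an addition, not part of the original answer), the closed form can be compared against a direct partial sum of the alternating series; a minimal Python sketch:

    from math import sin, cos, sinh, cosh, sqrt, pi

    r = pi / sqrt(2)
    closed = pi * (sin(r)*cosh(r) - cos(r)*sinh(r)) / (sqrt(2) * (cosh(2*r) - cos(2*r)))
    # alternating series: the truncation error after N terms is below N^2/(N^4+1)
    partial = sum((-1)**(n + 1) * n**2 / (n**4 + 1) for n in range(1, 200001))
    print(closed, partial)   # both print the same value, ~ 0.336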


real analysis - Graph of discontinuous linear function is dense



$f:\mathbb{R}\rightarrow\mathbb{R}$ is a function such that for all $x,y$ in $\mathbb{R}$, $f(x+y)=f(x)+f(y)$. If $f$ is continuous, then of course it has to be linear. But here $f$ is NOT continuous. Show that the set $\{(x,f(x)) : x \in \mathbb{R}\}$ is dense in $\mathbb{R}^2$.



Answer



Let $\Gamma$ be the graph.



If $\Gamma$ is contained in a $1$-dimensional subspace of $\mathbb R^2$, then it in fact coincides with that line. Indeed, the line will necessarily be $L=\{(\lambda,\lambda f(1)):\lambda\in\mathbb R\}$, and for all $x\in\mathbb R$ the line $L$ contains exactly one element whose first coordinate is $x$, so that $\Gamma=L$. This is impossible, because it clearly implies that $f$ is continuous.



We thus see that $\Gamma$ contains two points of $\mathbb R^2$ which are linearly independent over $\mathbb R$, call them $u$ and $v$.



Since $\Gamma$ is a $\mathbb Q$-subvector space of $\mathbb R^2$, it contains the set $\{au+bv:a,b\in\mathbb Q\}$, and it is obvious that this is dense in the plane.


calculus - Sum to $n$ terms the series $\frac{1}{3\cdot9\cdot11}+\frac{1}{5\cdot11\cdot13}+\frac{1}{7\cdot13\cdot15}+\cdots$.




Q: Sum to $n$ terms the series:
$$\frac{1}{3\cdot9\cdot11}+\frac{1}{5\cdot11\cdot13}+\frac{1}{7\cdot13\cdot15}+\cdots$$





This was asked under the heading of using the method of differences, and the answer given was
$$S_n=\frac{1}{140}-\frac{1}{48}\left(\frac{1}{2n+3}+\frac{1}{2n+5}+\frac{1}{2n+7}-\frac{3}{2n+9} \right)$$




My Approach: First I get $$U_n=\frac{1}{(2n+1)(2n+7)(2n+9)}$$
In order to make $U_n$ the reciprocal of a product of factors in A.P. I rewrite it
$$U_n=\frac{(2n+3)(2n+5)}{(2n+1)(2n+3)(2n+5)(2n+7)(2n+9)}=\frac{(2n+7)(2n+9)-48}{(2n+1)(2n+3)(2n+5)(2n+7)(2n+9)}=\frac{1}{(2n+1)(2n+3)(2n+5)}-\frac{48}{(2n+1)(2n+3)(2n+5)(2n+7)(2n+9)}$$
Then I tried to write $U_n=V_n-V_{n-1}$ in order to get $S_n=V_n-V_0$, but I really don't know how to figure this out. Any hints or solutions will be appreciated.
Thanks in advance.




Answer



Let
$$S_n=\sum_{k=1}^n\frac{1}{(2k+1)(2k+7)(2k+9)}.$$
By the partial fraction decomposition:
$$\frac{1}{(2k+1)(2k+7)(2k+9)}=\frac{1/48}{2k+1}
-\frac{1/12}{2k+7}+\frac{1/16}{2k+9}$$

Then, after letting $O_n=\sum_{k=1}^n\frac{1}{2k+1}$, we have that
\begin{align}
S_n&=\frac{O_n}{48}-\frac{O_{n+3}-O_3}{12}+\frac{O_{n+4}-O_4}{16}\\

&=\frac{4O_3-3O_4}{48}+\frac{O_n-4O_{n+3}+3O_{n+4}}{48}\\
&=\frac{1}{140}-\frac{1}{48}\left(O_{n+3}-O_n-3(O_{n+4}-O_{n+3})\right)\\
&=\frac{1}{140}-\frac{1}{48}\left(\frac{1}{2n+3}+\frac{1}{2n+5}+\frac{1}{2n+7}-\frac{3}{2n+9} \right).
\end{align}
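A hedged numeric check of the final formula, using exact rational arithmetic (my own verification sketch, not part of the original answer):

    from fractions import Fraction as F

    def S_direct(n):
        # the partial sum, term by term
        return sum(F(1, (2*k + 1) * (2*k + 7) * (2*k + 9)) for k in range(1, n + 1))

    def S_closed(n):
        # the closed form derived above
        return F(1, 140) - F(1, 48) * (F(1, 2*n + 3) + F(1, 2*n + 5)
                                       + F(1, 2*n + 7) - F(3, 2*n + 9))

    assert all(S_direct(n) == S_closed(n) for n in range(1, 60))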


Power of 2 with equal number of decimal digits?



Does there exist an integer $n$ such that the decimal representation of $2^n$ contains an equal number of each decimal digit $\{0,\dots,9\}$, i.e. each appearing 10% of the time?



The closest I could find was $n=1,287,579$, for which $2^n$ has 387,600 digits, broken down as



0  38,808  10.012%
1  38,735   9.993%
2  38,786  10.007%
3  38,751   9.997%
4  38,814  10.014%
5  38,713   9.987%
6  38,731   9.992%
7  38,730   9.992%
8  38,709   9.986%
9  38,823  10.016%

Answer




No. If each digit appears $x$ times, then the sum of all the digits will be $45x$; this implies $3|2^n$ which cannot be the case.
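For the curious, a small script to reproduce digit counts like the table above (an addition; the exponent is the near-miss from the question, and newer Pythons cap big-int-to-string conversion, hence the limit bump):

    import sys
    from collections import Counter

    def digit_profile(n):
        if hasattr(sys, "set_int_max_str_digits"):   # Python >= 3.11 caps int -> str
            sys.set_int_max_str_digits(10**6)
        s = str(2**n)
        counts = Counter(s)
        return [(d, counts[d], 100 * counts[d] / len(s)) for d in "0123456789"]

    for digit, count, pct in digit_profile(1_287_579):   # slow: ~390,000 digits
        print(digit, count, f"{pct:.3f}%")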


abstract algebra - Is there an algebraic extension $K/\Bbb Q$ such that $\text{Aut}_{\Bbb Q}(K) \cong \Bbb Z$?




Is there an algebraic field extension $K / \Bbb Q$ such that $\text{Aut}_{\Bbb Q}(K) \cong \Bbb Z$?





Here I mean the field automorphisms (which are necessarily $\Bbb Q$-algebras automorphisms) of course.



According to this answer, one can find some extension of $\Bbb Q$ whose automorphism group is $\Bbb Z$. But I've not seen that one can expect this extension to be algebraic.



At least such an extension can't be normal, otherwise $\Bbb Z$ would be endowed with a topology turning it into a profinite group, which can't be countably infinite.
(So typically, if we replace $\Bbb Q$ by $\Bbb F_p$, then the answer to the above question is no, because any algebraic extension of a finite field is Galois).



Thank you!


Answer




Let $L$ be the fixed field of $\text{Aut}_{\Bbb Q}(K)$, so $\Bbb Q \subsetneq L \subset K$, and $K/L$ is a normal extension with Galois group $\Bbb Z$, which is impossible.


algebraic geometry - Equivalent condition for being a Jacobson ring



(Atiyah-Macdonald, Ex. 5.25)




Let $A$ be a ring. Show that the following are equivalent:
i) $A$ is a Jacobson ring;
ii) Every finitely generated $A$-algebra $B$ which is a field is finite over $A$.




I'm trying to solve $i)\Rightarrow ii)$. I'm almost done, but I need some help. My attempt:




Let $f:A\to B$, with $B$ a finitely generated $A$-algebra which is a field. Since $f(A)$ is Jacobson, and an $A$-algebra/module is the same thing as an $f(A)$-algebra/module, we can assume that $A \subseteq B$. Pick $s\in A$ as in Ex. 5.21 (see below). Because $A$ is Jacobson, $J(A)=0$, so we can find a maximal ideal $m$ not containing $s$. Let $k:=A/m$ and $f:A \twoheadrightarrow k \hookrightarrow \bar{k}$; then $f(s) \neq 0$, so $f$ extends to $g:B\to \bar{k}$. Because $B$ is a field, $g$ is either injective or trivial. But $g(s)=f(s) \neq 0$, so $g$ is injective and $B \simeq g(B)$. Suppose $B=A[z_1,\cdots,z_m]$; then $g(B)$ $=$ $g(A)[g(z_1),\cdots,g(z_m)]$ $=$ $k[g(z_1),\cdots,g(z_m)]$. Each $g(z_i)$ is in $\bar{k}$, hence integral over $k$, so $g(B)$ is a finitely generated $k$-module.



Now the question:
1. How can I deduce that $B$ is a finitely generated $A$-module?
2. $g(A)=f(A)=k$ but $g$ is injective. How can it be possible? I think that $A$ strictly(?) contains $k$.



(Ex. 5.21) Let $A$ be a subring of an integral domain $B$ such that $B$ is finitely generated over $A$. Show that there exists $s \neq 0$ in $A$ such that, if $\Omega$ is an algebraically closed field and $f:A\to \Omega$ is a homomorphism for which $f(s) \neq 0$, then $f$ can be extended to a homomorphism $B\to \Omega$.


Answer



Since $A$ is assumed to be a subring of $B$, we have $A\hookrightarrow B \hookrightarrow \bar{k}$, so the composition is injective. But the composition is nothing but $A\twoheadrightarrow k$. It follows that $A=k$ and the maximal ideal $m$ is $0$. Now $B=k(z_1, \ldots, z_m)$ with all $z_i$ algebraic over $k$, hence a finite extension of $k=A$.


real analysis - Drawing large rectangle under concave curve



Let $f$ be a continuous concave function on $[0,1]$ with $f(1)=0$ and $f(0)=1$. Does there exist a constant $k$ for which we can always draw a rectangle with area at least $k\cdot \int_0^1f(x)dx$, with sides parallel to the axes, in the area bounded by the two axes and the curve $f$?



If concavity is not required, it is possible to adapt from this example by using the curve $c/x$ to ensure that any rectangle has sufficiently small area. But with concavity, we know that $f$ lies above the line connecting the points $(0,1)$ and $(1,0)$, hence must have area at least $1/2$. If $f$ is exactly that line, then $k=1/2$ exactly. Otherwise, if $f$ is above the line, it looks like the rectangle will even get larger compared to the area under the curve.


Answer




Let $t\in[0,1]$ be a value such that $f(x)\le f(t)$ for all $x\in[0,1]$. (Such a value exists since $f$ is unimodal; I don't think we need continuity for this.) The triangle formed by $(0,0)$, $(1,0)$ and $(t,f(t))$ lies under the curve. Its area is $\frac12f(t)$, and it contains an axis-parallel rectangle with half its area, $\frac14f(t)$ (take the rectangle whose top edge is the horizontal midline of the triangle). The area under the curve is at most $f(t)$. Thus $k=\frac14$ suffices. I'm not sure this is the best possible constant, though.


calculus - Prove that a polynomial of degree $4$ with real roots cannot have $\pm 1$ as coefficients (IITJEE)



So I was going through my 11th class package on Quadratic equations and I saw a question to prove that a polynomial of $4$th degree with all real roots cannot have $\pm 1$ as all its coefficients.



I tried proving it using calculus, by showing that at least one consecutive maximum and minimum will lie either above or below the $x$-axis, but couldn't solve it using that.




I also tried using Descartes Rule of Signs but couldn't solve it with that too.
Any help?


Answer



Let $f(x)$ be any quartic polynomial with coefficients from $\{ -1, +1 \}$. Replacing $f(x)$ by $-f(x)$ if necessary, we can assume $f(x)$ is monic, i.e.



$$f(x) = x^4 + ax^3 + bx^2 + cx + d\quad\text{ with }\quad a,b,c,d \in \{ -1, +1 \}$$



If $f(x)$ has $4$ real roots $\lambda_1,\lambda_2,\lambda_3,\lambda_4$, then by Vieta's formula, we have



$$\sum_{i=1}^4 \lambda_i = -a, \sum_{1\le i < j\le 4} \lambda_i\lambda_j = b

\quad\text{ and }\quad\prod_{i=1}^4 \lambda_i = d$$
Notice
$$\sum_{i=1}^4 \lambda_i^2 = \left(\sum_{i=1}^4\lambda_i\right)^2 - 2\sum_{1\le i < j \le 4}\lambda_i\lambda_j = a^2 - 2b = 1 -2b$$



Since $\sum_{i=1}^4 \lambda_i^2 \ge 0$, we need $b = -1$. As a result, $$\sum_{i=1}^4 \lambda_i^2 = 3$$
By AM $\ge$ GM, this leads to



$$\frac34 = \frac14\sum_{i=1}^4 \lambda_i^2 \ge \left(\prod_{i=1}^4 \lambda_i^2\right)^{1/4} = (d^2)^{1/4} = 1$$
This is impossible and hence $f(x)$ cannot have $4$ real roots.
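Since monic quartics with coefficients in $\{-1,+1\}$ give only $2^4$ sign patterns, the claim is also easy to confirm by brute force; a sketch:

    import itertools
    import numpy as np

    for a, b, c, d in itertools.product([1, -1], repeat=4):
        roots = np.roots([1, a, b, c, d])              # monic w.l.o.g.
        n_real = sum(abs(r.imag) < 1e-9 for r in roots)
        assert n_real < 4, (a, b, c, d)                # no pattern has 4 real roots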


Tuesday 30 July 2013

Limit without Taylor expansion

$\lim_{x\to0}\frac{\sin x - x}{x^3}$



I know it can be easily done by using the Taylor expansion of the sine function or L'Hopital's rule. However, can we come up with a way to evaluate the limit using more basic properties?

exponentiation - Units in geometric calculus?




I'm very confused about how geometric calculus handles units. With classical calculus, units seem to function without too many problems. For example, consider the analytic function $f:X \rightarrow Y$ such that the values in $X$ are measured in seconds and the values in $Y$ are measured in meters. The limit-based definitions of a derivative and integral



$\lim_{h \to 0} \frac{f(x + h) - f(x)}{h} = \frac{df(x)}{dx}$



and (not an exact definition here, I know, but LaTeX is hard to work with)



$\lim_{\Delta x \to 0} \sum f(x) \Delta x = \int f(x) dx$



easily illustrate why the units of a derivative are meters/second and the units of an integral are meters*seconds. But the geometric derivative is defined as




$\lim_{h \to 0} (\frac{f(x + h)}{f(x)})^{1/h} = f^{*}(x)$



$h$ is measured in seconds. How are we able to set the fraction $(\frac{f(x + h)}{f(x)})$ to the power of something with units? Correct me if I'm wrong, but that seems like an illegal operation.



Furthermore, the multiplicative (geometric) integral takes every value in the image of $f$ between two bounds, sets them to the power of $dx$, and then multiplies them together. Once again, $dx$ is measured in seconds. How does this work?



What if we put these questions on hold and look at the conversions to the classical versions of the geometric derivative and integral? These are $e^{f'(x)/f(x)}$ and $e^{\int_{x_{0}}^{x_f} ln(f(x)) dx}$. Both of these have screwy units as well.



I can think of a couple of solutions. We could claim that geometric integrals and derivatives are only defined on functions from fields to fields, but I don't like this solution as geometric integrals have applications to many areas of applied mathematics that involve units.




Another option is to use an implicit unit scale as the physicists are wont to do--instead of sending $h$ to zero and $\Delta x$ to zero, we could send $h/u$ to zero and $\Delta x / u$ to zero, where $u$ is a constant with the same units as the elements of $X$. This is the approach I've taken when using geometric calculus in my own work, but I can't help but feel there must be a more elegant way to deal with the problem of units. I also have only figured out how to do this for geometric integrals, not geometric derivatives.



There are other options as well, but they're far more mathematically sketchy. I've experimented with "stripping out" the units from the geometric integrals and derivatives so that all units are contained in a single coefficient, then defining that coefficient as a new unit of measurement, but I have serious doubts about both the rigor and legitimacy of this method.



At present, option #2 seems to work rather well for specific cases, such as modeling non-continuous compounding of interest payments where the interest rate varies over time--one can set the implicit time scale to the rate of compounding and the math works out perfectly. But given that I'm not an expert mathematician, I want to know how one is actually supposed to handle units.



My question is composed of two parts: 1) Can anyone tell me the proper way to deal with units in geometric calculus, and if the "implicit unit scale" method is correct how to apply it geometric derivatives as well as integrals? And 2) if we do need an implicit unit scale, why don't functions from $\mathbb{R}$ to $\mathbb{R}$ need a dimensionless scale for consistency's sake? It seems to me that applying a "unit scale" to the real numbers would just be a scalar in $\mathbb{R}$; what's to stop that value from being something other than $1$?


Answer



Units are an inherently linear phenomenon; it only makes sense to talk about them in the context of doing homogeneous (multi-)linear operations.




In my opinion, the best mathematical structure for dealing with units is that each kind of 'dimension' is associated to its own separate one-dimensional real vector space. For example, SI units would have a vector space of lengths, a vector space of masses, a vector space of accelerations, and so forth — including, of course, the vector space $\mathbb{R}$ of 'unit free' quantities.



"Multiplication" is through bilinear forms, such as the tensor product. e.g. the tensor product of the vector space of times with the vector space of velocities has values in the vector space of lengths. And, in particular, $s \otimes \frac{m}{s} = m$.



When you're not doing linear operations, it simply doesn't make sense to talk about units; the geometric derivative is only defined on $\mathbb{R}$, the space of 'unit free' quantities.






But if you follow wikipedia, there is the bigeometric derivative, defined by




$$ \lim_{h \to 0} \left( \frac{f((1+h) x)}{f(x)} \right)^{1/h}
= \lim_{k \to 1} \left( \frac{f(kx)}{f(x)} \right)^{1/\ln(k)} $$



Here, $h$ and $k$ are taken to be scalars, and this makes sense when $x$ and $f(x)$ have units. Note that the derivative is also a scalar.
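A small numeric illustration of why this version is unit-safe (my own sketch, not from the original answer): both $h$ and the ratio $f((1+h)x)/f(x)$ are dimensionless, and for a power law the bigeometric derivative comes out as a constant scalar:

    def bigeometric(f, x, h=1e-6):
        # finite-difference stand-in for lim_{h->0} (f((1+h)x)/f(x))**(1/h)
        return (f((1 + h) * x) / f(x)) ** (1.0 / h)

    print(bigeometric(lambda x: x**2, 3.0))    # ~ e**2 = 7.389...
    print(bigeometric(lambda x: x**2, 50.0))   # same value: independent of x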


linear algebra - Constructing an orthonormal basis with complex numbers?

I have this problem I recently ran into and am wondering how to overcome it: I have $V$, which has a basis $[e^{ikx}:0\leq k\leq 3]$, and inner product $(f,g) = \int_{0}^{a}f(x)\overline{g(x)}\,dx$ where $a$ is chosen in $(0,\infty)$. I need to construct an orthonormal basis of this inner product space (going from $v$ to $u$, where the $u$ are the orthonormal vectors). So far, I have performed the following calculations:



I have rewritten $[e^{ikx}:0\leq k\leq 3]$ as $[1, e^{ix}, e^{2ix}, e^{3ix}]$.



Step 1: $||v_1||^2 = \int_{0}^{a}(v_1(x))^2dx = a$ so $u_1=\frac{1}{||v_1||}v_1 = \frac{1}{\sqrt{a}}$.




Step 2: $\tilde{v_2} = v_2 - \langle v_2,u_1 \rangle u_1$



I start this step by calculating the inner product of $v_2$ and $u_1 $ by doing the following (this is where I think I may be making a mistake):



$ \langle v_2,u_1 \rangle=\int_{0}^{a}v_2(x)u_1dx = \frac{1}{\sqrt{a}}\int_{0}^{a} e^{ix}dx = \frac{1}{\sqrt{a}}(-ie^{ia} + i)$. Upon this, $\langle v_2,u_1 \rangle u_1 = \frac{1}{a}(-ie^{ia} + i)$.



Now we can say that $\tilde{v_2} = e^{ix} - \frac{1}{a}(-ie^{ia} + i)$. I use this to calculate $||\tilde{v_2}||^2 = \int_{0}^{a} \tilde{v_2(x)}^2dx$ which I got to be $(\frac{-i}{2}e^{2ia} + \frac{i}{2}) - (\frac{2(-ie^{ia} + i)}{a})$ and therefore I said $u_2 = \frac{e^{ix} - \frac{1}{a}(-ie^{ia} + i)}{\sqrt{\frac{-i}{2}e^{2ia} + \frac{i}{2} - \frac{2(-ie^{ia} + i)}{a}}}$.



I feel like this problem is becoming overly complicated in terms of calculations and I am questioning where I did something wrong so far. Could someone please guide me in the right direction?
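One thing to double-check (a guess at the culprit, not a definitive diagnosis): over $\mathbb{C}$ the norm must be $\|v\|^2=\int_0^a v(x)\overline{v(x)}\,dx$, not $\int_0^a v(x)^2\,dx$; the latter is what produces complex "norms" like the one in Step 2. A sympy sketch of the Gram-Schmidt step with the conjugate in place:

    import sympy as sp

    x = sp.symbols('x', real=True)
    a = sp.symbols('a', positive=True)

    def inner(f, g):
        # Hermitian inner product on [0, a]; note the conjugate on g
        return sp.integrate(f * sp.conjugate(g), (x, 0, a))

    v = [sp.exp(sp.I * k * x) for k in range(4)]
    u = []
    for vk in v[:2]:       # first two vectors, to compare with the work above
        w = vk - sum(inner(vk, uj) * uj for uj in u)
        u.append(sp.simplify(w / sp.sqrt(inner(w, w))))
    print(u)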

trigonometry - Computing $\sum_{k=0}^n{2n\choose 2k}(-1)^k\sin^{2k}\theta \cos^{2n-2k}\theta$ using Euler's formula


Compute the following sum by using Euler's formula, $e^{i \theta} = \cos \theta + i \sin \theta$:
$$\cos^{2n}\theta-{2n\choose 2}\cos^{2n-2}\theta\,\sin^2\theta+\cdots+(-1)^{n-1}{2n\choose 2n-2}\cos^2\theta\,\sin^{2n-2}\theta+(-1)^n\sin^{2n}\theta$$





I have tried to rewrite the expression as:



$$\sum_{k=0}^n{2n\choose 2k}(-1)^k\sin^{2k}\theta\,\cos^{2n-2k}\theta$$



But I have no clear idea of how to continue. Could you give me some hints? Thanks!
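A hint plus a numeric sanity check (my addition, not part of the original post): by the binomial theorem this sum is exactly the real part of $(\cos\theta+i\sin\theta)^{2n}$, which by Euler's formula is $e^{2in\theta}$, so the sum should equal $\cos(2n\theta)$:

    from math import comb, sin, cos

    n, t = 5, 0.73
    s = sum((-1)**k * comb(2*n, 2*k) * sin(t)**(2*k) * cos(t)**(2*n - 2*k)
            for k in range(n + 1))
    print(s, cos(2*n*t))   # the two values agree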

algebra precalculus - Prove $\left(1+\frac1{n}\right)^{n}<\left(1+\frac1{n+1}\right)^{n+1}$ by induction

I am trying to prove the following by mathematical induction:

$$\left(1+\frac1{n}\right)^{n}<\left(1+\frac1{n+1}\right)^{n+1}$$
Other proofs without induction are found here: I have to show $(1+\frac1n)^n$ is monotonically increasing sequence.
But I am curious whether it can be proved by induction as well.



What I've tried so far:



The original inequality is equivalent to
$$(n+1)^{2n+1}<n^n(n+2)^{n+1}$$
So I have to show:
$$(n+2)^{2n+3}<(n+1)^{n+1}(n+3)^{n+2}$$

And,
$$(n+1)^{n+1}(n+3)^{n+2}=(n+1)\color{red}{n^n}\left(1+\frac1n\right)^n\cdot(n+3)\color{red}{(n+2)^{n+1}}\left(1+\frac1{n+2}\right)^{n+1}$$$$>(n+1)(n+3)\cdot\color{red}{(n+1)^{2n+1}}\left(1+\frac1n\right)^n\left(1+\frac1{n+2}\right)^{n+1}$$$$=(n+1)(n+3)\left(n+2+\frac1n\right)^n\left(n+1+\frac{n+1}{n+2}\right)^{n+1}$$
and I am stuck.

reference request - Text recommendation for introduction to linear algebra

I have learned linear algebra before, but with my right brain - basically nothing is left - so can anyone recommend a good textbook so I can learn it again?



The text should subject to the conditions listed below.



Currently, I have found, from other questions asked here, Linear Algebra Done Right by Sheldon Axler, and Advanced Linear Algebra by Steven Roman.




It will be used for preparing for an exam (don't take me wrong, I like math), and the latter one has "Advanced" in its name, so I'd better pass on it.



The first one is good, but I got a little greedy and want a better one~



Video course is OK too.



Conditions:





  • It should be an introduction, but not too elementary.

  • Don't begin with linear equations, matrices or determinants. (The one I used did so; this stuff is so boring, and the experience was really terrible.)

  • It should cover: (I don't really believe this condition can be satisfied, so, the more the better)




    • Matrices (inverse, rank, Jordan canonical form etc.)

    • Determinant

    • Kernel space, dimension of vector space and quotient space

    • Linear maps and their relation to matrices

    • Other "common topics" a LA introduction text should cover, and have some "famous results". (Don't let me define "common topics", I assume the answerer knows so much about LA that he knows which topics should be covered, and which should not be. Same for the "famous results" :-)



  • Don't care real world applications and numeric computation.

  • Do it in a more "mathematical"/modern/abstract way, i.e., every term is formally defined, proofs are rigorous, etc.

  • Prefer formal proofs to intuitive ones.

  • Prefer "proof-like" proofs to those in a computation style.

  • Having good exercises is a bonus.

  • Doesn't have, or has little, content about groups, rings, etc. (I have limited time.)

  • Clearly written.




Thanks.

Monday 29 July 2013

calculus - What exactly is a differential?

I've seen the formula for differentials a lot, namely




$$dy=f'(x)dx$$



but what I think when I see this is that someone is manipulating the "formula"



$$f'(x)=\frac{dy}{dx}$$



When I think of "differential equation", I think of



$$f\left(x,y,\frac{dy}{dx},\frac{d^2y}{dx^2},\cdots,\frac{d^ny}{dx^n}\right)=0$$




not



$$f(x,y,dy,dx,\cdots)=0$$



I've heard that $\Delta y$ and $\Delta x$ can be approximated by $dy$ and $dx$ (or maybe it's the other way around?), but that doesn't make much sense to me. If you replace $dy$ and $dx$ by $\Delta y$ and $\Delta x$, you sort of have Euler's method, but this still doesn't clear much up for me. So,



What exactly is a differential, and why is it useful?

integration - Definite integral involving Fresnel integrals




I am seeking to evaluate



$\int_0^{\infty} f(x)/x^2 \, dx$



with



$f(x)=1-\sqrt{\pi/6} \left(\cos (x) C\left(\sqrt{\frac{6 x}{\pi }} \right)+S\left(\sqrt{\frac{6 x}{\pi }} \right) \sin
(x)\right)/\sqrt{x}$.



$C(x)$ and $S(x)$ are the Fresnel integrals. Numerical integration suggests that the integral equals $2 \pi/(3 \sqrt{3})$, which would also be desirable within the (physical) context it arose. How can this be proved?



Answer



First of all,
$$
C(x)=\int_{0}^{x}\cos\left(\frac{1}{2}\pi t^{2}\right)dt=\sqrt{\frac{2}{\pi}}\int_{0}^{\sqrt{\pi/2}\,x}\cos(z^{2})\, dz,
$$
and
$$
S(x)=\int_{0}^{x}\sin\left(\frac{1}{2}\pi t^{2}\right)dt=\sqrt{\frac{2}{\pi}}\int_{0}^{\sqrt{\pi/2}\,x}\sin(z^{2})\, dz.
$$
So we have

$$
C\left(\sqrt{\frac{6x}{\pi}}\right)=\sqrt{\frac{2}{\pi}}\int_{0}^{\sqrt{3x}}\cos(z^{2})\, dz,
$$
and
$$
S\left(\sqrt{\frac{6x}{\pi}}\right)=\sqrt{\frac{2}{\pi}}\int_{0}^{\sqrt{3x}}\sin(z^{2})\, dz.
$$
Using these we can write
$$
\begin{eqnarray*}

f(x) & = & 1-\sqrt{\pi/6}\frac{\cos(x)C\left(\sqrt{\frac{6x}{\pi}}\right)+\sin(x)S\left(\sqrt{\frac{6x}{\pi}}\right)}{\sqrt{x}} \\ & = & \frac{\int_{0}^{\sqrt{3x}}(1-\cos(x-z^{2}))\, dz}{\sqrt{3x}}
\end{eqnarray*}
$$
and
$$
\int_{0}^{\infty}\frac{f(x)}{x^{2}}\, dx=\frac{2}{\sqrt{3}}\int_{0}^{\infty}\int_{0}^{\sqrt{3x}}\frac{\sin^{2}((x-z^{2})/2)}{x^{5/2}}\, dz\, dx.
$$
Let's introduce the new variable $z=t\sqrt{x}$. Then $dz=\sqrt{x}dt$ and
$$
\begin{eqnarray*}

\int_{0}^{\infty}\int_{0}^{\sqrt{3x}}\frac{\sin^{2}((x-z^{2})/2)}{x^{5/2}}\, dz\, dx & = & \int_{0}^{\infty}\int_{0}^{\sqrt{3}}\frac{\sin^{2}(x(1-t^{2})/2)}{x^{2}}\, dt\, dx\\
& = & \frac{1}{2}\int_{0}^{\infty}\int_{0}^{\sqrt{3}}\frac{\sin^{2}(x(1-t^{2}))}{x^{2}}\, dt\, dx\\
& = & \frac{1}{2}\int_{0}^{\infty}\int_{0}^{1}\frac{\sin^{2}(x(1-t^{2}))}{x^{2}}\, dt\, dx\\
& & +\frac{1}{2}\int_{0}^{\infty}\int_{1}^{\sqrt{3}}\frac{\sin^{2}(x(t^{2}-1))}{x^{2}}\, dt\, dx\\
& = & \frac{1}{2}\int_{0}^{1}\int_{0}^{\infty}\frac{\sin^{2}(x(1-t^{2}))}{x^{2}}\, dx\, dt\\
& & +\frac{1}{2}\int_{1}^{\sqrt{3}}\int_{0}^{\infty}\frac{\sin^{2}(x(t^{2}-1))}{x^{2}}\, dx\, dt\\
& = & \int_{0}^{1}\frac{\pi}{4}(1-t^{2})\, dt+\int_{1}^{\sqrt{3}}\frac{\pi}{4}(t^{2}-1)\, dt\\
& = & \frac{\pi}{3},
\end{eqnarray*}
$$

where the formula
$$
\int_0^{\infty}\frac{\sin^2(Ax)}{x^2}\,dx=\frac{1}{2}A\pi,\quad(A>0)
$$
was applied. Your conjecture was excellent.
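A numeric confirmation with mpmath (my own check; the extra working precision guards against cancellation in $f$ near $x=0$):

    from mpmath import mp, quad, fresnelc, fresnels, sin, cos, sqrt, pi, inf

    mp.dps = 30

    def f(x):
        s = sqrt(6*x/pi)
        return 1 - sqrt(pi/6) * (cos(x)*fresnelc(s) + fresnels(s)*sin(x)) / sqrt(x)

    print(quad(lambda x: f(x) / x**2, [0, 1, inf]))   # ~ 1.2091995...
    print(2*pi / (3*sqrt(3)))                         # ~ 1.2091995...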


integration - Justification for $u$-substitution method

I am currently learning how to find antiderivatives using the "$u$-substitution" or "integration by substitution" method. A key component of this is setting some expression in the indefinite integral as "$u$", and then also finding $du/dx$.



How can we then write $du = dx \cdot $ (some expression)? Isn't $du/dx$ defined as the derivative of $u$ and is not a fraction?
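Short answer, offered as the usual textbook justification: "$du = u'(x)\,dx$" is bookkeeping for the chain rule, not an equation between fractions. The substitution rule itself is the honest statement:

$$\int f\bigl(g(x)\bigr)\,g'(x)\,dx = F\bigl(g(x)\bigr)+C,\qquad F'=f,$$

which holds because $\frac{d}{dx}F(g(x)) = f(g(x))\,g'(x)$. Writing $u=g(x)$ and $du=g'(x)\,dx$ merely records which factor of the integrand plays the role of $g'(x)\,dx$.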

Sunday 28 July 2013

eigenvalues eigenvectors - Eigenvalues of a special matrix

I have a matrix $A \in \mathbb{R}^{n\times n}$ whose diagonal elements are all "$1$" and all other elements are of the form $\frac{-1}{n}$ where $n \in \mathbb{N}$ and $n >1$ is the number of rows or columns of $A$. Since $A$ is symmetric and diagonally dominant with positive diagonal entries, it is positive definite.

Can we get an expression for its eigenvalues using some kind of decomposition or by other standard approach ?



This matrix clearly has a special structure but I am unable to utilize its properties.
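One way in (a sketch, exploiting the structure you noticed): $A=(1+\tfrac1n)I-\tfrac1n J$, where $J$ is the all-ones matrix. $J$ has eigenvalues $n$ (once, on the all-ones vector) and $0$ ($n-1$ times), so $A$ has eigenvalues $\tfrac1n$ (once) and $1+\tfrac1n$ ($n-1$ times). A quick numpy check:

    import numpy as np

    for n in [3, 5, 8]:
        A = (1 + 1/n) * np.eye(n) - np.ones((n, n)) / n   # diag 1, off-diag -1/n
        print(n, np.round(np.linalg.eigvalsh(A), 12))
        # prints 1/n once and 1 + 1/n repeated n - 1 times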

real analysis - How to define the $0^0$?










According to Wolfram Alpha:





$0^0$ is indeterminate.




According to google:
$0^0=1$



According to my calculator: $0^0$ is undefined



Is there consensus regarding $0^0$? And what makes $0^0$ so problematic?



Answer



This question will probably be closed as a duplicate, but here is the way I used to explain it to my students:



Since $x^0=1$ for all non-zero $x$, we would like to define $0^0$ to be $1$, but ...



since $0^x = 0$ for all positive $x$, we would like to define $0^0$ to be 0.



The end result is that we can't have all the "rules" of indices playing nicely with each other whichever of the above options we choose, so it might be better to decide that $0^0$ should just be left as "undefined".
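For what it's worth, most programming languages (and the IEEE 754 pow recommendation) side with the combinatorial convention $0^0=1$; e.g. in Python:

    >>> 0**0
    1
    >>> 0.0**0.0
    1.0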


integration - How to solve the following integral $\int_0^{\frac{\pi}{2}}\sqrt[3]{\sin^8x\cos^4x}\,dx$?



How to solve the following integral?



$$\int_0^{\frac{\pi}{2}}\sqrt[3]{\sin^8x\cos^4x}\,dx$$



Preferably without the universal substitution $$\sin(t) = \dfrac{2\tan(t/2)}{1+\tan^2(t/2)}$$


Answer



Using $\operatorname{B}(a,\,b)=2\int_0^{\pi/2}\sin^{2a-1}x\cos^{2b-1}xdx$, your integral is$$\frac12\operatorname{B}\left(\frac{11}{6},\,\frac{7}{6}\right)=\frac{\Gamma\left(\frac{11}{6}\right)\Gamma\left(\frac{7}{6}\right)}{2\Gamma(3)}=\frac{5}{144}\Gamma\left(\frac{5}{6}\right)\Gamma\left(\frac{1}{6}\right)=\frac{5\pi}{144}\csc\frac{\pi}{6}=\frac{5\pi}{72}.$$Here the first $=$ uses $\operatorname{B}(a,\,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$, the second $\Gamma(a+1)=a\Gamma(a)$, the third $\Gamma(a)\Gamma(1-a)=\pi\csc\pi a$.
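A quick numeric check of the result (my addition):

    from mpmath import mp, quad, sin, cos, cbrt, pi

    mp.dps = 20
    val = quad(lambda x: cbrt(sin(x)**8 * cos(x)**4), [0, pi/2])
    print(val, 5*pi/72)   # both ~ 0.2181662...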


algebra precalculus - Can you explain this please $T(n) = (n-1)+(n-2)+\cdots+1= \frac{(n-1)n}{2}$








Can you explain this please
$$T(n) = (n-1)+(n-2)+\cdots+1= \frac{(n-1)n}{2}$$



I am really bad at maths but need to understand this for software engineering.
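Since this is for software engineering, here is the statement as code, plus the classic intuition: pair the first and last terms - $(n-1)+1$, $(n-2)+2$, and so on - so each pair sums to $n$, and there are $\frac{n-1}{2}$ such pairs, giving $\frac{(n-1)n}{2}$. A throwaway check:

    def T(n):
        return sum(range(1, n))            # (n-1) + (n-2) + ... + 1

    for n in range(1, 10):
        assert T(n) == (n - 1) * n // 2    # the closed form always matches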

real analysis - Summation Symbol: Changing the Order




I have some questions regarding the order of the summation signs (I have tried things out and also read the wikipedia page, nevertheless some questions remained unanswered):



Original 1. wikipedia says that:



$$\sum_{k=1}^m a_k \sum_{\color{red}{k}=1}^n b_l = \sum_{k=1}^m \sum_{l=1}^n a_k b_l$$



does not necessarily hold. What would be a concrete example for that?



Edited 1. wikipedia says that:




$$\sum_{k=1}^m a_k \sum_{\color{red}{l}=1}^n b_l = \sum_{k=1}^m \sum_{l=1}^n a_k b_l$$



does not necessarily hold. What would be a concrete example for that?



2. As far as I can see, generally it holds that:



$$\sum_{j=1}^m \sum_{i=1}^n a_ib_j = \sum_{i=1}^n \sum_{j=1}^m a_ib_j $$



Why is that? It is not due to the property that multiplication is commutative, is it?




3. What about infinite series: when does
$$\sum_{k=1}^{\infty}\sum_{l=1}^{\infty} a_kb_l = \sum_{k=1}^{\infty}a_k \sum_{l=1}^{\infty}b_l$$ hold?
And does here too $$\sum_{k=1}^{\infty}\sum_{l=1}^{\infty} a_kb_l = \sum_{l=1}^{\infty}\sum_{k=1}^{\infty} a_kb_l$$ hold?



Thanks


Answer



For the original first question, where $l = k$, let $m=n=2$, $a_1=b_1=1$, and $a_2=b_2=2$; then



$$\sum_{k=1}^2a_k\sum_{k=1}^2b_k=\sum_{k=1}^2a_k(1+2)=1\cdot3+2\cdot3=9\;,$$




but $$\sum_{k=1}^2\sum_{k=1}^2a_kb_k=\sum_{k=1}^2(1^2+2^2)=5+5=10\;.$$



For the second question, imagine arranging the terms $a_ib_j$ in an $n\times m$ array:



$$\begin{array}{ccccc|c}
a_1b_1&a_1b_2&a_1b_3&\dots&a_1b_m&\sum_{j=1}^ma_1b_j\\
a_2b_1&a_2b_2&a_2b_3&\dots&a_2b_m&\sum_{j=1}^ma_2b_j\\
a_3b_1&a_3b_2&a_3b_3&\dots&a_3b_m&\sum_{j=1}^ma_3b_j\\
\vdots&\vdots&\vdots&&\vdots&\vdots\\
a_nb_1&a_nb_2&a_nb_3&\dots&a_nb_m&\sum_{j=1}^ma_nb_j\\ \hline

\sum_{i=1}^na_ib_1&\sum_{i=1}^na_ib_2&\sum_{i=1}^na_ib_3&\dots&\sum_{i=1}^na_ib_m
\end{array}$$



For each $j=1,\dots,m$, $\sum_{i=1}^na_ib_j$ is the sum of the entries in column $j$, and for each $i=1,\dots,n$, $\sum_{j=1}^ma_ib_j$ is the sum of the entries in row $i$. Thus,



$$\begin{align*}
\sum_{j=1}^m\sum_{i=1}^na_ib_j&=\sum_{j=1}^m\text{sum of column }j\\
&=\sum_{i=1}^n\text{sum of row }i\\
&=\sum_{i=1}^n\sum_{j=1}^ma_ib_j\;.
\end{align*}$$




For infinite double series the situation is a bit more complicated, since an infinite series need not converge. However, it is at least true that if either of



$$\sum_{j=1}^\infty\sum_{i=1}^\infty|a_ib_j|\quad\text{and}\quad\sum_{i=1}^\infty\sum_{j=1}^\infty|a_ib_j|$$



converges, then the series without the absolute values converge and are equal. This PDF has much more information on double sequences and series.


abstract algebra - If a field extension of prime degree $p$ contains two roots of an irreducible polynomial then it contains all the roots.

Let $f(x) \in F[x]$ be an irreducible separable polynomial of degree $p$, with distinct roots $\alpha_1, \ldots, \alpha_p$. I want to show that if $F(\alpha_1) = F(\alpha_1, \alpha_2)$, then $F(\alpha_1) = F(\alpha_1, \ldots, \alpha_p)$, i.e. $F(\alpha_1)$ is the splitting field of $f(x)$.



So far, what I know is that $F(\alpha_1) \simeq F(\alpha_i)$ for all $1 \leq i \leq p$, can I somehow extend this to show that $(F(\alpha_1))(\alpha_1) \simeq (F(\alpha_1))(\alpha_i)$? Thanks

Saturday 27 July 2013

calculus - Relation between real roots of a polynomial and real roots of its derivative



I have this question which popped into my mind while solving questions on maxima and minima.
First Case: Let $f(x)$ be an $n$th-degree polynomial which has $r$ real roots.

Using this can we say anything about the number of real roots of $f'(x)$?



Second Case: Suppose $f(x)$ has all $n$ roots real. Then will all of its derivatives also have all real roots?



Also, if any of its derivatives does not have all real roots, then will $f(x)$ also fail to have all real roots?
If the above is true then what about its converse?



Comment Case: For the third case, suppose $f'(x)$ is a $5$th-degree polynomial with $3$ real roots. Then $f(x)$ will be a $6$th-degree polynomial (correct me if I am wrong). What are the possible numbers of real roots that $f(x)$ can have ($3, 4, 5$, etc.)? Basically I am asking for an example. Also it would be great if you could follow all cases with an example, like in the 4th case.


Answer



First case: If the number of real roots $r$ of $f(x)$ is greater than one, then $f'(x)$ has at least $r-1$ real roots. (The limitation "greater than one" is not necessary but the statement is trivial if $r\le 1$.) Given any two consecutive real roots $a<b$ of $f(x)$, Rolle's theorem yields a root of $f'(x)$ strictly between them; the $r-1$ gaps between consecutive roots thus give at least $r-1$ real roots of $f'(x)$.


There may be more roots of $f'(x)$ than those between roots of $f(x)$, so the only upper bound is the obvious one of $n-1$. Ask if you need examples. It seems to me that if multiplicity is taken into account, the number of real roots of $f'(x)$ has the same parity (even/odd) as the number of real roots of $f(x)$, but I haven't proven it yet. If multiplicity is not taken into account, the parity can be anything.



Second case: If $f(x)$ has degree $n$ and has $n$ real roots, then each consecutive pair of roots of $f(x)$ defines a root of $f'(x)$, which makes $n-1$ roots of $f'(x)$. Since $f'(x)$ is a polynomial of degree $n-1$, this is all possible roots. This continues for all later derivatives, so you are correct: all its derivatives will have all real roots.



Third case: The contrapositive of the second case tells us that if any of its derivatives have any non-real roots, then $f(x)$ also has some non-real roots.



Fourth case: The converse of the third case is not true. For example, $f(x)=x^2+1$ has two non-real roots, but its derivative $f'(x)=2x$ has one real root.



Comment case: You asked, "Suppose $f'(x)$ is a $5$th-degree polynomial with $3$ real roots. What are the possible numbers of real roots that $f(x)$ can have ($3,4,5$, etc.)?"




(figure: graph of $f(x)$, with dashed horizontal lines marking the values of $C$ discussed below)



The formulas for $f(x)$ and $f'(x)$ are given in the diagram, where $C$ is a real constant, zero in the graph. You can see that $f'(x)$ is a degree $5$ polynomial with $3$ real roots.



The dashed horizontal lines show the possible number of real roots of $f(x)$ for varying values of $C$. There are $0$ real roots for $C=3$, $1$ real root for $C\approx 2.638$, $2$ real roots for $C=1$, $3$ real roots for $C\approx -1.757$, and $4$ real roots for $C=-1.9$. My discussion for the first case shows that there cannot be more than $4$ real roots since $f'(x)$ has $3$ real roots.


real analysis - Limit of convergent monotone sequence

Looking for a nice proof for this proposition:




Let $\{ x_n \}$ be a convergent monotone sequence. Suppose there
exists some $k$ such that $\lim_{n\to\infty} x_n = x_k$, show that
$x_n = x_k$ for all $n \geq k$.





I have the intuition for why it's true but am having a tough time giving a rigorous proof.
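For what it's worth, here is the short argument this seems to be after (my sketch; assume without loss of generality that the sequence is nondecreasing, the decreasing case being symmetric). Every term of a nondecreasing convergent sequence is $\le$ its limit, while monotonicity gives $x_k \le x_n$ for $n\ge k$, so

$$x_k \le x_n \le \lim_{m\to\infty} x_m = x_k \qquad (n \ge k),$$

forcing $x_n = x_k$ for all $n \ge k$.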

integration - Integral $\int_0^\infty \frac{\arctan(x^2)}{x^4+x^2+1}dx$




While trying to evaluate a similar integral, I think originally found here, I accidentally took the denominator as $x^4+x^2+1$ instead of $x^4+4x^2+1$ and stumbled into the following integral:
$$J=\int_0^\infty \frac{(x^2-1)\arctan(x^2)}{x^4+x^2+1}dx$$ I think this has a closed form because the linked one has a simple closed form, $\displaystyle{\frac{\pi^2}{12\sqrt{2}}}$; also, if there is $\arctan x$ instead of $\arctan(x^2)$ then we have: $$\int_0^\infty \frac{\arctan x}{x^4+x^2+1}dx=\frac{\pi^2}{8\sqrt{3}}-\frac{2}{3}G+\frac{\pi}{12}\ln(2+\sqrt{3})$$



Some proofs are found here: Using $\int_0^{\infty} \frac{\tan^{-1}(x^2)}{x^2+a^2} \ dx$ or using residues.
Anyway I started by splitting into two integrals and substituting $x=\frac{1}{t}$:
$$\int_0^\infty \frac{x^2\arctan(x^2)}{x^4+x^2+1}dx=\int_0^\infty \frac{\frac{\pi}{2}-\arctan\left(x^2\right)}{x^4+x^2+1}dx\Rightarrow J=\frac{\pi^2}{4\sqrt{3}}-2\int_0^\infty \frac{\arctan(x^2)}{x^4+x^2+1}dx$$
Well, now the main issue is to evaluate:
$\displaystyle{I=\int_0^\infty \frac{\arctan(x^2)}{x^4+x^2+1}dx}$




Using the same method as in the second link I arrived at:
$$I=\left(\frac{1-i\sqrt 3}{2}\right)f\left(\sqrt{\frac{1+i\sqrt 3}{2}}\right)+\left(\frac{1+i\sqrt 3}{2}\right)f\left(\sqrt{\frac{1-i\sqrt 3}{2}}\right)$$ Where $\displaystyle{f(a)=\int_0^{\infty} \frac{\tan^{-1}(x^2)}{x^2+a^2}=\frac{\pi}{2a}\left(\tan^{-1}(\sqrt{2}a+1)+\tan^{-1}(\sqrt{2}a-1)-\tan^{-1}(a^2)\right)},\,$ but I don't see how to simplify further.



I also tried the "straight forward" way, by employing Feynman's trick to the following integral:
$$I(b)=\int_0^\infty \frac{\arctan(bx)^2}{x^4+x^2+1}dx\rightarrow \frac{d}{db}I(b)=\int_0^\infty \frac{2bx^2}{(x^4+x^2+1)(1+b^4x^4)}dx$$
$$=\frac{2b}{b^8-b^4+1}\int_0^\infty \frac{x^2}{x^4+x^2+1}dx-\frac{b^5}{b^8-b^4+1}\int_0^\infty \frac{dx}{x^2+x+1}+$$$$+\frac{2b^5}{b^8-b^4+1}\int_0^\infty\frac{dx}{1+b^4x^4}+\frac{b^9-b^5}{b^8-b^4+1}\int_0^\infty \frac{x^2}{1+b^4x^4}dx$$
$$=\frac{\pi}{\sqrt 3}\frac{b}{b^8-b^4+1}-\frac{2\pi}{3\sqrt 3}\frac{b^5}{b^8-b^4+1}+\frac{\pi}{\sqrt 2}\frac{b^4}{b^8-b^4+1}+\frac{\pi}{2\sqrt 2}\frac{b^6-b^2}{b^8-b^4+1}$$
Now since $I(0)=0$ we have: $ \displaystyle{I=I(1)-I(0)=\int_0^1 \left(\frac{d}{db}I(b)\right)db}$



Integrating the first two parts is okay-ish, but for the last two I have no idea on how to proceed, also it seems that only elementary constants appear thus I believe the integral can be approached in a nicer way. I would love to get some help, if it's possible without using residues since I am not great there.



Answer



Complete answer now!$$I=\int_0^\infty \frac{{\arctan(x^2)}}{x^4+x^2+1}dx {\overset{x=\frac1{t}}=}
\int_0^\infty \frac{\arctan\left(\frac{1}{t^2}\right)}{\frac{1}{t^4}+\frac{1}{t^2}+1}\frac{dt}{t^2}\overset{t=x}=\int_0^\infty \frac{{x^2\left(\frac{\pi}{2}-\arctan(x^2)\right)}}{x^4+x^2+1}dx $$

Now if we add the result with the original integral $I$ we get:
$$2I=\frac{\pi}{2}\int_0^\infty \frac{x^2}{x^4+x^2+1}dx+\int_0^\infty \frac{(1-x^2)\arctan(x^2)}{x^4+x^2+1}dx$$
$$\Rightarrow I = \frac12 \cdot \frac{\pi}{2}\cdot \frac{\pi}{2\sqrt 3}-\frac12 \int_0^\infty \frac{(x^2-1)\arctan(x^2)}{x^4+x^2+1}dx=\frac{\pi^2}{8\sqrt 3} -\frac12 J$$






Now in order to calculate $J\,$ we start by performing IBP:

$$J=\int_0^\infty \frac{(x^2-1)\arctan\left(x^2\right)}{x^4+x^2+1}dx =\int_0^\infty \arctan(x^2) \left(\frac12 \ln\left(\frac{x^2-x+1}{x^2+x+1}\right)\right)'dx=$$
$$=\underbrace{\frac{1}{2}\ln\left(\frac{x^2-x+1}{x^2+x+1}\right)\arctan(x^2)\bigg|_0^\infty}_{=0}+\int_0^\infty \frac{x}{1+x^4}\ln\left(\frac{x^2+x+1}{x^2-x+1}\right)dx$$
Substituting $x=\tan\left(t\right)$ and doing some simplifications yields:
$$J=\int_0^\frac{\pi}{2} \frac{2\sin (2t)}{3+\cos(4t)}\ln\left(\frac{2+\sin (2t)}{2-\sin (2t)}\right)dt\overset{2t=x}=\int_0^\pi \frac{\sin x}{3+\cos(2x)}\ln\left(\frac{2+\sin x}{2-\sin x}\right)dx=$$
$$=2\int_0^\frac{\pi}{2}\frac{\sin x}{3+\cos(2x)}\ln\left(\frac{2+\sin x}{2-\sin x}\right)dx=\int_0^\frac{\pi}{2}\frac{\cos x}{1+\sin^2 x}\ln\left(\frac{2+\cos x}{2-\cos x}\right)dx =$$
$$=\frac12\int_0^\pi \frac{\cos x}{1+\sin^2 x}\ln\left(\frac{2+\cos x}{2-\cos x}\right)dx\overset{\large{\tan\left(\frac{x}{2}\right)=t}}=\int_0^\infty \frac{1-t^2}{t^4+6t^2+1}\ln\left(\frac{\color{blue}{t^2+3}}{\color{red}{3t^2+1}}\right)dt$$



Splitting the integral into two parts followed by the substitution $\,\displaystyle{t=\frac{1}{x}}\,$ in the second part gives:
$$\int_0^\infty \frac{1-t^2}{t^4+6t^2+1}\ln(\color{red}{3t^2+1})dt =\int_0^\infty \frac{x^2-1}{x^4+6x^2+1}\ln\left(\color{red}{\frac{x^2+3}{x^2}}\right)dx$$
$$\Rightarrow J=\int_0^\infty \frac{1-x^2}{x^4+6x^2+1} \ln(\color{blue}{x^2+3})dx - \int_0^\infty \frac{1-x^2}{x^4+6x^2+1} {\left(\ln(\color{red}{x^2})-\ln(\color{red}{x^2+3})\right)}dx=$$

$$=2\int_0^\infty \frac{1-x^2}{x^4+6x^2+1}\ln\left(\color{purple}{\frac{x^2+3}{x}}\right)dx=2\int_0^\infty \left(\frac12\arctan\left(\frac{2x}{1+x^2}\right)\right)'\ln\left(\frac{x^2+3}{x}\right)dx=$$
$$=\underbrace{\arctan\left(\frac{2x}{1+x^2}\right)\ln\left(\frac{x^2+3}{x}\right)\bigg|_0^\infty}_{=0}-\int_0^\infty \arctan\left(\frac{2x}{1+x^2}\right)\left(\frac{2x}{x^2+3}-\frac{1}{x}\right)dx$$
$$\Rightarrow J=\int_0^\infty \arctan\left(\frac{2x}{1+x^2}\right)\frac{dx}{x}-\int_0^\infty \arctan\left(\frac{2x}{1+x^2}\right) \frac{2x}{x^2+3}dx=J_1-J_2$$






$$J_1=\int_0^\infty \arctan\left(\frac{2x}{1+x^2}\right)\frac{dx}{x}\overset{\large{x=\tan\left(\frac{t}{2}\right)}}=\int_0^\pi \frac{\arctan( \sin t)}{\sin t} dt\overset{t=x}=2\int_0^\frac{\pi}{2} \frac{\arctan( \sin x)}{\sin x} dx$$
In general, we have the following relation: $$\frac{\arctan x}{x}=\int_0^1 \frac{dy}{1+(xy)^2} \Rightarrow \color{red}{\frac{\arctan(\sin x)}{\sin x}=\int_0^1 \frac{dy}{1+(\sin^2 x )y^2}}$$
$$J_1 = 2\color{blue}{\int_{0}^{\frac{\pi}{2}}} \color{red}{\frac{\arctan\left(\sin x\right)}{\sin x}}\color{blue}{dx}=2\color{blue}{\int_0^\frac{\pi}{2}}\color{red}{\int_0^1 \frac{dy}{1+(\sin^2 x )y^2}}\color{blue}{dx}=2\color{red}{\int_0^1} \color{blue}{\int_0^\frac{\pi}{2}}\color{purple}{\frac{1}{1+(\sin^2 x )y^2}}\color{blue}{dx}\color{red}{dy}$$
$$=2\int_0^1 \frac{\arctan\left(\sqrt{1+y^2}\,\tan x\right)}{\sqrt{1+y^2}} \bigg|_{x=0}^{x=\frac{\pi}{2}}\,dy=\pi\int_0^1 \frac{dy}{\sqrt{1+y^2}}=\boxed{\pi\ln\left(1+\sqrt 2\right)}$$







In order to evaluate $J_2$ we return to the integral as it was before integrating by parts.
$$J_2=2\int_0^\infty \arctan\left(\frac{2 x} {x^2 +1}\right)\frac{x}{x^2 +3}dx=2\int_0^{\infty}\frac{(x^2-1)\ln(x^2+3)}{x^4+6x^2+1} dx=$$
$$=(\sqrt 2+1)\int_0^{\infty} \frac{\ln(x^2+3)}{x^2+\left(\sqrt 2+1\right)^2} \ dx - (\sqrt 2-1)\int_0^{\infty} \frac{\ln(x^2+3)}{x^2+\left(\sqrt 2-1\right)^2} dx$$
Using the following identity that is valid for $a\ge 0, b>0$:$$\int_0^{\infty} \frac{\ln(x^2+a^2)}{x^2+b^2} \ dx = \frac{\pi}{b}\ln(a+b)$$ $$\Rightarrow J_2=\pi\ln\left(\frac{\sqrt{3}+\sqrt{2}+1}{\sqrt{3}+\sqrt{2}-1}\right)=\boxed{\frac{\pi} {2}\ln(2+\sqrt 3)}$$
So we found that:$$J=\boxed{\pi \ln(1+\sqrt 2) - \frac{\pi} {2} \ln(2+\sqrt 3)}\Rightarrow I= \large\boxed{\frac{\pi^2} {8 \sqrt 3}+\frac{\pi}{4}\ln(2+\sqrt 3)-\frac{\pi}{2} \ln(1+\sqrt 2)}$$
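A numeric double-check of the boxed value (my addition):

    from mpmath import mp, quad, atan, log, sqrt, pi, inf

    mp.dps = 25
    I = quad(lambda x: atan(x**2) / (x**4 + x**2 + 1), [0, inf])
    closed = pi**2/(8*sqrt(3)) + pi/4*log(2 + sqrt(3)) - pi/2*log(1 + sqrt(2))
    print(I, closed)   # both ~ 0.3621...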


linear algebra - Function that satisfies $f(x+ y) = f(x) + f(y)$ but not $f(cx)=cf(x)$




Is there a function from $ \Bbb R^3 \to \Bbb R^3$ such that $$f(x + y) = f(x) + f(y)$$ but not $$f(cx) = cf(x)$$ for some scalar $c$?




Is there one such function even in one dimension? I so, what is it? If not, why?



I came across a function from $\Bbb R^3$ to $\Bbb R^3$ such that $$f(cx) = cf(x)$$ but not $$f(x + y) = f(x) + f(y),$$ and I was wondering whether there is one satisfying the converse.



Although there is another post titled Overview of the Basic Facts of Cauchy valued functions, I do not understand it. If someone can explain in simplest terms the function that satisfy my question and why, that would be great.


Answer



Take a $\mathbb Q$-linear function $f:\mathbb R\rightarrow \mathbb R$ that is not $\mathbb R$-linear and consider the function $g(x,y,z)=(f(x),f(y),f(z))$.



To see that such a function $f$ exists, notice that $\{1,\sqrt{2}\}$ is linearly independent over $\mathbb Q$, so there is a $\mathbb Q$-linear function $f$ that sends $1$ to $1$ and $\sqrt{2}$ to $1$. So clearly $f$ is not $\mathbb R$-linear. (Zorn's lemma is used for this.)



trigonometry - Alternate form of cosine + sine into cosine

I am trying to rewrite $\frac{2500}{13} \cos(1000t) - \frac{1200}{65} \sin(1000t)$ as an expression involving only a cosine. What trig identities are there to help me do this in general? I am not sure where to start.
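The standard starting point is the harmonic-addition identity: for constants $a,b$,

$$a\cos\omega t + b\sin\omega t = R\cos(\omega t - \varphi),\qquad R=\sqrt{a^2+b^2},\quad \cos\varphi=\frac aR,\ \sin\varphi=\frac bR,$$

which follows from expanding $R\cos(\omega t-\varphi)=R\cos\varphi\,\cos\omega t+R\sin\varphi\,\sin\omega t$ and matching coefficients; here one would take $a=\frac{2500}{13}$ and $b=-\frac{1200}{65}$.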

Friday 26 July 2013

real analysis - Recursive relation in the square root form.

I apologize if this has been asked before, but I have been struggling to show the convergence and the limit value of the following recursive relation:




$x_{n+2}=\sqrt{x_n x_{n+1}}$ with initial values $x_1=a$ and $x_2=b$ for $0<a<b$.




I showed by induction that $x_{2n-1}<x_{2n}$ for all $n$. I know that if I could show that $\lim_{n\to\infty} (x_{2n}-x_{2n-1})=0$, then this would imply convergence. However, I could not see the pattern. Moreover, I could not find the actual limit either. Thanks for any help!
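A numeric experiment (my addition) that suggests both the convergence and the limit: taking logarithms turns the recursion into $\ell_{n+2}=\tfrac12(\ell_n+\ell_{n+1})$, whose limit is $\tfrac13(\ell_1+2\ell_2)$, pointing at $(ab^2)^{1/3}$:

    a, b = 2.0, 5.0                        # sample positive starting values
    x, y = a, b
    for _ in range(80):
        x, y = y, (x * y) ** 0.5           # x_{n+2} = sqrt(x_n * x_{n+1})
    print(y, (a * b * b) ** (1 / 3))       # both ~ 3.6840...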

Induction proof inequality

So I got this induction proof question but I can't seem to make a logical statement in one part of it:




The question is: $a_{n + 1} = 5 - \frac{6}{a_n + 2}$ with
$a_1 = 1$ . Prove by induction that $a_n < 4$ for $n \geq 1$



I reached the point in the proof where I need to prove $a_{k+1} <4$.



Proof



$a_k <4 \implies a_k + 2<6 $



The next step I want to put is:




$\frac{6}{ a_k +2} >1$



However, I can only justify this statement if $a_k > -2$, but I can't seem to prove that or find any info in the question to suggest it.



Can anyone help me with the proof or my theory?
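A possible way out, offered as a hint (since the missing lower bound is exactly the issue): strengthen the induction hypothesis to $1 \le a_n < 4$. The lower bound propagates, because

$$a_k \ge 1 \;\Rightarrow\; a_k+2 \ge 3 \;\Rightarrow\; \frac{6}{a_k+2} \le 2 \;\Rightarrow\; a_{k+1} = 5-\frac{6}{a_k+2} \ge 3 \ge 1,$$

and in particular $a_k \ge 1 > -2$, which is exactly what the step $\frac{6}{a_k+2}>1$ needs.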

arithmetic - integral factors of an irrational number



If the radicand of a square root is a non-square (making the root an irrational), and if the non-square is either a prime number, or a composite number that does not have a square divisor (other than 1), does this mean that the square root is not divisible by an integral divisor (that it does not have an integral factor)?



For example, $\sqrt{200} = \sqrt{100\times2} = \sqrt{100}\times\sqrt{2}=10\sqrt{2}$, so $\sqrt{200}$ has an integral factor of 10 (is divisible by 10). However, $\sqrt{6} = \sqrt{3\times2} = \sqrt{3}\times\sqrt{2}$



I'm mainly wondering for the purpose of reducing fractions that contain radicals and integers.


Answer




The phrase "does not have an integral factor" is a little imprecise, since anything has an integral factor: we have $x=17\left(\frac{x}{17}\right)$.



So we rephrase the question. Suppose that $n$ is a square-free integer. Do there exist integers $d$ and $k$, with $k\gt 1$, such that
$$\sqrt{n}=k\sqrt{d}?\tag{1}$$
Once we express the question like that, the answer is quick. Suppose to the contrary that there are integers $k,d$, with $k\gt 1$, such that (1) holds. Squaring both sides, we get $n=k^2d$, so $n$ is divisible by a square greater than $1$.


Fourier series on general interval $[a,b]$

Currently I'm studying Fourier series and the first thing I've read is the definition of the series for a function $f : [-\pi,\pi]\to \mathbb{R}$. In that case the Fourier series is



$$F_{[-\pi,\pi]}[f](x) = \dfrac{a_0}{2} + \sum_{n=1}^\infty a_n \cos (nx) + b_n \sin (nx)$$



with coefficients




$$a_0 = \dfrac{1}{\pi}\int_{-\pi}^{\pi} f(x)dx, \qquad a_n = \dfrac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos (nx) dx, \qquad b_n=\dfrac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin (nx)dx.$$



As I know, the idea behind this is that defining $f_n(x) = \cos(nx)$ and $g_n(x)=\sin (nx)$ the set of functions $\{1,f_n,g_n : n\in \mathbb{N}\}\subset L^2[-\pi,\pi]$ is complete and orthogonal with respect to the inner product



$$\langle f,g\rangle = \int_{-\pi}^{\pi} f(x)g(x)dx.$$



Now, the text I'm reading also deals with extending this to an interval $[-L,L]$. The text just states that the Fourier series of a function $f : [-L,L]\to \mathbb{R}$ would be



$$F_{[-L,L]}[f](x) = \dfrac{a_0}{2} + \sum_{n=1}^\infty a_n \cos \left(\dfrac{n\pi x}{L}\right) + b_n \sin \left(\dfrac{n\pi x}{L}\right)$$




with the new coefficients



$$a_0 = \dfrac{1}{L}\int_{-L}^{L} f(x)dx, \qquad a_n = \dfrac{1}{L}\int_{-L}^{L} f(x)\cos \left(\dfrac{n\pi x}{L}\right) dx, \qquad b_n=\dfrac{1}{L}\int_{-L}^{L}f(x)\sin \left(\dfrac{n\pi x}{L}\right)dx.$$



I didn't understand, however, how to get to this. There are also some exercises which ask one to compute the series of a function $f : [0,2\pi]\to \mathbb{R}$, for example.



In that case, given a general interval $[a,b]$ and $f : [a,b]\to \mathbb{R}$, what is the Fourier series of $f$, and how does it relate to the usual Fourier series defined on $[-\pi,\pi]$?
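A sketch of the derivation (my addition): rescale the general interval onto $[-\pi,\pi]$. With $c=\frac{a+b}{2}$, $L=\frac{b-a}{2}$ and $g(y)=f\left(c+\frac{L}{\pi}y\right)$ for $y\in[-\pi,\pi]$, expand $g$ in the usual series and substitute back $y=\frac{\pi(x-c)}{L}$, which gives

$$F_{[a,b]}[f](x) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos\frac{n\pi (x-c)}{L} + b_n \sin\frac{n\pi (x-c)}{L},\qquad a_n = \frac1L\int_a^b f(x)\cos\frac{n\pi (x-c)}{L}\,dx,\quad b_n = \frac1L\int_a^b f(x)\sin\frac{n\pi (x-c)}{L}\,dx.$$

Taking $a=-L$, $b=L$ (so $c=0$) recovers the formulas above, and $[0,2\pi]$ corresponds to $L=\pi$, $c=\pi$.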

special functions - Summation of factorial products



Can someone give me a hint on how to evaluate the following expression:



$$\sum_{r=0}^{n} \frac{(k-2r)!}{(n-r)!(2r)!}, \ \ \ \text{where }k > n$$



or equivalently (up to a multiplicative constant, not sure of the usefulness of this formulation):



$$\sum_{r=0}^{n} \binom{n}{r} \frac{(k-2r)!}{2^r(2r-1)!!}$$




I have been trying to evaluate it for a while but made no progress, so I would appreciate any help you can give me.


Answer



Let's start with a hypergeometric function and modify it so as to reach closer to your sum expression.



Hypergeometric function is defined for $|z|<1$ by the power series,



$\,_2F_1(a,b;c;z) = \sum\limits_{r=0}^{r=\infty}\frac{(a)_r (b)_r}{(c)_r} \frac{z^r}{r!}$



which is undefined if $c$ is a nonpositive integer, and where $(q)_n$ is the (rising) Pochhammer symbol.




So, first we will choose the value of $c$, which as per your sum expression should probably be $\frac{1}{2}$. Then, $(c)_r$ will become



$(\frac{1}{2})_r$
$ = \frac{1}{2}(\frac{1}{2}+1)...(\frac{1}{2}+r-2)(\frac{1}{2}+r-1)$



$ = \frac{1}{2^r} 1(2+1)...(2(r-2)+1)(2(r-1)+1)$



$ = \frac{1}{2^r} 1(2+1)...(2r-3)(2r-1)$



$ = \frac{1}{2^r} \frac{1(2)(2+1)...(2r-4)(2r-3)(2r-2)(2r-1)}{(2)(4)...(2r-4)(2r-2)}$




$ = \frac{1}{2^r} \frac{(2r-1)!}{2^{r-1}\, (r-1)!}$



$ = \frac{1}{2^r} \frac{(2r)!}{2^{r}\, r!}$



$ = \frac{1}{2^{2r}} \frac{(2r)!}{r!}$



Replacing $(c)_r = (\frac{1}{2})_r$ in the power series we get,



$\,_2F_1(a,b;\frac{1}{2};z) = \sum\limits_{r=0}^{r=\infty}\frac{(a)_r (b)_r}{(2r)!} 2^{2r}z^r$




Now, let's replace $a$ by $-n$. Then, $(a)_r$ will become



$(-n)_r=(-n)(-n+1)...(-n+r-1)$



$=(-1)^r n(n-1)...(n-(r-1))$ (which becomes $0$ if $r>n$)



$=(-1)^r \frac{n!}{(n-r)!}$



Replacing $(a)_r = (-n)_r$ in the power series and changing the limits appropriately, we get,




$\,_2F_1(-n,b;\frac{1}{2};z) = \sum\limits_{r=0}^{r=n}\frac{(-1)^r \, n! \, (b)_r}{(n-r)! (2r)!} 2^{2r}z^r = n! \, \sum\limits_{r=0}^{r=n}\frac{(b)_r}{(n-r)! (2r)!} 2^{2r}(-z)^r$



Now, I am stuck at this point. Will edit soon once solved.


Thursday 25 July 2013

real analysis - Partial sums of exponential series



What is known about $f(k)=\sum_{n=0}^{k-1} \frac{k^n}{n!}$ for large $k$?



Obviously it is a partial sum of the series for $e^k$ -- but this partial sum doesn't come close to $e^k$ itself because we're cutting off the series right at the largest terms. In the full series, the $(k+i-1)$th term is always at least as large as the $(k-i)$th term for $1\le i\le k$, so $f(k)< e^k/2$. Can we estimate more precisely how much smaller than $e^k$ the function is?



It would look very nice and pleasing if, say, $f(k)\sim e^{k-1}$ for large $k$, but I have no real evidence for that hypothesis.




(Inspired by this question and my answer thereto).


Answer



This appears as problem #96 in Donald J. Newman's excellent book: A Problem Seminar.



The problem statement there is:




Show that




$$ 1 + \frac{n}{1!} + \frac{n^2}{2!} + \dots + \frac{n^n}{n!} \sim
\frac{e^n}{2}$$




Where $a_n \sim b_n$ mean $\lim \frac{a_n}{b_n} = 1$.



Thus we can estimate your sum (I have swapped $n$ and $k$) as



$$ 1 + \frac{n}{1!} + \frac{n^2}{2!} + \dots + \frac{n^{n-1}}{(n-1)!} \sim
\frac{e^n}{2}$$




as by Stirling's formula, $\dfrac{n^n}{n!e^n} \to 0$.



The solution in the book proceeds as follows:



The remainder term for the Taylor series of a function $f$ is



$$ R_n(x) = \int_{0}^{x} \frac{(x-t)^n}{n!} f^{n+1}(t) \ \text{d}t$$



which for our purposes, comes out as




$$\int_{0}^{n} \frac{(n-t)^n}{n!} e^t \ \text{d}t$$



Making the substitution $n-t = x$ gives us the integral



$$ \int_{0}^{n} \frac{x^n}{n!} e^{-x} \ \text{d}x$$



In an earlier problem (#94), he shows that



$$\int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \sim \sqrt{\frac{\pi n}{2}}$$




which using the substitution $n+x = t$ gives



$$ \int_{n}^{\infty} t^n e^{-t} \ \text{d}t \sim \frac{n^n}{e^n} \sqrt{\frac{\pi n}{2}}$$



Using $\int_{0}^{\infty} x^n e^{-x}\ \text{d}x = n!$ and Stirling's formula now gives the result.



To prove that



$$\int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \sim \sqrt{\frac{\pi n}{2}}$$




He first makes the substitution $x = \sqrt{n} t$ to obtain



$$ \int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \ = \sqrt{n} \int_{0}^{\infty} \left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t} \ \text{d}t$$



Now $$\left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t} \le (1+t)e^{-t}$$ and thus by the dominated convergence theorem,



$$ \lim_{n\to \infty} \frac{1}{\sqrt{n}} \int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x $$



$$= \int_{0}^{\infty} \left(\lim_{n \to \infty}\left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t}\right) \ \text{d}t$$




$$ = \int_{0}^{\infty} e^{-t^2/2} \ \text{d}t = \sqrt{\frac{\pi}{2}} $$
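A numeric illustration of the $\sim e^n/2$ statement (my addition; the ratio approaches $1/2$ slowly, with an error on the order of $1/\sqrt{k}$ coming from the terms near the cutoff):

    from mpmath import mp, mpf, exp, factorial

    mp.dps = 50
    for k in [10, 100, 1000]:
        s = sum(mpf(k)**n / factorial(n) for n in range(k))   # f(k) from the question
        print(k, s / exp(k))   # tends to 0.5 as k grows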


real analysis - Summary of my understanding of sequences and series' convergence and divergence?




I'm trying to summarise my understanding of infinite sequences, series, and their relationships with respect to convergence at the fundamental level. Here is what I know. How much of this is correct?



First off, here's a table of the notations that I use, and their corresponding meaning.



(notation table omitted: it was an image in the original post; $\lbrace a_n \rbrace$ denotes a sequence, $s_n$ its $n$th partial sum, and $L_a$, $L_s$ the corresponding limits)



My understanding is that:





  • The sequence $\lbrace a_n \rbrace _{n=0}^{\infty}$ converges if
    $$\lim\limits_{n\to\infty}a_n=L_{a}.$$


  • The infinite series $\sum\limits_{n=0}^{\infty}a_n$ converges if its
    sequence of partial sums, $\lbrace s_n \rbrace _{n=0}^{\infty}$, has
    a limit, i.e.$$\lim\limits_{n\to\infty}s_n=L_{s}.$$


  • If the infinite series $\sum\limits_{n=0}^{\infty}a_n$ converges,
    then the limit of the sequence $\lbrace a_n \rbrace _{n=0}^{\infty}$
    is $0$, i.e. $$\sum\limits_{n=0}^{\infty}a_n \: converges \rightarrow
    \lim\limits_{n\to\infty}a_n=0.$$


  • The divergence test:




    If the limit of the sequence $\lbrace a_n \rbrace_{n=0}^{\infty}$ is NOT $0$ or does not exist, then the infinite series diverges, i.e. $$\lim\limits_{n\to\infty}a_n\neq0 \rightarrow \sum\limits_{n=0}^{\infty}a_n \: diverges$$




Would seriously appreciate it if anyone could verify whether the above is accurate or incorrect in any way.



EDIT: I've modified the two limits notation that were mentioned in the comments and answers below, as well as adding the additional condition (limit does not exist or does not equal zero) for the divergence test. I appreciate all the answers/comments.


Answer



Everything is correct, though the notation $\lim\limits_{n\to\infty}a_n=L_{a_n}$ is nonstandard. Perhaps an improvement would be to write it as $\lim\limits_{n\to\infty}a_n=L_{a}.$ Normally just an $L$ will suffice but if you're working with multiple sequences (such as $a_n, b_n, c_n$) then $L_a$ is a good notation for the limit of the sequence $a$.




The reason why $L_{a_n}$ is not so good is because $n$ is just a free variable that has no importance whatsoever; if you write $a_n$ or $a_k$ it's the same thing. What matters is the sequence whose name is "$a$".



Also note that your last two statements are trivially equivalent; they are contrapositives of each other. In general "If $p$ then $q$" is stating the same thing as "If not $q$ then not $p$".



Therefore, just from knowing




$$\sum\limits_{n=0}^{\infty}a_n \: converges \rightarrow
\lim\limits_{n\to\infty}a_n=0.$$





you can deduce




$not (\lim\limits_{n\to\infty}a_n=0 ) \rightarrow
not (\sum\limits_{n=0}^{\infty}a_n \: converges) $




or in other words





$\lim\limits_{n\to\infty}a_n \not =0 \text{ or the limit doesn't exist} \rightarrow \sum\limits_{n=0}^{\infty}a_n \: diverges$



soft question - Book recommendations for high school algebra for concepts and hard problems



I am looking for a book recommendations for learning algebra for high school.



Usually my exams (national-level competitions) may even ask some things that are a bit beyond the syllabus, and have really hard problems. So I want a book that goes over the theory in some detail and also contains hard problems and maybe a few tricks.




I am looking for a book in algebra that goes over topics like:



[This is the prescribed syllabus]




Algebra



Algebra of complex numbers, addition, multiplication, conjugation, polar representation, properties of modulus and principal argument, triangle inequality, cube roots of unity, geometric interpretations.



Quadratic equations with real coefficients, relations between roots and coefficients, formation of quadratic equations with given roots, symmetric functions of roots.




Arithmetic, geometric and harmonic progressions, arithmetic, geometric and harmonic means, sums of finite arithmetic and geometric progressions, infinite geometric series, sums of squares and cubes of the first n natural numbers.



Logarithms and their properties.



Permutations and combinations, binomial theorem for a positive integral index, properties of binomial coefficients




Please can someone help me ?? :)




Thank You


Answer



There exist relics from the past which go from the very beginning all the way to topics such as complex numbers in depth:

G. Chrystal, an elementary textbook, volumes 1 and 2.

Hall and Knight, elementary algebra, volumes 1 and 2 (though it's also named "elementary algebra for schools").

For an advanced standpoint, you can read B. D. Bunday and H. Mulholland, "Pure Mathematics for Advanced Level", though in my opinion you should read Chrystal's algebra first, for it is one of those books that never age and deals with many interesting topics (even series, interesting identities and so on), so you should do fine just with it. Since the book is quite old, you can find it online.


real analysis - If $f(x+y)=f(x)+f(y) ,forall;x,yinBbb{R}$, then if $f$ is continuous at $0$, then it is continuous on $Bbb{R}.$




I know that this question has been asked here before but I want to use a different approach. Here is the question.



A function $f:\Bbb{R}\to\Bbb{R}$ is such that
\begin{align} f(x+y)=f(x)+f(y) ,\;\;\forall\;x,y\in\Bbb{R}\qquad\qquad\qquad(1)\end{align}

I want to show that if $f$ is continuous at $0$, it is continuous on $\Bbb{R}.$



MY WORK



Since $(1)$ holds for all $x\in \Bbb{R},$ we let \begin{align} x=x-y+y\end{align}
Then,
\begin{align} f(x-y+y)=f(x-y)+f(y)\end{align}
\begin{align} f(x-y)=f(x)-f(y)\end{align}
Let $x_0\in \Bbb{R}, \;\epsilon>0$ and $y=x-x_0,\;\;\forall\,x\in\Bbb{R}.$ Then,
\begin{align} f(x-(x-x_0))=f(x)-f(x-x_0)\end{align}

\begin{align} f(x_0)=f(x)-f(x-x_0)\end{align}
\begin{align} f(y)=f(x-x_0)=f(x)-f(x_0)\end{align}



HINTS BY MY PDF:



Let $x_0\in \Bbb{R}, \;\epsilon>0$ and $y=x-x_0,\;\;\forall\,x\in\Bbb{R}.$ Then, show that \begin{align} \left|f(x_0)-f(x)\right|=\left|f(y)-f(0)\right|\end{align}
Using this equation and the continuity of $f$ at $0$, establish properly that
\begin{align}\left|f(y)-f(0)\right|<\epsilon,\end{align}
in some neighbourhood of $0$.




My problem is how to put this hint together to complete the proof. Please, I need assistance, thanks!


Answer



We want to show that



$$\forall \epsilon>0, \exists r>0:|x-y|<r \implies |f(x)-f(y)|<\epsilon.$$

But $f(x)-f(y)=f(x-y)$ because $f(y)+f(x-y)=f(y+(x-y))=f(x)$ as you have noticed.



Now, take $u=x-y$. By continuity at $0$, we can write:




$$\forall \epsilon>0, \exists r>0:|u-0|<r \implies |f(u)-f(0)|<\epsilon.$$

It's easy to see that $f(0)=0$, because $f(0)=f(0+0)=f(0)+f(0)$. Hence



$$\forall \epsilon>0, \exists r>0:|(x-y)-0|<r \implies |f(x-y)-f(0)|<\epsilon,$$
that is,
$$\forall \epsilon>0, \exists r>0:|x-y|<r \implies |f(x)-f(y)|<\epsilon.$$
Hence, $f$ is continuous at any $y \in \mathbb{R}$.


Wednesday 24 July 2013

infinity - Balls and vase $-$ A paradox?

Question



I have infinitely many balls and a large enough vase. I define an action to be "put ten balls into the vase, and take one out". Now, I perform one action at 11:59, another 30 seconds later, another 15 seconds after that, then 7.5 seconds, 3.75 seconds, and so on.



What is the number of balls in the vase at 12:00?



My attempt




It seems like it should be infinite (?), but consider the following case:



Number the balls by the positive integers. During the first action, I put balls no. 1-10 in and take ball no. 1 out, and during the $n^{\text{th}}$ action I take ball no. $n$ out.



In this way, by the time it is noon, every ball must have been taken out of the vase. So (?) the number of balls in the vase is



Zero???



My first question: if I take the balls out at random, what will the result be at noon? (I think this may need some probability methods, with which I'm not familiar enough.)




Second one: is it actually a paradox?



Thanks in advance anyway.
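
A short simulation makes the labelled scheme concrete (a minimal sketch in Python; the cutoff N is arbitrary, since no computer runs infinitely many steps): after $N$ actions the vase holds the $9N$ balls numbered $N+1$ through $10N$, so any fixed ball $k$ is gone by step $k$, even though the count grows without bound.

```python
# Sketch: N actions of "put balls 10k-9..10k in, take ball k out".
N = 1000
vase = set()
for k in range(1, N + 1):
    vase.update(range(10 * k - 9, 10 * k + 1))  # put ten balls in
    vase.remove(k)                              # take ball no. k out

print(len(vase))  # 9000: the count grows like 9N ...
print(min(vase))  # 1001: ... yet every ball k <= N has already left
```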

integration - What does it mean the notation $int{Rleft( cos{x}, sin{x} right)mathrm{d}x} $

Sometimes I find this notation and I get confused:
$$\int{R\left( \cos{x}, \sin{x} \right)\mathrm{d}x} $$



Does it mean a rational function, or taking rational operations between $\cos{x}$ and $\sin{x}$?




Can you explain please?



Update: I think the question was not well understood.
Here is an example (maybe it is a lemma or a theorem):




All the integrals of the form $\int{R\left( \cos{x}, \sin{x}
\right)\mathrm{d}x} $ can be evaluated using the substitution
$u=\tan{\dfrac{x}{2}} $.





I think that $R$ here does not stand for a rational function but for taking rational operations (addition, subtraction, multiplication, division) between $\cos{x}$ and $\sin{x}$.



Update: I did not notice that $R$ is a rational function of two variables, and that means exactly that we are taking rational operations.
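
For the record, with $u=\tan\frac{x}{2}$ one has $\sin x=\frac{2u}{1+u^2}$, $\cos x=\frac{1-u^2}{1+u^2}$ and $dx=\frac{2\,du}{1+u^2}$, so $R(\cos x,\sin x)\,dx$ becomes a rational function of $u$. A small check on the sample integrand $R(c,s)=1/s$ (a minimal sketch, assuming SymPy is available; the choice of integrand is just an illustration):

```python
import sympy as sp

x, u = sp.symbols('x u', positive=True)

# Direct evaluation of the sample integral of 1/sin(x):
direct = sp.integrate(1 / sp.sin(x), x)

# Weierstrass substitution: sin x = 2u/(1+u^2), dx = 2 du/(1+u^2),
# which turns the integrand into the rational function 1/u.
integrand_u = sp.simplify((1 / (2*u / (1 + u**2))) * (2 / (1 + u**2)))
via_sub = sp.integrate(integrand_u, u)

print(integrand_u)  # 1/u
print(via_sub)      # log(u), i.e. log(tan(x/2)) after substituting back
print(direct)       # an equivalent antiderivative written via cos(x)
```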

Many other solutions of the Cauchy's Functional Equation



By reading the Cauchy's Functional Equations on the Wiki, it is said that




On the other hand, if no further conditions are imposed on f, then (assuming the axiom of choice) there are infinitely many other functions that satisfy the equation. This was proved in 1905 by Georg Hamel using Hamel bases. Such functions are sometimes called Hamel functions.




Could anyone give a more explicit explanation of these many other solutions?




Besides the trivial solution of the form $f(x)=C x$, where $C$ is a constant, and the solution above constructed by the Hamel Basis, are there any more solutions existing?


Answer



All solutions are "Hamel basis" solutions. Any solution of the functional equation is $\mathbb{Q}$-linear. Let $H$ be a Hamel basis. Given a solution $f$ of the functional equation, let $g(b)=f(b)$ for every $b\in H$, and extend by $\mathbb{Q}$-linearity. Then $f(x)=g(x)$ for all $x$.



There are lots of solutions because a Hamel basis has the cardinality $c$ of the continuum. A solution can assign arbitrary values to elements of the basis, and be extended to $\mathbb{R}$ by $\mathbb{Q}$-linearity. So there are $c^c$ solutions. There are only $c$ linear solutions, so "most" solutions of the functional equation are non-linear.


radicals - Assuming convergence of the following series, find the value of $sqrt{6+sqrt{6+sqrt{6+...}}}$




Assuming convergence of the following series, find the value of $\sqrt{6+\sqrt{6+\sqrt{6+...}}}$




I was advised to proceed with this problem through substitution, but that does not seem to help unless I am substituting the wrong parts. If I substitute the $6$, well, then I am just stuck as above.



Any ideas on how to proceed? Also, what is the purpose of stating that it is convergent?


Answer



Let $x=\sqrt{6+\sqrt{6+\sqrt{6+...}}}$, then observe that $x=\sqrt{6+x}$. Squaring both sides yields
$$
x^2=x+6,
$$
which is a quadratic equation. Solve it as usual and keep the root that makes sense: here $x=3$, since the other root, $x=-2$, is impossible for $x\geq 0$.
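
Numerically, the fixed-point iteration settles quickly on the positive root $x=3$ (a minimal sketch in Python):

```python
import math

x = math.sqrt(6)          # first term: sqrt(6)
for _ in range(20):
    x = math.sqrt(6 + x)  # next term: sqrt(6 + previous term)
print(x)                  # ~3.0, the positive root of x^2 = x + 6
```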


probability - Expectation of Independent Variables Equals Zero?



Given $n$ independent random variables, $X_1, X_2, ..., X_n$ , each having a normal distribution, why is it that the following expectation holds?



$$E[(X_i - \mu)(X_j - \mu)] = 0$$



where $i \neq j$



I saw this statement in a proof explaining why we divide by $n-1$ when computing the sample variance and, of course, there was no explanation. An intuitive explanation and/or a link to more detailed information about why this is true would be greatly appreciated.



Answer



Since the random variables are independent, \begin{align}\operatorname{E}[(X_i-\mu)(X_j-\mu)]&=\operatorname{E}[X_i-\mu] \cdot \operatorname{E}[X_j-\mu]\\
&= (\operatorname{E}[X_i] - \mu)(\operatorname{E}[X_j]-\mu) \\
&= (\mu-\mu)(\mu-\mu)\\
&=0 \cdot 0\\
& = 0.\end{align}
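
A Monte Carlo sanity check (a minimal sketch, assuming NumPy is available; the mean, standard deviation and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, size = 5.0, 2.0, 1_000_000
x_i = rng.normal(mu, sigma, size)  # independent normal draws for X_i
x_j = rng.normal(mu, sigma, size)  # independent normal draws for X_j

# Sample average of (X_i - mu)(X_j - mu): close to 0 by independence.
print(np.mean((x_i - mu) * (x_j - mu)))  # ~0, up to Monte Carlo noise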


Tuesday 23 July 2013

calculus - Use L'Hôpital's rule to solve $lim_{xto 0^{+}}sin(x)ln(x)$



Use L'Hôpital's rule to solve



$$\lim_{x\to 0^{+}}\sin(x)\ln(x)$$




My attempt:



$$\lim_{x\to 0^{+}}\sin(x)\ln(x) = \lim_{x\to 0^{+}} \frac{\ln(x)}{\sin(x)}$$



$\frac{\ln(0)}{\sin(0)}$ is in the form $\frac{-\infty}{0}$, it is indeterminate, and as such, using L'Hôpital's rule:



$$\lim_{x\to 0^{+}} \frac{\ln(x)}{\sin(x)} = \lim_{x\to0^{+}}\frac{1}{x\cos(x)}$$



I would have applied L'Hôpital's rule again, but to my horror, I realise that $\frac{1}{0}$ is not an indeterminate form, according to Wikipedia.




I realized that my reasoning, while it could let me get the correct answer, is wrong! How do you solve this question now?



EDIT: Is $\frac{-\infty}{0}$ indeterminate as well? I couldn't find it in Wikipedia. If it is not indeterminate, I couldn't use L'Hôpital's rule too!


Answer



It should be rewritten as: $$\lim_{x \to 0^+} \frac{\ln (x)}{\csc (x)}$$



instead of $$ \frac{\ln(x)}{\sin(x)}. $$
Now use L'Hôpital's rule.




Also note that the form $\dfrac{\infty}{0}$ isn't indeterminate; it already tends to $\infty$. The problem in your solution is that you accidentally treated $\dfrac{1}{\sin x}$ as if it were $\sin x$ when moving it into the denominator: $$\sin(x)\ln(x)=\frac{\ln(x)}{1/\sin(x)}\neq\frac{\ln(x)}{\sin(x)}.$$
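
After one application of L'Hôpital's rule to $\frac{\ln x}{\csc x}$ the limit becomes $\lim_{x\to 0^+}\left(-\frac{\sin x}{x}\cdot\tan x\right) = -1\cdot 0 = 0$. A quick check (a minimal sketch, assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.limit(sp.sin(x) * sp.log(x), x, 0, '+'))  # 0

# Numeric evidence for the same conclusion:
f = sp.lambdify(x, sp.sin(x) * sp.log(x))
for t in (1e-2, 1e-4, 1e-6):
    print(t, f(t))  # the values shrink toward 0
```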


proof writing - Proving sequence statement using mathematical induction, $d_n = frac{2}{n!}$



I'm stuck on this homework problem. I must prove the statement using mathematical induction




Given: A sequence $d_1, d_2, d_3, \ldots$ is defined by letting $d_1 = 2$ and, for all integers $k \ge 2$,
$$
d_k = \frac{d_{k-1}}{k}
$$



Show that for all integers $n \ge 1$ , $$d_n = \frac{2}{n!}$$







Here's my work:



Proof (by mathematical induction). For the given statement, let the property $p(n)$ be the equation:



$$
d_n = \frac{2}{n!}
$$



Show that $P(1)$ is true:
The left hand side of $P(1)$ is $d_1$, which equals $2$ by definition of the sequence.

The right hand side is:



$$ \frac{2}{(1)!} =2 $$



Show that for all integers $k \geq 1$, if $P(k)$ is true, then $P(k+1)$ is true.
Let $k$ be any integer with $k \geq 1$, and suppose $P(k)$ is true. That is, suppose (this is the inductive hypothesis):



$$ d_{k} = \frac{2}{k!} $$



We must show that $P(k+1)$ is true. That is, we must show that:




$$ d_{k+1} = \frac{2}{(k+1)!} $$



(I thought I was good until here.)



But the left hand side of $P(k+1)$ is:



$$ d_{k+1} = \frac{d_k}{k+1} $$



By inductive hypothesis:




$$ d_{k+1} = \frac{(\frac{2}{2!})}{k+1} $$



$$ d_{k+1} = \frac{2}{2!}\frac{1}{k+1} $$



but that doesn't seem to equal what I needed to prove: $ d_n = \frac{2}{n!}$


Answer



The following is not true: $$d_{k+1} = \frac{(\frac{2}{2!})}{k+1},$$ since $d_k=\frac{2}{k!}$, not $\frac{2}{2!}$. You actually have $$d_{k+1} = \frac{(\frac{2}{k!})}{k+1}=\frac{2}{k!\,(k+1)}=\frac{2}{(k+1)!}.$$
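
A quick numeric check of the recurrence against the closed form (a minimal sketch in Python):

```python
from math import factorial

d = 2.0                                       # d_1 = 2
for k in range(2, 11):
    d = d / k                                 # recurrence d_k = d_{k-1} / k
    assert abs(d - 2 / factorial(k)) < 1e-12  # closed form d_k = 2/k!
print("recurrence matches 2/n! for n = 2..10")
```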


Monday 22 July 2013

calculus - Integrating $frac{x^k }{1+cosh(x)}$



In the course of solving a certain problem, I've had to evaluate integrals of the form:



$$\int_0^\infty \frac{x^k}{1+\cosh(x)} \mathrm{d}x $$



for several values of $k$. I've noticed that, for $k$ a positive integer other than $1$, the result is seemingly always a dyadic rational multiple of $\zeta(k)$, which is not particularly surprising given some of the identities for $\zeta$ ($k=7$ is the first case where the multiple is not an integer).



However, I've been unable to find a nice way to evaluate this integral. I'm reasonably sure there's a way to change this expression into $\int \frac{x^{k-1}}{e^x+1} \mathrm{d}x$, but all the things I tried didn't work. Integration by parts also got too messy quickly, and Mathematica couldn't solve it (though it could calculate for a particular value of k very easily).




So I'm looking for a simple way to evaluate the above integral.


Answer



Just note that
$$ \frac{1}{1 + \cosh x} = \frac{2e^{-x}}{(1 + e^{-x})^2} = 2 \frac{d}{dx} \frac{1}{1 + e^{-x}} = 2 \sum_{n = 1}^{\infty} (-1)^{n-1} n e^{-n x}.$$
Thus we have
$$ \begin{eqnarray*}\int_{0}^{\infty} \frac{x^k}{1 + \cosh x} \, dx
& = & 2 \sum_{n = 1}^{\infty} (-1)^{n-1} n \int_{0}^{\infty} x^{k} e^{-n x} \, dx \\
& = & 2 \sum_{n = 1}^{\infty} (-1)^{n-1} \frac{\Gamma(k+1)}{n^k} \\
& = & 2 (1 - 2^{1-k}) \zeta(k) \Gamma(k+1).
\end{eqnarray*}$$
This formula works for all $k > -1$, where we understand that the Dirichlet eta function $\eta(s) = (1 - 2^{1-s})\zeta(s)$ is defined, by analytic continuation, for all $s \in \mathbb{C}$.
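
A numerical cross-check of the closed form for, say, $k=3$ (a minimal sketch, assuming SciPy is available; the integrand reuses the overflow-safe rewriting $\frac{1}{1+\cosh x}=\frac{2e^{-x}}{(1+e^{-x})^2}$ from above):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, zeta

k = 3
# Integrand written via 2 e^{-x}/(1+e^{-x})^2 to avoid cosh overflow:
numeric, _ = quad(lambda x: x**k * 2*np.exp(-x) / (1 + np.exp(-x))**2,
                  0, np.inf)
closed = 2 * (1 - 2.0**(1 - k)) * zeta(k) * gamma(k + 1)
print(numeric, closed)  # both ~ 10.8185
```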


calculus - Finding real and imaginary part of exponential function

Can someone explain to me how I find the real and the imaginary part of $e^{\theta i}$?



I'm learning complex numbers but I don't quite understand how $e$ is intertwined in all this.
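
By Euler's formula, $e^{i\theta}=\cos\theta+i\sin\theta$, so the real part is $\cos\theta$ and the imaginary part is $\sin\theta$. A quick numerical confirmation (a minimal sketch using Python's standard library; the angle is arbitrary):

```python
import cmath
import math

theta = 0.7                     # any angle, in radians
z = cmath.exp(1j * theta)       # e^{i*theta}
print(z.real, math.cos(theta))  # equal: the real part is cos(theta)
print(z.imag, math.sin(theta))  # equal: the imaginary part is sin(theta)
```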

algebra precalculus - Prove that $frac{2sin(a)+sec(a)}{1+tan(a)}$ = $frac{1+tan(a)}{sec(a)}$





Prove that $\frac{2\sin(a)+\sec(a)}{1+\tan(a)}$ = $\frac{1+\tan(a)}{\sec(a)}$






My attempt using the LHS



$$\frac{2\sin(a)+\sec(a)}{1+\tan(a)}$$



$$ \frac{2\sin(a)+\frac{1}{\cos(a)}}{1+\frac{\sin(a)}{\cos(a)}} $$



$$ \frac{\frac{2\sin(a)\cos(a)+1}{\cos(a)}}{\frac{\cos(a)+\sin(a)}{\cos(a)}} $$



$$ \frac{2\sin(a)\cos(a)+1}{\cos(a)} \cdot \frac{\cos(a)}{\cos(a)+\sin(a)} $$




$$ \frac{2\sin(a)\cos(a)+1}{\cos(a)+\sin(a)} $$



Now I am stuck...


Answer



Hint: What does $(\cos a+ \sin a)^2$ simplify to?
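
For a machine check (a minimal sketch, assuming SymPy is available), both sides can be compared directly; each simplifies to $\cos a+\sin a$:

```python
import sympy as sp

a = sp.symbols('a')
lhs = (2*sp.sin(a) + 1/sp.cos(a)) / (1 + sp.sin(a)/sp.cos(a))
rhs = (1 + sp.sin(a)/sp.cos(a)) * sp.cos(a)

print(sp.simplify(rhs))        # sin(a) + cos(a)
print(sp.simplify(lhs - rhs))  # 0, so the identity holds
```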


calculus - Evaluating $int xsin^{-1}x dx$

I was integrating $$\int x\sin^{-1}x dx.$$
After applying integration by parts and some rearrangement I got stuck at $$\int \sqrt{1-x^2}dx.$$
Now I have two questions:





  1. Please suggest how to proceed from where I am stuck;


  2. Please provide an alternative way to solve the original question.






Please help!!!
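
For what it's worth, the standard way past $\int \sqrt{1-x^2}\,dx$ is the substitution $x=\sin\theta$, which gives $\frac{1}{2}\left(x\sqrt{1-x^2}+\arcsin x\right)+C$. A symbolic cross-check of the whole problem (a minimal sketch, assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(x * sp.asin(x), x)
print(F)  # x**2*asin(x)/2 + x*sqrt(1 - x**2)/4 - asin(x)/4 (or equivalent)

# Differentiating back recovers the integrand, confirming the result:
print(sp.simplify(sp.diff(F, x) - x * sp.asin(x)))  # 0
```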

sequences and series - The sum $1+frac{1}{3}+frac{1}{5}+frac{1}{7}+cdots-(frac{1}{2}+frac{1}{4}+frac{1}{6}+cdots)$ does not exist.

What argument(s) can I use to prove that



$$1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\cdots-(\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\cdots)$$



does not exist.




The question was:



Find a rearrangement of $\sum\frac{(-1)^{n-1}}{n}$ for which the new sum does not exist (not even as $+\infty$ or $-\infty$).
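
Since the positive part $1+\frac{1}{3}+\frac{1}{5}+\cdots$ and the negative part $\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\cdots$ both diverge, blocks of one sign can push the partial sums past any threshold, forever. The sketch below (Python; the thresholds $1$ and $0$ are arbitrary) builds such a rearrangement: its partial sums climb above $1$, then drop below $0$, endlessly, so they have no limit, finite or infinite.

```python
# Rearrange sum (-1)^{n-1}/n: positive terms until the partial sum exceeds 1,
# then negative terms until it drops below 0, and so on forever.
s, odd, even = 0.0, 1, 2
turning_points = []
for _ in range(5):    # five full up/down swings
    while s <= 1:     # use 1, 1/3, 1/5, ... (each term exactly once)
        s += 1.0 / odd
        odd += 2
    turning_points.append(round(s, 3))
    while s >= 0:     # use -1/2, -1/4, ... (each term exactly once)
        s -= 1.0 / even
        even += 2
    turning_points.append(round(s, 3))
print(turning_points)  # alternates just above 1 and just below 0: no limit
```

Both inner loops terminate precisely because each one-signed tail diverges, which is the heart of the argument.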

geometry - How is it that this shape can converge to what looks like a triangle but has a different perimeter?

I had this strange notion some time ago, and I recently wrote a blog post about it, as a mere curiosity. I don't really consider it a "serious" mathematical question; but out of interest, I wondered if someone on this site could shed some light on what principle might be underlying the idea.



Basically, I envisioned a "pseudo-triangle" consisting of two straight edges and one jagged "edge" (not really an edge, since it's jagged, but I'm calling it that anyway):



Pseudo-triangle with a 4-step jagged edge




The above shape has 4 steps, its area is 10, and its perimeter is 16. Now let's increase the number of steps to 8:



Pseudo-triangle with an 8-step jagged edge



This shape has an area of 9 and a perimeter of 16. Now, without me having to write out a formal proof, I think it's pretty clear that as the number of steps increases, the area will approach 8 while the perimeter will remain constant at 16. And the resulting shape will look like this:



Pseudo-triangle with N steps (approaching infinity) along its jagged edge



Ultimately, there's nothing really "mysterious" about this; the shape above is not a triangle, and so it shouldn't be surprising that it doesn't have quite the same properties as a triangle. However, it does approach the same area as an analogous triangle; and, more to the point, it just seems odd.




Is there a concept in mathematics that describes this phenomenon (for lack of a better word)? That is, the effect of some kind of mathematical entity (e.g., a shape) converging to what resembles another entity but differs from it in a critically important and counter-intuitive way (in this case, having a completely different perimeter)?



If it seems that I'm having trouble articulating this question, that's because I am. But hopefully someone out there can see what I'm getting at and shed some light on the issue for me.
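
This is usually called the staircase paradox: the staircases converge uniformly to the hypotenuse (and the areas converge too), but arc length is not continuous under uniform convergence, so the perimeters need not converge to the triangle's perimeter. A quick computation (a minimal sketch in Python, assuming legs of length 4, matching the pictures):

```python
def staircase(n):
    """Pseudo-triangle with legs 4 and 4 and an n-step jagged edge."""
    w = 4.0 / n                                     # width/height of one step
    area = sum(i * w * w for i in range(1, n + 1))  # columns of height i*w
    perimeter = 4 + 4 + (4 + 4)   # two straight edges + total rise + total run
    return area, perimeter

for n in (4, 8, 100, 10_000):
    print(n, staircase(n))  # area -> 8 like the triangle; perimeter stays 16
```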

Sunday 21 July 2013

If you roll two fair six-sided dice, what is the probability that the sum is 4 or higher?





If you roll two fair six-sided dice, what is the probability that the sum is $4$ or higher?




The answer is $\frac{33}{36}$ or $\frac{11}{12}$. I understand how to arrive at this answer. What I don't understand is why the answer isn't $\frac{9}{11}$? When summing the results after rolling two fair six sided dice, there are $11$ equally possible outcomes: $2$ through $12$. Two of these outcomes are below four, meaning $9$ are greater than or equal to four which is how I arrived at $\frac{9}{11}$. Can someone help explain why that is wrong?


Answer



It is wrong because these are not $11$ equally likely outcomes.



There is exactly one way to get the sum $2$ ($1+1=2$),

but there is more than one way to get $3$ ($1+2=3$ and $2+1=3$).
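
A brute-force count over the $36$ equally likely ordered pairs confirms the answer (a minimal sketch in Python):

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # all 36 ordered rolls
favorable = [p for p in outcomes if sum(p) >= 4]
print(len(favorable), "/", len(outcomes))        # 33 / 36
print(len(favorable) / len(outcomes))            # 0.9166... = 11/12
```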



real analysis - limit of $left( 1-frac{1}{n}right)^{n}$

limit of $$\left( 1-\frac{1}{n}\right)^{n}$$



is said to be $\frac{1}{e}$, but how do we actually prove it?




I'm trying to use squeeze theorem



$$\frac{1}{e}=\lim\limits_{n\to \infty}\left(1-\frac{1}{n+1}\right)^{n}>\lim\limits_{n\to \infty}\left( 1-\frac{1}{n} \right)^{n} > ??$$
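
While looking for a bound to complete the squeeze, a numerical check at least confirms the claimed value (a minimal sketch in Python):

```python
import math

for n in (10, 100, 10_000, 1_000_000):
    print(n, (1 - 1/n)**n)  # increases toward 1/e
print("1/e =", 1 / math.e)  # 0.36787944117...
```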

definition - Given real numbers: define integers?



I have only a basic understanding of mathematics, and I was wondering and could not find a satisfying answer to the following:




Integer numbers are just special cases (a subset) of real numbers. Imagine a world where you know only real numbers. How are integers defined using mathematical operations?




Knowing only the set of complex numbers $a + bi$, I could define real numbers as complex numbers where $b = 0$. Knowing only the set of real numbers, I would have no idea how to define the set of integer numbers.




While searching for an answer, most definitions of integer numbers talk about real numbers that don't have a fractional part in their notation. Although correct, this talks about notation, assumes that we know about integers already (the part left of the decimal separator), and it does not use any mathematical operations for the definition. Do we even know what integer numbers are, mathematically speaking?


Answer



There are several ways to interpret this question.



The naive way would be simply to give some sort of definition to the set of integers. This is not very difficult, we can recognize $1$ in the real numbers, because it is the unique number $x$ such that $r\cdot x=r$ for all $r\in\mathbb R$.



Now consider the set obtained by repeatedly adding $1$ to itself, or subtracting $1$ from itself. Namely $\{0,1,-1,1+1,-1-1,1+1+1,-1-1-1,\ldots\}$. One can show that this set is indeed the integers. Every integer is a finite sum of $1$'s or the additive inverse of such a sum.



One could also define, like Nameless did, what it means to be an inductive set, then define $\mathbb N$ as the least inductive set, and $\mathbb Z$ as the least set containing $\mathbb N$ and closed under addition and subtraction.







However one could also interpret this question by asking "Is the set $\mathbb Z$ first-order definable in $\mathbb R$ in the language of ordered fields?", namely if you live inside the real numbers, is there some formula in the language $0,1,+,\cdot,<$ such that only integers satisfy it?



The answer to this question is negative. The integers are not first-order definable in $\mathbb R$. This is a nontrivial result in model theory. But it is important to note it, because it is a perfectly valid interpretation of the question, and it results in a completely different answer than the above one.



In particular it helps understanding first-order definability vs. second-order definability, and internal- vs. external-definability.







I am adding some more to the answer because of a recent comment by the OP to the original question. I feel that some clarification is needed to my answer here.



First we need to understand what does "define" mean from a logic-oriented point of view. It means that there is a language such that the real numbers interpret the language in a coherent way, and there is a formula in one free variable which is true if and only if we plug an integer into it.



For example we cannot define $\mathbb R$ within $\mathbb C$ if we only have $0,1,+,\times$, but we can do that if we also have the conjugation map in our vocabulary - as shown in the question.



Naively speaking, when we approach mathematics we may think that everything is available to us, which is true to some extent. But when we want to talk about logic, and in particular first-order logic, then we need to first understand that only things within a particular language are available to us, and we cannot expect people to guess what this language is if we don't specify it.



This question did not specify the language, which makes it not unreasonable to think that we are talking about the real numbers in the language of ordered fields. So in our language we only have $0,1,+,\times,<$ (and equality, we always have equality). In this language we cannot define the integers within the real numbers with a first-order formula.




Ah, but what is a first-order formula? Well, generally in a formula we can have variables which are objects of our structure (in this case, real numbers) and we can have sets of real numbers and we can have sets of sets of real numbers and so on. First-order formulas allow us only to quantify over objects, that is real numbers. So variables in first-order logic are elements of the universe, which in our case means simply real numbers. Second-order logic would allow us to quantify over sets (of real numbers) as well, but not sets of sets of real numbers, and so on.



So for example, we can write a definition for $2$ using a first-order formula, e.g. $x=1+1$. There is a unique element which satisfies this property and this is $2$. And we can write the definition of an inductive set using a second-order formula, $0\in A\land\forall x(x\in A\rightarrow x+1\in A)$.



But as it turns out we cannot express the property of being an inductive set (and we certainly cannot express the property of being the minimal inductive set) in first-order logic when we restrict ourselves only to the real numbers as an ordered field. The proof, as I remarked, is not trivial.



The comment I referred to says:




@WillieWong I don't know the real number field: to me real numbers form a line. – Virtlink





Which gives yet another way to interpret this approach. We can consider the real numbers simply as an ordered set. We ignore the fact it is a field, and we simply look at the order.



But this language has even less expressive power than that of an ordered field; for example, we cannot even define addition and multiplication. In fact we can't even define $0$ and $1$ in this language. We only have the order to work with, and that's really not much to work with.



It is much easier to show that simply as an ordered set all the results about undefinability hold, I won't get into this but I'll just point out that definability is "immune to automorphisms", and $\mathbb R$ has plenty of automorphisms which preserve the order and move every element.



Beyond that one runs into philosophy of mathematics pretty quick. What are the real numbers? Are they sets? Are they any structure which interpret a particular language in a particular way? Do we construct the integers from the real numbers or do we construct the real numbers from the integers? (First we have to construct the rational numbers, of course.)




Those are interesting questions, but essentially moot if one simply wishes to talk about first-order definability in a particular language or another. But if one is approaching this in a "naive" way which allows higher order quantification, and the usage of any function we know about then the answer becomes quite simple (although it is possible to run into circularity if one is not careful enough).



I hope this explains, amongst other things, why I began my answer with "There are several ways to interpret this question". We simply can see the real numbers as different structure in different languages, and we may or may not allow formulas to use sets of real numbers. In each of these interpretation of the question we may have a different answer, and different reasons for this answer to be true!



(I should stop writing this monograph now; if you've read this far, you're a braver [wo]man than I am!)



But wait, there's more!




  1. True Definition of the Real Numbers


  2. FO-definability of the integers in (Q, +, <)

  3. What is definability in First-Order Logic?


calculus - Why is it legal to take the antiderivative of both sides of an equation?




First, I must apologize for a somewhat misleading title.



To save both your and my time, I will go straight to the point.



By definition, an indefinite integral, or a primitive, or an antiderivative of a (some condition) function $f(x)$ is any $F(x)$ such that $F'(x)=f(x)$. All well and good.



Because any other primitive can be written as $F(x)+C$ for some constant $C$ (and this requires a proof), if we were to denote by $\int f(x)dx$ an antiderivative of $f(x)$, then
\begin{equation} \int f(x)dx = F(x)+C. \end{equation}




Fine. But here is the part that every textbook seems to have no problem with, but bugs me greatly: often they say to integrate both sides of the following equation:
\begin{equation} f(x)=g(x),\end{equation}



to obtain
\begin{equation} \int f(x)dx = \int g(x)dx. \end{equation}



This looks like ABSOLUTE nonsense to me for the following reason: IF both sides of the previous equation are TRULY equal, then surely



\begin{equation} \int f(x)dx - \int g(x)dx =0. \end{equation}




But



\begin{equation} \int f(x)dx - \int g(x)dx =\int (f(x)-g(x))dx = \int 0dx, \end{equation}



which then equals $C$, any constant. Surely this is not necessarily 0!



So in short, this is my question: IS IT, STRICTLY SPEAKING, LEGAL, TO TAKE THE ANTIDERIVATIVE OF BOTH SIDES OF AN EQUATION?


Answer



The problem here might be the notation $\int f(x)\,dx$. Does it mean one primitive? All primitives? Something else?




If we for a while agree that $F$ is a primitive of $f$ on an interval $I$ and $G$ is a primitive of $g$ on the same interval $I$, and it holds that $f(x)=g(x)$ for all $x\in I$, then we can be sure that $F(x)=G(x)+C$ for all $x\in I$, where $C$ is some constant.
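
A concrete instance of the point about constants (a minimal sketch, assuming SymPy is available; $f=g=2x$ is an arbitrary example):

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
f = 2*x
F = sp.integrate(f, x) + C1  # one primitive of f: x**2 + C1
G = sp.integrate(f, x) + C2  # another primitive:  x**2 + C2
print(sp.simplify(F - G))    # C1 - C2: a constant, but not necessarily 0
```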


Saturday 20 July 2013

elementary set theory - Bijection between $mathbb{Z}timesmathbb{Z}timesdots$ and $mathbb{R}$

Is there a bijection between $\mathbb{Z}\times\mathbb{Z}\times\dots$ for countably infinitely many $\mathbb{Z}$'s and $\mathbb{R}$? That is, is $\mathbb{Z}\times\mathbb{Z}\times\dots$, repeated countably infinitely many times, uncountable?

integration - Not Riemman integrable derivaitive



Is there a differentiable function $F:[a,b]\to\mathbb{R}$ with $F' = f$ such that $f$ is not Riemann integrable on $[a,b]$, where $[a,b]$ is a bounded interval?






Motivation
Rudin page 152 Theorem 7.17: Suppose $\{f_n\}$ a sequence of functions, differentiable on $[a,b]$ and such that $\{f_n(x_0)\}$ converges for some point $x_0\in [a,b]$. If $\{f_n'\}$ converges uniformly on $[a,b]$ to $g$, then $\{f_n\}$ converges uniformly on $[a,b]$ to $f$ where $f' = g$.



In the remark, it says if the continuity of the function $f'_n$ is assumed in addition to the above hypothesis, then a much shorter proof can be based on fundamental theorem of calculus and the theorem that $\int_a^b f_n d\alpha\to\int_a^b f d\alpha$ if we have $f_n\to f$ uniformly.




I was thinking that we don't really need $f_n'$ to be continuous; we just need it to be Riemann integrable so we can apply the Fundamental Theorem of Calculus. From the remark I guess that there should be a derivative that is not integrable.






EDIT
Dominic Michaelis answered my question immediately. A further thought yields a more difficult question: is there a function as above with the derivative $f$ bounded on $[a,b]$?


Answer



Take $F:[-1,1] \to \mathbb{R}$ with
$$F(x)= \begin{cases}
x^2 \sin\left(\frac{1}{x^2}\right) & x \neq 0\\
0 & x=0
\end{cases}$$
The derivative is not a regulated function and is not bounded at $0$; I don't know a definition of Riemann integrability that allows those things.
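
Concretely, for $x\neq 0$, $$F'(x)=2x\sin\left(\frac{1}{x^2}\right)-\frac{2}{x}\cos\left(\frac{1}{x^2}\right),$$ and the $\frac{2}{x}$ term is unbounded near $0$, so $F'$ cannot be Riemann integrable on $[-1,1]$. Sampling along $x_k=1/\sqrt{2\pi k}$, where the cosine equals $1$, makes this visible (a minimal sketch in Python):

```python
import math

def F_prime(x):
    """F'(x) for x != 0, with F(x) = x^2 sin(1/x^2)."""
    return 2*x*math.sin(1/x**2) - (2/x)*math.cos(1/x**2)

# At x_k = 1/sqrt(2*pi*k): sin(1/x^2) = 0 and cos(1/x^2) = 1, so
# F'(x_k) = -2*sqrt(2*pi*k), which blows up as x_k -> 0.
for k in (1, 10, 100, 1000):
    x = 1 / math.sqrt(2 * math.pi * k)
    print(x, F_prime(x))
```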


real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How can I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...