Saturday 31 January 2015

real analysis - How discontinuous can a derivative be?



There is a well-known result in elementary analysis due to Darboux which says that if $f$ is a differentiable function, then $f'$ satisfies the intermediate value property. To my knowledge, not many "highly" discontinuous Darboux functions are known--the only one I am aware of being the Conway base 13 function--and few (none?) of these are derivatives of differentiable functions. In fact they generally cannot be, since an application of Baire's theorem gives that the set of continuity points of the derivative is a dense $G_\delta$.



Is it known how sharp that last result is? Are there known Darboux functions which are derivatives and are discontinuous on "large" sets in some appropriate sense?



Answer



What follows is taken (mostly) from more extensive discussions in the following sci.math posts:



http://groups.google.com/group/sci.math/msg/814be41b1ea8c024 [23 January 2000]



http://groups.google.com/group/sci.math/msg/3ea26975d010711f [6 November 2006]



http://groups.google.com/group/sci.math/msg/05dbc0ee4c69898e [20 December 2006]



Note: The term interval is restricted to nondegenerate intervals (i.e. intervals containing more than one point).




The continuity set of a derivative on an open interval $J$ is dense in $J.$ In fact, the continuity set has cardinality $c$ in every subinterval of $J.$ On the other hand, the discontinuity set $D$ of a derivative can have the following properties:




  1. $D$ can be dense in $\mathbb R$.


  2. $D$ can have cardinality $c$ in every interval.


  3. $D$ can have positive measure. (Hence, the function can fail to be Riemann integrable.)


  4. $D$ can have positive measure in every interval.


  5. $D$ can have full measure in every interval (i.e. measure zero complement).


  6. $D$ can have a Hausdorff dimension zero complement.



  7. $D$ can have an $h$-Hausdorff measure zero complement for any specified Hausdorff measure function $h.$




More precisely, a subset $D$ of $\mathbb R$ can be the discontinuity set for some derivative if and only if $D$ is an $F_{\sigma}$ first category (i.e. an $F_{\sigma}$ meager) subset of $\mathbb R.$



This characterization of the discontinuity set of a derivative can be found in the following references: Benedetto [1] (Chapter 1.3.2, Proposition 1.10, p. 30); Bruckner [2] (Chapter 3, Section 2, Theorem 2.1, p. 34); Bruckner/Leonard [3] (Theorem at bottom of p. 27); Goffman [5] (Chapter 9, Exercise 2.3, p. 120 states the result); Klippert/Williams [7].



Regarding this characterization of the discontinuity set of a derivative, Bruckner and Leonard [3] (bottom of p. 27) wrote the following in 1966: Although we imagine that this theorem is known, we have been unable to find a reference. I have found the result stated in Goffman's 1953 text [5], but nowhere else prior to 1966 (including Goffman's Ph.D. Dissertation).



Interestingly, in a certain sense most derivatives have the property that $D$ is large in all of the ways listed above (#1 through #7).




In 1977 Cliff Weil [8] published a proof that, in the space of derivatives with the sup norm, all but a first category set of such functions are discontinuous almost everywhere (in the sense of Lebesgue measure). When Weil's result is paired with the fact that derivatives (being Baire $1$ functions) are continuous almost everywhere in the sense of Baire category, we get the following:



(A) Every derivative is continuous at the Baire-typical point.



(B) The Baire-typical derivative is not continuous at the Lebesgue-typical point.



Note that Weil's result is stronger than simply saying that the Baire-typical derivative fails to be Riemann integrable (i.e. $D$ has positive Lebesgue measure), or even stronger than saying that the Baire-typical derivative fails to be Riemann integrable on every interval. Note also that, for each of these Baire-typical derivatives, $\{D, \; {\mathbb R} - D\}$ gives a partition of $\mathbb R$ into a first category set and a Lebesgue measure zero set.



In 1984 Bruckner/Petruska [4] (Theorem 2.4) strengthened Weil's result by proving the following: Given any finite Borel measure $\mu,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has $\mu$-measure zero.




In 1993 Kirchheim [6] strengthened Weil's result by proving the following: Given any Hausdorff measure function $h,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has Hausdorff $h$-measure zero.



[1] John J. Benedetto, Real Variable and Integration With Historical Notes, Mathematische Leitfäden. Stuttgart: B. G. Teubner, 1976, 278 pages. [MR 58 #28328; Zbl 336.26001]



[2] Andrew M. Bruckner, Differentiation of Real Functions, 2nd edition, CRM Monograph Series #5, American Mathematical Society, 1994, xii + 195 pages. [The first edition was published in 1978 as Springer-Verlag's Lecture Notes in Mathematics #659. The second edition is essentially unchanged from the first edition with the exception of a new chapter on recent developments (23 pages) and 94 additional bibliographic items.] [MR 94m:26001; Zbl 796.26001]



[3] Andrew M. Bruckner and John L. Leonard, Derivatives, American Mathematical Monthly 73 #4 (April 1966) [Part II: Papers in Analysis, Herbert Ellsworth Slaught Memorial Papers #11], 24-56. [MR 33 #5797; Zbl 138.27805]



[4] Andrew M. Bruckner and György Petruska, Some typical results on bounded Baire $1$ functions, Acta Mathematica Hungarica 43 (1984), 325-333. [MR 85h:26004; Zbl 542.26004]




[5] Casper Goffman, Real Functions, Prindle, Weber & Schmidt, 1953/1967, x + 261 pages. [MR 14,855e; Zbl 53.22502]



[6] Bernd Kirchheim, Some further typical results on bounded Baire one functions, Acta Mathematica Hungarica 62 (1993), 119-129. [MR 94k:26008; Zbl 786.26002]



[7] John Clayton Klippert and Geoffrey Williams, On the existence of a derivative continuous on a $G_{\delta}$, International Journal of Mathematical Education in Science and Technology 35 (2004), 91-99.



[8] Clifford Weil, The space of bounded derivatives, Real Analysis Exchange 3 (1977-78), 38-41. [Zbl 377.26005]


algebra precalculus - Is $\pi = 4$ really?

Can anyone explain what's wrong with this?




[figure: the well-known staircase construction in which a square's corners are repeatedly folded in toward an inscribed circle, suggesting $\pi = 4$]

elementary number theory - Find the remainder for $\sum_{i=1}^{n} (-1)^i \cdot i!$ when dividing by 36 $\forall n \in \Bbb N$



I need to find the remainder $\forall n \in \Bbb N$ when dividing by 36 of:



$$\sum_{i=1}^{n} (-1)^i \cdot i!$$



I should use congruence or the definitions of integer division, as that's what we've seen so far in the course. I don't know where to start. Any suggestions? Thanks!


Answer



Hint:




For $n\geq 6$ one has:



$\sum\limits_{i=1}^n(-1)^ii! = \sum\limits_{i=1}^5(-1)^ii! + \sum\limits_{i=6}^n(-1)^ii!$



Next, notice that for all $i\geq 6$ one has $i!=1\cdot \color{red}{2\cdot 3}\cdot 4\cdot 5 \cdot\color{red}{6}\cdots (i-1)\cdot i$




implying that for $i\geq 6$, $36$ divides $i!$ evenly. What does the right sum contribute to the remainder when divided by $36$ then?





From here it should be easy enough to brute force the remainder of the solution.
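As a sanity check, here is a small Python sketch (the helper name is mine, not from the question) confirming that the remainder settles to a fixed value once $n\geq 6$:

```python
# Compute sum_{i=1}^n (-1)^i * i!  mod 36; the residue stabilizes for n >= 6,
# since 36 | i! from i = 6 onward (720 = 20 * 36).
from math import factorial

def alternating_factorial_sum(n):
    return sum((-1)**i * factorial(i) for i in range(1, n + 1))

for n in range(1, 12):
    print(n, alternating_factorial_sum(n) % 36)
```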


Friday 30 January 2015

integers - How to prove that $1,9999\dots \in \mathbb{Z}$?





I want to prove that $1,999\dots$ is an element of $\mathbb{Z}$. Here is my try:



$x = 1,9999\dots \\
10x = 19,9999\dots \\
10x - x = 18 \\
9x = 18 \\
x = 18/9 \\
x = 2 $



So $x \in \mathbb{Z}$



I know something is wrong but where ?


Answer



$$x=1+9\sum_{i=1}^{+\infty}10^{-i} $$




$$=1+9\sum_{i=1}^{+\infty}(\frac {1}{10})^i$$



$$=1+9\frac {1}{10}\frac {1}{1-\frac {1}{10}}$$



$$=1+9\frac{1}{10}\frac {10}{9}=2$$
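A quick numerical sketch of the same geometric series, with partial sums approaching $2$:

```python
# Partial sums of 1 + 9 * sum_{i=1}^k 10^(-i); they approach 2 as k grows.
for k in (1, 2, 5, 10, 15):
    partial = 1 + 9 * sum(10.0**-i for i in range(1, k + 1))
    print(k, partial)
```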


abstract algebra - What does "characteristic" mean in mathematics?



In Germany, I have heard "ABC is 'characteristic' for XYZ" sometimes from math students. It was used like "if you know ABC, then you know you're talking about XYZ" or "ABC describes XYZ completely".



For example:




The generator is characteristic for a cyclic group.





I know that the characteristic polynomial $CP_A(\lambda) := \det(\lambda E_n - A)$ does not completely describe the matrix $A$ as similar matrices have the same characteristic polynomial. So here "characteristic" seems to mean something like "an important attribute".



Now I've learned the term characteristic of a unit ring $(R, +, \cdot)$:




  • There exists exactly one ring homomorphism $\varphi: \mathbb{Z} \rightarrow R$. Let $\text{char}(R) := n$ be the non-negative generator of the kernel.

  • According to Wikipedia, it is
    $$\text{char}(R) := \min\{n \in \mathbb{N} | \underbrace{1 + \dots + 1}_{n} = 0\} \text{ (or 0 if no such $n$ exists)}$$




I've also seen that the term characteristic function exists (but I don't know what it is).



Questions



So my questions are:




  • How do mathematicians use the word "characteristic"? What do the three uses I described have in common?

  • To what extent is the characteristic of a unit ring interesting? Why is the characteristic of a unit ring "characteristic"? What can you say about the ring when you only know the characteristic?



Answer



So for the first question, one answer could be




They don't have to have anything in common, although a couple of them do. They don't all stem from a single source, and their uses depend on which field you are in.




Generally




In just plain English, the phrase "is characteristic of" means "is a distinguishing feature of." By extension in mathematics, it could be used this way to describe something that completely describes another thing. The generator of a cyclic group is a good example of this, since you can recover the entire group from a single generator.



More generally, mathematicians like to talk about "what characterizes" a certain thing. So, for example, the Artin-Wedderburn theorem is a characterization of the semisimple rings as finite products of matrix rings over division rings. The Ore condition is what characterizes which domains can be densely embedded in division rings.



Special uses



Characteristic subgroups in group theory are very special normal subgroups. Essentially, they are unmoved by automorphisms of the group. This makes them even more special than normal subgroups.



The characteristic polynomial doesn't describe the matrix, but it does characterize information about the transformation that the matrix represents. Really the particular matrix representation is secondary to the transformation.




Sometimes eigenvectors and eigenvalues are referred to as "characteristic" rather than "eigen." (This also suggests somewhat that "characteristic" is one of those overloaded math terms like "regular" or "normal" that pops up every time someone invents something new.)



The characteristic of a ring is important, but I don't know if I can tell you a single reason why. First of all, there are a lot of theorems that are first proven for characteristic zero. (I'm thinking in particular of some results in group rings.) This is usually because the very basic rings ($\Bbb Z,\Bbb R,\Bbb Q,\Bbb C$) are all characteristic zero, and maybe our intuition is better with them. Ordered fields, which are in some way bound up with our intuition of geometric length, are all characteristic zero too.



When the characteristic is nonzero, things are harder because you have to cope with a kind of (very interesting!) degeneracy. The characteristic 2 case seems like the "least nice" because of its appearance in some geometrically freak cases. Even though I've said "degeneracy" and "freak" now to describe these things, I still want to stress that they are interesting and important. Positive characteristic fields are the natural environment for algebraic coding theory, after all :)



The first time I encountered characteristic functions was in measure theory, where the characteristic function of a set is one which has value $1$ for elements of the set, and value $0$ elsewhere. Perhaps the motivation for calling it "characteristic" is that it clearly distinguishes which points are inside the set and which points are outside the set.
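A minimal sketch of that definition in Python (the names are mine, for illustration only):

```python
# chi(S) builds the characteristic (indicator) function of the set S:
# value 1 for elements of S, value 0 elsewhere.
def chi(S):
    return lambda x: 1 if x in S else 0

chi_evens = chi({0, 2, 4, 6, 8})
print([chi_evens(n) for n in range(6)])  # [1, 0, 1, 0, 1, 0]
```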



Another use of "character" is the one from representation theory, where you talk about the character afforded by a representation. I'm not aware of the true origins of the term, but I've always thought of it like this. The character of a representation is a distillation of some of the information carried by the representation. The information is distilled into a function from the group into a field. There, you can divine several important characteristics of the group. You might have also derived them directly from the representation, but the character might make things easier.




There is also something I know nothing about called the Euler characteristic which is a topological invariant. "Invariant" could be considered semantically close to "characteristic." They are both often used to describe qualities that are intrinsic to the object.


Real vs Complex Representations of a Lie algebra




Proposition 3.39 of Hall's Lie Groups, Lie Algebras And Representations:



"Let $\mathfrak{g}$ be a real Lie algebra, $\mathfrak{g}_\mathbb{C}$ its complexification, and $\mathfrak{h}$ an
arbitrary complex Lie algebra. Then every real Lie algebra homomorphism of $\mathfrak{g}$ into $\mathfrak{h}$ extends uniquely to a complex Lie algebra homomorphism of $\mathfrak{g}_\mathbb{C}$ into $\mathfrak{h}$."



In particular this means that any real representation of $\mathfrak{g}$ defines a complex representation of $\mathfrak{g}_\mathbb{C}$.



Question: Does the converse hold? Does any complex representation of $\mathfrak{g}_\mathbb{C}$ define a real representation of $\mathfrak{g}$? Are there any conditions for when this may or may not hold?


Answer



Of course it does. If the Lie algebra $\mathfrak{g}_{\mathbb C}$ acts on a complex vector space $V$, simply consider the restriction of this action to $\mathfrak g$. To be more precise, if $X\in\mathfrak g$ and if $v\in V$, then define $X.v$ as $(X\otimes 1).v$. I am assuming here that Hall defined $\mathfrak{g}_{\mathbb C}$ as $\mathfrak{g}\bigotimes_{\mathbb{R}}\mathbb{C}$. If he used another definition, please say which one he uses.



real analysis - Calculate $\lim\limits_{x\to + \infty}x\cdot \sin(\sqrt{x^{2}+3}-\sqrt{x^{2}+2})$




I know that $$\lim\limits_{ x\to + \infty}x\cdot \sin(\sqrt{x^{2}+3}-\sqrt{x^{2}+2})\\=\lim\limits_{ x\to + \infty}x\cdot \sin\left(\frac{1}{\sqrt{x^{2}+3}+\sqrt{x^{2}+2}}\right).$$ If $x \rightarrow + \infty$, then $\sin\left(\frac{1}{\sqrt{x^{2}+3}+\sqrt{x^{2}+2}}\right)\rightarrow \sin0 $. However I have also $x$ before $\sin x$ and I don't know how to calculate it.


Answer



Letting $h=\frac1x$:



$$\begin{array}{cl}
&\displaystyle \lim_{x \to \infty} x \sin \left( \sqrt{x^2+3} - \sqrt{x^2+2} \right) \\
=&\displaystyle \lim_{x \to \infty} x \sin \left( \frac 1 {\sqrt{x^2+3} + \sqrt{x^2+2} } \right) \\
=&\displaystyle \lim_{h \to 0^+} \frac1h \sin \left( \frac h {\sqrt{1+3h^2} + \sqrt{1+2h^2} } \right) \\
=&\displaystyle \lim_{h \to 0^+} \frac 1 {\sqrt{1+3h^2} + \sqrt{1+2h^2}} \frac {\sqrt{1+3h^2} + \sqrt{1+2h^2}} h \sin \left( \frac h {\sqrt{1+3h^2} + \sqrt{1+2h^2} } \right) \\
=&\displaystyle \frac12 \times \lim_{h \to 0^+} \frac {\sqrt{1+3h^2} + \sqrt{1+2h^2}} h \sin \left( \frac h {\sqrt{1+3h^2} + \sqrt{1+2h^2} } \right) \\
=&\displaystyle \frac12 \times 1 \\
=&\dfrac12
\end{array}$$


sequences and series - Proving $\sum_{n=1}^{\infty} nz^{n} = \frac{z}{(1-z)^2}$ for $z \in (-1, 1)$





I do not know where to start, any hints are welcome.


Answer



Note that $$\sum_{n=0}^\infty z^n=\frac{1}{1-z} \tag 1$$



for $|z|<1$.



Differentiating $(1)$ and multiplying by $z$ (this is legitimate since for any $r<1$, $\sum_{n=1}^\infty nz^{n-1}$ converges uniformly for $|z|\le r<1$) yields



$$\begin{align}
z \frac{d}{dz}\sum_{n=0}^\infty z^n&=\sum_{n=0}^\infty nz^n\\\\
&=\sum_{n=1}^\infty nz^n\\\\
&=\frac{z}{(1-z)^2}
\end{align}$$



for $|z|<1$.


Thursday 29 January 2015

discrete mathematics - What is the remainder when $N = (1! + 2! + 3! + 4! + \cdots + 1000!)^{40}$ is divided by $10$?




What is the remainder when $N = (1! + 2! + 3! + 4! + \cdots + 1000!)^{40}$ is divided by $10$?




My try:




Watching the pattern as it grows: every term after $4!$ is divisible by $10$.



So, in fact, I am just left with $N = (1! + 2! + 3! + 4! + 0)^{40}$ and I need to check the remainder when this $N$ is divided by $10$.



Hence, since $1! + 2! + 3! + 4! = 33$, the problem reduces to finding the remainder of $33^{40}$ when divided by $10$.



Now, after this I can simply apply Euler's Theorem such that



$33^{4} \equiv 1 \pmod{10}$




After all, the remainder comes out to be $1$.






I don't have an answer for this. Is my understanding right or did I miss something?


Answer



Your answer is correct. A few pointers, however:





  1. Note that you can reduce $33$ to just $3$

  2. Euler's theorem says that $3^{4}\equiv 1\pmod{10}$
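Both pointers are quick to confirm in Python, using modular exponentiation so the huge sum never has to be raised to the 40th power in full:

```python
from math import factorial

# The full computation, reduced mod 10 via three-argument pow:
print(pow(sum(factorial(i) for i in range(1, 1001)), 40, 10))  # 1
# The two pointers: 33 reduces to 3, and 3^4 = 81 = 1 (mod 10).
print(pow(33, 40, 10), pow(3, 4, 10))  # 1 1
```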


modular arithmetic - Proving isomorphism on additive group $(\Bbb{Z}_4,+)$ and multiplicative group $(\Bbb{Z}_5^*, \times)$

The question I'm having problems with involves proving that the above groups are isomorphic. Therefore, I have to prove the correspondence is bijective (1-1, and onto) and homomorphic. I have done the group operations table for each and come up with:



Set $A = (\mathbb{Z}_4,+) = \{0,1,2,3\}$



Set $B = (\Bbb{Z}_5^*, \times) = \{1,2,3,4\}$



So $B$, can be mapped from $A$ with the function: $\alpha(x)=x+1$



We can see that no element of $B$ is the image of more than one element of $A$, and every element of $B$ is an image, so we have a correspondence that is 1-1 and onto.




Now I have to prove they are isomorphic by exhibiting a 1-1 correspondence $\alpha$ between their elements such that:



$a+b \equiv c\ (\text{mod}\ 4)$ if and only if $\alpha(a) \cdot \alpha(b)\equiv \alpha(c)(\text{mod}\ 5)$



I'm stuck here... do I just plug in all the possible values of $a$, $b$, and $c$? I suppose there should be 4 ways to do this...



As an example:



$a = 1, b = 2, c = 3$




$1 + 2 = 3(\text{mod}\ 4)$



$3\equiv 3(\text{mod}\ 4)$



and



$\alpha(1) \cdot \alpha(2) \equiv \alpha(3)(\text{mod}\ 5)$



$2\cdot 3 \equiv 4(\text{mod}\ 5)$




$6\equiv 4(\text{mod}\ 5)??????$



I feel like I'm missing a key concept here. Been watching a ton of videos, but I'm missing something!
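Since the question asks about plugging in all possible values, here is an exhaustive check in Python (the helper is mine; the second map is not taken from the question and is shown purely for contrast):

```python
# Test whether alpha carries addition mod 4 to multiplication mod 5:
# alpha((a+b) mod 4) == alpha(a)*alpha(b) (mod 5)  for all a, b.
def is_hom(alpha):
    return all((alpha((a + b) % 4) - alpha(a) * alpha(b)) % 5 == 0
               for a in range(4) for b in range(4))

print(is_hom(lambda x: x + 1))     # False: fails e.g. at a=1, b=2, as observed
print(is_hom(lambda x: 2**x % 5))  # True: powers of a generator do work
```

This matches the $6\equiv 4$ trouble above: $\alpha(x)=x+1$ is a bijection, but it is not a homomorphism.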

algebra precalculus - Help needed verifying a trigonometric identity



I have the following identity:




$$ \frac{\tan (t + h) - \tan(t)}{h} = \left( \frac{\tan (h)}{h} \right)\left( \frac{\sec^2(t)}{1 - \tan (t)\tan (h)} \right)$$








Having tried various approaches, far too varied and numerous to list all here, the one that seems to have the most promise is this (however, it still falls quite short); using the right-hand side:



$$\begin{align}\text{RHS} &= \left( \frac{\tan (h)}{h} \right)\left( \frac{\sec^2(t)}{1 - \tan (t)\tan (h)} \right)\\
&=\frac{\dfrac{\sin (h)}{\cos(h)}}{h} \cdot \dfrac{\dfrac{1}{\cos^2(t)}}{\dfrac{\cos(t)\cos(h)-\sin(t)\sin(h)}{\cos(t)\cos(h)}}\\
&= \dfrac{1}{h} \cdot \dfrac{\dfrac{\sin(h)}{\cos(h)\cos^2(t)}}{\dfrac{\cos(t)\cos(h)-\sin(t)\sin(h)}{\cos(t)\cos(h)}}\\
&= \frac{1}{h} \cdot \frac{\sin (h)}{\cos(h)\cos ^2(t)}\cdot \frac{\cos (t)\cos(h)}{\cos(t)\cos(h)-\sin(t)\sin(h)}\\
&= \frac{\sin(h)}{h\cdot\cos(t)\cdot\cos(t+h)}\end{align}$$



Can anyone throw me a bone here? I'm stumped. Thanks.



Answer



Apply the rule



$$ \tan(t+h) = \frac{\tan(t) + \tan(h)}{1-\tan(t)\tan(h)} $$



to the LHS and it's straightforward from there.
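A numeric spot-check of the identity, with $t$ and $h$ chosen arbitrarily away from the singularities:

```python
from math import tan, cos, isclose

t, h = 0.7, 0.3
lhs = (tan(t + h) - tan(t)) / h
rhs = (tan(h) / h) * ((1 / cos(t)**2) / (1 - tan(t) * tan(h)))
print(isclose(lhs, rhs))  # True
```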


elementary number theory - Prove that an integer of the form $8n +7$ cannot be expressed as a sum of three integer squares




Consider an integer of the form $8n +7$. Show that it cannot be expressed as a sum of three integer squares.




Hints are welcome. If you wish to post an answer, please post a hint as your answer, especially some fundamental concepts in elementary number theory / abstract algebra that might be relevant.




My work, so far:



As noted by carmichael561 (please see his hints below), the problem makes sense only for $n \in \mathbb{N}$.



Suppose, for contradiction, that



$$8n+7 = a^2 + b^2 + c^2$$



for $a,b,c \in \mathbb{Z}$.




We can rewrite the equation as $$8n = a^2 + b^2 + c^2 -7$$



Now since $8$ divides the LHS, it also divides the RHS. In particular, the LHS is congruent to $0 \pmod 8$, so we would need $a^2+b^2+c^2 \equiv 7 \pmod 8$; showing that this is impossible gives the contradiction.


Answer



Hint: what are the squares mod $8$?
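The hint, made concrete with a short brute-force check:

```python
# Squares mod 8 land in {0, 1, 4}, and no three of those values sum to 7 mod 8.
squares = {x * x % 8 for x in range(8)}
print(squares)  # {0, 1, 4}
sums = {(a + b + c) % 8 for a in squares for b in squares for c in squares}
print(7 in sums)  # False
```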


Wednesday 28 January 2015

algebra precalculus - Is there any formula for the series $1 + \frac12 + \frac13 + \cdots + \frac 1 n = ?$

Is there any formula for this series?





$$1 + \frac12 + \frac13 + \cdots + \frac 1 n .$$


Proof $\sum\limits_{r=1}^{n} r >\frac{1}{2}n^2$ using induction




Question:




$$\text{Prove by induction that, for all integers } n, n \geq 1:$$
$$\sum\limits_{r=1}^{n} r >\frac{1}{2}n^2$$




Working:



Step 1 (Prove true for n=1):

$$1>\frac{1}{2}(1)^2$$



Step 2 (Assume true for n=k):
$$ k >\frac{1}{2}k^2$$



Step 3 (Prove true for n=k+1):



And having only faced equations with an equals (=) sign I have no idea what to do next. Right now I have assumed that it stands true for $k$ and I will try to prove for $k+1$. What should be my next step?


Answer



Your second step should read $$\sum_{r=1}^k r > \dfrac{k^2}{2}$$

Then note that $$\sum_{r=1}^{k+1} r = \underbrace{(k+1) + \sum_{r=1}^{k} r > (k+1) + \dfrac{k^2}{2}}_{\text{Induction hypothesis}} = \underbrace{\dfrac{k^2+2k+2}{2} > \dfrac{k^2+2k+1}{2}}_{a+\frac12 > a} = \dfrac{(k+1)^2}{2}$$
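A quick empirical check of the inequality for small $n$, just as a sanity test:

```python
# sum(1..n) = n(n+1)/2 exceeds n^2/2 for every n >= 1.
print(all(sum(range(1, n + 1)) > n * n / 2 for n in range(1, 1000)))  # True
```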


trigonometry - Show that $\arctan\frac{1}{2}+\arctan\frac{1}{3}=\frac{\pi}{4}$




Show that $\arctan\frac{1}{2}+\arctan\frac{1}{3}=\frac{\pi}{4}$.




Attempt:



I've tried proving it, but I don't get $\frac{\pi}{4}$. Could someone please help prove it? Is anything wrong with the equation? If there is, please let me know.


Answer




Sine addition identity: $$\sin(\alpha + \beta) = \sin \alpha \cos \beta + \cos \alpha \sin \beta.$$



Cosine addition identity: $$\cos(\alpha + \beta) = \cos \alpha \cos \beta - \sin \alpha \sin \beta.$$



From the above, we obtain the tangent addition identity: $$\begin{align*} \tan (\alpha + \beta) &= \frac{\sin (\alpha + \beta)}{\cos (\alpha + \beta)} \\ &= \frac{\sin \alpha \cos \beta + \cos \alpha \sin \beta}{\cos \alpha \cos \beta - \sin \alpha \sin \beta} \\ &= \frac{\tan \alpha + \tan \beta}{1 - \tan \alpha \tan \beta}. \end{align*}$$



From this, we now let $x = \tan \alpha$, $y = \tan \beta$, or equivalently, $\alpha = \tan^{-1} x$, $\beta = \tan^{-1} y$, and substitute: $$\tan(\tan^{-1} x + \tan^{-1} y) = \frac{x+y}{1-xy}.$$ Now taking the inverse tangent of both sides, we obtain the inverse tangent identity: $$\tan^{-1} x + \tan^{-1} y = \tan^{-1} \frac{x+y}{1-xy}.$$ Now let $x = 1/2$, $y = 1/3$, and simplify the right hand side.
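A numeric check of that final step:

```python
from math import atan, pi, isclose

x, y = 1/2, 1/3
print((x + y) / (1 - x * y))               # 1.0: (1/2 + 1/3)/(1 - 1/6) = 1
print(isclose(atan(x) + atan(y), pi / 4))  # True
```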


abstract algebra - Show that field extension of the rationals by distinct square roots of primes is Galois



Let $p_1, p_2, ..., p_n$ be distinct primes. Prove $\mathbb{Q}(\sqrt{p_1}, \sqrt{p_2}, ..., \sqrt{p_n})$ is Galois.

I think that we need to show it is normal and separable.



For normal, we can say that $\mathbb{Q}(\sqrt{p_1}, \sqrt{p_2}, \ldots, \sqrt{p_n})$ is a splitting field over $\mathbb{Q}$ for the polynomial $f= (x^2-p_1)(x^2-p_2)\cdots(x^2-p_n)$, which is obviously made up of factors that are irreducible over $\mathbb{Q}$.



For separable, I don't know. Maybe something to do with Eisenstein's criterion.


Answer



As noted in the comments, this follows because $\mathbb{Q}$ is a field of characteristic $0$. Because of this, any irreducible polynomial must have distinct roots. You may recall seeing this in the form of the result that a polynomial $f(x)$ has multiple roots (over a field of characteristic $0$) if and only if $f(x)$ and $f'(x)$ have a common zero, i.e. a common factor.



Now each of the $x^2-p_i$ are irreducible over $\mathbb{Q}$, a field of characteristic $0$. Therefore, they have no repeated roots. You can also note that you know the roots of each polynomial, namely $\pm \sqrt{p_i}$, which are distinct. One could also use the fact that you know the degree of the field extension is $2^n$, where $n$ is the number of distinct primes used. Any automorphism is determined by how it acts on the $\sqrt{p_i}$'s. The map $\sqrt{p_i} \mapsto -\sqrt{p_i}$ and fixing all the other $\sqrt{p_j}$ for $j \neq i$ is an automorphism, this is routine to verify. Then you have $2^n$ possible automorphisms, the same as the degree of the extension.




Note: if you are confused why $\mathbb{Q}$ has characteristic $0$, note that $1+1+\cdots$ is never $0$ in $\mathbb{Q}$, so that $\mathbb{Q}$ has characteristic $0$ by definition.


Tuesday 27 January 2015

linear algebra - Characteristic polynomial proof

The trace of a matrix is the sum of the entries on its main diagonal. Prove that if $A$ is a $2 \times 2$ matrix, then the characteristic polynomial of $A$ is $x^2 − {c_1}x + c_2$ where $c_1$ is the trace of $A$ and $c_2$ is the determinant of $A$.



Can anyone explain this to me? So far, I only know that $C_A (x) = \operatorname{det}(A-xI)$, that the product of eigenvalues (counting multiplicity) is $\operatorname{det}A$, and the sum of eigenvalues (counting multiplicity) equals the trace of $A$. I am just lost as to how to apply these.
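Though not a proof, a symbolic check with SymPy may make the claim concrete before proving it (a sketch, not part of the original question):

```python
import sympy as sp

# For a generic 2x2 matrix, charpoly(A) = x^2 - trace(A)*x + det(A).
a, b, c, d, x = sp.symbols('a b c d x')
A = sp.Matrix([[a, b], [c, d]])
p = A.charpoly(x).as_expr()
print(sp.expand(p - (x**2 - A.trace() * x + A.det())))  # 0
```

The proof itself is the same computation by hand: expand $\det(A - xI)$ for a general $2\times 2$ matrix and compare coefficients (note $\det(A-xI)=\det(xI-A)$ for $2\times 2$ matrices).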

calculus - Is the usual proof of $\lim_{x\rightarrow 0}\frac {\sin(x)}{x} = 1$ an honest proof?



In a lot of textbooks on Calculus a proof that $\lim_{x\rightarrow 0}\frac {\sin(x)}{x} = 1$ is the following:




[figure: the unit circle with triangles $ABC$, $ABD$ and the circular sector between them]



Comparing the areas of triangles $ABC$, $ABD$ and the circular sector, you get:
$$
\sin(x) < x < \tan(x)
$$
from which you have:
$$
\cos(x)<\frac {\sin(x)}{x}<1
$$

from which it immediately follows that $\lim_{x\rightarrow 0}\frac {\sin(x)}{x} = 1$



Don't you think that this kind of proof is not honest? I mean, in the proof we use the fact that the area of a sector is $\frac12xr^2$. This formula comes from integrating (an "infinite" sum of "infinitesimal" triangles' areas). So, roughly speaking, we somehow implicitly use that the length of the "infinitesimal" chord $BC$ is equal to the length of the "infinitesimal" arc $rx$, which is equivalent to saying that $\lim_{x\rightarrow 0}\frac {\sin(x)}{x} = 1$.



Am I missing something?


Answer



It is a perfectly honest proof, but it is based on a few assumptions about areas bounded by plane curves and the definitions of trigonometric functions. The main assumption is that a sector of a circle has an area. The proof of this assumption requires real analysis/calculus, but it does not require the limit $\lim_{x\to 0}\dfrac{\sin x}{x}=1$. Once this assumption is established, the next step is to define the number $\pi$ as the area of the unit circle (circle of radius $1$).



Next we consider the same figure given in question. We need to define a suitable measurement of angles. This can be done in many ways, and one of the ways is to define the measure of $\angle CAB$ as twice the area of sector $CAB$. Note that for this definition to work it is essential that radius of sector is $1$ (which is the case here).




Further we define the functions $\sin x,\cos x$ in the following manner. Let $A$ be origin of the coordinate axes and $AB$ represent positive $x$-axis. Also let the measure of $\angle CAB$ be $x$ (so that area of sector $CAB$ is $x/2$). Then we define the coordinates of point $C$ as $(\cos x,\sin x)$.



Now that we know these assumptions, the proof presented in the question is valid and honest. This is one of the easiest routes to a proper theory of trigonometric functions. The real challenge however is to show that a sector of a circle has an area.


Divisibility of a binomial coefficient by a prime

Let $q=p^r,$ where $p\in\mathbb{P}$ is a prime and $r\in\mathbb{N}\setminus\{0\}$ is a natural number (non-zero). How to prove that for each $i\in\{1,2,\ldots,q-1\}$ the binomial coefficient $\binom{q}{i}$ is divisible by $p$?




I find it easy to show that $p|\binom{p}{i},$ but here it's more complicated :/
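A brute-force check of the claim before proving it (a sketch; requires Python 3.8+ for math.comb):

```python
from math import comb

# For q = p^r, check that p divides C(q, i) for every 0 < i < q.
for p, r in [(2, 3), (3, 2), (5, 2), (7, 1)]:
    q = p**r
    print(p, r, all(comb(q, i) % p == 0 for i in range(1, q)))  # True each time
```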

integration - Integral of sine multiplied by Bessel function with complicated argument

I need a help with integral below,
$$ \int_0^\infty \sin(ax)\ J_0\left(b\sqrt{1+x^2}\right)\ \mathrm{d}x, $$
where $a,b > 0 $ and real, $J_0(x)$ is the zeroth-order of Bessel function of the first kind.




I found some integrals similar to the integral above, but I don't have any idea on how to apply it. Here are some integrals that might help.
$$ \int_0^\infty \cos(ax)\ J_0\left(b\sqrt{1+x^2}\right)\ \mathrm{d}x = \frac{\cos\sqrt{b^2-a^2}}{\sqrt{b^2-a^2}}; \mathrm{~~for~0 < a < b} $$



$$ \int_0^\infty \sin(ax)\ J_0(bx)\ \mathrm{d}x = \frac{1}{\sqrt{a^2-b^2}}; \mathrm{~~for~0 < b < a} $$



The proof of the first integral can be seen here.

algebra precalculus - Time and distance: Buses traveling in opposite directions



Two buses starting from two different places A and B simultaneously, travel towards each other and meet after a specified time. If the bus starting from A is delayed by 20 minutes, they meet 12 minutes later than the usual time. If the speed of the bus starting from A is 60km/hr, what is the speed of the second bus?




My attempt:



Let t be the usual time of meeting and s(A) and s(B) be the speeds of Bus A and Bus B respectively, then



Applying the relative velocity concept (stopping bus B, i.e. taking its speed $= 0$ and adding it to the speed of A), I got:



s(A)= 60 (given)



(60 + s(B))t = d (distance between the buses)




Now if bus A is delayed by 20 mins, then B's travelling time is 20 mins more than A's to cover the same distance.



t(B) = t(A) + 20/60



hence, t(B)= t(A) + 1/5



distance covered by A = t(A) * 60



distance covered by B = t(B) * s(B) = (t(A) +1/5)*s(B)




The problem: How do I relate the delay of 12 mins in their meeting time?


Answer



ADDED. Solution using your equations and making the necessary corrections and additions.



(60 + s(B))t = d



distance covered by A = t(A) * 60



distance covered by B = t(B) * s(B) = (t(A) +1/5)*s(B)




(distance between buses = d = distance covered by A + distance covered by B)




t(B) = t(A) + 20/60



hence, t(B) = t(A) + 1/5




Correction: 20/60 = 1/3; hence, t(B) = t(A) + 1/3.





How do I relate the delay of 12 mins in their meeting time?




t(B) = t + 1/5.






Let $d$ be the distance from $A$ to $B$, $v_{A}=60$ km/hr be the speed of the bus departing from $A$ and $v_{B}$ the speed of the bus departing from $B$. If both buses start simultaneously, $v_{A}t$ is the distance traveled by bus $A$ and $v_{B}t$ is the distance traveled by bus $B$, where $t$ is the usual time they spend to meet each other. We have



$$v_{A}t+v_{B}t=d.$$



(The units have to be consistent: e.g. speed in km/hr, time in hours and distance in km). If the bus starting from $A$ is delayed by 20 minutes =$\frac{1}{3}$ hour,
then bus $A$ travels the distance $v_{A}(t-1/3+1/5)$ and bus $B$ travels $v_{B}(t+1/5)$, because they meet $12$ minutes = $\frac{1}{5}$ hour later than the usual time $t$.



$$v_{A}(t-1/3+1/5)+v_{B}(t+1/5)=d.$$



Equate both equations




$$v_{A}t+v_{B}t= v_{A}(t-1/3+1/5)+v_{B}(t+1/5),$$



and simplify



$$\frac{2}{15}v_{A}-\frac{1}{5}v_{B}=0.$$



Since $v_{A}$ is 60km/hr, we have



$$\frac{2}{15}\left( 60\right) -\frac{1}{5}v_{B}=0,$$




whose solution is $v_{B}=40$ km/hr.
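The same two equations can be handed to SymPy as a cross-check (a sketch; the symbol names are mine):

```python
import sympy as sp

t, vB, d = sp.symbols('t vB d', positive=True)
eq1 = sp.Eq(60 * t + vB * t, d)                        # usual meeting
eq2 = sp.Eq(60 * (t - sp.Rational(1, 3) + sp.Rational(1, 5))
            + vB * (t + sp.Rational(1, 5)), d)         # delayed meeting
print(sp.solve([eq1, eq2], [vB, d]))  # {vB: 40, d: 100*t}
```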


discrete mathematics - How to compute $8x \equiv 33 \pmod{35}$?



How to compute $8x \equiv 33 \pmod{35}$?



I followed this video to solve this problem. Is there a better way?



My solution steps:



Divide both sides by 8:




$$x \equiv \frac{33}{8}^{-1} \pmod{35}$$



$$35 = \frac{33}{8} \cdot 8 +2 \tag 1$$



$$\frac{33}{8}=2\cdot\frac{16}{8}+\frac{1}{8} \tag 2 $$



$$2=\frac{1}{8}\cdot8+1$$



Now put $1$ by itself:




$$1=2-\frac{1}{8}\cdot 8 \tag 3$$



Now put $2$ by itself from $(1)$:



$$2 = 35 - \frac{33}{8} \cdot 8 \tag 4 $$



Now put $\frac{1}{8}$ by itself from (2):



$$\frac{1}{8} = \frac{33}{8} - 2\cdot\frac{16}{8} \tag 5 $$




Now substitute $\frac{1}{8}$ with (5) in (3) and simplify:



$$1=2-\left(\frac{33}{8} - 2\cdot\frac{16}{8}\right)\cdot8$$



$$1=2-\frac{33}{8}\cdot8 - 2\cdot 16 \tag 6$$



Now substitute $2$ with $(4)$ in $(6)$ and simplify:



$$1=35-\frac{33}{8}\cdot8-\frac{33}{8}\cdot8-\left(35-\frac{33}{8}\cdot8\right) \cdot 16$$




$$1=17\cdot(35)-\frac{33}{8}\cdot\left(8\cdot8\cdot8\cdot16\right)$$



The solution is usually between $0$ and $35$. However, we get $(-8\cdot8\cdot8\cdot16)$, which is far too small.



Can anyone tell me where I went wrong?


Answer



We wish to find $x$ such that $8x \equiv 33 \pmod{35}$.



Since $8$ and $35$ are relatively prime, we can use the extended Euclidean algorithm to express their greatest common divisor $1$ as a linear combination of $8$ and $35$. We first use the Euclidean algorithm to solve for the greatest common divisor of $8$ and $35$.




\begin{align*}
35 & = 4 \cdot 8 + 3\\
8 & = 2 \cdot 3 + 2\\
3 & = 1 \cdot 2 + 1\\
2 & = 2 \cdot 1
\end{align*}



We now work backwards to solve for $1$ in terms of $8$ and $35$.



\begin{align*}
1 & = 3 - 1 \cdot 2\\
& = 3 - 1 \cdot (8 - 2 \cdot 3)\\
& = 3 \cdot 3 - 1 \cdot 8\\
& = 3 \cdot (35 - 4 \cdot 8) - 1 \cdot 8\\
& = 3 \cdot 35 - 13 \cdot 8
\end{align*}
Since $3 \cdot 35 - 13 \cdot 8 = 1$, $$-13 \cdot 8 \equiv 1 \pmod{35}$$ If we multiply both sides of the congruence by $33$, we obtain
\begin{align*}
33 \cdot -13 \cdot 8 & \equiv 33 \pmod{35}\\
-429 \cdot 8 & \equiv 33 \pmod{35}\\
(-13 \cdot 35 + 26) \cdot 8 & \equiv 33 \pmod{35}\\
26 \cdot 8 & \equiv 33 \pmod{35}
\end{align*}
Hence, $x \equiv 26 \pmod{35}$.



Check: If $x \equiv 26 \pmod{35}$, then $8x \equiv 8 \cdot 26 \equiv 208 \equiv 5 \cdot 35 + 33 \equiv 33 \pmod{35}$.



Note that this is a modification of Thomas Andrews' excellent answer and an elaboration on Bernard's answer. The theorem that states that we can express the greatest common divisor of two integers as a linear combination of those integers is known as Bezout's identity.
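For readers who just want the number, Python 3.8+ exposes modular inverses directly, reproducing the computation above:

```python
inv8 = pow(8, -1, 35)        # 22, i.e. -13 mod 35
x = 33 * inv8 % 35
print(inv8, x, 8 * x % 35)   # 22 26 33
```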


integration - Integral $\int_0^{\frac{\pi}{2}} x^2 \sqrt{\sin x} \, dx$



Greetings, I am trying to find a closed form for: $$I=\int_0^{\frac{\pi}{2}} x^2 \sqrt{\sin x}\,dx$$ If we rewrite the integral as $$I=\int_0^\infty x^2 \sqrt{\frac{1}{\sqrt{1+\cot^2 x}}}\,dx$$ then, with $$\cot x =t,$$ $$I=\int_0^{\infty} \operatorname{arccot}^2 (x)(1+x^2)^{-\frac{5}{4}}dx$$ and, using the logarithmic forms of the inverse trigonometric functions (https://en.wikipedia.org/wiki/Inverse_trigonometric_functions#Logarithmic_forms), $$I=\frac{1}{4i}\int_0^{\infty}\log^2\left(\frac{z-i}{z+i}\right)(1+x^2)^{-\frac{5}{4}} \, dx$$ For the $\log$ I thought to expand into a power series, but since the radius of convergence is a bit too small, this fails. Integrating by parts, or combining the initial integral with $\int_0^{\frac{\pi}{2}} x^2 \sqrt{\cos x}\,dx$, wasn't much help either. Could you help me evaluate this integral?


Answer




The substitution $\sin(x) = \sqrt{t}$ leads to the expression
$$I = \frac{1}{2} \int \limits_0^1 \frac{t^{-1/4} \arcsin^2 (\sqrt{t})}{\sqrt{1-t}} \, \mathrm{d} t \, . $$
Now you can use the power series for $\arcsin^2$ (see for example this question) and integrate term by term (monotone convergence). Using the beta function you will find
\begin{align}
I &= \frac{1}{4} \sum \limits_{n=1}^\infty \frac{(2n)!!}{n^2 (2n-1)!!} \int \limits_0^1 t^{n-\frac{1}{4}} (1-t)^{-\frac{1}{2}} \, \mathrm{d} t \\
&= \frac{1}{4} \sum \limits_{n=1}^\infty \frac{(2n)!!}{n^2 (2n-1)!!} \operatorname{B}\left(n+\frac{3}{4},\frac{1}{2}\right) \\
&= \frac{\sqrt{\pi}}{4} \sum \limits_{n=1}^\infty \frac{(2n)!!}{n^2 (2n-1)!!} \frac{\Gamma\left(n+\frac{3}{4}\right)}{\Gamma\left(n+\frac{5}{4}\right)} \\
&= \frac{\sqrt{\pi} \, \Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)} \sum \limits_{n=1}^\infty \frac{(2n)!!}{n^2 (2n-1)!!} \frac{\prod_{k=1}^n (4k-1)}{\prod_{l=1}^{n+1} (4l-3)} \\
&= \frac{\pi \sqrt{2 \pi}}{\Gamma\left(\frac{1}{4}\right)^2} \sum \limits_{n=1}^\infty \frac{(2n)!!}{n^2 (4n+1) (2n-1)!!} \prod \limits_{k=1}^n \frac{4k-1}{4k-3} \, .
\end{align}

Mathematica gives the following expression in terms of a hypergeometric function:
$$ I = \frac{6 \pi \sqrt{2 \pi}}{5 \Gamma\left(\frac{1}{4}\right)^2} \, {}_4 \! \operatorname{F}_3 \left(1,1,1,\frac{7}{4};\frac{3}{2},2,\frac{9}{4};1\right) \approx 1.208656578687 \, .$$
Inverse symbolic calculators do not seem to give any expression for this number, so this might be as good as it gets.
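A numeric cross-check with mpmath, comparing direct quadrature against the hypergeometric closed form quoted above:

```python
from mpmath import mp, mpf, quad, sin, sqrt, pi, gamma, hyper

mp.dps = 30
direct = quad(lambda x: x**2 * sqrt(sin(x)), [0, pi / 2])
closed = (6 * pi * sqrt(2 * pi) / (5 * gamma(mpf(1) / 4)**2)
          * hyper([1, 1, 1, mpf(7) / 4], [mpf(3) / 2, 2, mpf(9) / 4], 1))
print(direct)  # 1.20865657868...
print(closed)  # agrees to the displayed precision
```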


Monday 26 January 2015

calculus - Prove $a_1 = 1, a_{n+1} = \frac{1+a_n}{2+a_n}$ converges



I need to prove that the sequence defined by

$$a_1 = 1, a_{n+1} = \frac{1+a_n}{2+a_n}$$
converges.



I tried to prove that it's bounded and monotonically decreasing, but I couldn't prove it's monotonically decreasing.
I also managed to find the limit assuming it converges.


Answer



Note that
$$a_{n+1} = \frac{1+a_n}{2+a_n} = 1 - \frac{1}{2 + a_{n}}.$$
So, given that both $a_n$ and $a_{n-1}$ are positive, we have
$$
a_{n} < a_{n-1} \implies \frac {1}{2 + a_n} > \frac 1{2 + a_{n-1}}
\implies 1 - \frac {1}{2 + a_n} < 1 - \frac 1{2 + a_{n-1}} \implies
a_{n+1} < a_n.
$$

So, we can indeed conclude that the sequence is monotonic.
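Iterating the recurrence numerically shows the monotone decrease and suggests the limit, which (assuming convergence) solves the fixed-point equation $L = \frac{1+L}{2+L}$, i.e. $L^2 + L - 1 = 0$:

```python
from math import sqrt

a = 1.0
for _ in range(30):
    a = (1 + a) / (2 + a)
print(a, (sqrt(5) - 1) / 2)  # both ~0.6180339887
```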


combinatorics - Arrangement of colored cars in traffic

Seeing helicopter captures of car-packed streets, where the sort order of colors can be observed, suppose for simplicity we limit the scope of the pictures to say 9 or 10 cars. I wondered whether it is possible to count all the unique arrangements of the 9 colored cars in a line (assuming we have 4 blue and 5 green ones) where the order g..g..b..g..b can always be found in the arrangement. That is, an order involving only 5 cars out of the 9 is sought.



I have a feeling it is very similar to introductory problems in probability involving a bowl of colored balls, where we know how the bowl is composed, and we are asked how many arrangements of this and that color can be observed in order, if we pick a number of balls at a time.
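As a sketch of how I read the question, here is a brute-force count in Python (helper names mine): among all distinct lineups of 4 blue (b) and 5 green (g) cars, count those containing g, g, b, g, b as a not-necessarily-contiguous subsequence:

```python
from itertools import permutations

def contains_subseq(s, pattern):
    it = iter(s)                            # membership tests consume the
    return all(ch in it for ch in pattern)  # iterator, checking order

lineups = set(permutations('bbbbggggg'))    # 9!/(4!5!) = 126 distinct lineups
print(len(lineups))
print(sum(contains_subseq(s, 'ggbgb') for s in lineups))
```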

elementary number theory - Why is modular inverse notation so ambiguous?



Consider



$$\frac{a}{a}\pmod a,\ \ \ a\in\mathbb Z\setminus\{-1,0,1\}$$



There are two cases:




$1)$ $\frac{a}{a}$ is the notation for the real number $1$.
Then the expression is equivalent to $1$ modulo $a$.



E.g., see this, where the expression on the LHS would not even exist if $99$ in the denominator meant a modular inverse instead of regular integer division.



$2)$ $\frac{a}{a}$ is the notation for $ax$, where $x$ is in the class of solutions to $ax\equiv 1\pmod{a}$.
Since $x$ does not exist, $\frac{a}{a}$ does not exist either in this case.



So $\frac{a}{a}\mod a$ can be thought of as either equivalent to $1$ modulo $a$ or not existing at all.



Now, $aa^{-1}$ is possibly more commonly used than $\frac{a}{a}$ in modular arithmetic and $\frac{a}{a}$ is more commonly used than $aa^{-1}$ in division in integers, but surely not always.




Such notation was probably created due to its similarities to division in integers, but I think less ambiguous notation should've been created instead.



Also see this.
There's a comment there that points out my problem. Consider:



$$\frac{ac}{bc}\pmod m,$$



where $$\gcd(b,m)=1,\ \ \ \gcd(c,m)>1,\ \ \ m\in\mathbb Z\setminus\{-1,0,1\},\ \ \ c\in\mathbb Z\setminus\{0\},\ \ \ b\in\mathbb Z\setminus\{0\},\ \ \ a\in\mathbb Z$$



can either be equivalent to $\frac{a}{b}\pmod{m}$ if the fractional notation denotes integer division or it could not exist at all if the fractional notation denotes modular inverses.



Answer



First, a simple example: note that $\, 1+2+\cdots +n\, =\,\dfrac{n(n+1)}2\ $ remains true modulo $\,2,\,$ but it would be more cumbersome to state without the very convenient fractional notation.



Let's consider an analogous but simpler example of the question in your first link, namely
$\qquad {\rm mod}\ 9\!:\ \ \dfrac{10^n-1}9\, =\, \overbrace{\color{#c00}{11\cdots 11}}^{\large n\ 1's} \,\equiv\, n\ $ by $10\equiv 1,\,$ i.e. by casting out nines



Here, the context is congruences on integers, so the fraction denotes an $\rm\color{#c00}{integer,}$ viz. the integer obtained by cancelling $\,9\,$ from the fraction. The mod applies to that integer. More generally



$\qquad {\rm mod}\ x\!-\!a\!:\ \ \dfrac{f(x)-f(a)}{x-a} \equiv\, f'(a)$




Here the context is polynomials over some ring, so the fraction denotes the unique polynomial obtained by the division (which is exact by the Factor Theorem). This yields a purely algebraic definition of polynomial derivatives.



In all cases, the fraction can be considered to be a convenient notation used to denote a certain element $\,r\,$ of the ambient ring. The notation proves convenient because it conveys information about the ring-theoretic properties of that element, e.g. in the first example above it denotes the (unique!) solution of $\ 9r = 10^n-1\,$ in $\,\Bbb Z.$



Such (universal) cancellation (before evaluation) can prove quite powerful, e.g. we can prove Sylvester's determinant identity $\rm\,\det(1+AB) = \det(1+BA)\,$ by computing the determinant of $\rm\ (1+A\ B)\ A\ =\ A\ (1+B\ A)\ \ $ then cancelling $\rm\ det(A),\,$ all done in $\,\Bbb Z[a_{ij}],\,$ where the matrix entries are indeterminates. Similarly one can compute the adjugate of the adjugate. Beware: it can be perplexing at first glance, esp. understanding why the proof works even when $\,\rm\det(A) = 0.$


complex analysis - How do I find the real and imaginary part of $z+ e^z $




How can I find the imaginary and real part of $z + e^z$ ?



I tried but I only get $$x + yi + e^x e^{yi}$$


Answer



HINT:



You are on the right track. Note that $e^{iy}=\cos(y)+i\sin(y)$.
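SymPy can carry out the hinted expansion symbolically:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
expr = z + sp.exp(z)
print(sp.re(expr), sp.im(expr))  # x + exp(x)*cos(y), y + exp(x)*sin(y)
```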


number theory - Divisibility of binomial coefficient by prime power - Kummer's theorem



Let's say we have binomial coefficient $\binom{n}{m}$. And we need to find the greatest power of prime $p$ that divides it.




Usually Kummer's theorem is stated in terms of the number of carries you perform while adding $m$ and $n-m$ in base $p$.



I found an equivalent statement of this theorem that reads like this: if we write
$$
\binom{n}{m}\equiv\binom{n_0}{m_0}\binom{n_1}{m_1}\ldots\binom{n_d}{m_d}\pmod{p},
$$
where $n = n_0 + n_1p + n_2p^2 + \ldots + n_dp^d$ and $m = m_0 + m_1p + m_2p^2 + \ldots + m_dp^d$, then the power dividing $\binom{n}{m}$ is precisely the number of indices $i$ for which $n_i < m_i$.

Now let's take an example. Let's look at $\binom{25}{1}$ and $p=5$. We have
$$
\binom{25}{1}\equiv\binom{1}{0}\binom{0}{0}\binom{0}{1}\pmod{5}.
$$
We have only one index $i$ for which $n_i < m_i$, which is the last one. This suggests that $\binom{25}{1}$ can't be divided by $25$, which obviously isn't true.
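The numbers bear this out (a quick sketch; helper names are mine):

```python
from math import comb

def valuation(n, p):
    """Largest v with p**v dividing n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

print(valuation(comb(25, 1), 5))  # 2: 25 does divide binom(25,1) = 25

def carries(a, b, p):
    """Number of carries when adding a and b in base p."""
    c = count = 0
    while a or b or c:
        s = a % p + b % p + c
        c = 1 if s >= p else 0
        count += c
        a //= p
        b //= p
    return count

print(carries(1, 24, 5))  # 2, matching the carry form of Kummer's theorem
```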



Where's the problem? In case you wonder where I found this statement of Kummer's theorem, here is the link: http://www.dms.umontreal.ca/~andrew/PDF/BinCoeff.pdf



Thank you!


Answer



I've noticed this, too. I believe that you're correct and that Granville's "equivalent statement of this theorem" is wrong.




As evidence, I give you two items: first, the counter-example you mentioned, and second, the published version of your linked article by Andrew Granville does not contain this line that "the power dividing $n\choose{m}$ is precisely the number of indices $i$ for which $n_i < m_i$."



The published version is a bit hard to find, but here's a version on Google Books that I found by Googling "Andrew Granville Binomial Coefficients" from within Google Books. You'll notice that this published version is an edited copy of the article on the Univ. of Montreal website.



When in doubt, look for the official version of the article. My guess is that the article on Andrew's Montreal website is a slightly earlier, unedited version.



EDIT: I just got an e-mail from Andrew himself on this question, and he said, "You should believe the published version -- I do vaguely remember making such a change. Thanks for pointing out the discrepancy."


Sunday 25 January 2015

trigonometry - How to calculate the sine, cosine, or tangent of an angle (simply explained)



I wanted to know how a calculator finds the sine, or any other trig function, knowing only the value of the angle.
I have been looking on the internet for answers because I was really interested in how Archimedes found out what the value of pi is.
Then that led me to calculating the sides of right triangles, but I wanted to know: since Archimedes didn't have a calculator, how did he find the length of the opposite side of the triangle? I searched a lot of pages but they were all so complex to me (I'm in middle school). So basically, can anyone simplify the way you (or Archimedes) can find the length of the opposite side of a right triangle?




Sorry if this is a broad/vague question in any way or if I have made any mistakes.
This is my first time asking a question online.



Thanks in advance to anyone who can help!


Answer



I think there are really two questions here:




  1. How did Archimedes find the length of the side of a right triangle opposite to a specified angle?


  2. How does a calculator evaluate trig functions?



For #1: Honestly, I think he just drew it and measured it. This then leads to the question, "How did they measure length in general?" and maybe that's actually what you were asking. Measurements of length were often based on body parts back in those days. See here for more information, including other techniques/devices.



For #2: A thorough answer is pretty technical and advanced so I'll try to simplify as much as I can.



As stated in another answer, calculators use Taylor series to evaluate trig functions. Basically a Taylor series is a way of expressing a function in terms of the four basic operations of addition, subtraction, multiplication, and division.



Every computer and every (electrically powered) calculator has a central processing unit, called a CPU for short. The CPU is made up of a bunch of tiny wires that carry electric current. When we give the computer or calculator commands (like opening or saving a file, or pressing buttons on the keyboard or calculator), the electricity gets routed through the wires in a way that makes those commands actually happen.




The most basic operations we can do with this electrical routing are addition and subtraction. Multiplication and division must be done with appropriate combinations of addition and subtraction. To put it another way, we can do addition and subtraction with basically one electrical route. But anything more complicated will require more than just one route. For example, when you tell your calculator to do $4 + 5$, it requires just one route to do it. But if you tell your calculator to do $4 \times 5$, the electricity running through the wires is really doing $4 + 4 + 4 + 4 + 4$, which takes four routes (one for each addition, and we have four additions there).



The same thing is true of more complicated operations and functions. They also require more than one electrical route, where each electrical route is basically an addition or subtraction. This is where the Taylor series helps us. The Taylor series tells us how to evaluate these functions using addition, subtraction, multiplication, and division. And remember that multiplication and division are themselves "defined" (in the electrical wiring in the CPU) in terms of addition and subtraction. So when you tell your calculator to evaluate the sine of some number, the electricity gets routed through the wires so that it actually calculates the expression given by the Taylor series.



Note that the Taylor series is an infinite series, which is of course impossible for a CPU to evaluate exactly in general, but calculators and computers have a fixed number of digits they can display anyway. Therefore it's enough to just use the first few terms of the Taylor series.
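A toy version of the idea, sketched in Python (an illustration of the principle, not what any particular calculator chip actually runs):

```python
from math import factorial, sin

def taylor_sin(x, terms=8):
    # sin(x) = x - x^3/3! + x^5/5! - ...  (first `terms` terms)
    return sum((-1)**k * x**(2*k + 1) / factorial(2*k + 1) for k in range(terms))

print(taylor_sin(1.0), sin(1.0))  # agree to many decimal places
```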



This sweeps a lot of details under the rug but I hope it clarifies things at least a little bit. If you want more info, Coursera is currently running a really good course on this. It's free. There's also one on EdX but I think it's a bit more advanced. I studied this stuff in school 12 years ago and I'm currently using both of these as refreshers before moving on to more advanced studies. The Coursera course has been really helpful for the basics, so I definitely recommend at least looking at that one.



Good luck and keep the intellectual curiosity going!



complex analysis - Showing $\int_{0}^{\infty} \frac{1}{(x^2+1)^2(x^2+4)}\,dx=\frac{\pi}{18}$ via contour integration



I want to show that:
$$\int_{0}^{\infty} \frac{1}{(x^2+1)^2(x^2+4)}=\frac{\pi}{18}$$

so considering:
$$\int_{\gamma} \frac{1}{(z^2+1)^2(z^2+4)}\,dz$$ where $\gamma$ is the curve going from $0$ to $-R$ along the real axis, from $-R$ to $R$ via a semi-circle in the upper half-plane, and then from $R$ to $0$ along the real axis.



Using the residue theorem we have that:
$$\int_{\gamma} \frac{1}{(z^2+1)^2(z^2+4)}=2\pi i \sum Res$$
so re-writing the integrand as $\displaystyle\frac{1}{(z-2i)(z+2i)(z+i)^2(z-i)^2}$



we can see that there is two simple poles at $2i$,$-2i$ and two poles of order 2 at $i$,$-i$.
Calculating the residues:
$$Res_{z=2i}=\lim_{z\rightarrow 2i} \displaystyle\frac{1}{(z+2i)(z+i)^2(z-i)^2}=\frac{1}{36i}$$




$$Res_{z=-2i}=\lim_{z\rightarrow 2i} \displaystyle\frac{1}{(z-2i)(z+i)^2(z-i)^2}=\frac{-1}{36i}$$



$$Res_{z=i}=\lim_{z\rightarrow i} \frac{d}{dz} \frac{1}{(z-2i)(z+2i)(z+i)^2}=\frac{2i}{36}+\frac{2}{24i}$$



$$Res_{z=-i}=\lim_{z\rightarrow -i} \frac{d}{dz} \frac{1}{(z-2i)(z+2i)(z-i)^2}=\frac{-2i}{36}+\frac{-2}{24i}$$



But now the sum of the residues is 0 and so when I integrate over my curve letting R go to $\infty$ (and the integral over top semi-circle goes to 0) I will just get 0?



Not sure what I've done wrong?

Thanks very much for any help


Answer



Consider the contour $C$ that spans along $-R$ to $R$ and around the arc $Re^{i\theta}$ for $0\le\theta\le \pi$.



Letting



$$f(z):=\frac{1}{(z^2+1)^2(z^2+4)}=\frac{1}{(z+i)^2(z-i)^2(z+2i)(z-2i)}$$



and we see the poles are located at $\pm i$ and $\pm 2i$. Letting $R \to \infty$, it is very clear that the denominator explodes, causing the integral around the arc to disappear. Then




$$\oint_C f(z)\, dz = 2\pi i(\operatorname*{Res}_{z = i}f(z) + \operatorname*{Res}_{z = 2i}f(z))$$



because $2i$ and $i$ are the only poles in $C$.
The pole of $i$ is of order 2:



$$
\operatorname*{Res}_{z = i}f(z) =
\lim_{z \to i} \frac{1}{1!}\frac{d}{dz} (z-i)^2 f(z)=
\lim_{z \to i} \frac{d}{dz}\frac{1}{(z+i)^2(z^2+4)}=
\lim_{z \to i} \frac{2(2z^2 +iz+4)}{(i+z)^3(4+z^2)^2}=-\frac{i}{36}
$$




The pole of $2i$ is simple:



$$
\operatorname*{Res}_{z = 2i}f(z) =
\lim_{z \to 2i} (z-2i)f(z) = \frac{1}{(-4+1)^2(2i+2i)}=-\frac{i}{36}
$$



So finally




$$
\int_0^\infty f(x)\, dx = \frac{1}{2}\int_{-\infty}^\infty f(x)\, dx = \pi i\left(-\frac{i}{36}-\frac{i}{36}\right) = \frac{\pi}{18}
$$
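A numeric confirmation with mpmath, as a cross-check:

```python
from mpmath import mp, quad, pi, inf

mp.dps = 25
val = quad(lambda x: 1 / ((x**2 + 1)**2 * (x**2 + 4)), [0, inf])
print(val, pi / 18)  # both 0.17453292...
```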


integration - $\int_{-\infty}^\infty \frac{e^{ax}}{1+e^x}dx$ with residue calculus



I'm trying to compute $\displaystyle \int_{-\infty}^\infty \frac{e^{ax}}{1+e^x}dx$, $(0 < a < 1)$. Let $f$ denote the integrand.




I'm using the rectangular contour given by the following curves:
$c_1: z(t) = R+it, t \in [0, 2\pi]$
$c_2: z(t) = -t+2\pi i, t \in [-R, R]$
$c_3: z(t) = -R + i (2\pi - t), t \in [0, 2\pi]$
$c_4: z(t) = t, t \in [-R, R]$



There is one singularity within the contour, at $z = \pi i$.
Expanding out the denominator as a power series shows that it's a simple pole, and allows us to evaluate the residue as
$\displaystyle \lim_{z \rightarrow \pi i} f(z)(z-\pi i) = - e^{a \pi i}$



This is computed by expanding $1+e^z$ as a Taylor series around $\pi i$. The first coefficient will be 0, and the second will be $-1$. The rest will have orders of $(z - \pi i)$ greater than 1, and will thus vanish when we take the limit.



So the integral over the entire contour is $- 2\pi i e^{a \pi i}$



An easy enough estimate on $c_1$ shows that the integral vanishes as $R \rightarrow \infty$.
With a variable change, $c_3$ is the same as $c_1$ and also vanishes.
$c_4$ becomes the integral we want when we take a limit.

$c_2$ becomes $c_4$ with a constant:



\begin{align*} \int_{c_2} f(z)dz &= \int_{-R}^{R} \frac{e^{-at}e^{a 2\pi i}}{1+e^{-t}e^{2\pi i}}dt
\\ &= e^{a 2 \pi i}\int_{-R}^{R} \frac{e^{-at}}{1+e^{-t}}dt
\\ &=e^{a 2 \pi i}\int_{R}^{-R} - \frac{e^{au}}{1+e^{u}}du \ \ \ (u = -t, du = -dt)
\\ &= e^{a 2 \pi i}\int_{-R}^{R} \frac{e^{au}}{1+e^{u}}du
\\ &= e^{a 2 \pi i} I(R)
\end{align*}



Where $I(R)$ is the line integral over $c_4$.
Putting it all together and taking the limit gives us
$\displaystyle \lim_{R \rightarrow \infty} I(R) = \frac{- 2\pi i e^{a \pi i}}{(1 + e^{a 2 \pi i}) }$




But this can't be the value of the integral, because it's a real-valued function integrated over $R$. I can't figure out where I'm going wrong. Note that I've avoided posting all the details of my solution since this is from a current problem set for a class on complex analysis.


Answer



I think you may just have a simple sign error. Using the same contour you describe, I get that



$$\int_{-R}^R dx \frac{e^{a x}}{1+e^x} + i \int_0^{2 \pi} dy \frac{e^{a (R + i y)}}{1+e^{R+i y}} - e^{i a 2 \pi} \int_{-R}^R dx \frac{e^{a x}}{1+e^x} - i \int_0^{2 \pi} dy \frac{e^{a (-R + i y)}}{1+e^{-R+i y}} = -i 2 \pi e^{i a \pi}$$



As $R \to \infty$, the second integral (because $a \lt 1$) and the fourth integral (because $a \gt 0$) vanish. Thus we have



$$\int_{-\infty}^{\infty} dx \frac{e^{a x}}{1+e^x} = - i 2 \pi \frac{e^{i a \pi}}{1-e^{i 2 a \pi}} = \frac{\pi}{\sin{\pi a}}$$
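A numeric spot-check at, say, $a = 1/3$:

```python
from mpmath import mp, quad, exp, sin, pi, inf

mp.dps = 20
a = mp.mpf(1) / 3
val = quad(lambda x: exp(a * x) / (1 + exp(x)), [-inf, inf])
print(val, pi / sin(pi * a))  # both 3.6275987...
```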



self learning - Proof of the infinite descent principle



Hi everyone, I wonder whether the following proof is correct. I would appreciate any suggestions.



Proposition: There is no sequence of natural numbers which is an infinite descent.



Proof: Suppose for contradiction that there exists a sequence of natural numbers which is an infinite descent. Let $(a_n)$ be such a sequence, i.e., $a_n>a_{n+1}$ for all natural numbers $n$.



We claim that if the sequence exists, then $a_n\ge k$ for all $k, n \in N$.




We induct on $k$. Clearly the base case holds, since each $a_n$ is a natural number and then $a_n \ge 0$ for all $n$. Now suppose inductively that the claim holds for $k\ge 0$, i.e., $a_n\ge k$ for all $n \in N$; we wish to show that also holds for $k+1$ and thus close the induction. Furthermore, we get a contradiction since $a_n \ge k$ for all $k, n \in N$, implies that the natural numbers are bounded.



$a_n>a_{n+1}$ since $(a_n)$ is an infinite descent. By the inductive hypothesis we know that $a_{n+1}\ge k$, so we have $a_n>k$ and then $a_n\ge k+1$.



To conclude, we have to show that the claim holds for every $n$. Suppose there is some $n_0$ such that $a_{n_0} < k$ for some $k$; this contradicts the claim above.

Thanks :)


Answer



I would argue a different way.




By assumption, for all $n$, $a_n > a_{n+1}$, i.e. $a_n \ge a_{n+1}+1$.

Therefore, since $a_{n+1} \ge a_{n+2}+1$, we get $a_n \ge a_{n+2}+2$.

Proceeding by induction, for any $k$, $a_n \ge a_{n+k}+k$.

But now set $k = a_n+1$. We get $a_n \ge a_{n+a_n+1}+a_n+1 > a_n$.




This is the desired contradiction.



This can be stated in the form: we can only go down as far as we are up.

Note: this sort of reminds me of some of the fixed point theorems in recursive function theory.


linear algebra - If $A$ is $mtimes n$ matrix and $AA^T$ is non singular show that $text{rank}(A) = m$

$AA^T$ can be non-singular only if the columns in $A$ are linearly independent and they span the column space of $A$. Because the columns are linearly independent, $A$ can be reduced to echelon form, and $A$ will have $m$ pivots only if $m < n$. Because we have $m$ pivots, $\text{rank}(A) = m$.




Is this a valid proof of: if $A$ is an $m\times n$ matrix and $AA^T$ is non-singular, show that $\text{rank}(A) = m$?



Thanks ^_^

Saturday 24 January 2015

calculus - Prove that $\int_0^\infty \frac{\sin nx}{x}dx=\frac{\pi}{2}$




There was a question on multiple integrals which our professor gave us on our assignment.




QUESTION: Changing order of integration, show that $$\int_0^\infty \int_0^\infty e^{-xy}\sin nx \,dx \,dy=\int_0^\infty \frac{\sin nx}{x}dx$$

and hence prove that $$\int_0^\infty \frac{\sin nx}{x}dx=\frac{\pi}{2}$$







MY ATTEMPT: I was successful in proving the first part.



Firstly, I can state that the function $e^{-xy}\sin nx$ is continuous over the region $\mathbf{R}=\{(x,y): 0 < x < \infty,\ 0 < y < \infty\}$, so the order of integration can be changed. Then

$$\int_0^\infty \int_0^\infty e^{-xy}\sin nx \,dx \,dy$$

$$=\int_0^\infty \sin nx \left\{\int_0^\infty e^{-xy}\,dy\right\} \,dx$$
$$=\int_0^\infty \sin nx \left[\frac{e^{-xy}}{-x}\right]_0^\infty \,dx$$
$$ =\int_0^\infty \frac{\sin nx}{x}dx$$



However, the second part of the question yielded a different answer.



$$\int_0^\infty \int_0^\infty e^{-xy}\sin nx \,dx \,dy$$
$$=\int_0^\infty \left\{\int_0^\infty e^{-xy} \sin nx \,dx\right\} \,dy$$
$$=\int_0^\infty \frac{ndy}{\sqrt{n^2+y^2}}$$




which is a divergent integral, not the desired result.



Where did I go wrong? Can anyone help?


Answer



You should have obtained $$\int_{x=0}^\infty e^{-yx} \sin nx \, dx = \frac{n}{n^2 + y^2}.$$ There are a number of ways to show this, such as integration by parts. If you would like a full computation, it can be provided upon request.






Let $$I = \int e^{-xy} \sin nx \, dx.$$ Then with the choice $$u = \sin nx, \quad du = n \cos nx \, dx, \\ dv = e^{-xy} \, dx, \quad v = -\frac{1}{y} e^{-xy},$$ we obtain $$I = -\frac{1}{y} e^{-xy} \sin nx + \frac{n}{y} \int e^{-xy} \cos nx \, dx.$$ Repeating the process a second time with the choice $$u = \cos nx, \quad du = -n \sin nx \, dx, \\ dv = e^{-xy} \, dx, \quad v = -\frac{1}{y} e^{-xy},$$ we find $$I = -\frac{1}{y}e^{-xy} \sin nx - \frac{n}{y^2} e^{-xy} \cos nx - \frac{n^2}{y^2} \int e^{-xy} \sin nx \, dx.$$ Consequently $$\left(1 + \frac{n^2}{y^2}\right) I = -\frac{e^{-xy}}{y^2} \left(y \sin nx + n \cos nx\right),$$ hence $$I = -\frac{e^{-xy}}{n^2 + y^2} (y \sin nx + n \cos nx) + C.$$ Evaluating the definite integral, for $y, n > 0$, we observe $$\lim_{x \to \infty} I(x) = 0, \quad I(0) = -\frac{n}{n^2 + y^2},$$ and the result follows: $$\int_{y=0}^\infty \frac{n}{n^2 + y^2} \, dy = \left[ \arctan \frac{y}{n} \right]_0^\infty = \frac{\pi}{2}.$$
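As a side check of the final value (an editorial addition), the sine integral $\operatorname{Si}$ from SciPy converges to $\pi/2$, and the substitution $t = nx$ shows the answer is independent of $n > 0$:

```python
# Si(X) = int_0^X sin(t)/t dt; substituting t = nx shows
# int_0^inf sin(nx)/x dx = Si(inf) = pi/2 for every n > 0.
import numpy as np
from scipy.special import sici

for n in (1.0, 5.0):
    si, _ = sici(n * 1e8)        # Si evaluated at a large upper limit
    print(n, si, np.pi / 2)      # ~1.5707963 in every case
```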


trigonometry - Trouble with a Trig Identity and Euler's Formula



I have the following expression




$\cos^2(s) + 2\sin^2(s)$



and need to show that it is equivalent to the following expression



$1 + \sin^2(s)$



I have no idea where to start, however. I'm familiar with the use of Euler's Formula to derive the angle addition and subtraction formulas--and from there, the double and half angle formulas.



Where do we start to go from the first statement above to the second? Can you tie the trig identities you use back to Euler's Formula?



Answer



$$\cos^2(x) + 2\sin^2(x) = \cos^2(x) + \underbrace{\sin^2(x) + \sin^2(x)}_{2\sin^2(x)}$$



Now, since $\cos^2(x) + \sin^2(x) = 1$



you easily get



$$1 + \sin^2(x)$$
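For completeness, here is a one-line symbolic check of the identity (an editorial addition, assuming SymPy is available):

```python
# Check that cos^2(s) + 2 sin^2(s) - (1 + sin^2(s)) simplifies to zero.
import sympy as sp

s = sp.symbols('s', real=True)
print(sp.simplify(sp.cos(s)**2 + 2*sp.sin(s)**2 - (1 + sp.sin(s)**2)))  # 0
```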


linear algebra - Proof that $(AA^{-1}=I) \Rightarrow (AA^{-1} = A^{-1}A)$




I'm trying to prove a pretty simple statement - the commutativity of multiplication of a matrix and its inverse.



But I'm not sure if my proof is correct, because I'm not very experienced. Could you please take a look at it?






My proof:





  • We know, that $AA^{-1}=I$, where $I$ is an identity matrix and $A^{-1}$ is an inverse matrix.

  • I want to prove, that it implies $AA^{-1}=A^{-1}A$



\begin{align}
AA^{-1}&=I\\
AA^{-1}A&=IA\\
AX&=IA \tag{$X=A^{-1}A$}\\
AX&=A
\end{align}
At this point we can see that $X$ must be a multiplicative identity for the matrix $A$, so $X$ must be the identity matrix $I$.



\begin{align}
X = A^{-1}A &= I\\
\underline{\underline{AA^{-1} = I = A^{-1}A}}
\end{align}


Answer



Your claim is not quite true. Consider the example
\begin{align}
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0
\end{pmatrix}
\begin{pmatrix}
1 & 0\\
0 & 1\\
0 & 0
\end{pmatrix} =
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}.
\end{align}
Suppose $A, B$ are square matrices such that $AB = I$. Observe
\begin{align}
BA= BIA= BA\,BA = (BA)^2 \ \ \Rightarrow \ \ BA(I-BA) = 0.
\end{align}
Moreover, since $AB = I$ is invertible, $A$ and $B$ are themselves invertible (which is true only in finite-dimensional vector spaces), hence $BA$ is invertible, and it follows that
\begin{align}
I-BA=0.
\end{align}



Note: we have used the fact that $A, B$ are square matrices when we inserted $I$ to write $BA = BIA$.
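Here is a small numerical illustration of both halves of this answer (an editorial addition, assuming NumPy): the rectangular pair above satisfies $AB=I$ but not $BA=I$, while for a square invertible matrix the two products agree.

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 1., 0.]])           # 2x3
B = A.T                                # 3x2
print(np.allclose(A @ B, np.eye(2)))   # True:  AB is the 2x2 identity
print(np.allclose(B @ A, np.eye(3)))   # False: BA is diag(1, 1, 0)

M = np.array([[2., 1.],
              [1., 1.]])               # square and invertible
Minv = np.linalg.inv(M)
print(np.allclose(M @ Minv, Minv @ M)) # True: the products commute
```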


real analysis - Function that is not uniformly differentiable



Define a differentiable $f: A \to \mathbb R$ to be uniformly differentiable if and only if for every $\varepsilon >0$ there exists $\delta >0$ such that



$$ |h| < \delta \implies \left| {f(x + h) - f(x) \over h} -f'(x) \right | < \varepsilon$$




I am looking for an example of $f$ that is not uniformly differentiable. My idea was to choose $f$ with $f'$ sufficiently steep. For example, $f(x) = {1 \over x}$. But on $[1, \infty)$



$$ \left| {f(x + h) - f(x) \over h} -f'(x) \right | = \left| {h \over x^2 (x+h)} \right | \le \left| {h \over x+h} \right |$$



and I don't know how to use this. On $(0,1)$ I similarly can't find a lower bound on this either. Maybe $1/x$ is in fact uniformly differentiable, but I couldn't bound it from above either.



My questions are: Is $1/x$ uniformly differentiable or not, and how does one show it? Also, can you please give me an example of an $f$ that is not uniformly differentiable?


Answer



Let $f(x) = 1/x$. As you showed,




$$
\left| \frac{f(x + h) - f(x)}{h} - f'(x) \right|
= \left| \frac{h}{(x+h)x^2} \right|.
$$
For any $\delta > 0$, you can just let $h = x = \frac{1}{M}$ for some sufficiently large $M$. Then
$$
\left| \frac{f(x + h) - f(x)}{h} - f'(x) \right|
= \left| \frac{1\; / \; M}{(2 \; / \; M) (1 \; / \; M)^2} \right|
= \frac{M^2}{2}
$$

so this function is certainly not uniformly differentiable.
The point is that the condition for uniform differentiability requires the same value of $\boldsymbol\delta$ for all $\boldsymbol{x, h}$, i.e. $\delta$ does not depend on $x$ and $h$.
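A quick numerical rendering of this blow-up (an editorial check, using nothing beyond the standard library): with $h = x = 1/M$ the error term is exactly $M^2/2$, so no single $\delta$ can serve all $x$.

```python
f = lambda x: 1.0 / x
fprime = lambda x: -1.0 / x**2

for M in (10.0, 100.0, 1000.0):
    x = h = 1.0 / M
    err = abs((f(x + h) - f(x)) / h - fprime(x))
    print(M, err, M**2 / 2)   # err equals M^2/2, which grows without bound
```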


calculus - Use the Mean Value Theorem to prove inequality




$f$ is a continuous function defined on $[a,b]$ and differentiable on $]a,b[$ with $f'(x)>0$ on $]a,b[$.



Use the mean value theorem to prove that for any $x, y \in [a,b]$, if $y > x$ then $f(y)>f(x)$.



I understand what the MVT is and what the corollaries are, but I just can't figure out how to do these types of questions!



Thanks!


Answer



From the MVT, if $y \gt x$, there exists some $c$ in the interval $(x,y)$ such that the slope of the chord joining $x$ and $y$ equals $f'(c)$.
So $\frac{f(y)-f(x)}{y-x}$ equals $f'(c)$. Since it is given that the derivative is positive throughout the interval, $f'(c) \gt 0$.
It follows that $f(y)-f(x)$ is positive, since $y-x$ is positive.
Thus, $f(y) \gt f(x)$ if $y \gt x$.


real analysis - $\lim\limits_{n \to{+}\infty}{\sqrt[n]{n!}}$ is infinite




How do I prove that $ \displaystyle\lim_{n \to{+}\infty}{\sqrt[n]{n!}}$ is infinite?


Answer



By considering Taylor series, $\displaystyle e^x \geq \frac{x^n}{n!}$ for all $x\geq 0,$ and $n\in \mathbb{N}.$ In particular, for $x=n$ this yields $$ n! \geq \left( \frac{n}{e} \right)^n .$$



Thus $$\sqrt[n]{n!} \geq \frac{n}{e} \to \infty.$$
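A quick numerical check of the bound (an editorial addition, using only the standard library; `lgamma` avoids computing the huge factorial directly):

```python
# n!^(1/n) >= n/e, and both sides tend to infinity.
import math

for n in (10, 100, 1000, 10000):
    root = math.exp(math.lgamma(n + 1) / n)  # n!^(1/n), using log Gamma(n+1) = log n!
    print(n, root, n / math.e)               # root always exceeds n/e
```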


elementary number theory - Prove that if $d$ divides $n$, then $2^d -1$ divides $2^n -1$

Prove that if $d$ divides $n$, then $2^d -1$ divides $2^n -1$.




Use the identity $x^k -1 = (x-1)(x^{k-1} + x^{k-2} + \cdots + x +1)$.
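The hint applies with $x = 2^d$ and $k = n/d$, so that $2^n - 1 = x^k - 1$ is divisible by $x - 1 = 2^d - 1$. A brute-force check with Python's exact integers (an editorial addition):

```python
# Check (2^d - 1) | (2^n - 1) whenever d | n.
for d, n in [(3, 12), (5, 30), (7, 91), (11, 121)]:
    assert n % d == 0
    print(d, n, (2**n - 1) % (2**d - 1) == 0)  # True in every case
```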

real analysis - Outer measure of a nested sequence of non-measurable sets




Let $\bigcup_{n=1}^\infty E_n=E$ with $ E_{n} \subseteq E_{n+1} $; then $\lim\limits_{n\to \infty} \mu^*(E_n) = \mu^*(E)$, even if each $E_n$ is a non-measurable set, where $\mu^*$ is Lebesgue outer measure and $E$ is a bounded set. Proof sketch, please?



This theorem allows the short proofs of Dominated convergence theorem, Vitali Convergence Theorem, Monotone Convergence Theorem , Egorov's theorem and Luzin's theorem without dwelling much on the machinery of measure theory.


Answer



The theorem is true for measurable sets.



Proof for the general case:



There is a subsequence $\{E_k\}$ such that $\mu^*(E_{k+1}) - \mu^*(E_k) \le \frac{\epsilon}{2^{k+1}}$.




Let's first construct such a subsequence:



$\lim\limits_{n\to \infty}\mu^*(E_n) \ge \mu^*(E_{n+1}) \ge \mu^*(E_n)$



Choose $E_1$ such that $\lim\limits_{n\to \infty}\mu^*(E_n) -\mu^*(E_1) \le \frac{\epsilon}{2}$.



Choose $E_2$ such that $E_1 \subseteq E_2$ and $\lim\limits_{n\to \infty}\mu^*(E_n) -\mu^*(E_2) \le \frac{\epsilon}{2^3}$.



Choose $E_3$ such that $E_2 \subseteq E_3$ and $\lim\limits_{n\to \infty}\mu^*(E_n) -\mu^*(E_3) \le \frac{\epsilon}{2^4}$, and continue by induction.




Step 1: Cover $E_k$ with a union of open intervals $\bigcup_{i=1}^\infty I_i = L_k$ such that $\mu^*(L_k) \le \mu^*(E_k) + \frac{\epsilon}{2^k}$.



By the Carathéodory condition (applied to the measurable set $L_k$), $\mu^*(E_{k+1} \cap L_k^c) = \mu^*(E_{k+1}) - \mu^*(E_{k+1} \cap L_k)$, and since $E_k \subseteq E_{k+1} \cap L_k \subseteq L_k$ we have $\mu^*(E_k) \le \mu^*(E_{k+1} \cap L_k) \le \mu^*(L_k)$.



Therefore $\mu^*(E_{k+1} \cap L_k^c) \le \mu^*(E_{k+1}) - \mu^*(E_k) \le \frac{\epsilon}{2^{k+1}}$.



Step 2: Let $G_{k+1} = E_{k+1} \cap L_k^c$.




$L_k \cup G_{k+1}$ contains $E_{k+1}$.



Now cover $L_k \cup G_{k+1}$ with a union of intervals $\bigcup_{i=1}^\infty I_i = H_{k+1}$ such that $\mu^*(H_{k+1}) \le \mu^*(L_k \cup G_{k+1}) + \frac{\epsilon}{2^{k+1}}$.



$\mu^*(H_{k+1}) \le \mu^*(L_k) + \mu^*(G_{k+1}) + \frac{\epsilon}{2^{k+1}} \le \mu^*(E_k) + \frac{\epsilon}{2^k} + \frac{2\epsilon}{2^{k+1}}$



Now use $H_{k+1}$ as the cover for $E_{k+1}$.




As seen, $\mu^*(E_{k+1}) \le \mu^*(H_{k+1}) \le \mu^*(E_{k+1}) + \frac{\epsilon}{2^k} + \frac{2\epsilon}{2^{k+1}}$.



Now apply Steps 1 and 2 to $E_{k+1}$ and $E_{k+2}$ to get:



$\mu^*(E_{k+2}) \le \mu^*(H_{k+2}) \le \mu^*(E_{k+2}) + \frac{\epsilon}{2^k} +\frac{2\epsilon}{2^{k+1}} +\frac{2\epsilon}{2^{k+2}}$



It is obvious that $E \subseteq \bigcup_{k=1}^\infty H_k$.



$H_k \subseteq H_{k+1}$




$ \mu^*(E_{k}) \le \mu^* (H_{k}) \le \mu^*(E_{k}) + 4 \epsilon $



Notice that the theorem is valid for the $H_k$, as each is a measurable set (a union of intervals).
So $\lim\limits_{k\to \infty} \mu^*(H_k) \ge \mu^*(E)$.



$\lim\limits_{k\to \infty}\mu^*(E_{k}) \le \lim\limits_{k\to \infty} \mu^*(H_k) \le \lim\limits_{k\to \infty}\mu^*(E_{k}) + 4\epsilon$



$\lim\limits_{k\to \infty}\mu^*(E_{k}) \le \mu^*(E) \le \lim\limits_{k\to \infty}\mu^*(E_{k}) + 4\epsilon$



Because $\epsilon$ is arbitrary, the proof is complete.




Remark: The proof is straightforward for measurable sets but not so for arbitrary sets. The statement is nevertheless true in general; it is given as an exercise in The Integrals of Lebesgue, Denjoy, Perron, and Henstock (Graduate Studies in Mathematics, Volume 4) by Russell A. Gordon, where it appears as Theorem 1.15.





finding all complex roots of equation

Let $z = 1+i$.




Find all complex solutions such that $z^2 + \bar z^2 = 0$.



My working out:



$z^2 = -\bar z^2 = -(1-i)^2 = 2i$



so $z^2 = 2i$



hence $r^2 = 2 \implies r = \sqrt 2$




arg: $2\theta = \frac{\pi}{2} + 2k\pi \implies \theta = \frac{\pi}{4} + k\pi$ where $k = 0, 1$



overall roots are $z = \sqrt 2 \operatorname{cis} \left(\frac\pi4 +k\pi\right)$



Is my working out and solution correct?
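As a quick numerical check of this working (an editorial addition, standard library only): the two roots $\sqrt 2\operatorname{cis}(\pi/4 + k\pi)$, $k=0,1$, are $\pm(1+i)$, and both satisfy the original equation.

```python
import cmath, math

for k in (0, 1):
    z = cmath.rect(math.sqrt(2), math.pi / 4 + k * math.pi)  # sqrt(2) cis(pi/4 + k pi)
    print(z, abs(z**2 + z.conjugate()**2))  # roots ~(1+1j) and ~(-1-1j); residual ~0
```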

calculus - Show that the Mean Value Theorem does not apply to $f(x)=x^{-2}$ on $(-1,1)$



Show that the Mean Value Theorem does not apply to $f(x)=x^{-2}$ on $(-1,1)$. Is this a contradiction to the Mean Value Theorem?



My solution so far:



$$f(1)-f(-1)=f'(c)(1-(-1)) \Leftrightarrow 1-1=2f'(c) \Leftrightarrow f'(c)=0$$




Since $ f'(x)=-2x^{-3} \Rightarrow f'(c)=-2c^{-3}$



$$f'(c)=-2c^{-3}=0.$$



This is not satisfied by any value $c$ so I think I've now got the first part of the task. However, I'm not quite sure how do I figure if this contradicts with the Mean Value Theorem. I know that the MVT tells us that if $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$, there is a point $c\in (a,b )$ such that $f(b)-f(a)=f'(c)(b-a)$ but I don't know how to apply it here.


Answer



The mean value theorem only applies to functions that are continuous on the closed interval. But $x^{-2}=\frac{1}{x^2}$ is not even defined at $x=0$, and the singularity is not removable. Since the hypotheses fail on $[-1,1]$, there is no contradiction.


References to Information on Alternative Series Representations




I've come across a problem with the size of coefficients of a series expansion. Here's an example of an expression that I expand into a power series:



$\displaystyle\frac{1-e^{3int}}{1-e^{int}}$ where $n \in \mathbb{N}$ and $t \in \mathbb{R}$



The problem is that I'd like to work with small coefficient values, but the values increase exponentially with $n$.



One question that seems natural to ask in this case is, "Can we find another series with smaller coefficients?". Let me explain a little. I'm fiddling with an integration technique for specific forms of integrals, and I use the series expansions to help with the integration. I can fairly easily work with more terms, but it's hard to work with larger values of coefficients. This may seem a bit contrary to many mathematicians' experiences.




What I'm getting at is that I'd like to find alternatives to power series expansions. In other words, is there literature on series expansions other than power series expansions? I've seen series like Dirichlet series and whatnot, but I'm specifically interested in finding series expansions for functions of exponentials. And, of course, I'm hoping that I can find ones whose terms are smaller than the corresponding power series expansions.


Answer



I'm not entirely sure how helpful this suggestion (this was supposed to be a comment, but it got very long) will ultimately be, but you might be interested in knowing that there exists a generalization of the usual Taylor series, called the (Lagrange-)Bürmann series.



Briefly, given a function $f(x)$ to be expanded, a "basis function" $g(x)$, an expansion point $a$, and the assumption that $g^{\prime}(x)$ is nonzero over some interval containing $a$, such that $g(x)-g(a)$ is monotonic as $x$ increases over said interval, the Bürmann series of $f(x)$ with basis function $g(x)$ and expansion point $x=a$ reads as



$$f(x)=f(a)+\sum_{k=1}^{\infty} \frac{\beta_k(a)}{k!}(g(x)-g(a))^k$$



where the $\beta_k(x)$ satisfy the recursion




$$\beta_0(x)=f(x),\qquad \beta_k(x)=\frac1{g^{\prime}(x)}\frac{\mathrm d}{\mathrm dx}\beta_{k-1}(x)$$



The series may or may not converge, of course, so the analysis of convergence (which highly depends on the nature of your $f(x)$ and $g(x)$) is still your burden.



It's not terribly popular, probably because it's difficult to choose $g(x)$ such that your expansion of $f(x)$ is meaningful, but it's worth a shot.
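To make the recursion concrete, here is a small symbolic sketch (an editorial addition, assuming SymPy); the choices $f(x)=e^x$, $g(x)=\sin x$, and $a=0$ are purely illustrative. Truncating the Bürmann series after $N$ terms reproduces the Taylor series of $f$ through order $N$, which gives a cheap consistency check.

```python
import sympy as sp

x = sp.symbols('x')
f, g, a, N = sp.exp(x), sp.sin(x), 0, 4   # illustrative choices; g'(a) != 0

beta = f                                   # beta_0 = f
series = f.subs(x, a)
for k in range(1, N + 1):
    beta = sp.simplify(sp.diff(beta, x) / sp.diff(g, x))       # the beta_k recursion
    series += beta.subs(x, a) / sp.factorial(k) * (g - g.subs(x, a))**k

print(sp.series(series, x, 0, N + 1))  # 1 + x + x**2/2 + x**3/6 + x**4/24 + O(x**5)
```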


Mathematical Induction (summation): $\sum^n_{k=1} k2^k =(n-1)(2^{n+1})+2$




I am stuck on this question from the IB Cambridge HL math textbook about mathematical induction. I am sorry about the bad formatting; I am new and have no idea how to write the summation sign.



Using mathematical induction, prove that
$$\sum^n_{k=1} k2^k =(n-1)(2^{n+1})+2$$



[correction made]



I tried solving it and got stuck on the $n=k+1$ step. First I set $n=1$ and both sides equaled $2$; then I assumed the statement for $n=k$ and got an expression which I don't know how to write here because of the formatting; then I tried $n=k+1$.



Thanks again


Answer



We need to prove that
$$
\sum_{k=1}^nk2^k=(n-1)(2^{n+1})+2
$$
Consider $P_1$, the case $n=1$. The Left Hand Side (LHS) and Right Hand Side (RHS) are evaluated as follows.
$$\sum_{k=1}^1k2^k=1\times2=2\quad\quad\quad\quad\quad\quad(1-1)(2^2)+2=2$$




Now, we assume that $P_m$ holds for some natural number $m$. (The trick here is that you will need to use this result later.)
$$
\sum_{k=1}^mk2^k=(m-1)(2^{m+1})+2
$$



It must be shown that $P_{m+1}$ holds too. Let us prove it. The RHS, which is the easier side, is
$$
\begin{align*}
[(m+1)-1](2^{m+2})+2&=m(2^{m+2})+2.
\end{align*}
$$
What we need to do is to show that the LHS has this form:



$$
\begin{align*}
\sum_{k=1}^{m+1}k2^k&=\sum_{k=1}^mk2^k+(m+1)2^{m+1}\\
&=(m-1)(2^{m+1})+2+(m+1)2^{m+1}\\
&=(m-1)(2^{m+1})+(m+1)2^{m+1}+2\\
&=(2m)(2^{m+1})+2\\
&=m(2^{m+2})+2
\end{align*}
$$
Since $P_1$ is true, and $P_m$ true $\Rightarrow P_{m+1}$ true, by mathematical induction $P_n$ is true for $n=1, 2, \ldots$



Proven
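A brute-force check of the closed form for small $n$ (an editorial addition, standard library only):

```python
for n in range(1, 11):
    lhs = sum(k * 2**k for k in range(1, n + 1))
    rhs = (n - 1) * 2**(n + 1) + 2
    print(n, lhs == rhs)   # True for every n tested
```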


Unclear possible rounding in converting decimal to octal number



I am doing computer science homework that wants me to write 3 pieces of code. I just started, and basically I have to write a program that converts a decimal number to an octal number. I looked at my professor's example of the process and a YouTube example of how to convert a decimal to an octal number, and I am really confused. I need to first understand the math in order to write the program, of course. My professor gave an example with the number 255. Here are the professor's steps:




Key: R=Remainder



Step 1: 255/8=31 R7



Step 2: 31/8=3 R7



Step 3: 3/8=0 R3



Then the answer is the remainders written together (not summed) backwards, for an answer of 377.




However, when I do the same steps on my calculator, this is what happens.



Step 1: 255/8=31 R8 Calculator gives 31.875



Step 2: 31/8=3 R8 Calculator gives 3.875



Step 3: 3/8=0 R3 Calculator gives .375



So at first I thought my calculator was just rounding up, and that I would just have to account for that when programming. However, that rule doesn't hold up in step 3. My little rule was to subtract 1 from whatever remainder my calculator gives: steps 1 and 2 would then match up with the professor's, but step 3 would fail, as it actually matches up originally. So I looked on YouTube for a tutorial on this topic, and found the same sort of logic, just with a different number. Then I thought: OK, so subtract 1 each time until the last step, and in the last step leave it. I looked on this forum for another example, and my new rule still fails; it was actually the way I originally would have done it in the first place. Here is a link to this site's thread on the topic that I used:




Decimal Number to Octal



So let's look at this forum's steps for converting the number 9243, which are as follows:



Step 1: 9243/8=1155 R3 My calculator answer is 1155.375



Step 2: 1155/8=144 R3 My calculator answer is 144.375



Step 3: 144/8=18 R0 My calculator answer is 18




Step 4: 18/8=2 R2 My calculator answer is 2.25



Step 5: 2/8= 0 R2 My calculator answer is .25



Answer is 22033



So this forum's answer does it exactly the way I expect, but the other two examples, from YouTube and my professor, do something a little different. So as you can see, this is kind of confusing, and if anyone can help me out, that would be greatly appreciated. Thanks in advance.



P.S. My tag may be wrong, as there was no tag specifically for octal numbers, conversion, or decimals, or anything else related to this thread.



Answer



Your calculator does full decimal division, while what you want for base conversion is integer division with remainder. When your prof writes $255/8=31$ R $7$, your calculator continues the division, computing $7/8=0.875$. To use your calculator for this, take the integer part of the quotient for the next division step, then multiply the fractional part by the base to get the remainder. So you would do $255/8=31.875$: pass $31$ on to the next step, and the remainder is $0.875\cdot 8=7$. At the end, when you do $3/8=0.375$, the quotient is $0$, so you know to stop, and the remainder is $0.375 \cdot 8=3$. In the conversion of $9243$, when you get $18.0$ as the quotient the remainder is $0$, and when you get $2.25$ the remainder is $0.25 \cdot 8=2$.
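Since the original homework is a program, here is a minimal sketch of the repeated-division algorithm described above (an editorial addition, standard library only). `divmod` performs the integer division and remainder in one step, so none of the calculator rounding issues arise; Python's built-in `oct` provides a cross-check.

```python
def to_octal(n: int) -> str:
    """Convert a nonnegative decimal integer to its octal digit string."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, 8)            # quotient feeds the next step; r is a digit
        digits.append(str(r))
    return "".join(reversed(digits))   # the remainders, read backwards

print(to_octal(255))    # '377'   -- the professor's example
print(to_octal(9243))   # '22033' -- the linked forum example
print(oct(9243))        # '0o22033' -- built-in cross-check
```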


Friday 23 January 2015

calculus of variations - Basis for solution space of Jacobi accessory equation

The Jacobi accessory equation has importance as a means of checking candidates for functional extrema. A book of mine ($\textit{Calculus of variations}$, by van Brunt) proves that we can find solutions to the Jacobi accessory equation by differentiating the general solution to the Euler-Lagrange equation; that is, if the latter has a general solution $y$ involving parameters $c_1, c_2$, then the functions
$$u_1(x) = \frac{\partial y}{\partial c_1}, \quad u_2(x) = \frac{\partial y}{\partial c_2}$$

evaluated at some particular $(c_1, c_2)$ are solutions to the Jacobi accessory equation (given basic smoothness assumptions). However, van Brunt goes on to claim without proof that $u_1, u_2$ $\textit{form a basis for the solution space}$. Can anyone suggest how this might be proved?

general topology - Continuous bijection from $\mathbb{R}^{2} \to \mathbb{R}$



Can anyone give an example of a continuous bijection from $\mathbb{R}^{2} \to \mathbb{R}$?


Answer



Ok, I will add my hint as an answer so that it's not unanswered.






  • Not possible is my guess. If you remove finitely many points from $\mathbb R^2$ it remains connected, whereas $\mathbb R$ does not. That should be a hint.



real analysis - Are polynomials with integer coefficients uniquely determined by their coefficients?



Assume $f$ and $g$ are polynomials with $f = g$. More explicitly,



$$ f = a_{n}z^{n} + \ldots + a_{1}z + a_{0} = b_{n}z^{n} + \ldots + b_{1}z + b_{0} = g $$



where $a_{0}, \ldots, a_{n}, b_{0}, \ldots, b_{n} \in \mathbb{Z}$ and $z \in \mathbb{C}$,



then $$ a_{0} = b_{0} \ , \ldots, \ a_{n} = b_{n} $$




Attempted proof:



$ f - g = 0 $, thus we can now equate coefficients;



i.e. $$ a_{0} - b_{0} = 0 \ , \ldots, \ a_{n} - b_{n} = 0 $$



$$ \implies a_{0} = b_{0} \ , \ldots, \ a_{n} = b_{n} $$



$\square$




So, is this essentially stating that polynomials are uniquely determined by their coefficients?



This question arises as I am trying to prove that the set of all algebraic numbers is countable, and my proof will in turn rely upon a statement such as this: if each polynomial is uniquely determined in this manner, then we should be able to find some correspondence (not exactly sure if it will be an isomorphism or just a monomorphism?) between the set of all polynomials with integer coefficients of degree $n$ and the set of all $n$-tuples with integer components.



I believe we can then show that this set of all $n$-tuples with integer components is countable, because it is the cartesian product:



$$ \mathbb{Z} \times \ldots \times \ \mathbb{Z} \cong \mathbb{Z}^{n} $$



and the finite cartesian product of countable sets is countable. We then can see that the set of all polynomials with integer coefficients is countable, which i feel puts me a step closer to solving said problem.




I feel that, given this is true, then since every polynomial has only finitely many zeroes, we can use the fact that a countable union of finite sets is countable to finish the proof.



Does this seem like a feasible plan for proving the statement?


Answer



If $f(x)=g(x)$ as functions, then set $h(x)=f(x)-g(x)$, a polynomial since $f,g$ are. Since $f=g$ as functions, $h(x)$ is identically zero, i.e. zero for every $x$. If $h(x)$ isn't the zero polynomial, then it has some degree, say $m$. But then it would have at most $m$ complex zeroes. But $h(x)$ has infinitely many zeroes. Hence $h(x)$ must be the zero polynomial, so in fact the coefficients of $f,g$ must be identical.


summation - Generating functions and central binomial coefficient




How would you prove that the generating function of $\binom{2n}{n}$ is $\frac{1}{\sqrt{1-4x}}$?



More precisely, prove that (for $|x|<\frac{1}{4}$):



$$\sum^{\infty}_{n=0}x^n\binom{2n}{n}=\frac{1}{\sqrt{1-4x}}$$



Background: I was trying to solve
$$S=\sum^{\infty}_{n=0}\frac{(2n+1)!}{8^n(n!)^2}=\sum^{\infty}_{n=0}\frac{(2n+1)}{8^n}\binom{2n}{n},$$
which, if we let $f(x)$ be the generating function in question, is simply

$$f(x)+2xf'(x)$$
with $x=\frac{1}{8}$. Is there a simple proof of the first identity? Wikipedia states it without a proper reference (the reference provided states it without proof). Is there an easier way of calculating $S$? (It is $\sqrt{8}$, by the way.)


Answer



Here is a simple derivation of the generating function: start with
$$ (1+x)^{-1/2} = \sum_k \binom{-1/2}{k}x^k.$$
Now,
\begin{align*}
\binom{-1/2}{k} &= \frac{\left(-\frac{1}{2}\right)\left(-\frac{3}{2}\right)\cdots\left(-\frac{1}{2}-k+1\right)}{k!} \\
&= (-1)^k \frac{1\cdot 3\cdot 5\cdots (2k-1)}{2^k k!} \\
&= (-1)^k \frac{(2k)!}{2^k 2^k k!\,k!} \\
&= (-1)^k 2^{-2k}\binom{2k}{k}.
\end{align*}

Finally replace $x$ by $-4x$ in the binomial expansion, giving
$$
(1-4x)^{-1/2} = \sum_k (-1)^k 2^{-2k}\binom{2k}{k}\cdot (-4x)^k
= \sum_k \binom{2k}{k}x^k.$$
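A numerical check of both the generating function and the value of $S$ (an editorial addition; `math.comb` needs Python 3.8+). The point $x = 1/8$ lies inside the radius of convergence $|x| < 1/4$.

```python
from math import comb, sqrt

x = 0.125
gf = sum(comb(2 * n, n) * x**n for n in range(200))
print(gf, 1 / sqrt(1 - 4 * x))   # both ~1.4142135 = sqrt(2)

S = sum((2 * n + 1) * comb(2 * n, n) / 8**n for n in range(200))
print(S, sqrt(8))                # both ~2.8284271 = sqrt(8)
```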


calculus - Closed form solution for this integral?

I'm hitting a roadblock in finding an expression (closed form preferably) for the following integral:



\begin{equation}
\int^{+\infty}_0 x^b \left( 1-\frac{x}{u} \right)^c \exp(-a x^3)\, dx
\end{equation}



where $a,b$ are positive constants; $b>1$ is an odd multiple of $0.5$, while $c$ is a positive or negative odd multiple of $0.5$; $u$ is a (positive) parameter.



Things I have considered or tried:




  • look up in tables (Gradshteyn and Ryzhik): there are very few explicit results for integrals involving $\exp(-a x^3)$ (or for the other factors after transforming via $y=x^3$). Also, tabulated results involving $\exp(-a x^p)$ for more general $p$ do not include the other factors $x^b (1-x/u)^c$. One exception is (3.478.3):
    \begin{equation}
    \int^{u}_0 x^b (1-ux)^c \exp(-a x^3) dx,
    \end{equation}
    but the limits of integration do not match my case;


  • there is a closed form solution (3.478.1) for the simpler integral
    \begin{equation}
    \int^{+\infty}_0 x^{d-1} \exp(-a x^3) dx = \frac{a^{-d/3}}{3} \Gamma(d/3).
    \end{equation}
    (NB: there is also an expression for the indefinite integral.)
    A binomial expansion of $[1-(x/u)]^n$ for integer $n$ would produce a solution in series form. However, in my case, the exponents $b$ and $c$ are strictly half-integer. For the same reason, integration by parts does not lead to a simpler integral without the factor $[1-(x/u)]^c$;


  • Wolfram Math online did not produce a result;



  • the integral is an intermediate step in a longer analysis, so numerical solution (with given values for the parameter) is not practical.




Grateful for any pointers or solution.

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How do I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...