Tuesday 31 March 2015

can a number of the form $x^2 + 1 $ be a square number?




I have been trying to prove that $x^2 + 1$ is not a perfect square (other than $0^2 + 1 = 1^2$). I'm stuck and can't move forward.



The thing I have tried so far is to relate the problem to a hyperbola and find an integer solution for both $x$ and $y$ when $a=b=1$. Pell's equation came up in my search, but I don't understand it fully.






Note: I was in a confused state and @CoolHandLouis' visual answer cleared my muddled mind, so I selected that answer. In that way, his answer was very helpful to me. @Alessandro's proof is clear to me now, and if I could accept two answers, I would have accepted that one too. Thanks to everyone for helping!


Answer



We want to prove $x^2 + 1$ can never be a perfect square.




Let




  • $f(x) = x^2$



    Then,



    $$f(x) \;<\; f(x) + 1 \;<\; f(x+1)$$

    $$x^2 \;<\; x^2 + 1 \;<\; x^2 + 2x + 1 \qquad \text{(for all } x > 0\text{).}$$





Therefore, $x^2 + 1$ cannot be a perfect square (except $x = 0$) because it will always be greater than the prior perfect square and less than the next perfect square.



The following table illustrates this. Note that the values of $f(x)=x^2$ at the integers are exactly the perfect squares:




x   f(x)=x^2   x^2+1   f(x+1)
0       0        1        1
1       1        2        4
2       4        5        9
3       9       10       16
4      16       17       25
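As a quick empirical companion to the squeeze argument (a sketch of my own, not part of the original answer), the following Python snippet confirms there are no counterexamples on a finite range:

import math

# Verify that x^2 + 1 is never a perfect square for 1 <= x <= 10**6.
for x in range(1, 10**6 + 1):
    s = x * x + 1
    r = math.isqrt(s)  # integer square root (Python 3.8+)
    assert r * r != s, f"counterexample at x = {x}"
print("no counterexamples up to 10^6")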

calculus - Defining an upper/lower bound in lexicographically ordered $\mathbb{C}$



If I have a lexicographic ordering on $\mathbb{C}$, and I define a subset, $A =
\{z \in \mathbb{C} : z = a + bi, a, b \in \mathbb{R}, a < 0\}$.



I have an upper bound, say $\alpha = 0 + di$. My question is: does only the real part, $\Re(\alpha) = 0$, define the upper bound? Or does $\Im(\alpha) = d$ have nothing to do with bounds in general?




It seems to me that if I have the lexicographic ordering on $\mathbb{C}$, then for any two $m, n \in \mathbb{C}$, where $m = a + bi$ and $n = c + di$, the ordering is defined by $m < n$ if $a < c$, or if $a = c$ and $b < d$.



The last bit, $b < d$, gives me the impression that $\Im(\alpha)$ would play a role in the upper bound. The reason I am asking is that in a proof I read, they show this order has no least upper bound, as there are infinitely many complex numbers with their real parts equal to $\Re(\alpha)$ but different imaginary parts. So I guess if only the real parts of complex numbers define the bounds, then it makes sense to me.


Answer



A least upper bound has to be a specific number with the LUB property. In this case there is no such number: the upper bounds of $A$ are exactly the numbers with nonnegative real part, and none of them is smallest, since for any upper bound $\alpha = 0 + di$ the number $0 + (d-1)i$ is a strictly smaller upper bound.


discrete mathematics - Using the Principle of Mathematical Induction to Prove propositions




I have three questions regarding using the Principle of Mathematical Induction:




  1. Let $P(n)$ be the following proposition:



    $f(n) = f(n-1) + 1$ for all $n ≥ 1$, where $f(n)$ is the number of subsets of a set with $n$ elements. Determine whether $P(n)$ holds.


  2. Let $P(n)$ be the following proposition:



    $n^3 + 3n^2 + 2n$ is divisible by 3 for all $n ≥ 1$. Determine whether $P(n)$ holds.



  3. Use the Principle of Mathematical Induction to prove that $1 \cdot 1! + 2 \cdot 2! + 3 \cdot 3! + ... + n \cdot n! = (n+1)! -1$ for all $n ≥ 1$.




Here is the work I have so far:



For #1, I am able to prove the basis step, 1, is true, as well as integers up to 5, so I am pretty sure this is correct. However, I am not able to come up with a formal proof.



For #2, for the basis step, I have $1^3 + 3(1)^2 + 2(1) = 6$, which is divisible by 3. For the inductive step, I need to prove that $P(k) \rightarrow P(k+1)$, so I have $P(k+1) = (k+1)^3 + 3(k+1)^2 + 2(k+1)$. However, I'm not sure how to take the inductive step and plug in the inductive hypothesis to make this formal proof true.



For #3, I think that the inductive hypothesis would be $\sum_{i=1}^{k+1} i \cdot i! = (k+2)! -1$. When I do this, I am getting $\sum_{i=1}^{k+1} i \cdot i! = \sum_{i=1}^{k} (k+1)! + \sum_{i=1}^{1} 1! - 1$, but I don't think this will work for plugging in the inductive hypothesis. I think I should be using $1 \cdot 1! + 2 \cdot 2! + 3 \cdot 3! + \dots + n \cdot n! + (n+1) \cdot (n+1)! = (n+2)! -1$ instead for the proof. I'm getting nowhere with this one.




Any help would be appreciated.


Answer



The number of subsets of a set with $n$ elements is $2^n$, so the proposition would require $2^n = 2^{n-1} + 1$, which fails for every $n \geq 2$. Hence $P(n)$ is false.



To prove this fact, you can actually use induction!



More specifically, think about the number of $k$-element subsets, which is $\dbinom{n}{k}$ [how many ways can you pick a group of $k$ people out of an $n$-person group]. Then the total number of subsets is obtained by summing the counts of $k$-element subsets over all $k$. In particular:
$$\dbinom{n}{0}+\dbinom{n}{1}+...+\dbinom{n}{n}=2^n$$




For practice, try to prove this with induction.



For (2), the base case is clear. We assume that $3 \mid n^3+3n^2+2n$. This means that there exists some integer $k$ so that $3k=n^3+3n^2+2n$



Then:
$$\begin{align}(n+1)^3+3(n+1)^2+2(n+1)&=(n^3+3n^2+3n+1)+3(n^2+2n+1)+2n+2 \\
&=(n^3+3n^2+2n)+3n+3+3(n^2+2n+1)\\ \end{align}$$
by substitution from the hypothesis, we obtain:
$$\begin{align}&=3k+3(n^2+3n+2) \\
&=3(n^2+3n+2+k)\end{align}$$




Hence, the result follows readily.



I leave part 3 to you (see the worked step below). You got the induction step correct, at least in the setup. Try some algebraic manipulation to start with; keep in mind what your assumption is, since you just want it to show up somewhere in your inductive step. Induction is an easy enough idea, but the problem is that it doesn't show much in the way of intuition. As in, it generally doesn't tell you why something works. We just get used to the fact that it solves problems for us. Kind of like l'Hôpital's rule in calculus.
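For completeness, here is the one manipulation that finishes part 3 (spelled out as a supplement; the assumption used is the inductive hypothesis $\sum_{i=1}^{k} i \cdot i! = (k+1)! - 1$):

$$\sum_{i=1}^{k+1} i \cdot i! = \Bigl(\sum_{i=1}^{k} i \cdot i!\Bigr) + (k+1)(k+1)! = \bigl((k+1)! - 1\bigr) + (k+1)(k+1)! = (k+2)(k+1)! - 1 = (k+2)! - 1.$$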


Monday 30 March 2015

real analysis - A series that gives inconclusive results by the root & ratio tests, but converges STRONGLY.



I'm trying to come up with a positive sequence $\{a_n\}_1^\infty$ such that $\lim_{n\to\infty} \sqrt[n]{a_n} = \lim_{n\to\infty} \left|\frac{a_{n+1}}{a_n}\right| = 1$, but $\forall \alpha > 0$ the series $\sum_1^\infty a_n n^\alpha$ converges. I know $1/n^n$ goes to zero faster than $n^\alpha$ goes to infinity, but for $1/n^n$ the two tests are conclusive (the limits are $0$, not $1$). I've tried screwing around with $1/n^n$ but had no luck. Any thoughts?


Answer



If $a_n=e^{-\sqrt{n}}$ then
$$ \lim_{n\to\infty}a_n^{\frac{1}{n}}=\lim_{n\to\infty}e^{-\frac{1}{\sqrt{n}}}=1$$
and
$$ \lim_{n\to\infty}\frac{a_{n+1}}{a_n}=\lim_{n\to\infty}\exp[\sqrt{n+1}-\sqrt{n}]=1$$
However,
$$ \lim_{x\to\infty}x^{\beta}e^{-\sqrt{x}}=\lim_{y\to\infty}y^{2\beta}e^{-y}=0$$
for all $\beta>0$, so taking $\beta=\alpha+2$ and using the limit comparison test (comparing to $\frac{1}{n^2}$) it follows that $\sum n^{\alpha}e^{-\sqrt{n}}$ converges for all $\alpha>0$.


calculus - Differentiation of $2\arccos\left(\sqrt{\frac{a-x}{a-b}}\right)$



Okay so the question is:





Show that the derivative of the function
$$2\arccos \left(\sqrt{\dfrac{a-x}{a-b}}\right)$$
is equal to
$$\frac{1}{\sqrt{(a-x)(x-b)}} .$$




I started by rewriting the arccosine in inverse cosine notation, then attempted to apply the chain rule, but I didn't get very far.
Then I tried substituting in the derivative of arccosine and applying the chain rule. Is there another method besides the chain rule I should use? Any help is appreciated.


Answer




$$\dfrac{d}{du}\, 2\arccos u = - \dfrac{2}{\sqrt{1 - u^2}}$$



See the Proof Wiki for a proof of this.



In this problem, we have $u = \sqrt{\dfrac{a-x}{a-b}}$, and we need to find $\dfrac{du}{dx}$, so we have:



$$ \dfrac{d}{dx} \left(\sqrt{\dfrac{a-x}{a-b}} \right) = -\dfrac{\sqrt{\dfrac{a-x}{a-b}}}{2 (a-x)} = -\dfrac{1}{2 \sqrt{(a - b)(a - x)}}$$



So, let's put these two together.




$\dfrac{d}{dx}\left(2 \arccos u \right) = -\dfrac{2}{\sqrt{1 - u^2}}\,\dfrac{du}{dx} = -\dfrac{2}{\sqrt{1 - \left(\sqrt{\dfrac{a-x}{a-b}}\right)^2}} \left(-\dfrac{1}{2 \sqrt{(a - b)(a - x)}} \right)$



We can reduce this to:



$$\dfrac{d}{dx} \left(2 \arccos \left(\sqrt{\dfrac{a-x}{a-b}}\right)\right)=\dfrac{1}{\sqrt{(a-x)(x-b)}}$$
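In case the reduction step is not obvious, the algebra runs as follows (spelled out here for convenience): since $1-u^2 = 1-\frac{a-x}{a-b} = \frac{x-b}{a-b}$,

$$\frac{2}{\sqrt{\frac{x-b}{a-b}}}\cdot\frac{1}{2\sqrt{(a-b)(a-x)}} = \sqrt{\frac{a-b}{x-b}}\cdot\frac{1}{\sqrt{(a-b)(a-x)}} = \frac{1}{\sqrt{(a-x)(x-b)}}.$$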


Summation with a variable as the upper limit




$$\sum_{n=1}^m \frac{n \cdot n! \cdot \binom{m}{n}}{m^n} = ?$$



My attempts on the problem:



I tried writing out the summation.



$$1+\frac{2(m-1)}{m} + \frac{3(m-1)(m-2)}{m^2} + \cdots + \dfrac{m\cdot m!}{m^m}$$



I saw that the ratio between consecutive terms is $\dfrac{\dfrac{n}{n-1} (m-n+1)}{m}$.




I wasn't able to proceed because this isn't a geometric series. Please help!



I would appreciate a full solution if possible.


Answer



Expanding the binomial ${m\choose n} = \frac{m!}{(m-n)!n!}$ your sum can be written
$$m!\sum_{n=1}^m \frac{1}{(m-n)!}\frac{n}{m^n}$$



We can now change the summation index $i = m-n$ (i.e. $i$ runs from $m-1$ down to $0$) to get
$$\frac{m!}{m^m}\sum_{i=0}^{m-1} \frac{m^i}{i!}(m-i) = \frac{m!}{m^m}\left[\sum_{i=0}^{m-1} m^{i+1}\frac{1}{i!} - \sum_{i=0}^{m-1} m^i\frac{i}{i!}\right]$$




Now use $\frac{i}{i!} = \frac{1}{(i-1)!}$ and change the summation index $j=i-1$ in the last sum and you will see that most of the terms will cancel giving you a simple result ($=m$).
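Spelling out that cancellation for convenience: the $i=0$ term of the second sum vanishes, and with $j=i-1$ the second sum becomes $\sum_{j=0}^{m-2}\frac{m^{j+1}}{j!}$, so only the $i=m-1$ term of the first sum survives:

$$\frac{m!}{m^m}\left[\sum_{i=0}^{m-1} \frac{m^{i+1}}{i!} - \sum_{j=0}^{m-2} \frac{m^{j+1}}{j!}\right] = \frac{m!}{m^m}\cdot\frac{m^{m}}{(m-1)!} = m.$$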


complex analysis - Laurent series for $f(z)=\frac{\sin(2\pi z)}{z(z^2+1)}$

How can I find the Laurent series for this function, valid for $0 <|z-i|<2$? $$f (z)=\frac {\sin (2 \pi z)}{z (z^2 +1)}$$




Let $g (z) = \sin (2 \pi z)$



$$\sin (2 \pi z ) = \sin( 2 \pi (z - i)) \cos (2 \pi i) + \cos (2 \pi (z-i)) \sin (2 \pi i )$$



And let $h (z)= \frac {1}{z (z^2 + 1)}$



$$\frac {1}{z (z^2 + 1)}= \frac {1}{i \left(1 -\left(-\frac {z-i}{i}\right)\right)}\left[\frac {1/2i}{z-i} +\frac {-1/2i}{2i \left(1-\left(-\frac {z-i}{2i}\right)\right)}\right]$$



So it's easy to find expansions for $g (z)$ and $h (z)$ and then multiply the two expansions.




We notice that $f$ has a simple pole at $z = i$, so we can get the principal part easily, or use
$$2 \pi i\, a_{-1} = \int_{|z-i|=1} f (z)\, dz$$



Is there a trick to find the Laurent series quickly ?



This question was in my exam. I calculated the principal part, but I didn't have enough time to calculate the exact form for the analytic part.



Thank you

elementary set theory - Intuition of Empty Set in Ordered Pair



This question is inspired by Exercise 1.44 Mathematical Proofs, 2nd Ed, by Gary Chartrand et al. Given that $A = \{\emptyset, \color{green}{\{{\emptyset}\}}\}$, I understand that
$A \times \mathcal{P}(A) = \LARGE{\{} \normalsize{ (\emptyset, \emptyset), (\emptyset, \color{green}{\{{\emptyset}\}}), (\emptyset, \color{blue}{\{\{\emptyset\}\}}), (\emptyset, A),
(\color{green}{\{{\emptyset}\}},\emptyset), ( \color{green}{\{{\emptyset}\}}, \color{green}{\{{\emptyset}\}}), ( \color{green}{\{{\emptyset}\}}, \color{blue}{\{\{\emptyset\}\}} ), (\color{green}{\{{\emptyset}\}} ,A) } \LARGE{\}} $.
I also understand that these 3 all differ: $\emptyset$ = an empty box, $\color{green}{\{{\emptyset}\} \text{= a box containing an empty box }}, \color{blue}{\{\{\emptyset\}\} \text{= a box containing a box containing an empty box} }.$



However, what is the meaning of an ordered pair containing any one of $\emptyset$, or $\color{green}{\{{\emptyset}\}}$ or $\color{blue}{\{\{\emptyset\}\}}$? As 3 examples, $(\emptyset, \emptyset), (\emptyset, \color{blue}{\{\{\emptyset\}\}}), ( \color{green}{\{{\emptyset}\}}, \color{blue}{\{\{\emptyset\}\}} )$ unhinge me because I only obtained them by the definition of Cartesian product; I am daunted and nescient about their true meaning. Could the 8 ordered pairs above be represented graphically? Or is there intuition or another interpretation?




I have referenced Ordered pairs in a power set.






$\large{\text{Supplement to Alex Mardikian and Professor Scott's Answers :}}$



With many thanks to your answers, I now understand the ordered pairs can be bijected with natural numbers for simplicity.



Nonetheless, without rewriting or paring them, is it possible to understand directly the meaning of an ordered pair containing any one of $\emptyset$, or $\color{green}{\{{\emptyset}\}}$ or $\color{blue}{\{\{\emptyset\}\}} ?$ The bijections have taken away some of my "angst", but I still feel fazed by ordered pairs like $(\emptyset, \emptyset), (\emptyset, \color{blue}{\{\{\emptyset\}\}}), ( \color{green}{\{{\emptyset}\}}, \color{blue}{\{\{\emptyset\}\}} )$.


Answer




Here’s the set of ordered pairs $A\times\wp(A)$, with $A$ listed across the bottom and $\wp(A)$ along the lefthand edge:



$$\begin{array}{r|ll}
\{\varnothing,\{\varnothing\}\}&\langle\varnothing,\{\varnothing,\{\varnothing\}\}\rangle&\langle\{\varnothing\},\{\varnothing,\{\varnothing\}\}\rangle\\
\{\{\varnothing\}\}&\langle\varnothing,\{\{\varnothing\}\}\rangle&\langle\{\varnothing\},\{\{\varnothing\}\}\rangle\\
\{\varnothing\}&\langle\varnothing,\{\varnothing\}\rangle&\langle\{\varnothing\},\{\varnothing\}\rangle\\
\varnothing&\langle\varnothing,\varnothing\rangle&\langle\{\varnothing\},\varnothing\rangle\\ \hline
&\varnothing&\{\varnothing\}
\end{array}\tag{1}$$




Here’s the same table of $B\times\wp(B)$, where $B=\{0,1\}$:



$$\begin{array}{r|ll}
\{0,1\}&\langle0,\{0,1\}\rangle&\langle1,\{0,1\}\rangle\\
\{1\}&\langle0,\{1\}\rangle&\langle1,\{1\}\rangle\\
\{0\}&\langle0,\{0\}\rangle&\langle1,\{0\}\rangle\\
\varnothing&\langle0,\varnothing\rangle&\langle1,\varnothing\rangle\\ \hline
&0&1
\end{array}\tag{2}$$




As you can see, they have exactly the same structure: only the labels are different. If you go through $(1)$ replacing $\varnothing$ by $0$ and $\{\varnothing\}$ by $1$, you get exactly $(2)$.



And here, to make the structure even clearer, is $B\times C$, where $C=\{0,1,2,3\}$



$$\begin{array}{r|ll}
3&\langle0,3\rangle&\langle1,3\rangle\\
2&\langle0,2\rangle&\langle1,2\rangle\\
1&\langle0,1\rangle&\langle1,1\rangle\\
0&\langle0,0\rangle&\langle1,0\rangle\\ \hline
&0&1

\end{array}\tag{3}$$



If in $(2)$ you replace every second coordinate $\varnothing$ by $0$, every second coordinate $\{0\}$ by $1$, every second coordinate $\{1\}$ by $2$, and every second coordinate $\{0,1\}$ by $3$, you get exactly $(3)$. The three Cartesian products $A\times\wp(A)$, $B\times\wp(B)$, and $B\times C$ have the same basic structure. This structure is easiest to see in $(3)$, but that’s only because the notation is less cluttered.
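If it helps to make this concrete, here is a small Python sketch (my own addition, not part of the answer) that materializes $A\times\wp(A)$ with frozensets, so the boxes-within-boxes become literal nested objects:

from itertools import combinations, product

E = frozenset()                       # the empty set
A = frozenset({E, frozenset({E})})    # A = {emptyset, {emptyset}}

def powerset(s):
    # All subsets of s, each returned as a frozenset.
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# The eight ordered pairs of A x P(A), as in table (1).
for pair in product(sorted(A, key=len), powerset(A)):
    print(pair)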



Added: I think that the problem that you’re having isn’t really with the ordered pairs themselves, but rather with the objects appearing in them. In practical terms an ordered pair is just a list of two things arranged so that there is a first thing and a second thing (which may be the same thing listed twice), and we can tell which is which. As far as the ordered pair aspect is concerned, there’s no difference between $\langle\varnothing,\{\{\varnothing\}\}\rangle$ and $\langle 3,7\rangle$: both are ordered pairs whose first and second components are unequal. Each becomes a different ordered pair if you reverse the order of its components: $\langle\{\{\varnothing\}\},\varnothing\rangle\ne\langle\varnothing,\{\{\varnothing\}\}\rangle$, and $\langle 7,3\rangle\ne\langle3,7\rangle$.



To put it a little differently, the meaning of $$\langle\text{thing}_1,\text{thing}_2\rangle$$ is simply list of two things, the first one being $\text{thing}_1$ and the second $\text{thing}_2$. The identities of $\text{thing}_1$ and $\text{thing}_2$ have no bearing on this meaning. We might say that they’re buried one layer deeper: after identifying $\langle\text{thing}_1,\text{thing}_2\rangle$ as an ordered pair of entities, we can worry about just what those entities are. If $\text{thing}_1=3$ and $\text{thing}_2=7$, and the context is plotting a point in the Cartesian plane, we hardly notice this step, because we’re very familiar with $3$ and $7$ as real numbers. If $\text{thing}_1=3$ and $\text{thing}_2=7$, and the context is a first serious course in set theory, we might have to think a bit harder about this step, because in that context it’s likely that $3=\{0,1,2\}$ and $7=\{0,1,2,3,4,5,6\}$, so that



$$\langle3,7\rangle=\langle\{0,1,2\},\{0,1,2,3,4,5,6\}\rangle\;.$$




But the added complication has nothing to do with the ordered pair structure: it’s all in the two objects that are the components of the ordered pair.



The same is true if $\text{thing}_1=\varnothing$ and $\text{thing}_2=\{\{\varnothing\}\}$: $\langle\varnothing,\{\{\varnothing\}\}\rangle$ is just a list whose first element is $\varnothing$, and whose second element is $\{\{\varnothing\}\}$. If you don’t feel that you have an intuitive grip on this, the problem is unlikely to be in the ordered pair structure itself, in the notion of a first thing and a second thing; it’s much likelier to result from the relative unfamiliarity of the things themselves. That’s a problem that to a large extent will solve itself over time, provided that you keep working with the concepts. In the short term it may help to make a conscious effort to think about ordered pairs in ‘layers’:




Okay, this is an ordered pair. Its first component is $\text{thing}_1$, and its second component is $\text{thing}_2$. It’s an element of the function $f$, so I know that $f(\text{thing}_1)=\text{thing}_2$. Now what are these things? Well, $\text{thing}_1=\varnothing$ and $\text{thing}_2=\{\{\varnothing\}\}$, and therefore $f(\varnothing)=\{\{\varnothing\}\}$. Okay: the function $f$ assigns to the empty set the set $\{\{\varnothing\}\}$. Do I really need to know more than that right now?




In this example the ordered pair is in a middle layer, with the function $f$ ‘above’ it and the objects $\text{thing}_1$ and $\text{thing}_2$ ‘below’ it. It’s part of the internal structure of $f$, and $\text{thing}_1$ and $\text{thing}_2$ are part of its internal structure. They in turn may have internal structure; in this case $\varnothing$ has none, but $\{\{\varnothing\}\}$ clearly does have some. The details of that internal structure may or may not matter. If they do, I’ll have to think about them: $\{\{\varnothing\}\}$ is the set whose only element is $\{\varnothing\}$, which in turn is the set whose only element is $\varnothing$. If not, I can just treat $\{\{\varnothing\}\}$ as a fancy label for some set whose precise nature isn’t important, at least at the moment. If it becomes important later, I can worry about it then.




Actually, this advice applies to all mathematical notation. You don’t have to grasp a complicated expression all at once: it’s fine to build up an understanding of it a piece at a time. An equation, for instance, by definition has the form $A=B$. We see that, and we immediately know the type of expression with which we’re dealing. Now what are $A$ and $B$? And we go on from there, making sense of $A$ and $B$.


algebra precalculus - What is the term for a factorial type operation, but with summation instead of products?

(Pardon if this seems a bit beginner; this is my first post in math. I'm trying to improve my knowledge while tackling Project Euler problems.)



I'm aware of Sigma notation, but is there a function/name for e.g.



$$ 4 + 3 + 2 + 1 \longrightarrow 10 ,$$



similar to $$4! = 4 \cdot 3 \cdot 2 \cdot 1 ,$$ which uses multiplication?



Edit: I found what I was looking for, but is there a name for this type of summation?
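For reference, a name that fits: these are the triangular numbers, with the closed form

$$1+2+\cdots+n=\frac{n(n+1)}{2}=\binom{n+1}{2},$$

so e.g. $4+3+2+1=\frac{4\cdot 5}{2}=10$.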

real analysis - Prove that the sequence $z_n$, where $z_n:= x_n-y_n$, converges and $lim (x_n-y_n)=lim z_n = (lim x_n) - (lim y_n)$




Prove that the sequence ${z_n}$, where $z_n:= x_n-y_n$, converges and $\lim (x_n-y_n)=\lim z_n = (\lim x_n) - (\lim y_n)$



All limits are taken as $n\to\infty$.



Here is what I have so far:
let $x=\lim(x_n), y=\lim(y_n), z=x-y$



let $\epsilon>0$; find an $M_1$ such that for all $n\geq M_1$ we have $$|x_n-x|<\frac{\epsilon}{2};$$ find an $M_2$ such that for all $n\geq M_2$ we have $$|y_n-y|<\frac{\epsilon}{2};$$ take $$M:=\max\{M_1, M_2\}$$




For all $n\geq M$ we have:



$$|z_n-z|=|(x_n-y_n)-(x-y)|=|x_n-x-y_n+y|≥ |x_n-x|-|y_n-y|$$



I don't know how to proceed from here, I want to get something like $|z_n-z|<\epsilon$, to complete the proof, right?



Also, is this sufficient to prove the second part of the question (the limit part)?



Please help me here. Thank you!


Answer




Hint:



You're very close! Note that:
$$|z_n-z|=|(x_n-y_n)-(x-y)|=|(x_n-x)-(y_n-y)|\leq|x_n-x|+|y_n-y|$$
It was the other side of the triangle inequality!



Please, use MathJax when typing math formulas! :)


Sunday 29 March 2015

real analysis - Application of L'Hospital's Rule on the definition of a derivative.



I'm currently taking an introduction to Calculus course and I've come across the following identity:






How would one come up with this? My best guess is using L'Hospital's Rule on $$\lim_{x\rightarrow a}{\frac{f(x)-f(a)}{x-a}}$$



but I'm not very sure how, since differentiating both the numerator and denominator merely yields



$$\lim_{x\rightarrow a}{f'(x)} = f'(a)$$


Answer



The result holds under the weaker assumption that $f''(a) $ exists (other answers assume the continuity of $f''$ at $a$ or even more). Also note that under this weaker assumption it is not possible to apply L'Hospital's Rule on the expression under limit in question and hence a slight modification is required.







By definition of derivative we have $$\lim_{x\to a} \frac{f'(x) - f'(a)} {x-a} =f''(a)\tag{1}$$ Adding this to the limit in question it is clear that our job is done if we can establish that $$\lim_{x\to a} \frac{f(x) - f(a) - (x-a) f'(a)} {(x-a)^2}=\frac{f''(a)}{2}\tag{2}$$ And the above limit is easily evaluated by a single application of L'Hospital's Rule. Applying it on the fraction on left side we get a new fraction $$\frac{f'(x) - f'(a)} {2(x-a)}$$ which clearly tends to $f''(a) /2$ (via $(1)$) and hence the fraction on left side of $(2)$ also tends to the same value and the identity $(2)$ is established.


real analysis - Prove any continuous function on a 3-dim ellipsoid can be approximated by a polynomial

I'm familiar with the Weierstrass approximation theorem and some aspects of the Stone-Weierstrass theorem, but I mainly only understand it for closed intervals $[a, b]$. I am familiar with the proof that begins by treating $f$ continuous on $[0, 1]$ and going from there. I have a 3-d set which forms an ellipsoid, and I'd like to show that any continuous function on that set can also be approximated by a polynomial. Is there a way to extend the proof of the Weierstrass approximation theorem, or is this way over my head?



For example, this post ("Showing a continuous function on a compact subset of $\mathbb{R}^3$ can be uniformly approximated by polynomials") has an example set, but I can't really follow the answer. I only know introductory real analysis. I'm guessing there's a way to go from the $[0, 1]$ case to the unit cube case, but I'm missing that leap.

calculus - Evaluation of $\int\frac{\sqrt{\cos 2x}}{\sin x}\,dx$





Compute the indefinite integral
$$
\int\frac{\sqrt{\cos 2x}}{\sin x}\,dx
$$




My Attempt:




$$
\begin{align}
\int\frac{\sqrt{\cos 2x}}{\sin x}\,dx &= \int\frac{\cos 2x}{\sin^2 x\sqrt{\cos 2x}}\sin xdx\\
&= \int\frac{2\cos^2 x-1}{(1-\cos^2 x)\sqrt{2\cos^2 x-1} }\sin x \,dx
\end{align}
$$



Let $\cos x = t$, so that $\sin x\,dx = -dt$. This changes the integral to



$$

\begin{align}
\int\frac{(2t^2-1)}{(t^2-1)\sqrt{2t^2-1}}\,dt &= \int\frac{(2t^2-2)+1}{(t^2-1)\sqrt{2t^2-1}}\,dt\\
&= 2\int\frac{dt}{\sqrt{2t^2-1}}+\int \frac{dt}{(t^2-1)\sqrt{2t^2-1}}
\end{align}
$$



How can I solve the integral from here?


Answer



\begin{align}
\int\frac{\sqrt{\cos 2x}}{\sin x}\ dx&=\int\frac{\sqrt{\cos^2x-\sin^2x}}{\sin x}\ dx\\
&\stackrel{\color{red}{[1]}}=\int\frac{\sqrt{t^4-6t^2+1}}{t^3+t}\ dt\\
&\stackrel{\color{red}{[2]}}=\frac12\int\frac{\sqrt{u^2-6u+1}}{u^2+u}\ du\\
&\stackrel{\color{red}{[3]}}=\int\frac{(y^2-6y+1)^2}{(y-1)(y-3)(y+1)(y^2+2y-7)}\ dy\\
&\stackrel{\color{red}{[4]}}=\int\left[\frac1{y-1}+\frac1{y-3}-\frac1{y+1}-\frac{16}{y^2+2y-7}\right]\ dy\\
&=\int\left[\frac1{y-1}+\frac1{y-3}-\frac1{y+1}-\frac{16}{(y+1)^2-8}\right]\ dy
\end{align}
The rest is yours.







Notes :



$\color{red}{[1]}\;\;\;$Use Weierstrass substitution, $\tan\left(\dfrac{x}{2}\right)=t$.



$\color{red}{[2]}\;\;\;$Use substitution $u=t^2$.



$\color{red}{[3]}\;\;\;$Use Euler substitution, $y-u=\sqrt{u^2-6u+1}\;\color{blue}{\Rightarrow}\;u=\dfrac{y^2-1}{2y-6}$.



$\color{red}{[4]}\;\;\;$Use partial fractions decomposition.


calculus - Elementary derivation of certain identities related to the Riemann zeta function and the Euler-Mascheroni constant

Is the proof of these identities possible, only using elementary differential and integral calculus? If it is, can anyone direct me to the proofs? ( or give a hint for the solution )



1)$$\int_0^\infty { e^{-x^2} \ln x }\,dx = -\tfrac14(\gamma+2 \ln 2) \sqrt{\pi} $$



2)$$\int_0^\infty { e^{-x} \ln^2 x }\,dx = \gamma^2 + \frac{\pi^2}{6} $$




3) $$\gamma = \int_0^1 \frac{1}{1+x} \sum_{n=1}^\infty x^{2^n-1} \, dx$$



and lastly,



4) $$\zeta(s) = \frac{e^{(\log(2\pi)-1-\gamma/2)s}}{2(s-1)\Gamma(1+s/2)} \prod_\rho \left(1 - \frac{s}{\rho} \right) e^{s/\rho}\!$$



I personally think the last is obtained from a simple use of the Weierstrass factorization theorem. I'm unsure as to what substitution is used.



$\gamma$ is the Euler-Mascheroni constant and $\zeta(s)$ is the Riemann zeta function.




Thanks in advance.

Saturday 28 March 2015

complex numbers - How to solve the equation $ z^2 = i-1 $?




$\ z^2 = i-1 \ $
Hey guys, I couldn't find a way to solve this problem. The question suggests I replace $\ z\ $ with $\ x+iy\ $ and then go from there, but I always end up having to complete the square and end up with a completely different answer to the back of the book. Can you please help?
Thanks


Answer




let $$z=x+iy$$ then we get
$$x^2-y^2+2xyi=i-1$$ so we get
$$x^2-y^2+1=0$$
and $$2xy-1=0$$
Can you solve this?
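One way to finish, added here as a sketch in case it helps: from $2xy=1$ we get $x=\frac{1}{2y}$, and substituting into $x^2-y^2+1=0$ gives

$$\frac{1}{4y^2}-y^2+1=0 \iff 4y^4-4y^2-1=0 \iff y^2=\frac{1+\sqrt{2}}{2},$$

so $y=\pm\sqrt{\frac{1+\sqrt{2}}{2}}$ and $x=\frac{1}{2y}$, which are the two square roots of $i-1$.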


linear algebra - Matrix's determinant

I've got to calculate determinant for such matrix:
$$ \left[

\begin{array}{cccc}
a_1+b & a_2 & \cdots & a_n\\
a_1 & a_2+b & \cdots & a_n\\
\vdots & \vdots & \ddots & \vdots\\
a_1 & a_2 & \cdots & a_n+b\\
\end{array}
\right]
$$
Please give me some tips how to calculate this.




Thanks in advance

Sequences and Series - Arithmetic and Geometry



If the 3rd, 5th and 8th terms of an arithmetic progression with a common difference of 3 are three consecutive terms of a geometric progression, then what is the common ratio? Please help me step by step.




Regards!


Answer



Write $a$ for the third term; then the fifth and eighth terms are $a+2d$ and $a+5d$, so
$$
\frac{a+2d}{a}=\frac{a+5d}{a+2d}\implies a^2+4ad+4d^2=a^2+5ad\implies4d^2=ad
$$
Thus, $a=4d$ or $d=0$. Since we are given $d=3$, we must have $a=12$. Thus, the common ratio is
$$
\frac{a+2d}{a}=\frac{a+5d}{a+2d}=\frac32
$$



algebra precalculus - On the expression $(2x+1)$ of odd numbers



I have this problem:




Find three odd consecutive numbers with the property that the product of the first one and the third one minus the product of the first one and the second is greater by eleven than the third one.



I have solved the problem with the following equation:



$(2x+1)(2x+5)-(2x+1)(2x+3)=11 + (2x+5).$



Solution is $x=7$ which gives $15,17,19$ as the requested numbers.



Now if I simply use $x, x+2, x+4$ to denote these numbers, I also obtain the same solution with the equation:




$x(x+4)-x(x+2)=11+(x+4)$.



Solution is $x=15$ so the numbers are $15, 17, 19$.



So my doubt is why I didn't need to express the numbers as a proper odd number $(2x+1)$, and why it works simply with $x$.



Could it be the case that there exist some other three (even) numbers that satisfy the conditions, so that it's necessary to use the proper expression for an odd number? Or can I always use $x, x+2, x+4, x+6$ to denote consecutive odd numbers without getting into any problems?



My question is why I was able to find the same solution using $x, x+2, x+4$ to denote the numbers, and why I didn't have to use the proper $2x+1$. This suggests I can always use $x, x+2, x+4,\ldots$ to denote consecutive odd numbers to solve this kind of problem. Probably it's just a coincidence.


Answer




Your first equation requires that the numbers be odd if $x$ is a whole number. Your second does not. If you find a whole solution to the first, you are done. In theory, you could find a whole solution to the second in which $x$ was even. You would have to check the solution(s) you find to make sure the numbers were odd. Alternatively, you could look at your second equation and note that if $x$ is even all the terms are even except for the $11$, so the equation will fail. This guarantees that all solutions will have $x$ odd, satisfying the constraint. There is nothing wrong with the second approach if you check the solution(s) you find to be odd, or prove that all solutions are odd. It will find all the solutions, and maybe more.


integration - A question from my final exam



Today I had the final exam of the lesson Mathematics I. There was a question that I want to know if I solved correctly the following limit.



$$\lim\limits_{n\to\infty}\left(\frac{n}{0^2+n^2} + \frac{n}{1^2+n^2} + \frac{n}{2^2+n^2} + \dots + \frac{n}{(n-1)^2+n^2}\right)$$



By using the summation symbol and integral method, I found the answer $\pi/4$ . Is my answer correct?


Answer



Your answer is correct; however, because you have only described your method of solution rather than shown it, it is not possible at this time to confirm whether or not your actual computation was without methodological or conceptual flaws. We can only say that the value of the quantity you wrote is in fact $\pi/4$ as claimed.
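For the record, here is the Riemann-sum computation that was presumably intended (a sketch, since the original work was not shown):

$$\sum_{k=0}^{n-1}\frac{n}{k^2+n^2}=\frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{1+(k/n)^2}\;\longrightarrow\;\int_0^1\frac{dx}{1+x^2}=\arctan 1=\frac{\pi}{4}.$$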



arithmetic - word problem involving interest



Upon entering college, Meagan borrowed the limit of $5000 on her credit card to help pay for expenses. The credit company charges 19.95% interest compounded continuously. How much will Meagan owe when she graduates in four years?




I wanted to use $A(t)=A(0)e^{rt}$, with $r=19.95\%=0.1995$, $t=4$, $A(0)=5000$,



so I was thinking



A(t) = A(0)e^(rt)
A = 5000 * e^(0.1995 * 4)


Am I doing this correctly? How do I calculate the rest? Is this all I need?




thanks!


Answer



How come you wrote that the base is $\;e\;$ and the exponent is $\;0.1995\;$?



I'd say the base is $\;a:=1+\frac{19.95}{100}=1.1995\;$, and the exponent is $\;4\;$, so the amount is



$$5,000\cdot(1.1995)^4\cong 10,350.73$$



assuming the interest is charged annually.
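For comparison, a minimal Python sketch (my own, not from the answer) computing both conventions; note the problem statement does say "compounded continuously":

import math

P, r, t = 5000, 0.1995, 4

continuous = P * math.exp(r * t)  # A = P e^{rt}, approx. 11105.47
annual = P * (1 + r) ** t         # A = P (1+r)^t, approx. 10350.73

print(f"continuous: {continuous:.2f}")
print(f"annual:     {annual:.2f}")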


Prove that $mu$ is a measure under several conditions.



Suppose that ${\cal F}$ is a $\sigma$-algebra on a set $X$ and $\mu\colon{\cal F} \to [0,\infty]$ satisfies the conditions:




  1. $\mu(\emptyset) = 0$.

  2. For every pair $A$ and $B$ of disjoint sets in ${\cal F}$, $\mu(A \cup B) = \mu(A) + \mu(B)$.


  3. For every decreasing sequence $\{E_n\}$ in ${\cal F}$ (that is $E_{n+1} \subseteq E_n$ for all $n$) such that ${\bigcap_{n =1}^{\infty} E_n = \emptyset}$, we have $\lim_{n \to \infty} \mu(E_n) = 0$.



Prove that $\mu$ is a measure on ${\cal F}$.



Here's my attempt:



Proof.



Let $\{E_n\}$ be a countably infinite collection of sets in ${\cal F}$ such that $E_i \cap E_j =\emptyset$ for all $i\neq j$. Write

$$E = \bigcup_{n=1}^\infty{E_n}$$
and let
$$F_n = E \setminus\bigcup_{k=1}^n{E_k}.$$
for $n\geq 1.$
Then we have
$$
F_{n+1}= E \setminus\bigcup_{k=1}^{n+1}{E_k} \subseteq E \setminus\bigcup_{k=1}^n{E_k}=F_n
$$

and
$$\bigcap_{n=1}^\infty{F_n} = \emptyset.$$

Hence, by applying condition (2), we have
\begin{align*}\mu\left(F_n\right) & =\mu\left(E \setminus\bigcup_{k=1}^n{E_k}\right)\\
& = \mu(E) - \mu\left(\bigcup_{k=1}^n{E_k}\right)\\
& = \mu(E) - \sum_{k=1}^n{E_k}
\end{align*}

and the above holds for all $n\in \mathbb{N}$. Thus, applying condition (3), we have
\begin{align*}\mu(E) & = \lim_{n\to \infty}\mu(F_n) + \lim_{n\to \infty}\sum_{k=1}^n{E_k}\\
& = \sum_{k=1}^\infty{E_k}.
\end{align*}

This shows that $\mu$ is a measure.



Answer



Your proof is correct.



There are some minor typos in some places, where you wrote $E_k$ instead of $\mu(E_k)$, but I am sure you meant the right thing.


real analysis - Proving that the Lebesgue integral over a measurable function $f$ is equal to the area/volume below the graph of $f$



Given a Borel set $A \subseteq \mathbb{R}^d, d ≥ 1$ and a measurable function $f: A \to [0, \infty)$, I want to consider the set:



$$E = \{(x, y) \in \mathbb{R}^{d+1}: x \in A, 0 ≤ y ≤ f(x)\} \subseteq \mathbb{R}^{d+1}$$



I first want to show that $E$ is a Borel set. Then, I want to prove that




$$\lambda_{d+1}(E) = \int_A f(x) d \lambda_d(x)$$



where $\lambda_d$ is the $d$-dimensional Lebesgue measure.



I unfortunately wasn't even successful in showing that $E$ is a Borel set so far. I first thought that one could write $E$ as the product of two Borel sets ($E = A \times \text{another Borel set}$), but I then realized that it isn't that simple, seeing as the $y$ in a vector $(x, y) \in E$ depends on $x$. Maybe one could construct a clever measurable function that sends $E$ onto a measurable set in $\mathbb{R}$, or something like that? I'm not really sure.



Once established that $E$ is measurable, wouldn't the second part follow more or less right from Fubini's theorem?



Also, I think the intuition behind this exercise is to acknowledge that, in the case $d = 1$, the Lebesgue integral of $f$ over $A$ is nothing but the area between the graph of $f$ and the $x$-axis; for $d = 2$, it's the volume, and so on. I'm not really sure how that helps me (formally) show it.


Answer




(I will assume that $f$ is Borel measurable, and extend $f$ to all of $\Bbb R^d$ by setting $f(x)=-1$ for $x\in A^c$.) Think of $E$ as the inverse image $g^{-1}([0,\infty))$, where $g:\Bbb R^d\times[0,\infty)\to\Bbb R$ is defined by $g(x,y)=f(x)-y$. Then $g$ is the composition of the $\mathcal B^d\otimes\mathcal B_+/\mathcal B^2$ map $(x,y)\to (f(x),y)$ with the continuous map $\Bbb R^2\ni(u,v)\to u-v\in\Bbb R$. It follows that $g$ is $\mathcal B^d\otimes\mathcal B_+/\mathcal B$-measurable, hence $E\in \mathcal B^d\otimes\mathcal B_+$.



Similar considerations apply if $f$ is Lebesgue measurable.


Friday 27 March 2015

geometry - Numbers of circles around a circle

"When you draw a circle in a plane of radius $1$ you can perfectly surround it with $6$ other circles of the same radius."



BUT when you draw a circle in a plane of radius $1$ and try to perfectly surround the central circle with $7$ circles you have to change the radius of the surround circles.



How can I find the radius of the surrounding circles if I want to use more than $6$ circles?



ex :
$7$ circles of radius $0.4$




$8$ circles of radius $0.2$
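A standard way to derive the radius, sketched here since no answer is recorded: the center of the middle circle and the centers of two adjacent surrounding circles form an isosceles triangle with apex angle $2\pi/n$, legs of length $1+r$, and base $2r$ (adjacent surrounding circles are tangent), so

$$\sin\frac{\pi}{n}=\frac{r}{1+r}\quad\Longrightarrow\quad r=\frac{\sin(\pi/n)}{1-\sin(\pi/n)}.$$

This gives $r=1$ for $n=6$, and $r\approx 0.766$ for $n=7$.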

probability - Creating multiple sided dice which roll avg. high numbers with controlled sum of die-faces.

I need to create:



One 4 sided Die : Numbers add up to 14



One 6 sided Die: Numbers add up to 21



One 8 sided Die: Numbers add up to 28



Which combination of numbers would result in rolling the highest average number on each die?

complex analysis - Why do we say the principal branch of the logarithm has the negative axis removed?

My question is really simple: I don't understand why we say the principal branch of the logarithm has the negative axis removed (the branch cut). Usually, the argument for the principal branch of the logarithm is taken in $\{z:-\pi\lt \arg (z)\le\pi\}$. So the negative axis is still there, since we are allowed to use $\pi$ as an argument.



Could someone clarify this for me?

calculus - Prove an inequality involving positive real numbers Lagrange multipliers




This is from Apostol's Calculus, Vol. II, Section 9.15 #11:




Find the maximum of $f(x,y,z)=\log x + \log y + 3 \log z$ on that portion of the sphere $x^2+y^2+z^2=5r^2$ where $x>0,\,y>0,\,z>0$. Use the result to prove that for real positive numbers $a,b,c$ we have
$$abc^3\le 27\left(\frac{a+b+c} 5\right)^5$$




I had no trouble with the first part, using Lagrange's Multipliers. The maximum of $f$ subject to this constraint is $f(r,r,\sqrt 3 r )=5\log r + 3\log \sqrt 3$, and this answer matches the book's.




Now I see how we can take $f(a,b,c)=\log(abc^3)$. Then define $r>0$ by $a^2+b^2+c^2=5r^2\implies r=\sqrt {\frac{a^2+b^2+c^2} 5}$, so we can conclude that



$$abc^3\le3^{3/2}\left(\frac{a^2+b^2+c^2}5\right)^{5/2}$$



But this is a looser bound (for some numbers) than the one suggested by the text. In particular, if we consider $a=\frac 1 4,b= 1, c=1$, then $$27\left(\frac{a+b+c} 5\right)^5<3^{3/2}\left(\frac{a^2+b^2+c^2}5\right)^{5/2}$$
so I have the feeling that "I can't get there from here", at least not using the method suggested. Am I correct?



Hints > Full Answers


Answer



I think I'm going to delete this question - I realize no one (except maybe Apostol, and perhaps not even him) can really answer this question. Perhaps what was intended was that I should realize that a similar approach would yield the inequality desired, and in fact it does:




Maximize $f(x,y,z) = \log x + \log y + 3\log z$ subject to the constraint that $x+y+z=5r$, and then use this result to deduce that, for real numbers $a,b,c$ we have that $$abc^3\le27\left(\frac {a+b+c} 5 \right)^5$$



This actually works in a straightforward way. It's anyone's guess, I suppose, whether the author's "use this result" was meant in the broadest sense (i.e. generalize a strategy from the first result), or whether it was a typo in either the constraint or the inequality.



If no one has any complaints, I will delete this question tomorrow morning.


real analysis - continuous functions on $mathbb R$ such that $g(x+y)=g(x)g(y)$




Let $g$ be a function on $\mathbb R$ to $\mathbb R$ which is not identically zero and which satisfies the equation $g(x+y)=g(x)g(y)$ for $x$,$y$ in $\mathbb R$.



$g(0)=1$. If $a=g(1)$, then $a>0$ and $g(r)=a^r$ for all $r$ in $\mathbb Q$.



Show that, when $g$ is continuous, the function is strictly increasing if $g(1)$ is greater than $1$, constant if $g(1)$ is equal to $1$, and strictly decreasing if $g(1)$ is between zero and one.


Answer



For $x,y\in\mathbb{R}$ and $m,n\in\mathbb{Z}$,
$$

\eqalign{
g(x+y)=g(x)\,g(y)
&\implies
g(x-y)={g(x) \over g(y)}
\\&\implies
g(nx)=g(x)^n
\\&\implies
g\left(\frac{m}nx\right)=g(x)^{m/n}
}
$$

so that $g(0)=g(0)^2$ must be one (since if it were zero, then $g$ would be identically zero on $\mathbb{R}$), and with $a=g(1)$, it follows that $g(r)=a^r$ for all $r\in\mathbb{Q}$. All we need to do now is invoke the continuity of $g$ and the denseness of $\mathbb{Q}$ in $\mathbb{R}$ to finish.



For example, given any $x\in\mathbb{R}\setminus\mathbb{Q}$, there exists a sequence $\{x_n\}$ in $\mathbb{Q}$ with $x_n\to x$ (you could e.g. take $x_n=10^{-n}\lfloor 10^nx\rfloor$ to be the approximation of $x$ to $n$ decimal places -- this is where we're using that $\mathbb{Q}$ is dense in $\mathbb{R}$). Since $g$ is continuous, $y_n=g(x_n)\to y=g(x)$. But $y_n=a^{x_n}\to a^x$ since $a\mapsto a^x$ is also continuous.



Moral: a continuous function is completely determined by its values on any dense subset of the domain.


proof of an easy (?) inequality

I hope someone can help me giving a hint or sth for my inequality, which I'm trying to solve now for some days. I want to show that $$\frac{2}{\sqrt{\vphantom{\large A}1+c}}\ \leq\ \frac{1}{\sqrt{1+c\,\left(\frac{c\ +\ \sqrt{\vphantom{\Large A}c^{2}\ -\ 4\,}\,}{\vphantom{\Large A}2}\right)^{3}}} + \frac{1}{\sqrt{1+c\,\left(\frac{c\ -\ \sqrt{\vphantom{\Large A}c^{2}\ -\ 4\,}\,}{\vphantom{\Large A}2}\right)^{3}}}\,,\quad \forall\ c \geq 2$$



From plotting it's easy to see, and Mathematica/Maple also gave me the solution $c>2$, but that's not a real proof. Still, from this I think there must be a way to show it.



Besides just trying to show the inequality, I tried showing that the difference of these terms is an injective function (for $c=2$ there is equality, which is a problem for many approximations). However, the resulting terms don't really get easier to handle.
Other than that, I tried to use some mean-value inequalities for the harmonic mean, geometric mean, and quadratic mean. The problem is that the inequality between HM and GM is already too rough.



So at the moment I'm quite stuck and feel out of ideas on how to approach the problem. It would be really nice if someone had some ideas (I hope I don't need a full solution :) )




P.S.: Please excuse my bad English; I'm not a native speaker.

calculus - Evaluating integral (comes from a bigger problem in statistics)



Let $\alpha, \beta>0$ be parameters. I wish to compute
$$\int_0^\infty \frac{x^\alpha}{x-1} e^{-\beta x} dx.$$



I managed to reduce this problem when $\alpha$ is an integer by using
$$\frac{x^\alpha}{x-1}=\frac{1}{x-1}+\sum_{j=1}^{\alpha} x^{j-1}.$$



So the question is how to compute
$$\int_0^\infty \frac{1}{x-1} e^{-\beta x} dx.$$

Any ideas? Thanks a lot!


Answer



The integral of interest, $\int_0^\infty \frac{e^{-\beta x}}{x-1}\,dx $ diverges due to the singularity at $x=1$. However, the Cauchy Principal Value of the integral exists and can be expressed as



$$\begin{align}
\text{PV}\int_0^\infty \frac{e^{-\beta x}}{x-1}\,dx &=\lim_{\epsilon\to0^+}\left(\int_0^{1-\epsilon}\frac{e^{-\beta x}}{x-1}\,dx+\int_{1+\epsilon}^\infty \frac{e^{-\beta x}}{x-1}\,dx\right)\\\\
&=e^{-\beta }\lim_{\epsilon\to0^+}\left(\int_{-\beta}^{-\beta\epsilon}\frac{e^{- x}}{x}\,dx+\int_{\beta\epsilon}^\infty \frac{e^{- x}}{x}\,dx\right)\\\\
&=-e^{-\beta}\text{Ei}(\beta)
\end{align}$$




in terms of the Exponential Integral $\text{Ei}(x)\equiv -\text{PV}\int_{-x}^\infty \frac{e^{-t}}{t}\,dt$.



If $\alpha\in \mathbb{N}$ with $\alpha\ge 1$, then we have



$$\begin{align}
\text{PV}\int_0^\infty \frac{x^\alpha e^{-\beta x}}{x-1}\,dx&=(-1)^{\alpha+1} \frac{d^\alpha}{d\beta^\alpha}\left(e^{-\beta}\text{Ei}(\beta)\right)\\\\
&=-e^{-\beta}\text{Ei}(\beta)+\sum_{m=1}^\alpha \frac{(m-1)!}{\beta^m}
\end{align}$$


Thursday 26 March 2015

independence - roots of prime numbers are linearly independent over $\mathbb{Q}$

How can we prove by mathematical induction that $1,\sqrt{2}, \sqrt{3}, \sqrt{5},\ldots, \sqrt{p_n}$ ($p_n$ is the $n^{\rm th}$ prime number) are linearly independent over the rational numbers ?



$\underline{\text{base case (n=1)}}$: $1,\sqrt{2}$ are linearly independent over the field $\mathbb{Q}$; otherwise $a\cdot 1+b\sqrt{2}=0 \Leftrightarrow \sqrt{2}=-\frac{a}{b}$, which is absurd.



Then I am stuck.

elementary number theory - Prove that $n^{2003}+n+1$ is composite for every $n\in \mathbb{N} \setminus\{1\}$



Prove that $n^{2003}+n+1$ is composite for every $n\in \mathbb{N} \backslash\{1\}$.



I tried expanding $n^{2003}+1$, but I got nothing useful. I also couldn't make any progress, let alone reach a contradiction, by assuming $n^{2003}+n+1=pq$ where $p,q\not= 1$. How should I do this, and are there general tips on how to approach such problems and what to think about?



Answer



Let $w=e^{i2\pi/3}$. Then $w$ and $w^2$ are the roots of $x^2+x+1$, and they are also roots of $x^{2003}+x+1$ (since $2003\equiv 2\pmod 3$, we have $w^{2003}+w+1=w^2+w+1=0$, and likewise for $w^2$); therefore $x^2+x+1 \mid x^{2003}+x+1$. So we have that $x^{2003}+x+1=(x^2+x+1)P(x)$, where $P(x)$ is some polynomial with integer coefficients. For $x\ge 2$, $x^{2003}+x+1$ is much bigger than $x^2+x+1$, so $P(x)$ is an integer greater than $1$, from which the conclusion follows.


Wednesday 25 March 2015

probability - Expectation of the product of two dependent binomial random variable



Suppose you have $n$ balls and $m$ numbered boxes. You place each ball randomly and independently into one of the boxes. Let $X_i$ be the number of balls placed into box number $i$, so $X_1 + \cdots + X_m = n$.




For $i\ne j$, find $E(X_iX_j)$



This is what I have done so far...
Can someone pointed out where I have go wrong, or give some hints on how to go forward.
enter image description here


Answer



Let $Y=X_1+\cdots+X_m$. Then $E(Y^2)=n^2$. But also
$$E(Y^2)=\sum_1^m E(X_i^2)+\sum_{i\ne j}E(X_iX_j).$$
Now all we need is $E(X_i^2)$.




The random variable $X_i$ has a binomial distribution with $p=1/m$ and number of trials equal to $n$. This has mean $\frac{n}{m}$, and variance $n\cdot \frac{1}{m}\cdot\left(1-\frac{1}{m}\right)$. So now we can calculate $E(X_i^2)$, and finish.
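Carrying this to the end (a supplement, using the setup above): $\sum_i E(X_i^2)=m\left[\frac{n}{m}\left(1-\frac1m\right)+\frac{n^2}{m^2}\right]=n-\frac{n}{m}+\frac{n^2}{m}$, so

$$E(X_iX_j)=\frac{n^2-\sum_i E(X_i^2)}{m(m-1)}=\frac{(n^2-n)\left(1-\frac1m\right)}{m(m-1)}=\frac{n(n-1)}{m^2},$$

which one can double-check by writing $X_i$ as a sum of indicators: for $i\ne j$, only pairs of distinct balls contribute, each with probability $1/m^2$.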


calculus - Prove the series $\sum_{n=1}^{\infty}\frac{(-1)^n}{\ln{n}+\sin{n}}$ converges

How can one prove that the series



$$\sum_{n=1}^{\infty}\frac{(-1)^n}{\ln{n}+\sin{n}}$$



converges? I can't do a comparison test with the Leibniz formula for $\pi$ because the terms are not $>0$ for all $n$. I can't do a ratio test because I can't compute the limit; the alternating series test can't be applied; and the series of absolute values is not convergent. I'm out of ideas.




Any clues?

calculus - Can we consider a hypergeometric function as a closed-form?

Let's say a calculus problem like an integral or a series has a solution that inevitably involving a hypergeometric function. It turns out that hypergeometric function cannot be expressed in term of certain "well-known" functions or expressions. The question then arises:





Can we consider that solution as a closed-form?




How about a solution that involving a Meijer $\rm G$-function? Please provide me an answer or a comment that contains explanations to support your arguments. I am aware that the answer of this OP can be subjective, but I would dearly love to know your thought or opinion, so please share your view about this issue as an answer or a comment. Any constructive answers or comments would be greatly appreciated. Thank you.

measure theory - $\lim_{n \to \infty} \int_X f_n \, d\mu = \int_X f \, d\mu$ implies $\lim_{n \to \infty} \int_B f_n \, d\mu = \int_B f \, d\mu$ for $B \subseteq X$

I'm having trouble with the following problem.



Let $(X, \mathcal{M},\mu)$ be a measure space, where $X = [a,b] \subset \mathbb{R}$ is a closed and bounded interval and $\mu$ is the Lebesgue measure. Let $f_{n}$ be a sequence of non-negative functions in $L^{1}(X,\mathcal{M},\mu)$ $\textit{converging in measure}$ to a function $f \in L^{1}(X,\mathcal{M},\mu)$. Given that the following holds,



$\lim\limits_{n\rightarrow\infty}\int\limits_{X}f_{n}d\mu = \int\limits_{X}fd\mu$




show that for all $B \subset X$,



$\lim\limits_{n\rightarrow\infty}\int\limits_{B}f_{n}d\mu = \int\limits_{B}fd\mu$



where $B$ belongs to the Borel $\sigma$-algebra.



I was given a hint that convergence in measure on $X$ implies convergence in measure on $B$, but I'm not sure how to proceed from there.

abstract algebra - Multiplication table of a Galois group?

I'm looking at the polynomial $x^4 − 4x^2 + 16$. I know that its roots are $\pm\sqrt{3}\pm i$, and so its normal field extension is $\mathbb Q(i, \sqrt{3})$.



However, I am also asked to give a multiplication table for the Galois group. What will that look like? (What will be its rows and columns?)




(I am practicing for an exam)
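No answer is recorded here, but for orientation, a sketch under the assumption that the splitting field is $\mathbb Q(i,\sqrt{3})$: the Galois group is the Klein four-group $\{1,\sigma,\tau,\sigma\tau\}$, where $\sigma$ fixes $\sqrt{3}$ and sends $i\mapsto -i$, while $\tau$ fixes $i$ and sends $\sqrt{3}\mapsto-\sqrt{3}$. The rows and columns are labeled by these four automorphisms, and every element is its own inverse:

$$\begin{array}{c|cccc}
\circ & 1 & \sigma & \tau & \sigma\tau\\ \hline
1 & 1 & \sigma & \tau & \sigma\tau\\
\sigma & \sigma & 1 & \sigma\tau & \tau\\
\tau & \tau & \sigma\tau & 1 & \sigma\\
\sigma\tau & \sigma\tau & \tau & \sigma & 1
\end{array}$$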

calculus - Limit contradiction in L'Hopitals Rule and Special Trig limits

While in calculus class the other day, I stumbled upon a contradiction between L'Hopital's rule and special trig limits.



The problem looks like this



$$
\lim_{x\to 0} \frac{\tan(x)−x}{x^3}
$$




Using L'Hopitals rule (because the limit = 0/0)
$$
\lim_{x\to 0} \frac{\tan(x)−x}{x^3} = \lim_{x\to 0} \frac{\frac{d}{dx}(\tan(x)−x)}{\frac{d}{dx}(x^3)} = \lim_{x\to 0} \frac{\sec(x)^2−1}{3x^2}
$$
Again using L'Hopitals rule (because the limit = 0/0)
$$
\lim_{x\to 0} \frac{\sec(x)^2−1}{3x^2} = \lim_{x\to 0} \frac{\frac{d}{dx}\sec(x)^2−1}{\frac{d}{dx}3x^2} = \lim_{x\to 0} \frac{2\sec(x)^2\tan(x)}{6x} = \lim_{x\to 0} \frac{\sec(x)^2\tan(x)}{3x}
$$
Again using L'Hopitals rule (because the limit = 0/0)

$$
\lim_{x\to 0} \frac{\sec(x)^2\tan(x)}{3x} = \lim_{x\to 0} \frac{\frac{d}{dx}\sec(x)^2\tan(x)}{\frac{d}{dx}3x} = \lim_{x\to 0} \frac{4\sec(x)^2\tan(x)^2 + 2\sec(x)^4}{6} = \frac{1}{3}
$$
Now, using special trig limits...
$$
\lim_{x\to 0} \frac{\tan(x)−x}{x^3} = \lim_{x\to 0} \frac{\tan(x)}{x^3}-\frac{x}{x^3} = \lim_{x\to 0} \frac{\tan(x)}{x}*\frac{1}{x^2}-\frac{1}{x^2}
$$
With knowledge of $\tan(x)$ trig limit
$$
\lim_{x\to 0} \frac{\tan(x)}{x} = 1

$$
You can now simplify.
$$
(\lim_{x\to 0} \frac{\tan(x)}{x}*\frac{1}{x^2}-\frac{1}{x^2}) = (\lim_{x\to 0} \frac{\tan(x)}{x}*\lim_{x\to 0}\frac{1}{x^2}-\lim_{x\to 0}\frac{1}{x^2}) = (1 * \lim_{x\to 0}\frac{1}{x^2}-\lim_{x\to 0}\frac{1}{x^2})
$$
Now the end result
$$
\lim_{x\to 0}\frac{1}{x^2}-\lim_{x\to 0}\frac{1}{x^2} = \lim_{x\to 0}\frac{1}{x^2}-\frac{1}{x^2} = 0
$$
But $1/3 \neq 0$, so there is a discrepancy between special trig limits and L'Hopital's rule. There is the same anomaly with

$$
\lim_{x\to0}\frac{\sin(x)-x}{x^3} = (\text{through L'Hopitals}) -\frac{1}{6} \text{ or (through special trig limits) } 0
$$
On a calculator in graph or table mode, as you approach 0 it seems to be 1/3.
But as it turns out, if you get very precise, the limit actually seems to approach 0.



The same happens with the graph: it seems to be parabolically approaching 1/3, but if you zoom in to an extreme you see it actually approaches zero.



This original work is my own as published on Sunday, October 9, 2016 at 9:51pm. I claim all knowledge credits and fallacies that may come with this discrepancy. But overall please, prove or disprove this, or at least explain why this occurs. Thank you for your time.




(I will try to get a picture of the graph and table)
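A note on the numerics, offered as an explanation sketch rather than a definitive answer: the algebraic step that splits the limit into $\lim 1/x^2 - \lim 1/x^2$ is not valid, because those two limits are infinite and the limit laws apply only to finite limits; equivalently, replacing $\tan x/x$ by its limit $1$ throws away the $x^2/3$ correction term, which is exactly what the $1/x^2$ factor amplifies into the true answer $1/3$. The calculator behavior is floating-point cancellation: for tiny $x$, $\tan(x)$ and $x$ agree in nearly all stored digits, so their computed difference loses precision and eventually rounds to $0$. A small Python demonstration (my own):

import math

# (tan(x) - x) / x^3 in double precision: accurate for moderate x,
# then destroyed by cancellation once tan(x) - x falls below the
# spacing of representable numbers near x.
for k in range(1, 9):
    x = 10.0 ** (-k)
    print(f"x = 1e-{k}: {(math.tan(x) - x) / x**3:.10f}")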

Tuesday 24 March 2015

sequences and series - Is $\sum_{i=0}^\infty (i) + c = \sum_{k=0}^\infty (2k) + c + \sum_{j=0}^\infty (2j+1) + c$?



Basically, the question is, "is the sum of all positive numbers equal to the sum of all positive even numbers and odd numbers?" (which is obviously yes) but with a twist: for every number, there is a constant $c$ which is also an integer.



$$\sum_{i=0}^\infty (i) + c = \sum_{k=0}^\infty (2k) + c + \sum_{j=0}^\infty (2j+1) + c$$




It really feels like this equation is true, as there should be an equal number of $c$'s on both sides, but I am not a mathematician at all, just equipped with high school maths, and I wanted a proper explanation for this from you guys! I couldn't find this question when googling; sorry in advance if there is one.


Answer



Note that the sum $$1+2+3+\dots$$ diverges, so they sum to $\infty$ on both sides.


algebra precalculus - Show (via complex numbers): $\frac{\cos\alpha\cos\beta}{\cos^2\theta}+\frac{\sin\alpha\sin\beta}{\sin^2\theta}+1=0$ under given conditions





$\alpha$ and $\beta$ do not differ by an even multiple of $\pi$. If $\theta$ satisfies $$\frac{\cos\alpha}{\cos\theta}+ \frac{\sin\alpha}{\sin\theta}=\frac{\cos\beta}{\cos\theta}+\frac{\sin\beta}{\sin\theta}=1$$ then show that $$\frac{\cos\alpha\cos\beta}{\cos^2\theta}+\frac{\sin\alpha\sin\beta}{\sin^2\theta}+1=0$$
I wish to solve this problem using some elegant method, preferably complex numbers.




I've tried using the fact that $\alpha$ and $\beta$ satisfy an equation of the form $\cos x/\cos\theta + \sin x/\sin\theta = 1$, and got the required result. See my solution here: https://www.pdf-archive.com/2017/07/01/solution



I'm guessing there's an easier way to go about it. Thanks in advance!


Answer



Well, I'm not sure I can do that in a very elegant way, but it might be shorter. I'm using addition theorems, and the identity $$\sin x -\sin y=2\sin\frac{x-y}{2}\,\cos\frac{x+y}{2}$$ following immediately from them.
Multiplying the given equations by $\sin\theta\,\cos\theta,$ we get
$$\sin(\alpha+\theta)=\sin\theta\,\cos\theta=\sin(\beta+\theta),$$

but $$0=\sin(\alpha+\theta)-\sin(\beta+\theta)=2\sin\frac{\alpha-\beta}{2}\,\cos\left(\frac{\alpha+\beta}{2}+\theta\right).$$ The first factor is $\neq0$ by assumption, so $$\cos\left(\frac{\alpha+\beta}{2}+\theta\right)=0.$$ Multiplying by $2\sin\left(\frac{\alpha+\beta}{2}-\theta\right)$ and using the above identity, you get $\sin(\alpha+\beta)-\sin2\theta=0$, and this means (using $\sin2\theta=2\sin\theta\,\cos\theta$ and dividing by $\sin\theta\,\cos\theta$)
$$\frac{\sin\alpha\,\cos\beta}{\sin\theta\,\cos\theta}+\frac{\sin\beta\,\cos\alpha}{\sin\theta\,\cos\theta}=2.$$ Now you have
$$\left(\frac{\cos\alpha}{\cos\theta}+ \frac{\sin\alpha}{\sin\theta}\right)\,\left(\frac{\cos\beta}{\cos\theta}+\frac{\sin\beta}{\sin\theta}\right)-\left(\frac{\sin\alpha\,\cos\beta}{\sin\theta\,\cos\theta}+\frac{\sin\beta\,\cos\alpha}{\sin\theta\,\cos\theta}\right)=1\cdot1-2=-1,$$ and that gives your required result after simplifying.


trigonometry - Prove this trigonometric identity in quadrilateral



If $\alpha,\beta,\gamma,\delta$ are the angles of a quadrilateral, each different from $90^\circ$, prove the following:




$$ \frac{\tan\alpha+\tan\beta+\tan\gamma+\tan\delta}{\tan\alpha\tan\beta\tan\gamma\tan\delta}=\cot\alpha+\cot\beta+\cot\gamma+\cot\delta $$



I tried different transformations with using $\alpha+\beta+\gamma+\delta=2\pi$ in equation above, but no success. Am I missing some not-so-well-known formula?


Answer



It follows directly from $\tan(\alpha + \beta + \gamma + \delta) = 0$ and the sum angle formula for $\tan$ (see here: Tangent sum using symmetric polynomials)



Using that formula we get (from numerator = 0) that



$$ \tan \alpha + \tan \beta + \tan \gamma + \tan \delta = $$




$$\tan \alpha\tan \beta\tan \gamma+ \tan \alpha\tan \beta\tan \delta + \tan \alpha\tan \gamma\tan \delta + \tan \beta\tan \gamma\tan \delta$$



dividing by $ \tan \alpha\tan \beta\tan \gamma\tan \delta$ gives the result.


calculus - Evaluating a parametric integral



I need some help to evaluate the following integral.




$$\int_{0}^{\infty}{\mathrm{d}x \over x^{\alpha}\left(x + 1\right)}$$ where $\alpha \in \left(0,1\right)$




I've tried many ways (the most promising seems to be expanding in a Taylor series), but so far I have no solution.




Some ideas?
Thank you.


Answer



I think we will need some complex analysis here.



Take a branch of $1/z^a$ defined in $\mathbb{C}\setminus [0,+\infty)$ and consider the integral
$$\int_{\gamma}\frac{dz}{z^a(z+1)}$$
where $\gamma$ is a closed path composed of an arc of an inner circle of radius $0<r<1$, an arc of an outer circle of radius $R>1$, and two parallel segments over and under the segment $[r,R]$.
Then by the residue theorem

$$\int_{\gamma}\frac{dz}{z^a(z+1)}=2\pi i\mbox{Res}\left(\frac{1}{z^a(z+1) },-1\right)=2\pi i e^{-i\pi a}.$$
Now we take the limit as $R\to+\infty$ and $r\to 0^+$.
It is easy to see that the integrals along the arcs of the circles go to $0$. Hence
$$\int_0^{+\infty}\frac{dx}{x^a(x+1)}-\int_0^{+\infty}\frac{dx}{x^ae^{2\pi ia}(x+1)}=2\pi i e^{-i\pi a}$$
which implies that
$$\int_0^{+\infty}\frac{dx}{x^a(x+1)}=\frac{2\pi i e^{-i\pi a}}{1-e^{-2i\pi a}}=\frac{\pi}{\sin (\pi a)}.$$


set theory - Help with intuition on Cardinal Arithmetic Problems



It happens a lot to me that when I find an intuitive model (picture) of a mathematical entity, the proofs left as exercises in books are very easy to solve. For example when dealing with filters and ultrafilters on sets (specially $\omega$) I just need to imagine the Hasse diagram of the Poset $\langle\mathcal{P}(\omega),\subseteq\rangle$ and most proofs and definitions come naturally.




I have been trying to find a model/picture for arithmetic with cardinals that helps me solve the problems that come up in some books, but I don't know if cardinal numbers are too big and I cannot picture them correctly, or if I just have to use different models/pictures for different problems. So far the two that have been working best are parallel lines (when comparing cardinals) and my intuition about injective and surjective functions. But those approaches only work quickly for easy problems (adding or multiplying finitely many cardinals, or relating the cardinality of two specific sets). However, these techniques become lengthy when new concepts are introduced (like cofinality and exponentiation of cardinals). Moreover, I have not been able to solve many problems with just these two methods. I'll quote two of those problems and try to make these ideas clearer:




  1. (This one is in Andras Hajnal & Peter Hamburger's Set Theory book) If $\kappa$ is an infinite cardinal number and $\kappa=\sum_{\lambda<cf(\kappa)}\kappa_\lambda$ with each $\kappa_\lambda<\kappa$, prove that $\kappa^{cf(\kappa)}=\prod_{\lambda<cf(\kappa)}\kappa_\lambda$.


For this one I just changed $\kappa^{cf(\kappa)}$ for $\prod_{\lambda<cf(\kappa)}\kappa_\lambda$.


  2. (A friend gave me this one but I have a feeling there is a typo somewhere) Suppose that $\alpha$ is a limit ordinal and that $\langle\kappa_\xi\rangle_{\xi<\alpha}$ is a strictly increasing sequence of cardinals such that $\kappa=\sum_{\xi<\alpha}\kappa_\xi$; prove that if $0<\lambda<cf(\kappa)$ then $\kappa^\lambda=\sum_{\xi<\alpha}\kappa_\xi^\lambda$.



Although not stated in the problem, every cardinal $\kappa_\xi$ must be strictly less than $\kappa$ (otherwise the sum would be greater than $\kappa$). And given that for every infinite cardinal $cf(\kappa)=\min\{\lambda\in CN_\infty\mid\forall\xi\in\lambda(\kappa_\xi<\kappa)\wedge\kappa=\sum_{\xi<\lambda}\kappa_\xi\}$, we must have $cf(\kappa)\leq\alpha$. Under the same principle one just has to prove that $cf(\kappa^\lambda)\leq\alpha$. There is also a trivial inequality in this problem: $\kappa^\lambda=(\sum_{\xi<\alpha}\kappa_\xi)^\lambda\geq\sum_{\xi<\alpha}\kappa_\xi^\lambda$.

So the questions are: Do you use a single model/picture to help you solve this kind of problems about infinite sums, products and exponentiation of cardinals? Which one? and could you provide a hint on how to use such a model/picture to solve problems 1. and 2.?


Answer



I just realized that you are asking for other methods than constructing explicit functions, however, I figure direct attacks might be the most intuitive for problems like this.



In order to show that $\kappa^{cf(\kappa)}\leq\prod_{\lambda<cf(\kappa)}\kappa_\lambda$, we construct an injection from the functions $f: cf(\kappa)\to\kappa$ into the product.


Given $f\in \kappa^{cf(\kappa)}$, recursively build a sequence $g$ such that
$$g(\lambda)=\begin{cases}
f(i)+1 & \text{if } i \text{ is the least index such that } f(i)\in[0,\kappa_\lambda) \text{ and } i \text{ has not been used previously,} \\
0 & \text{otherwise.}
\end{cases}$$



Note that we eventually exhaust all $f$-values, as $cf(\kappa)$ is regular.



If $f\neq h$, then let $i\in cf(\kappa)$ be the least point where they differ. There are two cases to consider: either there exists a least undefined $\lambda$ such that $f(i), h(i)\in [0,\kappa_\lambda)$, or there exists a least undefined $\lambda$ such that $\kappa_\lambda$ separates $f(i)$ and $h(i)$. In either case, the sequences constructed from the two functions are different, so we have an injection.


radicals - Proving that for each prime number $p$, the number $\sqrt{p}$ is irrational







I'm a total beginner and any help with this proof would be much appreciated. Not even sure where to begin.





Prove that for each prime number $p$, $\sqrt{p}$ is irrational.


Monday 23 March 2015

calculus - Textbooks that use notation with explicit argument variable in the upper bound $\int^x$ for "indefinite integrals."

I dare to ask a question similar to a closed one but more precise.



Are there any established textbooks or other serious published work that use $\int^x$ notation instead of $\int$ for the so-called "indefinite integrals"?



(I believe I've seen it already somewhere, probably in the Internet, but I cannot find it now.)



So, I am looking for texts where the indefinite integral of $\cos$ would be written something like:
$$
\int^x\cos(t)dt =\sin(x) - C
$$

or
$$
\int^x\cos(x)dx =\sin(x) + C.
$$



(This notation looks more sensible, and more consistent with the one for definite integrals, than the common one with a bare $\int$.)



Some context.




IMO, the indefinite integral of $f$ on a given interval $I$ of definition of $f$ should not be defined as the set of antiderivatives of $f$ on $I$ but as the set of all functions $F$ of the form
$$
F(x) =\int_a^x f(t)dt + C,\qquad x\in I,
$$

with $a\in I$ and $C$ a constant (or as a certain unspecified particular function of this form).
In other words, I think that indefinite integrals should be defined in terms of definite integrals and not in terms of antiderivatives.
(After all, the integral sign historically stood for a sum.)
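As an illustration of this point of view (a minimal sketch of mine, using sympy; the symbols $a$, $x$, $t$ are arbitrary), a computer algebra system applied to the $\cos$ example above gives:

```python
# The "indefinite integral" of cos defined as a definite integral with a
# variable upper bound: F(x) = integral from a to x of cos(t) dt.
import sympy as sp

t, x, a = sp.symbols('t x a', real=True)
F = sp.integrate(sp.cos(t), (t, a, x))
print(F)               # -sin(a) + sin(x): the constant is -sin(a)
print(sp.diff(F, x))   # cos(x), as the first fundamental theorem predicts
```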



In this case, the fact that the indefinite integral of a continuous function $f$ on an interval $I$ coincides with the set of antiderivatives of $f$ on $I$ is the content of the first and the second fundamental theorems of calculus:





  1. the first fundamental theorem of calculus says that every representative of the indefinite integral of $f$ on $I$ is an antiderivative of $f$ on $I$, and


  2. the second fundamental theorem of calculus says that every antiderivative of $f$ on $I$ is a representative of the indefinite integral of $f$ on $I$ (it is an easy corollary of the first one together with the mean value theorem).


real analysis - Is it true that $0.999999999\dots=1$?



I'm told by smart people that
$$0.999999999\dots=1$$
and I believe them, but is there a proof that explains why this is?


Answer



What does it mean when you refer to $.99999\ldots$? Symbols don't mean anything in particular until you've defined what you mean by them.



In this case the definition is that you are taking the limit of $.9$, $.99$, $.999$, $.9999$, etc. What does it mean to say that this limit is $1$? It means that no matter how small a number $x$ you pick, I can show you a point in that sequence such that all further numbers in the sequence are within distance $x$ of $1$. But certainly, whatever number $x$ you choose, it is bigger than $10^{-k}$ for some $k$. So I can just pick my point to be the $k$th spot in the sequence.




A more intuitive way of explaining the above argument is that the reason $.99999\ldots = 1$ is that their difference is zero. So let's subtract $1.0000\ldots -.99999\ldots = .00000\ldots = 0$. That is,



$1.0 -.9 = .1$



$1.00-.99 = .01$



$1.000-.999=.001$,



$\ldots$




$1.000\ldots -.99999\ldots = .000\ldots = 0$
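Equivalently, the same limit can be written in one line by reading the repeating decimal as a geometric series (a standard computation, added here as a brief sketch):

$$0.999\ldots = \sum_{n=1}^{\infty} \frac{9}{10^n} = \frac{9}{10}\cdot\frac{1}{1-\frac{1}{10}} = 1.$$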


calculus - derivative of $x\cdot|\sin x|$



I have the function $f(x)=x|\sin x|$, and I need to determine at which points the function is differentiable.
I tried to solve it by using the definition of the derivative as a limit, but it's complicated. It seems that the function is not differentiable where $|\sin x|=0$, but I don't know how to show it.
I thought of calculating the derivative of $f(x)$, but I haven't learned how to differentiate $|\sin x|$.
How can I solve it without knowing the derivative of $|\sin x|$? Or, a better question: how do I calculate the derivative of $|\sin x|$?



Edited: I haven't learned the derivative of $|f(x)|$.


Answer




I'm going to make this as simple as I can. So, of course, I'll be assuming $x\in \mathbb R$.



Your first question is for what values of $x$ the function is differentiable.



There are nice algebraic ways to find this, but why waste time on an explicit proof if all one needs is to convince one's peers that one's logic is right?



Let's just observe the differentiability of $f(x) = x\cdot |\sin x|$ through its graph.
But oh, wait, you may not know how to plot its graph. So, let's take baby steps to find it.



Take what we know. The standard graph of $y = \sin x$




[graph of $y = \sin x$]



Note that the roots (i.e., where $\sin x = 0$) are $x = n\pi,\ n\in\mathbb Z$.



Now, let's graph $y = |\sin x|$. How?
There's a method to get the graph of $|f(x)|$ from that of $f(x)$, and it goes something like this:




Step 1: Make $y = 0$ a mirror which reflects all of the graph with $y<0$ into the half-plane $y>0$.
Step 2: Eliminate the portion of the graph which lies in $y < 0$.
Step 3: Be amazed that by executing the above two steps precisely, you got the right graph.





Learn why this works



[graph of $y = |\sin x|$]



Now we have to multiply this by $x$. There's no standard method for this; it just takes a bit of thinking and an understanding of what multiplication does to a graph.



Usually, when we multiply a function by a constant, say $c$:





  • The graph diminishes for $c\in(0,1)$

  • Enlarges for $c>1$

  • Turns upside down for $c<0$ and follows the above two observations once again.



Since we're multiplying by a variable and not a constant scalar, the graph is distorted so that all of the above effects appear at once, to an increasing degree as the magnitude of $x$ increases.

[graph of $y = x|\sin x|$]



Now, it is obvious that the roots of this graph are the same as those of $\sin x$,

and you know we can't differentiate a function at sharp points. (Why?)



Notice that the sharp point at $x = 0$ has been smoothed over by the inversion of the graph for $x<0$. But is the function actually differentiable at $x=0$?



To prove that it's differentiable at this point,



$$f'(0) = \lim_{h \to 0} \frac{f(0+h) - f(0)}{h}
= \lim_{h \to 0} \frac{h\,|\sin h| - 0}{h}
= \lim_{h \to 0} |\sin h| = 0
$$



$\therefore $ Derivative exists @ $x = 0$ $\implies$ Differentiable @ $x = 0$



So, we can now safely say that we can differentiate $f(x) = x\cdot|\sin x|\quad \forall \space x\in\mathbb R - \{n\pi\}, \quad n\in\mathbb Z-\{0\}$



Or more easily in words,
$f(x) = x|\sin x|$ is differentiable at $x \neq n\pi ,\quad n \neq 0$



The following is how I would differentiate the function:

$$
\frac{d}{dx} x\cdot|\sin x|\\
= \frac{d}{dx} x\cdot\sqrt{\sin^2 x}
\quad ,\quad \{\because\space|x| = \sqrt{x^2}\} \\
= x\frac{d}{dx}\sqrt{\sin^2 x} + |\sin x|\frac{dx}{dx} \quad ,\quad \{\because\space (uv)' = uv' + u'v\}\\
= x\cdot\frac{1}{2\sqrt{\sin^2x}}\cdot (2\sin x)\cdot (\cos x) + |\sin x|
\quad , \quad \{\because\text{Chain Rule }\}\\
=\frac{x\cdot\sin 2x}{2|\sin x|} + |\sin x|$$

This isn't totally simplified but hopefully this is helpful.
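As a quick sanity check of this formula (my addition, not part of the original answer), one can compare it against a central-difference approximation at a few arbitrary points where $\sin x \neq 0$:

```python
# Compare f'(x) = x*sin(2x)/(2|sin x|) + |sin x| with a numerical derivative.
import numpy as np

f = lambda x: x * np.abs(np.sin(x))
fprime = lambda x: x * np.sin(2 * x) / (2 * np.abs(np.sin(x))) + np.abs(np.sin(x))

h = 1e-6
for x in (0.7, 2.0, -1.3, 4.5):                # arbitrary points with sin(x) != 0
    numeric = (f(x + h) - f(x - h)) / (2 * h)  # central difference
    print(f"x={x}: numeric={numeric:.8f}, formula={fprime(x):.8f}")
```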




Now, to further clarify the derivative of $|x|$:
$$\frac{d}{dx} |x|
= \frac{d}{dx} \sqrt{x^2}
= \frac{1}{2}(x^2)^{\frac{1}{2} - 1} \cdot 2x
= \frac{2x\cdot (x^2)^{-\frac{1}{2}}}{2}
= \frac{x}{\sqrt{x^2}}
= \frac{x}{|x|}
\equiv \frac{|x|}{x}
= \text{sgn}(x)$$




Here is more information on $\text{sgn}(x)$ and a better, more explicit way of finding the derivative of the absolute value.



Exercise: Can you try to get the derivative of $x|\sin x|$ with the sign-function included?


polynomials - Quadratics: Intuitive relation between discriminant and derivative at roots

While working with quadratics that have real roots, I realized an interesting fact:



The slope of a quadratic at its roots is equal to $\pm \sqrt{D}$ where $D=b^2-4ac$



Proof:



$$f(x) = ax^2 + bx +c$$




$$f'(x) = 2ax+b$$



Roots:



$$x = \frac{-b\pm\sqrt{b^2-4ac}}{2a}$$



So, if we try to find the slope at either root $r$:

$$f'(r) = 2a\left(\frac{-b\pm\sqrt{b^2-4ac}}{2a}\right)+b = \pm\sqrt{b^2-4ac} = \pm \sqrt D$$




where the sign ($\pm$) can be determined by whether the root is on the right of the vertex or the left.



If the quadratic has only $1$ root (i.e., $2$ coincident roots), then the quadratic is at a stationary point there, so the slope must be $0$. This is consistent with the fact that quadratics have only one distinct root exactly when $b^2 - 4ac = 0$.
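As a quick numerical sanity check (my addition; the quadratic below is an arbitrary example with two real roots):

```python
# Verify that the slope at each root of f(x) = ax^2 + bx + c is +/- sqrt(D).
import sympy as sp

x = sp.symbols('x')
a, b, c = 2, -3, -5
f = a * x**2 + b * x + c
D = b**2 - 4 * a * c                     # discriminant: 49
for r in sp.solve(f, x):                 # roots: -1 and 5/2
    print(r, sp.diff(f, x).subs(x, r))   # slopes: -7 and 7, i.e. -/+ sqrt(49)
```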



What geometric/intuitive approach can be applied to explain this interesting phenomenon?

Sunday 22 March 2015

linear algebra - Intuitive perspective of eigenvalues and rank of a matrix



Assuming a matrix $A$, $n\times n$, with $n$ non-repeated and non-zero eigenvalues;




  1. If we calculate the matrix $A-\lambda I$ for one of its $n$ eigenvalues, we see that its rank has decreased by one. If the eigenvalue has multiplicity $k$, then the rank decreases by $k$ instead. What would be an intuitive explanation for this?


  2. By $Ax=\lambda x$, one could argue that we try to find the values of $\lambda$ for which an $n\times n$ matrix with $\text{rank}(A)=n$ has the same impact on $x$ as:



    2a. a scalar $\lambda$?




    or



    2b. a $n\times n$ diagonal matrix of rank $n$?


  3. In the relationship $(A-\lambda I)x=0$, given that we want a nontrivial solution for the vector $x$, could we declare the matrix $A-\lambda I$ as zero, without the determinant, following directly the above relationship?



Answer



For 1, it depends on what you mean by "intuitive", but here's a shot at it. A matrix either sends a vector to $0$ or it doesn't. The number of (independent) vectors that it sends to $0$ is related to the number that it doesn't by the rank–nullity theorem: the more independent vectors it sends to $0$, the fewer independent non-zero outputs it can produce. If $A$ has an eigenvector with eigenvalue $\lambda$, then $A-\lambda I$ sends that vector to $0$. Therefore it can't send as many vectors to non-zero values, meaning its rank has been reduced.



EDIT: This is not always true as Widawensen points out in another answer.




For 2, the answer is 2a, but we normally think about it the other way around: we are trying to find a vector such that $A$ operating on that vector simply scales it.



For 3, if $A-\lambda I$ is the zero matrix then $A=\lambda I$. This is the special case where $A$ has a single eigenvalue whose multiplicity equals the dimension of the space. But there are cases where $A$ has an eigenvalue $\lambda$ while $A-\lambda I$ is not zero. Those cases can be found with the determinant, because the determinant of a singular matrix is $0$ and we want $A-\lambda I$ to be singular.
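To illustrate point 1 concretely (a small sketch of mine, not from the original answer), here is a matrix with an eigenvalue of geometric multiplicity $2$, for which the rank of $A-\lambda I$ drops accordingly:

```python
# rank(A - lambda*I) drops by the geometric multiplicity of lambda.
import numpy as np

A = np.diag([3.0, 3.0, 5.0])   # eigenvalue 3 (multiplicity 2), eigenvalue 5
I = np.eye(3)
print(np.linalg.matrix_rank(A))           # 3
print(np.linalg.matrix_rank(A - 3 * I))   # 1  (rank dropped by 2)
print(np.linalg.matrix_rank(A - 5 * I))   # 2  (rank dropped by 1)
```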


combinatorics - Why are permutations (nPr) called variations in non-English languages?




First of all, you should be at least a little familiar with combinatorics to understand this question.
Some frequently used calculator keys in stochastics (probability) are the nCr and nPr ones.



Edit: I first asked this question on the German and English Stack Exchange versions (both places where "mathematics" tags exist), but as it turned out that this is likely the case for all non-English languages and that the community here may be better suited to answer the question, I've also posted it here. Despite that, below the German language is used as an example of a language where it is called differently.
Edit2: Also posted at History of Science and Mathematics and Linguistics.





nCr is quite obvious. The "C" stands for "combinations" (actually those without repetition), and this is what they are called in both German and English. That is just the binomial coefficient:




$$\binom{n}{k}=\frac{n!}{k!(n-k)!}= n \text{ nCr } k$$





Keeping that knowledge in mind, as a German, you would assume nPr is for calculating the permutations (without repetition, again), i.e. just:



$$n!$$



However, that's not the case; it actually calculates the "variation", as it's called in German:




$$\frac{n!}{(n-k)!} = n \text{ nPr } k$$



And it is true: Actually the "P" does stand for "permutation" in English. So the last formula is what they call "permutation".



Just different names?



So we could say these are just different names, but no, it gets more complicated, because – using the German terms here again – permutations are just a special kind of variation. Essentially, a permutation is the last formula with $k=n$, i.e. you arrange all items and do not select a subset first.



Obviously, English mathematicians do not use the term "permutations" for the specific version we name in German, but for the general version.
Essentially this leads to another problem, however, when we look at nPr with repetition. All examples above were without repetition, but there are formulas for the ones with repetition, too.




So the "permutation with repetition"/"Variation mit Wiederholung" and is easy to calculate, you just:



$$n^k$$



Wikipedia does not seem to want to acknowledge the English term for that, saying they have "sometimes been referred to" in this way… (Or is this actually something different, as the formula there is $k^n$?)



Anyway, if we assume the term is used like that, we've got another way to have German "Permutationen" "with repetition". This time, however, as in the German definition of permutations, we do not select items; we just have multiples of the same items. So if, e.g., you have $r, s, \dots, t$ identical elements among $n$ elements, you get a formula like this:



$$\frac{n!}{r!\cdot s! \cdots t!}$$




And this is what we call "Permutation mit Wiederholung" in German. But what term is then used in English for this kind of "repetition"?





So how did this inconsistent naming across languages happen? Is there a "correct" term, or was one term invented before the other and then adopted incorrectly?
Do other languages possibly also name it differently, i.e. is the German naming the exception or the English one?
And what term is then used in English for "Permutationen mit Wiederholung"/identical elements in a set?














Edit: I found something: The English Wikipedia describes the term "variations" as:






  • Variations without repetition, an archaic term in combinatorics still commonly used by non-English authors for k-permutations of n

  • Variations with repetition, an archaic term in combinatorics still commonly used by non-English authors for n-tuples




Despite that sounding a little pejorative to me as a German speaker, it raises the question of whether this is really (internationally?) deprecated/outdated, or what term is supposed to be used instead.
Also, the relation to tuples, which are – I thought – just a different concept of a list of numbers, is not clear to me. After all, I could not find any of the formulas I've just mentioned in the linked article.


Answer



$(\boldsymbol{1})\quad$ We call $\,P(n,k)\,$ $k$-permutations. Order matters, repetitions not allowed. The number of permutations of $n$ objects taken $k$ at a time:

$$P(n,k)=n(n-1)\cdots(n-k+1)=\frac{n!}{(n-k)!}$$
So, these are ordered arrangements/selections/choices. We also use the $\;_n P_k\;$ notation.



$\quad$



$(\boldsymbol{2})\quad$ When we permute all objects we simply call them permutations and write $\,n!$



$\quad$



$(\boldsymbol{3})\quad$ When we permute all $n$ objects but some of them are indistinguishable (repeated), we refer to such arrangements as permutations with repetitions or distinguishable permutations:

$${{n}\choose{n_1, n_2, n_3,\dots,n_p}} = \frac {n!}{n_1!\, n_2!\, n_3!\cdots n_p!}$$



$$\quad$$



$(\boldsymbol{4})\quad$ When we have $n^k$ ordered arrangements, replacement allowed -- we call them permutations with replacement, or $k$-tuples.



$$\quad$$



$(\boldsymbol{5})\quad$ When repetitions are not allowed and order doesn't matter, we call such arrangements combinations: ${{n}\choose{k}}\;$ or $\;_n C_k\;$ or $\;C(n,k)$. We choose from $n$ objects taken $k$ at a time without regard to order. We read it "n choose k" and write:
$${{n}\choose{k}}=\frac{n(n-1)\cdots(n-k+1)}{k!}=\frac{n!}{k!\,(n-k)!}$$




$(\boldsymbol{6})\quad$ And finally, when we deal with unordered arrangements, repetitions allowed -- we call them combinations with repetitions:
$${{n+k-1}\choose{k}}=\frac{(n+k-1)\cdots n}{k!}=\frac{(n+k-1)!}{k!\,(n-1)!}$$
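For concreteness, the six counts above can be computed with Python's standard math module (a minimal sketch of mine, for $n=5$, $k=3$; math.perm and math.comb require Python 3.8+):

```python
import math

n, k = 5, 3
print(math.perm(n, k))                   # (1) k-permutations: 60
print(math.factorial(n))                 # (2) permutations of all n objects: 120
print(math.factorial(4) // (math.factorial(2) * math.factorial(2)))
                                         # (3) distinguishable permutations of AABB: 6
print(n ** k)                            # (4) k-tuples with replacement: 125
print(math.comb(n, k))                   # (5) combinations: 10
print(math.comb(n + k - 1, k))           # (6) combinations with repetition: 35
```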



Please note it doesn't matter what these are called in German or French, since each language has its own rules. An English manual of style is not applicable to other languages and vice versa; likewise, you can't replace "permutations" or "combinations" with the German "variations" in English. Yet we sometimes have different notations even within one language, as authors may have their own preferences in terms of notation and terminology. There's nothing to worry about here.



"Despite sounding a little pejorative to me, a German speaker, it raises the question of whether such usage is really (internationally?) deprecated and considered outdated." -- No. That's more of a speculation on terminology used in other languages by some Wikipedia writers. See their editing history. Do not blindly trust Wikipedia on anything that is not hard science.



Do other languages possibly also name it differently? -- Yes. You can see it in the comments from people of other countries. There should be a lot of subtle examples. Let me make a rough guess based on "googling":





  1. Disposizioni semplici=Variation ohne Wiederholung=l'arrangement=k-permutations


  2. Arranjo com repetição=Variation mit Wiederholung=permutations with replacement.




"So we should have been able to say these are just different technical terms -- but no -- it gets more complicated because in German terminology permutations are just a special kind of 'variations'." -- Yes, to some extent at least. "Permutations" is a broad term in English. What's more, you can view these formulas from different "angles", e.g., combinations being just a special case of distinguishable permutations, permutations being just ordered combinations; and permutations with replacement can be called permutations with repetitions (which might create some confusion!) and so on.



The German flow chart with the German nomenclature? -- I checked it. It's really nice and logical, quite commendable. I don't see how it would be inferior to any other nomenclature; rather the contrary.



Nomenclatures, notations, and terminology differ from country to country. In biology, any species receives a binomial name (Latin name) and there's no ambiguity across the world about that species. In math we also have universal symbols and notations, but they are not so rigid. You can find $\cot^{-1}$, $\operatorname{arccot}$, $\operatorname{arcctg}$ used to denote the same function, and so forth. You can find that in some countries analytic geometry is almost never part of calculus but always part of linear algebra. Sometimes you can find calculus being called mathematical analysis and being confused with analysis or real analysis. You may come across calques (verbatim translations) of higher algebra, general algebra, etc. You may see how in English we coined the words Calc I, II, III, IV as well as precalculus. Things are not clear-cut, and there will be variations, just like the difference in meaning the word "gift" has in English and in German. While the word "variation" may have very similar meanings in English and German, there will also be differences, maybe subtle ones. And it is exactly the words with minor differences in meaning that cause most of the confusion: people expect them to be the same, but they are not. One final example: we have books on vector calculus, but the title is a bit of a misnomer, as these books are just enhanced versions of Calculus III/IV. And it may have no bearing whatsoever on what is the case in other languages.




CONCLUSION:
Now we can answer the "title" question: why are permutations $P(n,r)$ called variations in languages other than English? -- That is simply not the case! While European languages may use mathematical "variations" in a similar way, the usage diverges to some extent.



IMPORTANT:
Please note that not only are technical terms quite different from country to country, but notations, too, may vary. Thus, in France, Russia, etc., permutations are often denoted $A^{k}_n$, and $C^{k}_n$ is used for combinations, which means the upper and lower indices are reversed. This may lead to mistakes in translation.


analysis - Existence of smooth function $f(x)$ satisfying partial summation



We know that $\sum_{i=1}^{n}i=\frac{n(n+1)}{2}$, $\sum_{i=1}^{n}\frac{1}{i}=\psi_{0} (n+1)-\psi_{0} (1)$, where $\psi_{0}(x)$ is the digamma function.



My problem is,



(1).Is there a transformation such that it maps
$x \to \frac{x(x+1)}{2}$ and $\frac{1}{x} \to \psi_{0}(x+1)$, and map a smooth $f(x)$ into another smooth function $g(x)$, such that $g(x)-g(x-1)=f(x)$ ? When I mention transformation, I mean an operator or algorithm for me to get $g(x)$ from $f(x)$.




(2). Surely $g(x)$, if it exists, is not unique, because $g(x)+C$ also satisfies the condition. Let's regard $g(x)+C$ and $g(x)$ as the same case. Is there another smooth $h(x)\not = g(x)+C$ satisfying this condition?



The problem came up when I tried to evaluate $\sum_{i=1}^{n} \sqrt{i}$; I'd like to represent it in integral form. Thanks for your attention!


Answer



You can use the zeta function and the Hurwitz zeta function to write your sum. As a more general case of your summation, we have the following representation (formally, with the divergent series interpreted via analytic continuation):




$$\sum_{i=1}^{n} i^s = \sum_{i=1}^{\infty}i^s - \sum_{i=0}^{\infty}(i+n+1)^s = \zeta(-s) - \zeta(-s, n+1)\,.$$
Now, substituting $s=\frac{1}{2}$ in the above identity yields





$$\sum_{i=1}^{n} i^{\frac{1}{2}} = \zeta\left(-\frac{1}{2}\right) - \zeta\left(-\frac{1}{2}, n+1\right) \,.$$



See convergence issues of the zeta function and the Hurwitz zeta function.
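The identity is easy to check numerically (my addition, using mpmath, whose zeta(s, a) implements the Hurwitz zeta function via analytic continuation):

```python
# Check: sum_{i=1}^{n} sqrt(i) == zeta(-1/2) - zeta(-1/2, n+1).
import mpmath as mp

n = 100
direct = mp.fsum(mp.sqrt(i) for i in range(1, n + 1))
via_zeta = mp.zeta(mp.mpf('-0.5')) - mp.zeta(mp.mpf('-0.5'), n + 1)
print(direct)      # about 671.46
print(via_zeta)    # same value
```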


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How can I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...