Saturday 31 December 2016

Additive functional equation



Find all functions $f: \mathbb{R} \rightarrow \mathbb{R}$ satisfying
$$ f(x+y) = f(x) + f(y)$$ and $$ f(f(x)) = x$$
for all $x, y \in \mathbb{R}$.




This is a problem involving the additive (Cauchy) functional equation, but I don't know how to deal with the case where $x$ is an irrational number. I appreciate all help and ideas. Thank you.



P.S.: It would also be nice if, from a given solution, one could infer one of the following statements:




  1. $f(x)$ is continuous on $\mathbb{R}$


  2. $f(x)$ is continuous at one point


  3. $f(x)$ is monotonic on $\mathbb{R}$


  4. $f(x)$ is bounded (on any interval)




Answer



It is straightforward to show that Cauchy's functional equation implies $f(qx)=q f(x)$ for all $q\in\mathbb{Q}, x\in\mathbb{R}$. Thus we can see $f$ as a $\mathbb{Q}$-linear map of the $\mathbb{Q}$-vector space $\mathbb{R}$. Like every linear map, it is determined by its values on a basis.



Let us choose a $\mathbb{Q}$-basis $B\subset\mathbb{R}$ of $\mathbb{R}$. Note that this requires the axiom of choice. That is, for every $x\in\mathbb{R}$ we can choose a coefficient function $x^*:B\rightarrow \mathbb{Q}$ such that $x^*(b)\not=0$ only for finitely many $b\in B$ and



$$x=\sum_{b\in B} x^*(b) b$$



Since $f$ is a linear map, it can be represented by an (infinite) $B\times B$ matrix of rational coefficients $(F_{b,b^\prime})_{b,b^\prime\in B}$ (with only finitely many non-zero terms in every column) such that



$$f(x)= F\cdot x$$




where $\cdot$ denotes multiplication of the matrix $F$ with the $\mathbb{Q}$-vector $x$, i.e.



$$f(x)^*(b) = \sum_{b^\prime\in B} F_{b,b^\prime} x^*(b^\prime)$$



$F_{b,b^\prime}$ is simply the coefficient of $b^\prime$ in the expansion of $f(b)$.



These are all the solutions to Cauchy's functional equation by itself.
The condition $f(f(x))=x$ now reads




$$F^2=I$$



with $I$ being the identity matrix. That is,



$$\sum_{b^{\prime\prime}\in B} F_{b,b^{\prime\prime}} F_{b^{\prime\prime},b^\prime}=\left\{\begin{array}{ll}1 & \text{if}\;b=b^\prime,\\
0 & \text{if}\;b\not=b^\prime.\end{array}\right.$$



This characterizes all the solutions to the simultaneous functional equations. The two solutions corresponding to the continuous solutions are just the cases $F=\pm I$. None of the other solutions satisfy any of your conditions $1.$ through $4.$ (since each of those conditions implies $f(x)=\pm x$).
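A concrete non-continuous example (my own illustration, not part of the original answer): pick two distinct basis elements $b_1,b_2\in B$ and let $F$ swap them while fixing every other basis element. Then $F^2=I$, so the corresponding $f$ satisfies both functional equations, yet $f$ is discontinuous everywhere, since it is additive but not of the form $f(x)=\pm x$.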


abstract algebra - How is the following multiplication table computed? (related to $F_2[X]/f(x)$)





$F_2$ is the field of integers modulo $2$, and $f(x)$ is $x^2 + x + 1$.
[image: the multiplication table of ${\Bbb F}_2[x]/\langle x^2+x+1\rangle$]



I didn't get how the multiplication in the table works. I referred to many sources on this topic, but I am still facing difficulty understanding it. I would be very thankful if someone explained the concept behind it.


Answer



This is the multiplication table for the field ${\Bbb F}_4 = {\Bbb F}_2[x]/\langle x^2+x+1\rangle$ consisting of the residue classes of the elements $0,1,x,x+1$ which are the remainders of ${\Bbb F}_2[x]$ modulo $x^2+x+1$.



For instance, $[x] \cdot [x+1] = [x\cdot(x+1)] = [x^2+x]$ and the residue class of $x^2+x$ modulo $x^2+x+1$ is $[1]$, i.e., $x^2+x = 1\cdot (x^2+x+1) + 1$ with quotient $q(x)=1$ and remainder $r(x)=1$. This is an elementary way to view this field extension.
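Not part of the original answer, but the whole table can be generated mechanically by multiplying polynomials over ${\Bbb F}_2$ and reducing modulo $x^2+x+1$. A minimal Python sketch (the bit-mask encoding and function name are my own):

```python
# Elements of F_4 as 2-bit masks: bit i holds the coefficient of x^i,
# so 0 -> 0, 1 -> 1, 2 -> x, 3 -> x+1.
def mul_gf4(a, b):
    # Carry-less polynomial multiplication over F_2.
    prod = 0
    for i in range(2):
        if (b >> i) & 1:
            prod ^= a << i
    # Reduce modulo x^2 + x + 1 (replace x^2 by x + 1).
    if prod & 0b100:
        prod ^= 0b111
    return prod

names = {0: "0", 1: "1", 2: "x", 3: "x+1"}
for a in range(4):
    print([names[mul_gf4(a, b)] for b in range(4)])
```

For instance, `mul_gf4(2, 3)` returns `1`, matching $[x]\cdot[x+1]=[1]$ above.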


elementary set theory - Prove that if $Z\subseteq Y$, then $(g\circ f)^{-1}(Z)=f^{-1}(g^{-1}(Z))$.




Let $W, X$ and $Y$ be three sets and let $f :W \to X$ and $g: X \to Y$ be two functions. Consider the composition $g \circ f: W \to Y$ which, as usual, is defined by $(g\circ f)(w)=g(f(w))$ for $w \in W$.



$(a)$ Prove that if $Z\subseteq Y$, then $(g\circ f)^{-1}(Z)=f^{-1}(g^{-1}(Z)).$



$(b)$ Deduce that if $(W,c), (X,d)$ and $(Y,e)$ are metric spaces and the functions $f$ and $g$ are both continuous, then the function $g \circ f$ is continuous.





Definitions:




  • Let $(X, d)$ and $(Y, e)$ be metric spaces, and let $x \in X$. A
    function $f : X \to Y$ is continuous at $x$ if:
    $\forall B \in \mathcal B(f(x)) \exists A \in \mathcal B(x) : f(A) \subseteq B$


Answer



Note that $f: X \rightarrow Y$ is continuous iff for any open set $U \subseteq Y$, $f^{-1}(U)$ is open in $X$.




To prove $g \circ f$ is continuous, for any open set $U \subset Y$, we only need to prove $(g \circ f)^{-1}(U)$ is open in $W$.



To see this, as $(g \circ f)^{-1}(U)=f^{-1}(g^{-1}(U))$, and $g$ is continuous, we see
$g^{-1}(U)$ is open; as $f$ is continuous, then $f^{-1}(g^{-1}(U))$ is also open.


probability - Set $Z=X+Y$ and calculate the density function $f_Z$ for Z. Given solution doesn't match mine. (Continuous R.V)



I'm having difficulties with task 2, since the given solution doesn't match mine:





$f_{XY}(x,y) = \begin{cases} 6e^{-3x}e^{-2y} &
\text{if } x>0,\ y>0 \\ 0 & \text{otherwise}
\end{cases}$



Task 1: Show that X and Y are independent



Task 2: Set $Z=X+Y$ and calculate the density function $f_Z$ for Z





I'll post the entire task, since others might seek help for a similar problem.



Task 1:



To show this we use the fact that two continuous random variables $X$ and $Y$ are independent if



$f_{XY}(x,y)=f_X(x)\cdot f_Y(y) \quad \text{for all } x, y$



We find the marginal PDFs:




$f_X(x)=\int_0^\infty 6e^{-3x}e^{-2y}\,dy = 3e^{-3x}$



$f_Y(y)=\int_0^\infty 6e^{-3x}e^{-2y}\,dx = 2e^{-2y}$



We multiply the two:



$f_Y(y)·f_X(x)=6·e^{-3x}·e^{-2y}$



Which is the same as the joint PDF. We conclude they are independent.




Task 2:



We solve it by finding the CDF and differentiating this to find the PDF:



$P(Z\leq z)$



$P(x+y\leq z)$



$P(y\leq z-x)$




We solve the double integral:



$\int_0^\infty \int_0^{z-x} 6e^{-3x}e^{-2y}\,dy\,dx=1-3e^{-2z}$



We now have the CDF:



$F_{Z}(z) = \begin{cases} 0 &
\text{if } z<0 \\ 1-3e^{-2z} & \text{if } 0\le z<\infty \\ 1 & \text{as } z\to\infty
\end{cases}$




To find the PDF we differentiate the CDF:



$\frac{d}{dz}(1-3·e^{-2·z})=6·e^{-2z} $



Giving us a PDF of:



$f_{Z}(z) = \begin{cases} 6e^{-2z} &
\text{if } z>0 \\ 0 & \text{otherwise}
\end{cases}$



However the solution provided is:



$f_Z(z)=6·e^{-2z}·(1-e^{-z})$



What am I doing wrong?



Besides this, I'm unsure of:




The intervals in the CDF


Answer



$Y = z-X = -X + z$ is a linear equation with $y$-intercept $z$ and slope $-1$. The region where $Y \leq -X + z$ is in green below. Note where the line intersects with the $X$ and $Y$ axes.



[image: the region $\{(x,y) : x \ge 0,\ y \ge 0,\ y \le -x+z\}$, shaded in green]



Thus, the CDF is, according to the graph above,
$$\int_{0}^{z} \int_{0}^{-x+z}6e^{-3x}e^{-2y}\text{ d}y\text{ d}x = 2e^{-3z}-3e^{-2z}+1$$
from which we obtain
$$f_{Z}(z) = -6e^{-3z}+6e^{-2z}=6e^{-2z}-6e^{-3z}=6e^{-2z}(1-e^{-z})$$

for $z > 0$ as desired.
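As a quick sanity check (my own addition, not part of the original answer), one can simulate $Z=X+Y$ with $X\sim\text{Exp}(3)$ and $Y\sim\text{Exp}(2)$ and compare the empirical CDF with $1+2e^{-3z}-3e^{-2z}$:

```python
import math
import random

random.seed(0)
samples = [random.expovariate(3) + random.expovariate(2) for _ in range(200_000)]

def cdf_exact(z):
    return 1 + 2 * math.exp(-3 * z) - 3 * math.exp(-2 * z)

for z in (0.2, 0.5, 1.0, 2.0):
    empirical = sum(s <= z for s in samples) / len(samples)
    print(f"z={z}: empirical={empirical:.4f}, exact={cdf_exact(z):.4f}")
```

The two columns agree closely, which supports the corrected CDF and hence the provided density.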


Friday 30 December 2016

linear algebra - link between difference and product of inverse matrices



Let $A, E \in \mathbb{R}^{n \times n}$ be matrices with rank $n$ (so nonsingular). Let $\lambda, \mu \in \mathbb{R}$ be two different values which are both not eigenvalues of the pencil $(A,E)$. I am currently reading about the Loewner framework and I need the following proposition.



$\dfrac{(\lambda E - A) ^ {-1}-(\mu E - A) ^ {-1}}{\mu - \lambda} = (\mu E - A)^{-1} E (\lambda E - A)^{-1}$



I tested it in MATLAB and it is correct, but I have no idea why this holds. The problem is that I don't know a proposition relating the product of matrices to the difference of two matrices. Can someone help me?


Answer



Multiply the equation




$\dfrac{(\lambda E - A) ^ {-1}-(\mu E - A) ^ {-1}}{\mu - \lambda} = (\mu E - A)^{-1} E (\lambda E - A)^{-1}$ from the left by $\mu E-A$ and then from the right by $\lambda E-A$ and look what happens .....



It's your turn to make a valid proof from the above observation...
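Carrying out that multiplication (my own write-up of the suggested observation): multiplying the left-hand side by $\mu E-A$ from the left and by $\lambda E-A$ from the right gives
$$\frac{(\mu E - A)\left[(\lambda E - A)^{-1}-(\mu E - A)^{-1}\right](\lambda E - A)}{\mu - \lambda} = \frac{(\mu E-A)-(\lambda E-A)}{\mu-\lambda}=\frac{(\mu-\lambda)E}{\mu-\lambda}=E,$$
while the right-hand side collapses to $(\mu E - A)(\mu E - A)^{-1} E (\lambda E - A)^{-1}(\lambda E - A)=E$ as well. Since $\mu E-A$ and $\lambda E-A$ are invertible, the two original expressions must therefore be equal.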


calculus - Proving $\lim_{x \to 1} \left(\tan\left(\frac{\pi x}{2}\right) \lfloor x \rfloor\right)$ doesn't exist



I'm asked to prove that the following limit doesn't exist:
$$\lim_{x \to 1} \left (\tan \left(\frac{\pi x}{2} \right ) \lfloor x \rfloor \right )$$



My attempt was to break the case into one-sided limits. So, for example, I could prove that




$$\lim_{x \to 1^+} \tan \left(\frac{\pi x}{2} \right ) = -\infty$$



and
$$\lim_{x \to 1^-} \tan \left(\frac{\pi x}{2} \right ) = \infty$$



and then show that $$\lim_{x \to 1^+} \left (\tan \left(\frac{\pi x}{2} \right ) \lfloor x \rfloor \right ) \neq \lim_{x \to 1^-} \left (\tan \left(\frac{\pi x}{2} \right ) \lfloor x \rfloor \right )$$



using the product rule for infinite limits.



The only problem is that I'm not allowed to use the theorem about the limit of a composition of functions ($g(t)=\pi t /2$ and $f(x)=\tan{x}$) nor any theorem related to the limit of a continuous function when proving $\lim_{x \to 1^+} \tan \left(\frac{\pi x}{2} \right ) = -\infty$. I can however use the facts that $\lim_{x \to \frac{\pi}{2}^+}\tan x=-\infty$ and $\lim_{x \to \frac{\pi}{2}^-}\tan x=\infty$. Is there any way I can prove it using the basic limit arithmetic rules?



Answer



Hint. You may observe that, as $x \to 1$,
$$
\tan \left( \frac \pi2x\right)=-\frac{2}{\pi (x-1)}+\mathcal{O}(x-1) \tag1
$$ and that
$$\lfloor x \rfloor =\begin{cases}
1 & \text{if } x \to 1^+ \\
0 & \text{if } x \to 1^-
\end{cases}$$ giving
$$\tan \left( \frac \pi2x\right)\lfloor x \rfloor \to\begin{cases}

-\infty & \text{if } x \to 1^+ \\
0 & \text{if } x \to 1^-
\end{cases}$$ thus a limit doesn't exist as $x \to 1$.






To see $(1)$, we may write, as $x \to 1$,
$$
\begin{align}
\tan \left( \frac \pi2x\right)&=\tan \left( \frac \pi2(x-1)+\frac \pi2\right)\\\\&=\frac{\cos\frac \pi2(x-1)}{-\sin\frac \pi2(x-1)}\\\\

&=\frac{1+\mathcal{O}((x-1)^2)}{-\sin\frac \pi2(x-1)}\\\\
&=\frac{1+\mathcal{O}((x-1)^2)}{-\frac \pi2(x-1)+\mathcal{O}((x-1)^3)}\\\\
&=-\frac{2}{\pi (x-1)}+\mathcal{O}(x-1).
\end{align}$$







The notation $f(x)=\mathcal{O}((x-x_0)^n)$ means that there exists some constant $C$ such that $|f(x)|\leq C|x-x_0|^n$ for all $x$ in a neighbourhood of $x_0$.




abstract algebra - Explicit construction of a finite field with $8$ elements


Give an explicit construction of the finite field $K$ containing $8$ elements, as a quotient of an appropriate polynomial ring. Include the multiplication table of the group $K^{*}=K\setminus \{0\},$ and write $K^{*}=\langle \alpha \rangle$ for some $\alpha \in K.$




I have no idea how to approach this problem. Can anyone guide me in the right direction? Thanks.

Gradient of matrix-vector product

Is there an identity for the gradient of a product of a matrix and a vector, similar to the divergence product identity, that would go something like this:




$\nabla(\textbf{M}.\textbf{c})= \nabla(\textbf{M}).\textbf{c}\ +\ ... (\text{not necessarily like this}) $,



where $\textbf{M}$ is an $n\times n$ matrix and $\textbf{c}$ is an $n\times 1$ matrix (a column vector)?

combinatorics - Given the following conditions, how many tickets do you need to win the lottery?



A lottery game consists of scoring $15$ numbers out of $25$. The chance of hitting $11$ points with a single ticket in this lottery game is $\frac{1}{11}$. What is the minimum number of tickets needed to guarantee scoring $11$ points? And how should these tickets be chosen?




To win the main prize with one ticket, the odds are $1$ in
${25 \choose 15} = \frac{25!}{15!(25−15)!} = 3268760$



Probability of hitting 11 numbers with one ticket:



$$\frac{{15 \choose 11} \cdot {10 \choose 4}}{{25 \choose 15}}$$



$$=\frac{\frac{15!}{11!(15−11)!} \cdot \frac{10!}{4!(10−4)!} }{\frac{25!}{15!(25−15)!}}$$




$$=\frac{1365 \cdot 210}{ 3268760}$$



$$=\frac{286650}{3268760}$$



$$\frac{1}{ 11.4033141462 \cdots}$$



$$\approx \frac{1}{11}$$



So the chance to hit $11$ points with a single ticket in this lottery game is $\frac{1}{11}$.




I think that the minimum number of tickets needed to score $11$ points is $11$, because $\frac{1}{11} \cdot 11 = 1$, but I am not sure. And I don't know how to choose these tickets.


Answer



First, let’s solve an easier problem. There are $25$ total numbers, $15$ of them are winning, and we still want at least $11$ points, but this time the lottery tickets are only $11$ numbers long. In order to buy enough tickets to be guaranteed $11$ points on one of them, in the worst case scenario ${25\choose 11}-{15\choose 11}=4456035$ of the tickets would be duds.



What would happen if the tickets were actually $15$ numbers long instead of $11$? Well then this would make things easier because instead of having to test $4456035$ combinations of $11$ numbers individually, we can now test ${15\choose 11}=1365$ of them at once!



This means we now only have to fail at most $4456035/1365=3264.49$ times. So we need to buy $3265$ tickets with the following strategy. For the first $3264$ tickets, ensure that no two tickets have the same combination of $11$ numbers. For the last ticket, make sure that it includes as many combinations of $11$ numbers as possible that haven’t already appeared in the other tickets.
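A quick numerical confirmation of the counts used above (my own sketch):

```python
from math import comb

total = comb(25, 15)                        # number of possible tickets: 3268760
prob11 = comb(15, 11) * comb(10, 4) / total
print(total, 1 / prob11)                    # odds of 11 points: ~1 in 11.4033

duds = comb(25, 11) - comb(15, 11)          # 11-subsets not contained in the 15 winners
per_ticket = comb(15, 11)                   # 11-subsets covered by one 15-number ticket
print(duds, per_ticket, duds / per_ticket)  # 4456035, 1365, ~3264.49
```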


Thursday 29 December 2016

How to find Triangular Numbers

I read that Gauss's Eureka theorem says that any positive integer can be represented as the sum of at most 3 triangular numbers. So say I have some positive integer $X$; how do I find which 3 triangular numbers added together make that number $X$?



edit: if there isn't a way to do this without just guessing and checking all combinations of 3 triangular numbers less than $X$, then is there a program that can do it for me?



edit 2: It looks like for some integers there is more than one way to describe them using 3 triangular numbers. There is a formula for the number of possible representations (given as an image in the original post; see the link below):




http://www.3quarksdaily.com/3quarksdaily/2015/03/last-month-at-3qd-we-discovered-that-while-it-was-invented-to-solve-problems-in-counting-and-probability-pascals-triangle-c.html



I still don't know if there is a formula to find which triangular numbers though :(
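Since the question explicitly asks for a program, here is a brute-force Python sketch (my own; it includes $T_0=0$ so that sums of fewer than three triangular numbers are covered too):

```python
def triangular_triples(x):
    """All (a, b, c) with a <= b <= c triangular and a + b + c == x."""
    tri = []
    n = 0
    while n * (n + 1) // 2 <= x:      # triangular numbers T_0, T_1, ... up to x
        tri.append(n * (n + 1) // 2)
        n += 1
    triset = set(tri)
    out = []
    for i, a in enumerate(tri):
        for b in tri[i:]:
            c = x - a - b
            if c >= b and c in triset:
                out.append((a, b, c))
    return out

print(triangular_triples(100))  # includes (0, 45, 55) and (10, 45, 45)
```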

calculus - Showing that an unbounded set has an unbounded sequence



Suppose I have an unbounded set $S$ of Real numbers.
I want to show that I can find a sequence $\{a_n\}$ in $S$ such that $ \lim_{n \rightarrow \infty}a_n=\infty$.




Here is what I have so far:



Since $S$ is unbounded, either $S$ is unbounded above or $S$ is unbounded below.



Case 1: $S$ is not bounded above



If there is no sequence $\{a_n\}$ in $S$ such that $ \lim_{n \rightarrow \infty}a_n=\infty$ then there must be a number $M$ such that $\forall n$ in any sequence $\{a_n\}$ in $S$, $a_n\leq M$. This implies that $S$ is bounded by $M$, a contradiction.



Case 2: $S$ is not bounded below




The same argument as Case 1.



It seems obvious to me that I can always find such sequence but then I am not satisfied with my explanation.



Any ideas? Thank you!



By the way, I'm trying to prove that a subset of the real numbers is compact if and only if every sequence in the subset has a subsequence converging to a point in the subset.



I'm in the last part of my proof. I want to show that S is bounded and I am going to use the answer to this question to finish the proof. Thanks!



Answer



If you know that there is no sequence $\{a_n\}$ such that $\lim_{n\to\infty} a_n = +\infty$, how do you conclude that there is a number $M$ such that all sequences are bounded by the same number $M$? Doesn't that assume what you are trying to prove?



That said, what you are trying to prove is as follows : Suppose $S$ is not bounded above, then for any natural number $n \in \mathbb{N}$,
$$
S\cap[n, \infty) \neq \emptyset
$$
Hence, we may choose any number $a_n \in S\cap[n,\infty)$, and consider the sequence $\{a_n\}$. Can you show that this sequence must be going to $\infty$?


calculus - Why isn't $dy/dx$ (in this case, $dr/dt$) simply the rate or slope?




I came late to the Maths party & have no undergraduate Mathematics qualification (am doing a Mathsy postgraduate qualification so I've tried my best to bring myself up to speed but the foundations are dreadfully shaky) so please explain like I'm five, if at all possible.



I have read the other "isn't $dy/dx$ a ratio" / "when can $dy/dx$ be a ratio for all intents and purposes" posts but the answers are either overly complex for my purposes or too removed from my particular problem for me to apply the answers sensibly.



I'm helping my lovely neighbour with her high-school level Maths at the moment and she asked me a question I simply can't provide a satisfactory and simple answer to.



We're dealing with rudimentary calculus, an oil slick to be precise. We're using cm as units and we have already calculated $dr/dt$:
$$\frac{dr}{dt}=\frac{2\times10^7}{\pi r}\text{ cm}$$
The next part of the question asks how many hours until the radius is $1$ km, so $1\times10^5$ cm for our particular question.




Would someone please be so kind as to explain simply why we can't divide the radius by the "rate"? Why can't this simply be treated as a rate/slope?



I do know that I must instead solve the integral $\int r\,dr =\int \frac{2\times10^7}{\pi}\,dt$ to find the correct answer. And I do of course notice that this elicits a value that is half that obtained from simple division of the radius by $dr/dt$. Where/why/how in the simpler approach am I neglecting to divide by $2$?



I do apologize for the low level of this question. I did scour a series of resources and did find semblances of answers, but none that have truly clicked with me / none that I can relay very simply to my tutee. I'm aware I'm likely being incredibly dense so I do apologize for this also and understand if it's not worth the time to answer! As I said, I tried in vain to solve the problem myself before deciding to hassle you here at MathExchange.



Many many thanks!


Answer



$\frac {dy}{dx}$ is the instantaneous rate. You can't just divide the radius by the rate because the rate is changing with time. If you do your calculation when the radius is $1$ cm, you get a rate of $\frac {2 \cdot 10^7}{\pi}$ and it takes only $\frac \pi{200}\cdot \frac {10^5-1}{10^5}$ seconds to get to $r=10^5$. If you start when $r=100$ the rate is $\frac {2 \cdot 10^5}{\pi}$ and it takes $\frac \pi{2}\cdot \frac {10^5-100}{10^5}$. It can't take longer to get from $100$ to $10^5$ than it takes to get from $1$ to $10^5$ because from $1$ you have to go through $100$.
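To make the missing factor of $2$ explicit (my own elaboration of the above): separating variables in $\frac{dr}{dt}=\frac{2\times 10^7}{\pi r}$ gives
$$\int_0^r \rho\,d\rho=\frac{2\times10^7}{\pi}\int_0^t d\tau \quad\implies\quad \frac{r^2}{2}=\frac{2\times10^7}{\pi}\,t \quad\implies\quad t=\frac{\pi r^2}{4\times10^7}.$$
The $\frac{r^2}{2}$ on the left is exactly where the factor of $2$ appears: dividing $r$ by the rate evaluated at the final radius gives $\frac{\pi r^2}{2\times 10^7}$, twice the true time, because it ignores that the slick spread faster while it was small.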



calculus - How do I prove $\displaystyle\lim_{x\to -\infty} (x\cdot e^x)=0$ using L'Hôpital's Rule?

The above limit can be written as: $\displaystyle\lim_{x\to -∞} (x\cdot e^x)=\displaystyle\lim_{x\to -∞} \frac{e^x}{1/x} $.



The limit is an Indeterminate type of ${0/0}$. It can be solved using L'Hôpital's Rule:




$\displaystyle\lim_{x\to -∞} \frac{e^x}{1/x} = \lim_{x\to -∞} \frac{\frac{d}{dx}\left[e^x\right]}{\frac{d}{dx}\left[1/x\right]} = \lim_{x\to -∞} \frac{e^x}{-1/x^2}$



Here the numerator ${e^x\to 0}$ and denominator ${-1/x^2\to 0}$ as ${x\to -∞}$. So after using L'Hôpital's Rule the limit is still an Indeterminate type of ${0/0}$. How do I find a limit that's not an Indeterminate type?

calculus - How to evaluate this series using Fourier series?



With the help of Hermite's integral, I got
$$\sum_{n=1}^{\infty }\frac{1}{n}\int_{2\pi n}^{\infty }\frac{\sin x}{x}\mathrm{d}x=\pi-\frac{\pi}{2}\ln(2\pi)$$
I'd like to know whether we can solve this one using Fourier series.


Answer




We begin with the expression



$$S=\sum_{n=1}^\infty \frac{1}{n}\int_{2\pi n}^\infty \frac{\sin (x)}{x}\,dx \tag 1$$



Enforcing the substitution $x \to 2\pi n x$ in $(1)$ yields



$$\begin{align}
\sum_{n=1}^\infty \frac{1}{n}\int_{2\pi n}^\infty \frac{\sin (x)}{x}\,dx&=\sum_{n=1}^\infty \frac{1}{n} \int_{1}^\infty \frac{\sin (2\pi nx)}{x}\,dx\\\\
&=\int_{1}^\infty \frac 1x\sum_{n=1}^\infty \frac{\sin (2\pi nx)}{n}\,dx \tag 2\\\\
&=\sum_{k=1}^\infty \int_{k}^{k+1}\frac1x \left(\frac{\pi}{2}(1-2(x-k))\right)\,dx \tag 3\\\\

&=\frac{\pi}{2} \sum_{k=1}^\infty \left((2k+1)\log\left(\frac{k+1}{k}\right)-2\right) \tag 4\\\\
\end{align}$$



where we recognized the Fourier sine series for $\frac{\pi}{2}(1-2x)$ for $x\in(0,1)$ to go from $(2)$ to $(3)$.



To evaluate the sum in $(4)$, we write the partial sum $S_K$



$$\begin{align}
S_K&=\sum_{k=1}^K \left((2k+1)\log\left(\frac{k+1}{k}\right)-2\right)\\\\
&=-2K+\sum_{k=1}^K \left((2k+1)\log(k+1)-(2k-1)\log(k)\right)-2\sum_{k=1}^K\log(k) \\\\

&=-2K+(2K+1)\log(K+1)-2\log(K!) \\\\
&=-2K+(2K+1)\log(K)+(2K+1)\log\left(1+\frac1K\right)-2\log\left(K!\right) \tag 5\\\\
&=-2K+(2K+1)\log(K)+2-2\log\left(\sqrt{2\pi K}\left(\frac{K}{e}\right)^K\right)+O\left(\frac1K\right) \tag 6\\\\
&=2-\log(2\pi)+O\left(\frac1K\right)
\end{align}$$



In going from $(5)$ to $(6)$, we used the asymptotic expansion for the logarithm function, $\log(1+x)=x+O(x^2)$ and Stirling's Formula



$$K!=\left(\sqrt{2\pi K}\left(\frac{K}{e}\right)^K\right)\left(1+O\left(\frac1K\right)\right)$$




Finally, putting it all together, we find that



$$\begin{align}
S&=\sum_{n=1}^\infty \frac{1}{n}\int_{2\pi n}^\infty \frac{\sin (x)}{x}\,dx\\\\
&=\frac{\pi}{2}\lim_{K\to \infty}S_K\\\\
&=\pi-\frac{\pi}{2}\log(2\pi)
\end{align}$$



and therefore




$$\bbox[5px,border:2px solid #C0A000]{\sum_{n=1}^\infty \frac{1}{n}\int_{2\pi n}^\infty \frac{\sin (x)}{x}\,dx=\pi-\frac{\pi}{2}\log(2\pi)}$$



as was to be shown!
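A quick numerical check of the boxed result (my own addition), using $\int_{2\pi n}^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2}-\operatorname{Si}(2\pi n)$ with SciPy's sine integral:

```python
import math
from scipy.special import sici  # sici(x) returns the pair (Si(x), Ci(x))

S = sum((math.pi / 2 - sici(2 * math.pi * n)[0]) / n for n in range(1, 100_000))
print(S)                                                 # ~0.2548
print(math.pi - (math.pi / 2) * math.log(2 * math.pi))   # ~0.2548
```

The terms decay like $1/(2\pi n^2)$, so the partial sum above is already accurate to roughly $10^{-6}$.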


Wednesday 28 December 2016

geometry - Why is it impossible for all sides of a primitive integer Pythagorean triple to be powerful numbers?

I saw an elementary proof of Fermat's Last Theorem on Quora.



I checked all the steps (around one page only) and couldn't catch any error, but I was confused about the last step, which contains the main idea: it depends on the claim that a right-angled triangle with integer sides cannot have all three sides be powerful numbers. The author claims this is too elementary to prove, but I don't see why it must be true; there might be a counterexample among large numbers (say, more than six digits),
or it might indeed have a simple proof, as stated!




So, can we find such a counterexample, or prove the claim simply?

calculus - Finishing proof of identity $\sum_{k=b}^{n} \binom{n}{k} \binom{k}{b} = 2^{n-b} \binom{n}{b}$




The identity



$$
\sum_{k=b}^{n} \binom{n}{k} \binom{k}{b} = 2^{n-b} \binom{n}{b}\
$$



is one of a few combinatorial identities I have been trying to prove, and it has taken me way too long. I am using the principles most familiar to me (which are algebra and some basic combinatorial identities, but not applying differentiation or proof by bijection).



First I tried to see whether finding an identity for $\sum\limits_{k=0}^n \binom{n}{k}$ leads anywhere.




$$\begin{align}
&\sum_{k=0}^{n} \binom{n}{k} = \sum_{0 \le k \lt b} \binom{n}{k} + \sum_{b \le k \le n} \binom{n}{k} \tag{1} \\
\\
\end{align}$$



But it didn't for me, so I started over and next tried



$$\begin{align}
&\sum_{k=b}^{n} \binom{n}{k} \binom{k}{b} = \sum_{k=b}^{n} \left( \frac{n!}{k! (n-k)! } \right) \left( \frac{k!}{b!(k-b)!} \right) \tag{2} \\
\\

\end{align}$$



but this also fell short of a proof.



It is really hard for me to step away from the problem. I was just hoping for a really a big hint on how to proceed.


Answer



Try thinking of this in terms of what it's counting, instead of algebraically. If we view this as counting committees, we're picking some big committee (of size at least b) of our n employees, and then choosing a subcommittee of that with b employees. However, what if we try picking the b employees first, and then choosing some extra people for the bigger committee?
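Spelling the hint out (my own summary, in case it helps): choosing the $b$ subcommittee members first can be done in $\binom{n}{b}$ ways, and each of the remaining $n-b$ employees is then either added to the big committee or not, so
$$\sum_{k=b}^{n}\binom{n}{k}\binom{k}{b}=\sum_{k=b}^{n}\binom{n}{b}\binom{n-b}{k-b}=\binom{n}{b}\sum_{k=b}^{n}\binom{n-b}{k-b}=\binom{n}{b}\,2^{n-b}.$$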


real analysis - How does $(x_n)$ converge for $x_1=1$, $x_{n+1}=\frac{1}{x_n+3}$ for $n=1,2,\ldots$?



Show that the sequence ($x_n$) defined by $$x_1=1\quad \text{and}\quad x_{n+1}=\frac{1}{x_n+3} \quad (n=1,2,\ldots)$$ converges and determine its limit ?



I tried to show that $(x_n)$ is a Cauchy sequence, or that $(x_n)$ is a decreasing (or increasing) and bounded sequence, but I failed at every step.


Answer



Hint: For $x,y \geq 0$ we have $\left\vert\frac{1}{x+3} - \frac{1}{y+3}\right\vert = \left\vert\frac{y-x}{(x+3)(y+3)}\right\vert \leq \frac{1}{9}\vert x-y\vert$.
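Not part of the original hint, but the contraction can be watched numerically; the limit is the fixed point of $x \mapsto \frac{1}{x+3}$, i.e. the positive root of $L^2+3L-1=0$:

```python
import math

x = 1.0
for _ in range(20):
    x = 1 / (x + 3)

print(x)                        # ~0.3027756...
print((math.sqrt(13) - 3) / 2)  # the positive root of L^2 + 3L - 1 = 0
```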


calculus - Derivative Notation Explanation



I am learning differential calculus on Khan Academy, but I am uncertain of a few things.
By the way, I understand derivatives this far: $f^{\prime}(x)$ and this: $f^{\prime}(g(x))$.




I am confused mainly about Leibniz's notation.




  1. What does the "respect" mean in "derivative with respect to x" and "derivative of y with respect to x" mean?


  2. Why does $\frac d{dx}f(x)$ have only a $d$ on top? I suspect there is a hidden variable not notated.


  3. Lastly (this question is less important, but it could help my understanding): what does $\frac {dx}{d f(x)}$ mean? For example, what is $\frac {dx}{d \sin(x)}\left(x^2\right)$? Other examples would be helpful.




Thanks for all the help. I really don't want to wait for my senior year in high school.



Answer



Q1. It means exactly what it says. :-) How much does one variable change, with respect to (that is, in comparison to) another variable? For instance, if $y = 3x$, then the derivative of $y$, with respect to $x$, is $3$, because for every unit change in $x$, you get a three-unit change in $y$.



Of course, that's not at all complicated, because the function is linear. With a quadratic equation, such as $y = x^2+1$, the derivative changes, because the function is curved, and its slope changes. Its derivative is, in fact, $2x$. That means that at $x = 1$, an infinitesimally small unit change in $x$ gives a $2x = 2$ unit change in $y$. This ratio is only exact right at $x = 1$; for example, at $x = 2$, the ratio is $2x = 4$.



This expression is the limit of the ratio $\frac{\Delta y}{\Delta x}$, the change in $y$ over the change in $x$, over a small but positive interval. The limit as that interval shrinks to zero is $\frac{dy}{dx}$.



Q2. You will rarely see, at this stage, $\frac{d}{dx}$ by itself. It will be a unary prefix operator, operating on an expression such as $x^2+1$. For instance, we might write



$$

\frac{d}{dx} \left(x^2+1\right) = 2x
$$



It just means the derivative of the expression that follows.



Q3. This is an unusual formulation. Ostensibly, though, it would mean the derivative of the operand with respect to $f(x)$, which you can obtain using the chain rule:



$$
\frac{dx}{df(x)} = \frac{\frac{dx}{dx}}{\frac{df(x)}{dx}} = \frac{1}{f'(x)}
$$




and



$$
\frac{d}{df(x)} g(x) = \frac{\frac{dg(x)}{dx}}{\frac{df(x)}{dx}}
= \frac{g'(x)}{f'(x)}
$$


calculus - Evaluate $\lim\limits_{n \to \infty}\sum\limits_{k=0}^n \dfrac{\sqrt{n}}{n+k^2}$ $(n=1,2,\cdots)$

I tried to change it into a Riemann sum but failed, since




\begin{align*}
\lim_{n \to \infty}\sum_{k=0}^n \frac{\sqrt{n}}{n+k^2}=\lim_{n \to \infty}\frac{1}{n}\sum_{k=0}^n \frac{\sqrt{n}}{1+(k/\sqrt{n})^2}
,\end{align*}

which is not a standard form. Maybe I need to apply the squeeze theorem, but then how do I find the bounds?



By the way, WolframAlpha gives the result
\begin{align*}
\lim_{n \to \infty}\sum_{k=0}^n \frac{\sqrt{n}}{n+k^2}=\frac{\pi}{2}.
\end{align*}
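A quick numerical check of that value (my own addition); note that with step $h=1/\sqrt n$ the sum is a Riemann sum for $\int_0^\infty \frac{dt}{1+t^2}=\frac\pi2$, which is consistent with WA's answer:

```python
import math

def s(n):
    return sum(math.sqrt(n) / (n + k * k) for k in range(n + 1))

for n in (100, 10_000, 1_000_000):
    print(n, s(n))   # approaches 1.5707963...

print(math.pi / 2)
```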

Equivalence of congruences - why are these congruences equivalent?




I'm reading a solution of the following congruence: $x^{59} \equiv 604 \mod 2013$. It says that it is equivalent to the following system of congruences:
$$\begin{cases} x^{59} \equiv 604 \mod 3 \\ x^{59} \equiv 604 \mod 11\\ x^{59} \equiv 604 \mod 61\end{cases}$$
Why?



EDIT:



I know that $2013=3\cdot 11\cdot 61$ divides $x^{59}-604$. But why is this information sufficient to say that $3, 11, 61$ all divide $x^{59}-604$ when considered separately?


Answer



Hint $\ $ If $\,m,n\,$ are coprime then $\,{\rm lcm}(m,n) = \color{#c00}{mn},\ $ therefore




$$ a\equiv b\!\!\!\pmod{\!m,n} \iff m,n\mid a-b\color{#0a0}\iff \color{#c00}{mn}\mid a-b\iff a\equiv b\!\!\!\pmod{mn}$$



Applied twice yields the claim. This is a special case of CRT = Chinese Remainder Theorem.



Remark $\ $ The $\,\rm\color{#0a0}{middle}\,$ equivalence employs the universal property of lcm, $ $ i.e.



$$ m,n\mid k\color{#0a0}\iff {\rm lcm}(m,n)\mid k$$
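The equivalence can also be seen concretely with a brute-force search (my own sketch, not part of the answer): the residues modulo $2013$ satisfying the single congruence are exactly those satisfying all three small ones.

```python
sols_big = {x for x in range(2013) if pow(x, 59, 2013) == 604 % 2013}
sols_split = {x for x in range(2013)
              if pow(x, 59, 3) == 604 % 3
              and pow(x, 59, 11) == 604 % 11
              and pow(x, 59, 61) == 604 % 61}

print(sols_big == sols_split)  # True
print(sorted(sols_big))        # the solutions modulo 2013
```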


Tuesday 27 December 2016

complex analysis - Multiple annuli of laurent series expansion

Let $A_z(R_1,R_2)$ denote the open annulus about $z$ with inner radius $R_1$ and outer radius $R_2$.




According to the basic theorem of Laurent series, if $f\in Hol(A_z(R_1,R_2))$ then $f$ has a laurent series expansion about $z$ which converges locally uniformly in the annulus.



But what about cases where $f$ is holomorphic in multiple, mutually disjoint annuli?



For instance, consider $$f=\frac{1}{(z-1)z}$$



It has singularities both at $z=0$ and $z=1$ and is holomorphic in both the annulus $A_0(0,1)$ and the annulus $A_0(1,\infty)$.



Does that mean that $f$ can be represented as two different Laurent series about $0$? each converging in a different annulus? If so, what can we tell about the relation between their coefficients? And what can the series tell us about $f$?




If there is only a single Laurent expansion of $f$ about $0$, then where does it converge?



Perhaps most importantly though, do the answers to these questions depend on the specific example of $f$ at all?

elementary set theory - How to prove this? "For all sets A,B⊆D and functions f:D→R, we have f(A∩B)⊆(f(A)∩f(B))."




Here's my attempt:




  • f(A∩B) = f({x|x∈A∧x∈B}) = {f(x)|x∈{x|x∈A∧x∈B}}

  • f(A)∩f(B) = f({x|x∈A}) ∩ f({x|x∈B}) = {f(x)|x∈{x|x∈A}} ∩ {f(x)|x∈{x|x∈B}} = {x|x∈{f(x)|x∈{x|x∈A}}∧x∈{f(x)|x∈{x|x∈B}}}



And now I'm stuck. Please help.



Answer



You have
$$
A\cap B\subset A\implies f(A\cap B)\subset f(A),\\
A\cap B\subset B\implies f(A\cap B)\subset f(B)
$$
so it follows that $f(A\cap B)\subset f(A)\cap f(B)$.


real analysis - Prove if $\sum\limits_{n=1}^\infty a_n$ converges, $\{b_n\}$ is bounded & monotone, then $\sum\limits_{n=1}^\infty a_nb_n$ converges.



Prove that if $\displaystyle \sum_{n=1}^ \infty a_n$ converges, and {$b_n$} is bounded and monotone, then $\displaystyle \sum_{n=1}^ \infty a_nb_n$ converges.




No, $a_n, b_n$ are not necessarily positive numbers.



I've been trying to use the Dirichlet's Test, but I have no way to show that $b_n$ goes to zero. If I switch $a_n$ and $b_n$ in Dirichlet's Test, I can show $a_n$ goes to zero, but then I'm having trouble showing that $\displaystyle \sum_{n=1}^ \infty \left\lvert a_{n+1}-a_{n}\right\rvert$ converges (because $a_n$ isn't necessarily monotone).


Answer



Outline: Indeed, the key is use Dirichlet's test (a.k.a. Abel's summation at its core) as you intended:



$$\begin{align}
\sum_{n=1}^N a_n b_n &= \sum_{n=1}^N (A_n-A_{n-1}) b_n =
A_Nb_N + \sum_{n=1}^{N-1} A_n b_n -\sum_{n=1}^{N-1} A_n b_{n+1} \\
&= A_Nb_N + \sum_{n=1}^{N-1} \underbrace{A_n}_{\text{bounded}} \underbrace{(b_n -b_{n+1})}_{\text{constant sign}}

\end{align}$$
where $A_n \stackrel{\rm def}{=} \sum_{k=1}^{n} a_k$ and $A_0=0$.



Now, this does not quite work: the issue boils down to the fact that, at the end of the day, you cannot rely on $b_n\xrightarrow[n\to\infty]{} 0$, since it is not necessarily true. But you need this for the argument to go through.



To circumvent that, observe that $(b_n)_n$ is a bounded monotone sequence, and therefore is convergent. Let $\ell\in\mathbb{R}$ be its limit.



You can now define the sequence $(b^\prime_n)_n$ by $b^\prime_n \stackrel{\rm def}{=} b_n-\ell$. This is a monotone bounded sequence converging to $0$, and
$$
\sum_{n=1}^N a_n b^\prime_n = \sum_{n=1}^N a_n b_n - \ell \sum_{n=1}^N a_n.

$$
The second term is a convergent series by assumption on $(a_n)_n$, so showing convergence of the series $ \sum a_n b_n$ is equivalent to showing convergence of the series $\sum a_n b^\prime_n$. Apply your idea (Abel's summation) on the latter.


Monday 26 December 2016

Sunday 25 December 2016

soft question - Acronyms and words as variables and in mathematical notation

I am unsure if this question is warranted.




Often mathematical symbols and objects are represented by a single character, e.g. variables are most often single characters like $x$, and often to describe a variable further we use an additional character in a subscript or a superscript, like $x_t$.



In physics people often choose intuitive letters to represent quantities as variables, for example, using $t$ for time or $v$ for velocity (starting character is used to make it more recognizable).



Of course this isn't always the case, and I acknowledge that there are cases of function names such as $\sin(x)$ or even objects like $\sup A$ and $\inf A$ for a set $A$.



However my question is what is the general opinion (concerning formal formatting) on using words, abbreviations, and acronyms as variable quantities in mathematical sentences.







An example, we have that the revenue of a simple trade of a good is the quantity multiplied by the price. A few possible formats include:




  1. $R = q\times p$

  2. $\text{revenue} = \text{quantity} \times \text{price}$

  3. $r = q_{\text{good}} \times p_{\text{good}}$



Perhaps #3 isn't so pretty with this simple formula, but if we take the mark-to-market formula of quantity times the difference in market price and trade price, we can get some other formats:





  1. $\text{MTM} = q\times(p_{\text{market}} - p_{\text{trade}})$

  2. $\text{MTM} = q\times (\text{MP} - \text{TP})$



and so forth.



Of course I see words and acronyms often in the equations from the softer sciences like economics where my example comes from, but I am sure this question applies to pure mathematics in some cases.







I do not know which notation style we should tend towards in these cases, especially in the context of an academic paper.



Thanks for any opinions!

A function that is Lebesgue integrable but not measurable (not absurd obviously)

I think: a function $f$, as long as it is measurable, whether Lebesgue integrable or not, always has a Lebesgue integral (possibly infinite) on any domain $E$.




However Royden & Fitzpatrick’s book "Real Analysis" (4th ed) seems to say implicitly that “a function could be integrable without being Lebesgue measurable”. In particular, theorem 7 page 103 says:



“If function $f$ is bounded on set $E$ of finite measure, then $f$ is Lebesgue integrable over $E$ if and only if $f$ is measurable”.



The book spends a half page to prove the direction “$f$ is integrable implies $f$ is measurable”! Even the book “Real Analysis: Measure Theory, Integration, And Hilbert Spaces” of Elias M. Stein & Rami Shakarchi does the same job!



This makes me think there is possibly a function that is not bounded, not measurable but Lebesgue integrable on a set of infinite measure?



===

Update: read the answers by smnoren and myself below about the motivation behind the approaches to defining Lebesgue integrals.
Final conclusion: the starting statement above is still true and doesn't contradict the approach of Royden and Stein.

The limit $\lim_{n\to \infty}\frac{T_n(n)}{e^n}$ where $T_n(x)$ is the Taylor polynomial of $e^x$





From working on a problem I was lead to consider the function $\frac{T_n(n)}{e^n}$ where $T_n(x)$ is the $n$'th order Taylor polynomial of $e^x$.



Numerical evidence suggest that



$$\lim_{n\to \infty} \frac{T_n(n)}{e^n} \equiv\lim_{n\to \infty} \frac{\sum_{k=0}^n\frac{n^k}{k!}}{\sum_{k=0}^\infty\frac{n^k}{k!}} = \frac{1}{2}$$




Is there a nice proof for this statement? More generally: is there a 'standard' approach for evaluating limits of the form $\lim_{n\to\infty}\frac{f_n(x_n)}{f(x_n)}$ where $f_n$ is a series converging (uniformly) to $f$ and where $x_n$ is an unbounded sequence? I would also appreciate refs. to similar questions on this site or in the literature (I could only find this one).


Answer



Look here:
A limit involves series and factorials



That answer links to here:
http://journals.cambridge.org/download.php?file=%2FPEM%2FPEM2_24_03%2FS0013091500016503a.pdf&code=fd828d6902ca6a380244640216120c97



This has a result of (who else) Ramanujan, who proved (in S. Ramanujan, J. Ind. Math. Soc. 3 (1911), 128; ibid. 4 (1911), 151-152; Collected Papers (Chelsea, New York, 1962), 323-324) that



$$e^n/2 = \sum_{k=0}^{n-1} n^k/k! + (n^n/n!) r(n)$$



where, for large $n$,
$r(n) = 1/3 + 4/(135n) + O(1/n^2)$.
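A numerical illustration of the $\frac12$ limit (my own addition), using the identity $e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!} = Q(n+1,n)$, where $Q$ is the regularized upper incomplete gamma function:

```python
from scipy.special import gammaincc  # regularized upper incomplete gamma Q(a, x)

for n in (10, 100, 10_000, 1_000_000):
    print(n, gammaincc(n + 1, n))  # tends to 1/2 (from above, per Ramanujan's r(n))
```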



Friday 23 December 2016

soft question - Why should "graph theory be part of the education of every student of mathematics"?



Until recently, I thought that graph theory is a topic which is well-suited for math olympiads, but which is a very small field of current mathematical research with not so many connections to "deeper" areas of mathematics.



But then I stumbled over Bela Bollobas "Modern Graph Theory" in which he states:




The time has now come when graph theory should be part of the education of every serious student of mathematics and computer sciences, both for its own sake and to enhance the appreciation of mathematics as a whole.





Thus, I'm wondering whether I should deepen my knowledge of graph theory. I find topics like spectral and random graph theory very interesting, but I don't think that I am ever going to do research on purely graph theoretic questions. To the contrary, I'm mainly interested in areas like algebraic topology, algebraic number theory and differential topology, and I'm wondering if its useful to have some knowledge of graph theory when engaging in these topics.



So my question is:
Why should students like me, who aspire to a research career in mathematical areas not directly related to graph theory, study graphs?


Answer



If you're more interested in algebraic topology, I suggest not to spend much time studying the combinatorial aspects of graph theory. It is true that graphs in this guise do appear in such areas; for instance, one uses Dynkin diagrams (which are graphs) to classify algebraic groups and also Lie groups. It's really very elegant and useful for work in algebraic groups, but you need very little graph theory for this. Graphs are often used where there is some combinatorial structure, but again I doubt (but perhaps I am wrong) that knowing lots of graph theory (as one would find in the typical book like Bondy's) would help too much.



"Graph theory" covers much more than just this, however. For instance an esperantist family (generalisation of expander family) of graphs arise naturally as a certain family of Cayley graphs associated to finite groups that are quotients of fundamental groups (as Riemann surfaces) of algebraic curves, which come from any family of etale covers. This can be used to prove interesting results about families of various arithmetic objects and how they behave generically.




An excellent starting point for these topics is the paper by Ellenberg, Hall, and Kowalski, "Expander graphs, gonality, and variation of Galois representations". This source hopefully should spark your imagination about such topics and encourage you to read up on such topics.



The kind of graph theory covered in a typical undergraduate course I think isn't so prevalent in everyday algebraic topology and related fields, since the stuff in "typical graph theory" studies properties that aren't invariant under homotopy, and homotopy invariants are the stuff that algebraic topology is built upon. There is, however, a kind of "graph theory" that is extremely useful in topology and number theory: it's the theory of simplicial sets (and simplicial objects in any category)! This doesn't just look at graphs though, but objects that are built from higher simplices too. The basic theory of simplicial objects in algebraic topology covers homotopy-type stuff. Simplicial objects, for instance simplicial sets, are completely combinatorially defined. For instance, "nice" simplicial sets called fibrant ones have a notion of fundamental group, and there is a functor from simplicial sets to spaces called "geometric realization" that sends a simplicial set to a space (which for a graph would be the obvious topological space), and the two notions of fundamental group agree.



Simplicial sets are so fundamental to many areas of algebra such as: $K$-theory (they are typically used to define the higher $K$-groups), higher category theory (which is a generalisation of category theory and also has applications to $K$-theory), homological algebra (essential tool, the cat of nonnegative chain complexes of $R$-modules is equivalent to the category of simplicial objects in the category of $R$-modules), algebraic topology itself of course, algebraic geometry (for things like $\mathbb{A}^1$ homotopy theory), and tons more stuff that I don't know about I'm sure.



Good sources for simplicial objects are:





  • May, "Simplicial Objects in Algebraic Topology"

  • Ch. 8 of Weibel's "An Introduction to Homological Algebra" (you probably should start here!)

  • Goerss's book "Simplicial Homotopy Theory"

  • Moerdijk and Toen's "Simplicial Methods for Operads and Algebraic Geometry" (Part 2 is about algebraic geometry)

  • Ferrario and Piccinini's "Simplicial Structures in Topology" (more topology)


induction - Question on inductive proof structure involving a conditional statement



Say we have an inductive proof that is structured as follows:



If $P(k)$ then $Q(k)$ where $P$ and $Q$ are some arbitrary predicates and $k$ is the subject of those predicates.



Now, suppose we go through a base case and get to our inductive step.




We are going to assume $P(k)\Rightarrow Q(k)$ (call this the inductive hypothesis) is true for values greater than or equal to our base case and try to show $P(k+1)\Rightarrow Q(k+1)$. Now it is clear to me that we may use our inductive hypothesis in our proof as granted... but what about the antecedent of the next step we are trying to prove? That is I can assume $P(k)\Rightarrow Q(k)$, but may I also assume $P(k+1)$?



For instance, suppose the statement $P(k)$ were "there exist two integers $a$ and $b$ such that $a\leq x_1,x_2,x_3,\ldots,x_k \leq b$". Do I get to assume $a \leq x_1,x_2,x_3,\ldots,x_k,x_{k+1}\leq b$ in my inductive step?



Basically I know $P(k)\Rightarrow P(k+1)$ is the general structure of an inductive proof. But what about when we have conditional statements like $(P(k)\Rightarrow Q(k))\Rightarrow(P(k+1)\Rightarrow Q(k+1))$?


Answer



Yes, that works. The general pattern for (weak mathematical) induction is:



$(\varphi(0) \land \forall k\, (\varphi(k) \rightarrow \varphi(k+1))) \rightarrow \forall k\, \varphi(k)$




for any formula $\varphi(k)$.



So, with $\varphi(k)$ being the formula $ P(k) \rightarrow Q(k)$, you get what you want.



And yes, for the inductive step, you assume the inductive hypothesis $P(k) \rightarrow Q(k)$, and try to show $P(k+1) \rightarrow Q(k+1)$, and to show that, you will probably want to assume $P(k+1)$ and try to get to $Q(k+1)$.



Finally, don't forget the base case, i.e. $P(0) \rightarrow Q(0)$ (or, if the base case is $k=1$, $P(1) \rightarrow Q(1)$), for which you probably want to do a conditional proof as well (i.e. assume $P(0)$, and try to get to $Q(0)$).


How do I express 0.999(9) as a fraction?

I'm a noob in math. If 0.333(3) is 1/3 and 0.666(6) is 2/3, then what is 0.999(9)?



If 3/3 and 0.999(9) are the same, then how can I express one of them without expressing the other?

inequality - How to prove the fraction inequality without using a calculator

How can one prove, without a calculator, that



$$ \frac{1}{1001} + \frac{1}{3001} > \frac{1}{1000}$$

Thursday 22 December 2016

Expectation of a function in Poisson Distribution



Find the expectation of the function $\phi(x) = xe^{-x}$ in a Poisson distribution.



My attempt: If $\lambda$ is the mean of the Poisson distribution, then the expectation of $\phi$ is




$$E[\phi(X)] = \displaystyle \sum_{x \mathop \ge 0} \frac{\phi(x)\lambda^xe^{-\lambda}}{x!}$$



$$= \displaystyle \sum_{x \mathop \ge 0} \frac{ xe^{-x}\lambda^xe^{-\lambda}}{x!}$$



$$= \displaystyle \lambda e^{-\lambda} \sum_{x \mathop \ge 1} \frac {e^{-x}} {\left({x-1}\right)!} \lambda^{x-1}$$



Now what?



Without the $e^{-x}$, the rest of the summation would just be the Taylor expansion of $e^{\lambda}$, which would cancel the $e^{-\lambda}$.




But what do I do here?


Answer



Group together $e^{-x} \lambda^{x-1} = \frac{(\frac{\lambda}{e})^x}{\lambda}$, then do a bit of algebra to get the Taylor series for $e^s$ for some $s$.
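Carrying the hint through (my own completion, for the record): with $j = x-1$,
$$E[\phi(X)]=\lambda e^{-\lambda}\sum_{x \ge 1}\frac{e^{-x}\lambda^{x-1}}{(x-1)!}=\lambda e^{-\lambda}\,e^{-1}\sum_{j\ge 0}\frac{(\lambda/e)^j}{j!}=\lambda e^{-\lambda-1}e^{\lambda/e}=\lambda\exp\left(\frac{\lambda}{e}-\lambda-1\right).$$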


real analysis - Proving that Cauchy condensation test is true for $a_n$ any monotone sequence.

I'm trying to show that $\Sigma a_n$ converges iff $\Sigma 2^n a_{2^n}$ converges when $a_n$ is any monotone sequence.



I am assuming I have to prove the cases:





  • $a_n$ is positive and increasing

  • $a_n$ is negative and increasing

  • $a_n$ is positive and decreasing

  • $a_n$ is negative and decreasing



Here is what I have worked out from some of the comments below:




Assume $a_n$ is an increasing sequence. If $a_n$ is positive, then $\lim a_n \neq 0$ and we have divergence.





I don't really understand why this is. Is it because $a_n$ is unbounded and so it diverges to $\infty$? I don't know a rigorous way to prove this. I'm assuming I have to make use of the theorem that monotonic $s_n$ converges iff it is bounded.




If $a_n$ is negative, then we have that $\lim a_n = 0$




Again, I don't know how to rigorously prove this. How do I find a bound for $a_n$?





Let $s_n = a_1 + a_2 + ... + a_n$ and $t_k = a_1 + 2 a_2 + ... + 2^k a_{2^k}$.
For $n<2^k$, we have $s_n \geq a_1 + (a_2 + a_3) + ... + (a_{2^k}+...+a_{2^{k+1}-1}) \geq a_1 + 2a_2 + ... + 2^{k}a_{2^k} = t_k$



And so $s_n \geq t_k$.



Similar proof for $2s_{n} \leq t_k$





Someone below has already proved nonnegative decreasing case. I'm assuming that:




If $a_n$ was decreasing and negative, then $\lim a_n \neq 0$ and we have divergence.




Again, I'm not sure how to prove this.

real analysis - Number of Functions satisfying linear homogeneous function criterion

Let us suppose I have a smooth function



$f : \mathbb R^n\rightarrow \mathbb R$



Satisfying



$f(\lambda x_1,\lambda x_2, \ldots ,\lambda x_n)= \lambda f( x_1,x_2, \ldots , x_n), \quad \forall\lambda\in\mathbb R \quad (\text{the linearity condition})$



My objective is to find the function $f$




Attempt:



Expand the function $f$ in a Taylor series in the $n$ variables $x_i$ and substitute into the linearity condition above.



1)The constant term is zero as $f(0,...,0)=0$.



2)The linear terms cancel either side.



3) With the higher-order terms, whose coefficients I denote $c_k$ for degree $k$, $$\lambda^{k-1}c_k=c_k$$




This condition holds true $\forall k,\lambda \in \mathbb R$



This implies that all coefficients of higher order(>1) terms are zero.



Hence I conclude that the function is $$f( x_1,x_2, \ldots , x_n)= a_1 x_1 + a_2 x_2 \ldots +a_n x_n$$



for some constants $a_i(1\leq i \leq n)$



My doubt is:




1) Is my reasoning correct, or is there a better way to do it?



2) Is this the only unique solution for the function?

calculus - Proving $\frac2\pi x \le \sin x \le x$ for $x\in [0,\frac{\pi} 2]$




Prove $\frac2\pi x \le \sin x \le x$ for $x\in [0,\frac {\pi} 2]$.





I tried to do this in two ways: I'm not sure about the CMVT approach, and I have a problem with the other way.






Using Cauchy's MVT:



RHS:
$\sin x \le x \implies \frac {\sin x}{x}\le 1$

So define:
$f(x)=\sin x, \ g(x)=x$ then from CMVT:
$\dfrac {f(\frac {\pi} 2)}{g(\frac {\pi} 2)}=\dfrac {f'(c)} {g'(c)}=\cos c$
and from the fact that $c$ is between $0$ and $\pi/2 \implies \cos c \le 1$.



LHS: In the same manner but here I run into some trouble:
$\frac2\pi x \le \sin x\implies \frac {2x}{\pi\sin x}\le 1$
So:
$\dfrac {f(\frac {\pi} 2)}{g(\frac {\pi} 2)}=\dfrac {f'(c)} {g'(c)}\implies\frac {1}{\sin {\frac {\pi}{2}}}=\frac {2}{\pi \cos c}$
Here actually

$\frac {1}{\sin {\frac {\pi}{2}}}=1$ so it's also $\le 1$



Is it correct to use CMVT like this ?






The other way:



We want to show: $f(x)=\sin x - x < 0$ and $g(x)=\frac {2x}{\pi}-\sin x <0$. By differentiating both it's easy to show that the inequality holds for $f$, but for $g$ it isn't so obvious that $g'(x)=\frac {2}{\pi}-\cos x$ is negative. In fact for $x=\frac {\pi} 2$ it's positive. Please help me figure this out.







This is the same as the question "The sine inequality $\frac2\pi x \le \sin x \le x$ for $0<x<\pi/2$", but all the answers there are partial or hints, and I want to avoid convexity.



Note: I can't use integrals.


Answer



To show that $\sin x\le x$ you can apply the Cauchy mean value theorem. (Note that you want to show the inequality for any $x\in\left[0,\frac{\pi}{2}\right].$ )Consider, as you have done, $f(x)=\sin x$ and $g(x)=x.$ Apply the theorem in the interval $[0,x]$ and you will get the inequality, as a consequence of $\cos c\le 1.$ Indeed, there exists $c\in(0,x)$ such that $$\sin x=g'(c)(f(x)-f(0))=f'(c)(g(x)-g(0))=(\cos c)\cdot x\le x.$$



To show the other inequality consider $f(x)=\sin x-\frac{2}{\pi}x.$ We have that $f(0)=f(\pi/2)=0.$ Since $f$ is continuous and $[0,\pi/2]$ is compact it attains a global minimum. If the minimum is not achieved at the extrema of the interval then it belongs to the open interval $(0,\pi/2).$ Let $c$ be the point where $f$ achieves its global minimum. Then $f''(c)\ge 0,$ but $f''(c)=-\sin c<0$ for any $c\in(0,\pi/2).$ So the minimum value is $f(0)=f(\pi/2)=0,$ from where $0\le f(x)=\sin x-\frac{2}{\pi}x,$ which shows the inequality.


calculus - Solving integrals; When to apply substitution rule, integral by parts etc?



My weakest point in calculus has always been solving actual integrals. Of course I understand how everything works and can apply things. The problem is that I waste a lot of time on what turn out, as I discover later, to be dead ends...




I have a lot of trouble deciding, when I see a random function, how I would start. Should I try to use integration by parts? The substitution rule (a good substitution)? Convert the function first to a different function? Especially when things become nontrivial and a combination is required, I more often than not first try the wrong method.



Is this problem just a lack of experience, something that can only be solved by working more problems? Or are there tricks/standard approaches (i.e.: first try substitution, then ...) I can follow?



A very simple example of a solution I simply couldn't see is (though this is just an example):



$$\int {\frac{1}{x^2+6x+13}}dx$$


Answer



Part is obviously experience which allows you to more easily recognise what kinds of integrals you're dealing with but a large part is simply understanding the different methods of integration and why they work. This should make it easier for you to "diagnose" problems.




e.g. Integration by substitution often works well when an integral has a similar form to one that you already know (for instance your example and $\int \frac{1}{x^2+1} \mathrm{d}x=\arctan(x)$). Integration by parts often works well when you can identify two multiplying functions, one of which isn't too costly to integrate and the other which differentiates to zero eventually or in some cases to the same type of function. etc.



Of course throughout this process you also need to keep a keen eye out for any simplifications or other changes you might be able to make e.g. trigonometric identities, indices, etc.
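For the example in the question, that "similar form" observation works out as follows (my own worked line): completing the square gives
$$\int \frac{dx}{x^2+6x+13}=\int \frac{dx}{(x+3)^2+4}=\frac12\arctan\left(\frac{x+3}{2}\right)+C,$$
where the substitution $u=\frac{x+3}{2}$ reduces it to the known $\arctan$ integral.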


combinatorics - Find all 3-digit numbers divisible by a sum of groups of its digits

How can one find all three-digit numbers which are divisible by the sums of the specific digit groups explained below?



The original number should have only non-zero and non-repeating digits.



example:
$301$ has a zero digit - cannot be used
$331$ does not have different digits - cannot be used



And the number should be divisible by two-digit group of its own digits, which are made by omitting one of the number's digits.



example:
$785$ should be divisible by $78$, $75$, and $85$.




I have come just to this:



If the number is made of digits $a, b, c$ like this $[abc]$, the number should be divisible by
$(10a + b) + (10b + c) + (10a + c) = 20a + 11b + 2c$



But I am not sure how to find all of the suitable numbers.



Thanks a lot for your time!
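A brute-force search settles it quickly (my own sketch, checking divisibility by each two-digit group, as in the $785$ example):

```python
results = []
for n in range(100, 1000):
    a, b, c = n // 100, (n // 10) % 10, n % 10
    if 0 in (a, b, c) or len({a, b, c}) < 3:
        continue  # digits must be non-zero and pairwise distinct
    groups = (10 * a + b, 10 * b + c, 10 * a + c)
    if all(n % g == 0 for g in groups):
        results.append(n)

print(results)  # [] -- empty: n = 10*(10a+b) + c, so (10a+b) | n would force c = 0
```

In fact the printed list is empty: since $n = 10(10a+b)+c$ and $0 < c < 10a+b$, the group $10a+b$ can never divide $n$ when the digits are non-zero. (Divisibility by the sum $20a+11b+2c$ instead would be a different condition.)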

calculus - Integrals with the special functions $Ci(x)$ and $erf(x)$



I'm looking for the solutions of the following two integrals:



$$I_1=\int\limits_0^\infty dx\, e^{-x^2}\text{Ci}(ax)$$
and
$$I_2=\int\limits_0^\infty dx\, e^{-ax}\text{erf}(x)$$
with
$$\text{Ci}(x)=\int\limits_\infty^x\frac{\cos(t)}{t}dt$$
and

$$\text{erf}(x)=\frac{2}{\sqrt{\pi}}\int\limits_0^xe^{-t^2}dt$$



Now I'm not 100% sure what is meant by "the solution of the integrals", since these can probably not be evaluated in elementary terms. But I'm guessing that the goal is to reduce each expression to a single special function instead of the integral of a special function against an elementary function.



Mathematica yields me the answers:
$$I_1=-\frac{\sqrt{\pi}}{4}\Gamma\left(0,\frac{a^2}{4}\right)$$
and
$$I_2=\exp\left(\frac{a^2}{4}\right)\frac{1-\text{erf}(a/2)}{a}$$



A good first step for evaluating these integrals $I_1$ and $I_2$ seemed to be to insert the integral representations of these special functions and try to switch the integrals over $x$ and $t$. However, this has not (yet) been a success. I also tried to find a differential equation for these integrals, but this also was not so easy to do. Are there any tips/tricks to evaluate these integrals?



Answer



Hint: Differentiate $I_1(a)$ with regard to a. Similarly, define $~I_2(b)~=~\displaystyle\int_0^\infty e^{-ax}~\text{erf}(bx)~dx,~$ and then differentiate it with regard to b.
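Not part of the original hint, but both closed forms are easy to sanity-check numerically before attempting the differentiation (a small SciPy sketch; $\Gamma(0,x)$ is SciPy's `exp1`, and `sici` returns the pair $(\operatorname{Si},\operatorname{Ci})$):

```python
import math
from scipy.integrate import quad
from scipy.special import sici, exp1, erf

a = 1.7  # arbitrary positive test value

# I_1 vs. -(sqrt(pi)/4) * Gamma(0, a^2/4)
I1, _ = quad(lambda x: math.exp(-x * x) * sici(a * x)[1], 0, math.inf, limit=200)
print(I1, -(math.sqrt(math.pi) / 4) * exp1(a * a / 4))

# I_2 vs. exp(a^2/4) * (1 - erf(a/2)) / a
I2, _ = quad(lambda x: math.exp(-a * x) * erf(x), 0, math.inf)
print(I2, math.exp(a * a / 4) * (1 - erf(a / 2)) / a)
```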


linear algebra - When is matrix multiplication commutative?



I know that matrix multiplication in general is not commutative. So, in general:




$A, B \in \mathbb{R}^{n \times n}: A \cdot B \neq B \cdot A$



But for some matrices, this equation holds, e.g. $A$ = identity or $A$ = zero matrix, $\forall B \in \mathbb{R}^{n \times n}$.



I think I remember that a group of special matrices (was it $O(n)$, the group of orthogonal matrices?) exist, for which matrix multiplication is commutative.



For which matrices $A, B \in \mathbb{R}^{n \times n}$ is $A\cdot B = B \cdot A$?


Answer



Two matrices that are simultaneously diagonalizable are always commutative.




Proof: Let $A$, $B$ be two such $n \times n$ matrices over a base field $\mathbb K$, and $v_1, \ldots, v_n$ a basis of eigenvectors for $A$. Since $A$ and $B$ are simultaneously diagonalizable, such a basis exists and is also a basis of eigenvectors for $B$. Denote the corresponding eigenvalues of $A$ by $\lambda_1,\ldots,\lambda_n$ and those of $B$ by $\mu_1,\ldots,\mu_n$.



Then it is known that there is a matrix $T$ whose columns are $v_1,\ldots,v_n$ such that $T^{-1} A T =: D_A$ and $T^{-1} B T =: D_B$ are diagonal matrices. Since $D_A$ and $D_B$ trivially commute (explicit calculation shows this), we have $$AB = T D_A T^{-1} T D_B T^{-1} = T D_A D_B T^{-1} =T D_B D_A T^{-1}= T D_B T^{-1} T D_A T^{-1} = BA.$$


Wednesday 21 December 2016

cardinals - Proving two sets are Equinumerous

Prove that $[0,1] \approx [0,a)$ where $a$ is a positive real number.



I know I need to find a bijective function between the two sets. The $a$ in the second set is throwing me off. Originally I was thinking of using $x=\frac{1}{a}$, which would get me $[0,x)$, but I am unsure if this is the correct route. Also, how do I address the half-open interval at $a$, as opposed to a closed interval?

calculus - What are other methods to evaluate $\int_0^{\infty} \frac{y^{m-1}}{1+y}\, dy$?



I am looking for an alternative method to what I have used below. The method that I know makes a substitution in the Beta function to make it equivalent to the integral I am evaluating.




  1. Usually we start off with the integral itself that we are evaluating (IMO, these are better methods) and I would love to know such a method for this.

  2. Also, I would be glad to know methods which use other techniques that I am not aware of, and which do not necessarily follow (1)







$$\Large{\color{#66f}{B(m,n)=\int_0^1 x^{m-1} (1-x)^{n-1} dx}}$$



$$\bbox[8pt,border: 2pt solid crimson]{x=\frac{y}{1+y}\implies dx=\frac{dy}{(1+y)^2}}$$



$$\int_0^{\infty} \left(\frac{y}{1+y}\right)^{m-1} \left(\frac{1}{1+y}\right)^{n-1} \frac{dy}{(1+y)^2}=\int_0^{\infty} y^{m-1} (1+y)^{-m-n}\, dy$$



$$\Large{n=1-m}$$




$$\Large{\color{crimson}{\int_0^{\infty} \frac{y^{m-1}}{1+y} dy=B(m,1-m)=\Gamma(m)\Gamma(1-m)}}$$



Thanks in advance for helping me expand my current knowledge.


Answer



The integrand function behaves like $y^{m-2}$ on $[1,+\infty)$ and like $y^{m-1}$ in a right neighbourhood of zero, hence we must have $m-2<-1$ and $m>0$ in order to ensure integrability, so $m\in(0,1)$. In such a case we have:
$$ I = \int_{0}^{+\infty}\frac{y^{m-1}}{1+y}\,dy = 2\int_{0}^{+\infty}\frac{y^{2m-1}}{1+y^2}\,dy $$
and since:
$$ \int_{0}^{+\infty}\sin(u) e^{-yu}\,du = \frac{1}{1+y^2},$$
we have:

$$ I = 2\int_{0}^{+\infty}\int_{0}^{+\infty} y^{2m-1} e^{-yu}\sin(u) \,dy\,du = 2\,\Gamma(2m)\int_{0}^{+\infty}\frac{\sin u}{u^{2m}}\,du $$
so the problem boils down to evaluating:
$$ J(\alpha) = \int_{0}^{+\infty}\frac{\sin u}{u}u^{\alpha}\,du $$
for $\alpha=1-2m\in(-1,1)$. The last integral can be computed with standard complex analytic techniques (i.e. Mellin transform) and leads to:
$$ J(\alpha) = \Gamma(\alpha)\sin\left(\frac{\pi\alpha}{2}\right), $$
so:
$$ I = 2\Gamma(2m)\Gamma(1-2m)\cos(\pi m)=\frac{2\pi\cos(\pi m)}{\sin(2\pi m)}=\frac{\pi}{\sin(\pi m)}. $$
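
For the sceptical reader, here is a quick numerical check of the closed form (my own addition; it assumes SciPy, and quad may warn about the integrable singularity at $0$ for small $m$):

    import numpy as np
    from scipy.integrate import quad

    for m in (0.25, 0.5, 0.8):
        I = quad(lambda y: y**(m - 1) / (1 + y), 0, np.inf)[0]
        print(m, I, np.pi / np.sin(np.pi * m))  # the two columns agree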


integration - How to compute $int {frac{1}{{4 - 9{x^2}}}dx} $?

How can I evaluate the following integral




$$\int {\frac{1}{{4 - 9{x^2}}}dx} $$

Tuesday 20 December 2016

real analysis - Prove $limlimits_{nrightarrow infty}x_n=0$ iff $limlimits_{nrightarrow infty}frac{1}{x_n}=infty$



For $x_n \gt 0$ $\forall n=1,2,3,...$ Prove $$\lim\limits_{n\rightarrow +\infty}x_n=0$$ iff $$\lim\limits_{n\rightarrow +\infty}\frac{1}{x_n}=+\infty$$
Working proof:



$\forall M>0, \exists N$ such that $x_n>M$, $\forall n \ge N$




$x_n>M\Rightarrow(1/x_n)<(1/M)$



Choose $N=[M]+1$ $\Rightarrow$ $\frac{1}{x_n}<(1/x_N)=(1/x_{[M]+1})$



I have no idea what the next step would be or if I'm even headed in the right direction...


Answer



Suppose $x_n \to 0$. Let $M \in \mathbb{R}^+$; there exists $N$ such that for all $n ≥ N$, $|x_n| ≤ 1/M$, so for all $n ≥ N$, $|1/x_n| ≥ M$ (since $x \mapsto 1/x$ is decreasing for $x > 0$). So $1/x_n \to +\infty$.



The same argument works for the other direction.



functional equations - Does a function that satisfies the equality $f(a+b) = f(a)f(b)$ have to be exponential?




I understand the other way around, where if a function is exponential then it will satisfy the equality $f(a+b)=f(a)f(b)$. But is every function that satisfies that equality always exponential?


Answer



First see that $f(0)$ is either 0 or 1. If $f(0)=0$, then for all $x\in\mathbb R$, $f(x)=f(0)f(x)=0$. In this case $f(x)=0$ a constant function.




Let's assume $f(0)=1$. See that for positive integer $n$, we have $f(nx)=f(x)^n$ which means $f(n)=f(1)^n$. Also see that:
$$
f(1)=f(n\frac 1n)=f(\frac 1n)^n\implies f(\frac 1n)=f(1)^{1/n}.
$$
Therefore for all positive rational numbers:
$$
f(\frac mn)=f(1)^{m/n}.
$$
If the function is continuous, then $f(x)=f(1)^x$ for all positive $x$. For negative $x$ see that:
$$

f(0)=f(x)f(-x)\implies f(x)=\frac{1}{f(-x)}.
$$
So in general $f(x)=a^x$ for some $a>0$.






Without continuity, consider the relation $xRy$ iff $x-y\in \mathbb Q$ (the cosets of the quotient group $\mathbb R/\mathbb Q$). This is an equivalence relation and partitions $\mathbb R$ into classes, each with a representative $z$. On the class of $z$ the function is determined by $f(z)$ and $f(1)$, since $f(z+q)=f(z)f(1)^q$ for rational $q$.


elementary set theory - Is this proof correct for : Does $F(A)cap F(B)subseteq F(Acap B) $ for all functions $F$?



Is this proof correct? To prove $F(A)\cap F(B)\subseteq F(A\cap B) $ for all functions $F$.




Let $y\in F(A)\cap F(B)$ be arbitrary. We want to show $y\in F(A\cap B).$




Therefore, $y\in F(A)$ and $y\in F(B)$, by definition of intersection.



By definition of image, $y=F(x)$ for some $x\in A$ and $x\in B$



And so, $y=F(x)$ for some $x\in A\cap B$



And therefore, $y\in F(A\cap B)$





I have a gut feeling deep down that something is wrong. Can anyone help me pinpoint the mistake? I am not sure why am I having so much problems with functions. Am I not thinking in the right direction?



Sources : 2nd Ed, P219 9.60 = 3rd Ed, P235 9.12, 9.29 - Mathematical Proofs, by Gary Chartrand,
P214 Theorem 12.4 - Book of Proof, by Richard Hammack,
P257-258 - How to Prove It, by D Velleman.


Answer



The third line is mistaken. You only know that there exists an $x$ in $A$ such that $F(x)=y$, and you know there is a $z\in B$ such that $F(z)=y$.



It is extremely easy to find a counterexample: just draw two sets $A$, $B$ that are disjoint, and map an $a\in A$ and a $b\in B$ to a single point. Then you have that $y\in F(A)\cap F(B)$, but $F(A\cap B)=\emptyset$.


analysis - Is there a continuous function from $[0, 1]$ onto $(0, 1)$?




If there is none, why?



And for the other side, what about open set $(0, 1)$ to closed set $[0, 1]$ with a continuous function?



Thanks


Answer



HINT: For the first one, use the fact that the continuous image of a compact set is compact.
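
For the second question, an explicit example exists (my own addition, so double-check it): the continuous function
$$f(x)=\frac{1+\sin(4\pi x)}{2}, \qquad f:(0,1)\to[0,1],$$
is onto, since $f(1/8)=1$, $f(3/8)=0$, and every value in between is attained by the intermediate value theorem.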


Monday 19 December 2016

calculus - Proving that an additive function $f$ is continuous if it is continuous at a single point



Suppose that $f$ is continuous at $x_0$ and $f$ satisfies $f(x)+f(y)=f(x+y)$. Then how can we prove that $f$ is continuous at $x$ for all $x$? I seem to have problems doing anything with it. Thanks in advance.


Answer



Fix $a\in \mathbb{R}.$



Then



\begin{align*}\lim_{x \rightarrow a} f(x) &= \lim_{x \rightarrow x_0} f(x - x_0 + a)\\ &= \lim_{x \rightarrow x_0} [f(x) - f(x_0) + f(a)] && \text{(by additivity)}\\ &= \Big(\lim_{x \rightarrow x_0} f(x)\Big) - f(x_0) + f(a)\\ &= f(x_0) -f(x_0) + f(a)\\ &= f(a).
\end{align*}




It follows $f$ is continuous at $a.$


real analysis - Can we construct a function $f:mathbb{R} rightarrow mathbb{R}$ such that it has intermediate value property and discontinuous everywhere?


Can we construct a function $f:\mathbb{R} \rightarrow \mathbb{R}$ such that it has intermediate value property and discontinuous everywhere?





I think it is possible, because we can consider
$$ y =
\begin{cases}
\sin \left( \frac{1}{x} \right), & \text{if } x \neq 0, \\
0, & \text{if } x=0.
\end{cases}
$$
This function has the intermediate value property but is discontinuous at $x=0$.



Inspired by this example, let $(r_n)$ denote an enumeration of the rational numbers, and define

$$ y =
\begin{cases}
\sum_{n=1}^{\infty} \frac{1}{2^n} \left| \sin \left( \frac{1}{x-r_n} \right) \right|, & \text{if } x \notin \mathbb{Q}, \\
0, & \mbox{if }x \in \mathbb{Q}.
\end{cases}
$$



It is easy to see that this function is discontinuous if $x$ is not a rational number. But I can't verify its intermediate value property.

trigonometry - Induction proof of the identity $cos x+cos(2x)+cdots+cos (nx) = frac{sin(frac{nx}{2})cosfrac{(n+1)x}{2}}{sin(frac{x}{2})}$





Prove that:$$\cos x+\cos(2x)+\cdots+\cos (nx)=\frac{\sin(\frac{nx}{2})\cos\frac{(n+1)x}{2}}{\sin(\frac{x}{2})}.\ (1)$$




My attempt:$$\sin\left(\frac{x}{2}\right)\sum_{k=1}^{n}\cos{(kx)}$$$$=\sum_{k=1}^{n}\sin\left(\frac{x}{2}\right)\cos{(kx)} $$ (Applying $\sin x\cos y =\frac{1}{2}\left[\sin(x+y)+\sin(x-y)\right]$)
$$=\sum_{k=1}^{n}\frac{1}{2}\sin\left(\frac{x}{2}-kx\right)+\frac{1}{2}\sin\left(\frac{x}{2}+kx\right).$$Multiplying $(1)$ by $\sin\left(\frac{x}{2}\right)$, gives: $$\sum_{k=1}^{n}\sin\left(\frac{x}{2}\right)\cos(kx)=\sin\left(\frac{nx}{2}\right)\cos{\frac{(n+1)x}{2}}$$ $$\Leftrightarrow\sum_{k=1}^{n}\left[\sin\left(\frac{x}{2}-kx\right)+\sin\left(\frac{x}{2}+kx\right)\right]=\sin\left(-\frac{x}{2}\right)+\sin\left(\frac{x}{2}+nx\right).$$Then, by induction on $n$ and using $(\sin x\sin y)$ and $(\sin x +\sin y)$ formulas, I end up here: $$\sum_{k=1}^{n+1}\sin\left(\frac{x}{2}\right)\cos(kx)=\left[2\sin\left(\frac{nx}{2}\right)\cos\frac{(n+1)x}{2}\right]+\left[2\sin\left(\frac{x}{2}\right)\cos(n+1)x\right].$$ Any help would be appreciated, thanks!


Answer




Here is the induction step: it comes down to proving
\begin{gather*}\frac{\sin \dfrac{nx}{2} \cos\dfrac{(n+1)x}{2}}{\sin\dfrac{x}{2}}+\cos(n+1)x=\frac{\sin\dfrac{(n+1)x}{2}\cos\dfrac{(n+2)x}{2}}{\sin(\dfrac{x}{2})}\\
\text{or}\qquad\sin\dfrac{nx}{2}\cos\dfrac{(n+1)x}{2}+\sin\dfrac{x}{2}\cos(n+1)x=\sin\dfrac{(n+1)x}{2} \cos\dfrac{(n+2)x}{2}
\end{gather*}
Now use the linearisation formulae:
\begin{cases}\displaystyle
\sin\frac{nx}{2}\cos\frac{(n+1)x}{2}=\tfrac12\biggl(\sin\frac{(2n+1)x}{2}-\sin\frac x2\biggr),\\[1ex]
\sin\dfrac{x}{2}\cos(n+1)x=\frac12\biggl(\sin\dfrac{(2n+3)x}{2}-\sin\dfrac{(2n+1)x}{2}\biggr),
\end{cases}
whence the l.h.s. is

$$\frac12\biggl(\sin\dfrac{(2n+3)x}{2}-\sin\frac x2\biggr)=\sin\frac{(n+1)x}2\,\cos\frac{(n+2)x}2$$
by the factorisation formula: $\;\sin p-\sin q=2\sin\dfrac{p-q}2\cos\dfrac{p+q}2$.


elementary number theory - Smallest positive integer modular congruence problem

I need help with the following problem:



Find the smallest positive integer $n$ such that
$$x^n \equiv 1 \pmod{101}$$
for every integer $x$ between $2$ and $40$.



How can I approach this? Using Euclid's? Fermat's?



I know Fermat's says that if $p$ is a prime and $p \nmid x$, then $x^{p-1} \equiv 1 \pmod p$, but I don't know how or if I can apply it. As you can probably tell, I'm very confused by modular congruences.
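
Since no answer was posted, here is a brute-force sketch in Python (my own addition) of how one could at least compute the number. Fermat guarantees that $n=100$ works (as $101$ is prime and no $x$ in the range is divisible by $101$), so the search below terminates; if I ran it correctly, it prints $100$, i.e. Fermat's exponent cannot be improved here:

    # Smallest n with x^n = 1 (mod 101) for every x in {2, ..., 40}.
    def works(n):
        return all(pow(x, n, 101) == 1 for x in range(2, 41))

    print(min(n for n in range(1, 101) if works(n)))  # 100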

functional analysis - On injective, bijective, surjective



Consider a function $f: \mathcal{A}\rightarrow \mathbb{R}$ where $\mathcal{A}\equiv \{a_1,a_2,a_3,a_4\}\subset \mathbb{R}$ .



We know that $f$ is injective, i.e., $f(a_j)\neq f(a_k)$ $\forall a_j\neq a_k$.



I would like your help with the following: given that the domain of $f$ is finite (and hence the image set of $f$ is finite), does this mean that $f$ being injective implies $f$ being bijective? If not, can you give an example of $f$ with domain $\mathcal{A}$ that is injective but not bijective?


Answer




Quite the opposite, in fact: there are no bijective (or even surjective) functions from a finite set to an infinite set (this is a perfectly reasonable definition of "infinite set"). For an explicit example, take the function $f: \mathcal{A}\to\mathbb{R}$ that sends $a_1$ to $1$, $a_2$ to $2$, $a_3$ to $3$, and $a_4$ to $4$. This is clearly injective, but clearly not bijective: nothing is sent to $5$. Any other injective function $\mathcal{A}\to\mathbb{R}$ would work equally well.






However, for the question that I suspect you meant to ask (at any rate, it's the closest thing to your statement that is true): if $f: A \to B$ is a function such that $A$ and $B$ are finite sets, and $|A| = |B|$, then the following are equivalent:




  1. $f$ is injective.

  2. $f$ is surjective.

  3. $f$ is bijective.




To see this, note that clearly (3) implies both (1) and (2), and that if we have both (1) and (2), then we have (3), so we merely need to show that (1) implies (2), and (2) implies (1). For the former, note that if $f$ is injective, then there are $|A|$ distinct elements in its image. But $|A| = |B|$, so all elements of $B$ are in the image of $f$, so $f$ is surjective. For the latter, note that if $f$ is surjective, then all elements of $B$ are in its image, so there are $|B|$ distinct elements in its image, each of which must have at least one element of $A$ sent to it, but since $|B| = |A|$, each must have exactly one sent to it, and so no two elements of $A$ are sent to the same element of $B$, hence $f$ is injective.


real analysis - Big $mathcal{O}$ notation



I'm learning about the big $\mathcal{O}$ notation and I'm a bit confused. Why is it that we can write things like $$\displaystyle x+\frac{x^3}{3}+\frac{x^5}{5}+\mathcal{O}(x^6)$$ when there's no $x^6$ term? Wouldn't it make sense to write $$\displaystyle x+\frac{x^3}{3}+\frac{x^5}{5}+\mathcal{O}(x^7)$$ instead? This is for the $\tanh^{-1}{x}$ series if it makes a difference.



Answer



The estimate
$$
\frac12\log\left(\frac{1+x}{1-x}\right)=x+\frac{x^3}3+\frac{x^5}5+O\left(x^7\right)\tag{1}
$$
is only for small $x$. For $\left|x\right|\le a$, we have that $\left|x\right|^7\le a\left|x\right|^6$; therefore, $(1)$ implies
$$
\frac12\log\left(\frac{1+x}{1-x}\right)=x+\frac{x^3}3+\frac{x^5}5+O\left(x^6\right)\tag{2}
$$
However, $(1)$ gives more information (that is, a closer approximation for small $x$) than $(2)$.
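
A small numerical illustration of the difference (my own addition): the remainder of the fifth-order expansion of $\tanh^{-1}x$ behaves like $x^7/7$, so halving $x$ shrinks the error by about $2^7=128$ rather than $2^6=64$:

    import math

    def remainder(x):
        return math.atanh(x) - (x + x**3 / 3 + x**5 / 5)

    for x in (0.1, 0.05, 0.025):
        print(x, remainder(x), remainder(x) / x**7)  # last column is roughly 1/7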



Sunday 18 December 2016

calculus - Prove $int_0^{infty}! frac{mathbb{d}x}{1+x^n}=frac{pi}{n sinfrac{pi}{n}}$ using real analysis techniques only



I have found a proof using complex analysis techniques (contour integral, residue theorem, etc.) that shows $$\int_0^{\infty}\! \frac{\mathbb{d}x}{1+x^n}=\frac{\pi}{n \sin\frac{\pi}{n}}$$ for $n\in \mathbb{N}^+\setminus\{1\}$



I wonder if it is possible by using only real analysis to demonstrate this "innocent" result?



Edit
A more general result showing that
$$\int\limits_{0}^{\infty} \frac{x^{a-1}}{1+x^{b}} \ \text{dx} = \frac{\pi}{b \sin(\pi{a}/b)}, \qquad 0 < a < b$$
can be found in another math.SE post



Answer



$$ \int_{0}^{\infty}\frac{1}{1+x^n}\ dx =\int_{0}^{\infty}\int_{0}^{\infty}e^{-(1+x^{n})t}\ dt\ dx $$



$$ =\int_{0}^{\infty}\int_{0}^{\infty}e^{-t}e^{-tx^{n}}\ dx\ dt =\frac{1}{n}\int_{0}^{\infty}\int_{0}^{\infty}e^{-t}e^{-u}\Big(\frac{u}{t}\Big)^{\frac{1}{n}-1}\frac{1}{t}\ du\ dt $$



$$ =\frac{1}{n}\int_{0}^{\infty}t^{-\frac{1}{n}}e^{-t}\int_{0}^{\infty}u^{\frac{1}{n}-1}e^{-u}\ du\ dt =\frac{1}{n}\int_{0}^{\infty}t^{-\frac{1}{n}}e^{-t}\ \Gamma\Big(\frac{1}{n}\Big)\ dt $$



$$ =\frac{1}{n}\ \Gamma\Big( 1-\frac{1}{n}\Big)\Gamma\Big(\frac{1}{n}\Big) =\frac{\pi}{n}\csc\Big(\frac{\pi}{n}\Big) $$
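
A quick numerical confirmation of the result (my own addition, assuming SciPy is available):

    import numpy as np
    from scipy.integrate import quad

    for n in (2, 3, 5, 10):
        I = quad(lambda x: 1.0 / (1.0 + x**n), 0, np.inf)[0]
        print(n, I, np.pi / (n * np.sin(np.pi / n)))  # the two columns agree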


algebra precalculus - How do I simplify $log (1/sqrt{1000})$?

How do I simplify $\log \left(\displaystyle\frac{1}{\sqrt{1000}}\right)$?



What I have done so far:



1) Used the difference property of logarithms
$$\log \left(\displaystyle\frac{1}{\sqrt{1000}}\right) = \log(1) - \log(\sqrt{1000}) $$



2) Used the exponent rule for logarithm




$$\log (1) - \frac{1}{2}\log (1000) $$



I'm stuck at this point. Can someone explain what I must do to finish simplifying this expression?
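
Since no answer was posted: assuming $\log$ means $\log_{10}$ (my assumption), the remaining step is $\log(1)=0$ and $\log_{10}(1000)=3$, so the value is $-\tfrac32$. A one-line check in Python:

    import math
    print(math.log10(1 / math.sqrt(1000)), 0 - 0.5 * 3)  # both print -1.5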

elementary number theory - Show that for a and b non-zero integers and c different from zero, then gcd(ca,bc) = |c|gcd(a,b)

I did:




$$ca = cb * k + r \Leftrightarrow \\
ca - cb*k = r \\
c(a-bk)=r \\
a-bk = r/c \\
a = bk +r/c$$



So, $gcd(a,b) = r/c$



What do I do next?

summation - Geometrical interpretation of $(sum_{k=1}^n k)^2=sum_{k=1}^n k^3$

Using induction it is straight forward to show
$$\left(\sum_{k=1}^n k\right)^2=\sum_{k=1}^n k^3.$$
But is there also a geometrical interpretation that "proves" this fact? By just looking at those formulas I don't see why they should be equal.

probability - expectation of this random variable



I have this random variable $X$. I know that $P(X=c)=1$ for some real number $c$. I want to calculate $EX^2$, but I have no idea how to do this. I know that $EX=c$ and I tried to do it by integrating:
$\int_c^c x^2dx$, but this is just 0. So how should I do this?


Answer



Since $\{X=c\}\subseteq\{X^2=c^2\}$ we have that
$$
1=P(X=c)\leq P(X^2=c^2)\leq 1
$$
and hence also $P(X^2=c^2)=1$. Can you calculate the expectation now?




Another approach is to note that $\mathrm{Var}(X)=E[(X-c)^2]=0$ but on the other hand we have the formula
$$
\mathrm{Var}(X)=E[X^2]-E[X]^2,
$$
which enables us to calculate $E[X^2]$.


integration - How do I evaluate $int_{0}^{infty} u^{z-1}(e^{iu}-1) , du$?



I am trying to evaluate the following integral that shows up in this paper



http://arxiv.org/pdf/1103.4306v1.pdf



$I=\int_{0}^{\infty} u^{z-1}(e^{iu}-1)du= \Gamma(z)e^{\frac{iz\pi}{2}}$



for $-1<\Re (z)<0$.


I have no idea what type of contour I can choose. Can someone please help me?



Thank you in advance.


Answer



Consider the quarter circle $\Gamma$ of radius $M$ centred at the origin and lying in the first quadrant, thus made of the radius $0$-$M$, the arc $M$-$iM$, and the radius $iM$-$0$.
Consider the function of the complex plane $\zeta=x+iy$
$$
f(\zeta) =\zeta^{z-1}\left( e^{i\zeta}-1\right);
$$
choosing the logarithm branch cut in the negative $x$ axis, $f$ is holomorphic within $\Gamma$, thus

$$
\oint_\Gamma f(\zeta) d\zeta=0
$$
hence
$$
\int_0^M x^{z-1}\left(e^{ix}-1\right)dx +
\int_0^{\pi/2} \left(M e^{i\varphi}\right)^{z-1} \left(e^{iMe^{i\varphi}}-1\right) iM e^{i\varphi}d\varphi
-i^{z}\int_0^M y^{z-1}\left( e^{-y}-1\right)dy=0.
$$
The contribution of the arc is under control, since letting $\Re (z)=\xi$, $\Im (z)=\eta$,

$$
\left|\int_0^{\pi/2} \left(M e^{i\varphi}\right)^{z-1} \left(e^{iMe^{i\varphi}}-1\right) iM e^{i\varphi}d\varphi\right|
\le
M^\xi \int_0^{\pi/2}e^{-\eta\varphi}\left| e^{iM\cos\varphi - M\sin\varphi}-1\right|d\varphi\\
\le 2M^\xi \int_0^{\pi/2}e^{-\eta\varphi}d\varphi=2M^{\xi}\frac{1-e^{-\eta\pi/2}}{\eta}\xrightarrow[M\to\infty]{}0,
$$
since $\xi<0$.
Therefore
$$
\int_0^\infty x^{z-1}\left(e^{ix}-1\right)dx=i^{z}\int_0^\infty y^{z-1}\left( e^{-y}-1\right)dy=e^{i\pi z/2}\int_0^\infty y^{z-1}\left( e^{-y}-1\right)dy.

$$
Integrating by parts
$$
\int_0^\infty y^{z-1}\left( e^{-y}-1\right)dy=
\frac{y^z}{z}\left(e^{-y}-1\right)\Big|_0^\infty+\frac{1}{z}\int_0^\infty y^z e^{-y}dy=\frac{1}{z}\Gamma(z+1)=\Gamma(z),
$$
where we have used
$$
\lim_{y\to\infty}\frac{y^z}{z}\left(e^{-y}-1\right)=0
$$
and
$$
\lim_{y\to0}\frac{y^z}{z}\left(e^{-y}-1\right)=0,
$$
the first one following from $\xi<0$ and the second one from the assumption $\xi>-1$. Putting everything back together,
$$
\int_0^\infty x^{z-1}\left(e^{ix}-1\right)dx=e^{i\pi z/2}\Gamma(z)
$$
if $-1<\Re (z)<0$.


integration - Prove that $int_{-infty}^{infty} frac{e^{2x}-e^x}{x (1+e^{2x})(1+e^{x})}mathrm{d}x=log 2$



This integral popped up recently



$$\int_{-\infty}^{\infty} \frac{e^{2x}-e^x}{x (1+e^{2x})(1+e^{x})}\mathrm{d}x = \log 2$$



A solution using real or complex analysis is welcome. I tried rewriting it using symmetry and then the series expansion of $1/(1+e^{nx})$; however, this did not quite work out.



\begin{align*}

I & = 2\int_{0}^{\infty} \frac{e^{2x}-e^x}{x e^{3x}(1+e^{-2x})(1+e^{-x})}\mathrm{d}x \\
& = 2 \int_{0}^{\infty}\frac{e^{-2x} -e^{-x}}{x} \left(\sum_{n=0}^\infty (-1)^n e^{-n x}\right)\left(\sum_{m=0}^\infty (-1)^me^{-2mx}\right) \mathrm{d}x
\end{align*}
The first part reminds me of a Frullani integral (it evaluates to $\log 2$). However, I am unsure if this is the correct path; any help would be appreciated. =)


Answer



One can start from the identity
$$
\frac{\mathrm e^{2x}-\mathrm e^x}{(1+\mathrm e^{2x})(1+\mathrm e^{x})}=\frac1{1+e^{x}}-\frac1{1+\mathrm e^{2x}}=-\int_1^2\frac{\mathrm d}{\mathrm du}\left(\frac1{1+\mathrm e^{ux}}\right)\cdot\mathrm du,
$$
that is,

$$
\frac{\mathrm e^{2x}-\mathrm e^x}{x(1+\mathrm e^{2x})(1+\mathrm e^{x})}=\int_1^2\frac{\mathrm e^{ux}}{(1+\mathrm e^{ux})^2}\mathrm du.
$$
Note that, for every nonzero $u$,
$$
\int_{-\infty}^\infty\frac{\mathrm e^{ux}}{(1+\mathrm e^{ux})^2}\mathrm dx=\left.\frac{-1}{u(1+\mathrm e^{ux})}\right|_{-\infty}^\infty=\frac1{|u|},
$$
hence Fubini theorem shows that the integral to be computed is
$$
\int_{-\infty}^{\infty} \frac{\mathrm e^{2x}-\mathrm e^{x}}{x (1+\mathrm e^{2x})(1+\mathrm e^{x})}\mathrm{d}x =

\int_1^2\frac{\mathrm du}u=\log2.
$$
More generally, for every positive $a$ and $b$,
$$
\int_{-\infty}^{\infty} \frac{\mathrm e^{ax}-\mathrm e^{bx}}{x (1+\mathrm e^{ax})(1+\mathrm e^{bx})}\mathrm{d}x = \log\left(\frac{a}b\right).
$$
This is rediscovering the fact that, for every monotone function $\varphi$ with limits at $\pm\infty$,
$$
\int_\mathbb R\frac{\varphi(ax)-\varphi(bx)}x\mathrm dx=(\varphi(+\infty)-\varphi(-\infty))\cdot\log\left(\frac{b}a\right).
$$

in the present case for the function
$$
\varphi(x)=\frac1{1+\mathrm e^x}.
$$
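
Here is a numerical check of the general identity (my own addition; it reuses the answer's rewriting $\frac1{1+e^{bx}}-\frac1{1+e^{ax}}$, via scipy.special.expit, for numerical stability):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import expit  # expit(t) = 1/(1 + e^{-t})

    def integrand(x, a, b):
        # (1/(1+e^{bx}) - 1/(1+e^{ax})) / x, the stable form of the original integrand
        return (expit(-b * x) - expit(-a * x)) / x

    a, b = 2.0, 1.0
    I = sum(quad(integrand, lo, hi, args=(a, b))[0]
            for (lo, hi) in [(-np.inf, 0), (0, np.inf)])
    print(I, np.log(a / b))  # both are ~ 0.6931 = log 2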


combinatorics - Counting functional graphs with no 1 or 2-cycles.



I can easily work out that the number of functional graphs (directed graphs in which all vertices have out-degree one) of size $n$ is $n^n$ and the number of functional graphs without one-cycles is ${(n-1)}^n$. Is there a straightforward way to work out the number of functional graphs of size $n$ which have no 1-cycles (fixed points) or 2-cycles?


Answer




The problem of restricted endofunctions is a classic from
combinatorics, gives rise to some fascinating deep math and has been
studied in considerable detail.



Labelled trees are given by
$$\mathcal{T} =
\mathcal{Z} \times \mathfrak{P}(\mathcal{T})$$
which gives the functional equation
$$T(z) = z \exp T(z).$$




The species of endofunctions is then given by
$$\mathcal{Q} = \mathfrak{P}(\mathfrak{C}(\mathcal{T})).$$



Translating to generating functions we get
$$Q(z) = \exp \log \frac{1}{1-T(z)} =
\frac{1}{1-T(z)}.$$



In this particular case we have that there are no
fixed points or two-cycles so we get the species




$$\mathcal{Q} =
\mathfrak{P}(\mathfrak{C}_{\ge 3}(\mathcal{T})).$$



This yields the generating function



$$Q(z) = \exp
\left(-T(z)-T(z)^2/2\right)
\frac{1}{1-T(z)}.$$



Extracting coefficients via poor man's Lagrange inversion we have

$$Q_n
= n! \frac{1}{2\pi i}
\int_{|z|=\epsilon} \frac{1}{z^{n+1}}
\exp\left(-T(z)-T(z)^2/2\right)
\frac{1}{1-T(z)} dz.$$



Put $T(z)=w$ so that $z=w/\exp(w) = w\exp(-w)$ and
$dz = \left(\exp(-w) - w\exp(-w)\right) dw$
to get
$$n! \frac{1}{2\pi i}

\int_{|w|=\epsilon}
\frac{\exp(w(n+1))}{w^{n+1}}
\frac{\exp(-w-w^2/2)}{1-w} (\exp(-w) - w\exp(-w)) dw
\\ = n! \frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{\exp(wn)}{w^{n+1}}
\frac{\exp(-w-w^2/2)}{1-w} (1 - w) dw
\\ = n! \frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{\exp(w(n-1)-w^2/2)}{w^{n+1}} dw.$$




Extracting coefficients from this we obtain the closed formula



$$n! \sum_{q=0}^{\lfloor n/2 \rfloor}
\frac{(-1)^q}{2^q \times q!}
\frac{(n-1)^{n-2q}}{(n-2q)!}.$$



This yields the following sequence which we have started at index one
to make things clear:




$$0, 0, 2, 30, 444, 7360, 138690, 2954364, 70469000, 1864204416,
\\ 54224221050, 1721080885480,
59217131089908, 2195990208122880,\ldots$$



This is OEIS A134362 where we find
confirmation of the above analysis as well as many useful links.
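
For small $n$ the closed formula is easy to confirm by brute force; here is a short Python sketch (my own addition):

    # Count functions f: {0,...,n-1} -> {0,...,n-1} with no fixed point and no 2-cycle.
    from itertools import product

    def count(n):
        return sum(1 for f in product(range(n), repeat=n)
                   if all(f[i] != i and f[f[i]] != i for i in range(n)))

    print([count(n) for n in range(1, 7)])  # [0, 0, 2, 30, 444, 7360]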



Finally note that this
MSE link I and this
MSE link II

may prove useful reading.


Proving inequality of degrees between finite field extensions



Let $E/F$ be a finite field extension, and $D/E$ be a field extension. Let $\alpha \in D$ be algebraic over $F$. Prove: $$ [E(\alpha) : F(\alpha)] \leq [E:F]$$



I'm not even sure why this is intuitively true. I recently did a question proving that the degree of the minimum polynomial for $\alpha$ over $E$ is less than or equal to the degree of the minimum polynomial for $\alpha$ over $F$, and I could quickly see that it made sense because $E$ is an extension of $F$, so the min poly over $F$ either reduces in $E$ and loses degree, or has the same degree. When it comes to this, I'm trying to look at the basis of $E$ as an $F$-vector space and similarly for $E(\alpha)$ and $F(\alpha)$, but nothing is clicking.



I was playing around with the facts that





  • Finite extension $\implies$ algebraic $\implies$ finitely generated by elements in $F$

  • $\alpha$ is algebraic over $F$, so it has a min poly over $E$ and over $F$

  • So $E(\alpha)$ = $ E(\gamma_1, \dots, \gamma_n)(\alpha) = E(\gamma_1, \dots ,\gamma_n, \alpha)$ where $\gamma_i \in F$ ... Something tells me that the $\gamma_i$ make up a basis for $E$ over $F$ but I don't know how to prove that (or if it's even true)



But I'm not even sure if trying to look at a basis is the correct way to approach the question -- I'm not aware of any other slick characterizations of the degree of a field extension. I'm also unsure of the significance of $D/E$ being a field extension. I think there must be something I'm missing with regards to its significance in this set-up.



Any help is appreciated! Thanks!



Answer



$[E(a):F]=[E(a):E][E:F]=[E(a):F(a)][F(a):F]$, but $[E(a):E]\le[F(a):F]$; therefore $[E(a):F(a)]=\dfrac{[E(a):E]\,[E:F]}{[F(a):F]}\le [E:F]$.


linear algebra - Is a square matrix with positive determinant, positive diagonal entries and negative off-diagonal entries an M-matrix?

I'm trying to determine if a certain class of matrices are M-matrices in general. I'm considering square matrices $A$ with the following properties:




  1. $\det(A) > 0$ (strictly),

  2. all the diagonal entries are positive,

  3. all the off-diagonal entries are negative.




An M-matrix can be characterized in many ways. I've tried proving this (or finding a counterexample) by looking at the principal minors and have found that $A$ is an M-matrix if it has dimension 2 or 3, but it's hard to make any sort of induction with that. Right now I'm trying two other definitions (they're equivalent) of M-matrices




  1. There is a positive vector $x$ such that $Ax > 0$ (component-wise).

  2. $A$ is monotone (i.e. $Ax \geq 0$ implies $x \geq 0$).



Again, 1 isn't hard to show if the matrix is small, but this is hard to generalize, so I thought an easier approach might be using 2 and trying to proceed by contradiction. Does anyone here have some suggestions? This is an outside project for a class I'm working on, so I don't know if these matrices are or are not M-matrices in general - mostly just looking for tips here.
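
Not an answer, but a concrete numerical probe (my own addition). If I recall correctly that a nonsingular M-matrix must have all of its principal minors positive, then the following $4\times4$ matrix, which satisfies properties 1-3, cannot be an M-matrix, since its leading $2\times2$ minor is negative:

    import numpy as np

    eps = 1e-3
    A = np.array([
        [ 1.0, -2.0, -eps, -eps],
        [-2.0,  1.0, -eps, -eps],
        [-eps, -eps,  1.0, -2.0],
        [-eps, -eps, -2.0,  1.0],
    ])

    print(np.linalg.det(A))          # ~ 9 > 0; diagonal positive, off-diagonal negative
    print(np.linalg.det(A[:2, :2]))  # -3 < 0, so A is not a P-matrix, hence not an M-matrix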

real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...