Thursday 31 January 2019

combinatorics - What is that curve that appears when I use $\ln$ on Pascal's triangle?



I made a little program that renders Pascal's triangle as an image:



I first tried associating to each pixel a color whose intensity is proportional to the corresponding entry of Pascal's triangle.



The color values range from 0 to 255, so I used the following function to convert values to colors: $$f(x)=\frac{x-m}{M-m}\cdot 255$$
where $x$ is the value in Pascal's triangle, $M$ is the maximum value in the triangle, and $m$ is the minimum value in the triangle.




size 50×50: [image]



The axes are oriented as in this picture:



[image: axis orientation]



However, as you can see, most of the picture is black because the values span an enormous range (the gap between the largest and smallest entries is huge).



Therefore, I thought it would be good to use a logarithmic scale:




$$f(x)=\frac{\ln(1+x-m)}{\ln(1+M-m)}255$$



Which gives me:
size 50×50: [image]



That's way better.
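For concreteness, here is a minimal Python sketch of such a generator (my own reconstruction, assuming the value at pixel $(x,y)$ is the binomial coefficient $\binom{N-1-y}{x}$ as in the axis picture above; it is not the asker's actual program):

import numpy as np
from math import comb

N = 50
vals = np.zeros((N, N))
for y in range(N):
    for x in range(N):
        if x <= N - 1 - y:                             # inside the triangle
            vals[y, x] = comb(N - 1 - y, x)

m, M = vals.min(), vals.max()
linear = (vals - m) / (M - m) * 255                    # first attempt: mostly black
logscale = np.log1p(vals - m) / np.log1p(M - m) * 255  # the log-scaled version

# e.g. with Pillow: Image.fromarray(logscale.astype(np.uint8)).save("pascal.png")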



Yet something was bugging me: as I increased the number of rows, I noticed that a curve was being drawn:



size 50×50: [image]

size 100×100: [image]

size 150×150: [image]



I can't try really high numbers, as my computer isn't good enough, nor is the software I use.



Is there something behind that 'curve'? If so, what curve would it be?



Could someone explain why I get such results?




Thank you.






Progress



We're looking at the level curves of $\ln\binom{N-y}{x}$, $N\in\mathbb{N}^*$.



By Stirling, as @TedShifrin remarked, $\ln(n!)\sim n\ln(n)$; therefore $\ln\binom{N-y}{x}\sim (N-y)\ln(N-y)-x\ln(x)-(N-y-x)\ln(N-y-x)$, which seems to give us nice curves (cf. his answer).




Is there an equation $y=f(x)$ for those curves?


Answer



OK, at long last: For any $N\in\Bbb N$, consider $\binom{N-y}x$, $0\le x,y\le N$. This indicates the intensity at the point $(x,y)$ in the graph, and we are, in fact, as @JackM suggested, looking at its level curves.



Using Stirling's approximation, $\log(n!)\sim n\log n-n$, we consider the plot of level curves of $(N-y)\log(N-y)-x\log x-(N-y-x)\log(N-y-x)$ — note that the linear terms cancel. Here is a Mathematica plot for $N=100$:



[image: contour plot of the level curves for $N=100$]
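For readers without Mathematica, an equivalent matplotlib sketch (my addition, assuming numpy and matplotlib) of the same level curves:

import numpy as np
import matplotlib.pyplot as plt

N = 100
x = np.linspace(0.01, N, 400)
y = np.linspace(0.01, N, 400)
X, Y = np.meshgrid(x, y)

def xlogx(t):
    # t*log(t), defined as 0 for t <= 0 to avoid invalid values
    return np.where(t > 0, t * np.log(np.where(t > 0, t, 1.0)), 0.0)

Z = xlogx(N - Y) - xlogx(X) - xlogx(N - Y - X)
Z = np.where(N - Y - X >= 0, Z, np.nan)   # restrict to the triangle

plt.contour(X, Y, Z, levels=20)
plt.gca().invert_yaxis()                  # image coordinates: y grows downward
plt.xlabel("x"); plt.ylabel("y")
plt.show()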


Wednesday 30 January 2019

discrete mathematics - Using a Direct Proof to show that two integers of same parity have an even sum?

I seem to be having a lot of difficulty with proofs and wondered if someone can walk me through this. The question out of my textbook states:





Use a direct proof to show that if two integers have the same parity, then their sum is even.




A very similar example from my notes is as follows: Use a direct proof to show that if two integers have opposite parity, then their sum is odd. This led to:



Proposition: The sum of an even integer and an odd integer is odd.



Proof: Suppose a is an even integer and b is an odd integer. Then by our definitions of even and odd numbers, we know that integers m and n exist so that a = 2m and b = 2n+1. This means:




a+b = (2m)+(2n+1) = 2(m+n)+1 = 2c+1 where c=m+n is an integer by the closure property of addition.



Thus it is shown that a+b = 2c+1 for some integer c so a+b must be odd.



----------------------------------------------------------------------------



So then, for the proof that two integers of the same parity have an even sum, I have thus far:



Proposition: The sum of 2 even integers is even.




Proof: Suppose a is an even integer and b is an even integer. Then by our definitions of even numbers, we know that integers m and n exist so that a=2m and b=2m???
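Since the post breaks off here, a hedged completion sketch (my addition, mirroring the worked example above): the two integers need separate witnesses, so write $a=2m$ and $b=2n$ for integers $m$ and $n$. Then
$$a+b = 2m + 2n = 2(m+n) = 2c,$$
where $c=m+n$ is an integer by the closure property of addition, so $a+b$ is even. The case of two odd integers is analogous: $a=2m+1$, $b=2n+1$ gives $a+b=2(m+n+1)$.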

combinatorics - A simple finite combinatorial sum I found, that seems to work, would have good reasons to work, but I can't find in the literature.




I was doing a consistency check for some calculations I'm performing for my master's thesis (roughly, about a problem in discrete Bayesian model selection), and it turns out that my choice of priors is consistent only if this identity is true:



$$\sum_{k=0}^{N}\left[ \binom{k+j}{j}\binom{N-k+j}{j}\right] = \binom{N+2j+1}{2j+1}$$



Now, this seems to work for small numbers, but I have searched for it a lot in the literature and can't find it (I have a physics background, so my knowledge of the proper "literature" is probably the problem here). Nor can I prove it!
Has any of you seen this before? Can this be rewritten equivalently in some more commonly seen way? Can it be proven right... or wrong?
Thanks in advance! :)


Answer



Your identity is correct. I don't know a reference offhand, but here is a proof.




The right side, $\binom{N+2j+1}{2j+1}$, is the number of bitstrings of length $N+2j+1$ consisting of $N$ zeroes and $2j+1$ ones.



The sum on the left counts the same set of bitstrings. Namely, for $0\le k\le N$, the term $\binom{k+j}j\binom{N-k+j}j$ is the number of those bitstrings in which the middle one, with $j$ ones on either side, is in the $k+j+1^\text{st}$ position; i.e., it has $k$ zeroes and $j$ ones to the left, $N-k$ zeroes and $j$ ones to the right.




P.S. I found your identity in László Lovász, Combinatorial Problems and Exercises, North-Holland, 1979 (the first edition), where it is Exercise 1.42(i) on p. 18, with hint on p. 96 and solution on p. 172. Lovász gives the identity in the following (more general) form:
$$\sum_{k=0}^m\binom{u+k}k\binom{v-k}{m-k}=\binom{u+v+1}m.$$
If we set $m=N$, $u=j$, $v=N+j$, this becomes
$$\sum_{k=0}^N\binom{j+k}k\binom{N+j-k}{N-k}=\binom{N+2j+1}N$$
which is plainly equivalent to your identity
$$\sum_{k=0}^N\binom{k+j}j\binom{N-k+j}j=\binom{N+2j+1}{2j+1}.$$
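For what it's worth, a brute-force check of the identity for small parameters (my sketch, in Python):

from math import comb

lhs = lambda N, j: sum(comb(k + j, j) * comb(N - k + j, j) for k in range(N + 1))
rhs = lambda N, j: comb(N + 2 * j + 1, 2 * j + 1)

assert all(lhs(N, j) == rhs(N, j) for N in range(20) for j in range(10))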



elementary number theory - If $p \neq 5$ is an odd prime, prove that either $p^2+1$ or $p^2-1$ is divisible by $10$?




I was able to find a solution to this problem, but only by using a couple of extra tools that appear later in the book$^{1}$. So far the book has only covered basic divisibility, $\gcd$, and the fundamental theorem of arithmetic; it did not cover modular arithmetic, and although we did cover the division algorithm, we did not cover divisibility rules (e.g. if a number ends in $5$ or $0$, then it is divisible by $5$). Is there any way of proving this with only the above tools? (I will point out what I used from future chapters in my solution.)



My Solution



Suppose $10 \nmid p^2-1 = (p+1)(p-1)$. Then $5 \nmid (p+1)$ and $5 \nmid (p-1)$.



Odd primes can only have last digits of $1, 3, 7, 9$ (I used the divisibility rule that a number ending in $0$ or $5$ is divisible by $5$, which is in the next chapter). Since $5 \nmid (p+1)$ and $5 \nmid (p-1)$, the last digit of $p$ is either $3$ or $7$. If we write $p$ as $10n+3$ or $10n+7$, then square and add $1$, we get a multiple of $10$. (The fact that any integer with a last digit of $k$ can be written as $10n+k$ is also something from a future chapter)







$^{1}$ Elementary Number Theory by David Burton, 6th ed., Section 3.1, #10


Answer



If $p\neq 5$ is an odd prime, its square $p^2$ is also odd, thus $p^2-1$ and $p^2+1$ are both even.



Now, since an odd prime $p\neq 5$ must (as you mention in your post) be: $$p\equiv1,3,7 \textrm{ or }9 \mod 10$$ its square will be
$$
p^2\equiv1,-1,-1\textrm{ or }1 \mod 10
$$
which answers your question.


probability - CDF and PDF of semaphore waiting time



Imagine we have a semaphore that alternates every 40 seconds between green and red.



Waiting time is 0 when the semaphore is green, and when it is red it is the remaining time until it turns green.



I want to model the distribution of waiting times on this semaphore.



Starting with the CDF I have:




$$
F(x) =
\begin{cases}
0 && \text{if } x < 0\\
0.5 && \text{if } x = 0 && \textit{half the time we don't need to wait}\\
0.5 + \frac{0.5}{40} x && \text{if } 0 < x \le 40 && \textit{all waiting times in } (0,40] \textit{ are equally likely}\\
1 && \text{if } x > 40
\end{cases}
$$




Is the PDF of this distribution given by the following function?



$$
PDF(x) =
\begin{cases}
0 && \text{if } x < 0\\
0.5 && \text{if } x = 0\\
\frac{0.5}{40} && \text{if } 0 < x \le 40\\
0 && \text{if } x > 40
\end{cases}
$$



And is the expected waiting time at this semaphore given by:



\begin{align*}
\int_0^{40} x f(x)\, dx = \int_0^{40} x \cdot \frac{0.5}{40}\, dx = 10
\end{align*}


Answer



Posting Henry's answer here for future reference.




This distribution's density is not defined at $x=0$, but instead we have a point probability $P(X=0) = 0.5$.



The expected value is calculated with a mix of discrete and continuous calculation:



\begin{align*}
E[\text{waiting time}] &= \int_0^{40} x f(x)\, dx + 0 \cdot P(X=0)\\
&= \frac{0.5}{40} \cdot \frac{x^2}{2} \bigg|_0^{40} + 0 \\
&= \frac{0.5}{40}\cdot\frac{40^2}{2}\\
&= 10
\end{align*}
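A small Monte Carlo sketch (my addition, assuming a 40 s green / 40 s red cycle with a uniformly random arrival time) confirms both the point mass and the mean:

import random

def waiting_time():
    t = random.uniform(0, 80)            # uniform arrival within an 80 s cycle
    return 0.0 if t < 40 else 80 - t     # green: no wait; red: remaining red time

samples = [waiting_time() for _ in range(1_000_000)]
print(sum(s == 0 for s in samples) / len(samples))   # ~0.5, i.e. P(X = 0)
print(sum(samples) / len(samples))                   # ~10, the expected wait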


Tuesday 29 January 2019

calculus - How to evaluate $\int_{-\infty}^{\infty}\frac{x^2e^x}{\left(1 + e^x\right)^2}dx$




How to evaluate the following integral:



$$\int_{-\infty}^{\infty}\frac{x^2e^x}{\left(1 + e^x\right)^2}dx$$



So far I know that this function is even, so we can take $\int_{0}^{\infty}\frac{x^2e^x}{\left(1 + e^x\right)^2}dx$ or $\int_{-\infty}^{0}\frac{x^2e^x}{\left(1 + e^x\right)^2}dx$ and then multiply by 2.



If we substitute $x = \ln z$ then $\int_{-\infty}^{0}\frac{x^2e^x}{\left(1 + e^x\right)^2}dx = \int_{0}^{1}\frac{\ln^2z}{\left(1 + z\right)^2}dz$



And I don't know what to do next.


Answer




Since the function is even, the integral $I$ is



$$I=2\int_0^\infty\frac{x^2e^{-x}}{(1+e^{-x})^2}\,dx.$$



Now, using that



$$\frac1{(1+x)^2}=\sum_{n\geq1}n(-1)^{n-1}x^{n-1}$$



it follows that




$$I=2\sum_{n\geq1}n(-1)^{n-1}\underbrace{\int_0^\infty x^2e^{-nx}\,dx}_{2/n^3}
=4\underbrace{\sum_{n\geq1}\frac{(-1)^{n-1}}{n^2}}_{\pi^2/12}
=\frac{\pi^2}3.$$



Sum and integral can be interchanged by absolute convergence.



Edit:



$$\int_0^\infty x^2e^{-nx}\,dx
\underbrace{=}_{y=nx}\frac1{n^3}\underbrace{\int_0^\infty e^{-y}y^2\,dy}_{\text{gamma function }\Gamma(3)=2}

=\frac{2}{n^3}$$



$$\sum_{n\geq1}\frac{(-1)^{n-1}}{n^2}=
\sum_{n\text{ odd}}\frac1{n^2}-
\sum_{n\text{ even}}\frac1{n^2}
=\sum_{\text{all }n}\frac1{n^2}-2\sum_{n\text{ even}}\frac1{n^2}
=\zeta(2)-2\sum_{n\geq1}\frac1{(2n)^2}
=\zeta(2)-\frac12\zeta(2)=\frac{\pi^2}{12}$$
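As a numerical sanity check (my addition, assuming scipy), using the even-function form from the first line of the answer to avoid overflow of $e^x$:

import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: 2 * x**2 * np.exp(-x) / (1 + np.exp(-x))**2, 0, np.inf)
print(val, np.pi**2 / 3)   # both ~3.2898681...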


integration - Why is continuity of $X$ needed for $\int_{g^{-1}(y)}^\infty f_X(x) \, dx = 1-F_X(g^{-1}(y))$?



Let $X$ be a random variable and $Y=g(X)$




Define
$$\tag{1}
\chi = \{x: f_X(x)>0\}\quad \text{and}\quad \mathcal{Y} = \{y:y=g(x) \text{ for some } x \in \chi\}
$$



Define $g^{-1}(y) = \{x\in \chi:g(x) = y\}$



Define: A random variable $X$ is continuous if $F_X(x)$ is a continuous function of $x$.





My question is: how come, in the theorem below, the statement in (b) requires $X$ to be a continuous random variable, but the statement in (a) does not?




The relevant theorem is (Theorem 2.1.3 in Casella and Berger 2nd Edition)




Let $X$ have cdf $F_X(x)$, let $Y=g(X)$, and let $\chi$ and $\mathcal{Y}$ be defined as in (1)




  • (a) If $g$ is an increasing function on $\chi$, $F_Y(y) = F_X(g^{-1}(y))$ for $y\in \mathcal{Y}$



  • (b) If $g$ is a decreasing function on $\chi$ and $X$ is a continuous random variable, $F_Y(y) = 1-F_X(g^{-1}(y))$ for $y\in\mathcal{Y}$








Another way of stating what I am asking is that, prior to stating this theorem, Casella and Berger state




if $g(x)$ is an increasing function, then using the fact that $F_Y(y) = \int_{x\in\chi : g(x)\leq y} f_X(x)dx$, we can write

$$
F_Y(y) = \int_{x\in\chi : g(x)\leq y} f_X(x) \, dx = \int_{-\infty}^{g^{-1}(y)} f_X(x) \, dx = F_X(g^{-1}(y))
$$




If $g(x)$ is decreasing, then we have




$$
F_Y(y) = \int_{g^{-1}(y)}^\infty f_X(x) \, dx = 1-F_X(g^{-1}(y))

$$
"The continuity of $X$ is used to obtain the second equality








My question (restated) is: How come, when $g(x)$ is an increasing function we do not need to use continuity of $X$, but we do for the case when $g(x)$ is decreasing?





  • (A side question; I will accept an answer so long as it answers the above question): this is continuity of the random variable, but the integral uses the PDF. What is the relation between continuity of $X$ and its pdf? (Specifically, I think there may be some strangeness if $F_X$, the CDF of $X$, is continuous but not differentiable.)




What came to my mind was the Fundamental Theorem of Calculus, but I think there is a version of it that doesn't require continuity of $f$? Plus, here we have that $X$ is continuous, if that matters -- I'm not sure.


Answer



$$
\int_{g^{-1}(y)}^\infty f_X(x) \, dx = 1-F_X\left(g^{-1}(y)\right) \text{ ?}
$$
We have:

$$
1-F_X(g^{-1}(y)) = 1 - \Pr(X\le g^{-1}(y)) = \Pr\left(X>g^{-1}(y)\right)
$$
We may consider continuity of $F$ at $g^{-1}(y)$ or continuity of $F$ at points greater than $g^{-1}(y).$ Nothing about continuity at points less than $g^{-1}(y)$ can matter here.



In the first place $$\Pr(a< X < b) = \int_a^b f_X(x)\,dx\tag 1$$ only if $X$ has a density function $f_X,$ and that in itself requires continuity of $F_X$ (and in fact requires something more than just continuity). If $\Pr(X = c)>0,$ where $c$ is some number between $a$ and $b,$ then line $(1)$ above is not true of any function in the role of $f.$



However, statement $(b)$ of the theorem does not mention integration of any density function. The statement is in effect $\Pr(Y\le y) = 1- \Pr(X\le g^{-1}(y))$ if $F_X$ is continuous.



Cumulative distribution functions are non-decreasing. The only kind of discontinuity that a non-decreasing function can have is a jump. A jump in $F_X$ at $g^{-1}(y)$ would mean $\Pr(X = g^{-1}(y))>0.$ If that happens then

\begin{align}
& \Pr(Y\le y) = \Pr(Y=y) + \Pr(Y<y) \\[10pt]
= {} & \Pr(X=g^{-1}(y)) + \Pr(X>g^{-1}(y)) \\[10pt]
= {} & \Pr(X=g^{-1}(y)) + \int_{g^{-1}(y)}^\infty f_X(x)\,dx.
\end{align}
If the first term in the last line is positive rather than zero, then equality between the second term in the last line and $\Pr(Y\le y)$ is not true.



But now suppose it had said $\Pr(Y\ge y).$ Then we would have
$$
\Pr(Y\ge y) = \Pr(X\le g^{-1}(y)) = F_X(g^{-1}(y)).

$$
The difference results from the difference between $\text{“}<\text{''}$ and $\text{“} \le \text{''}$ in the definition of the c.d.f., which says $F_X(x) = \Pr(X\le x)$ and not $F_X(x) = \Pr(X<x).$

As for the relationship between continuity and density functions, that is more involved. The Cantor distribution is a standard example, defined like this: A random variable $X$ will be in the interval $[0,1/3]$ or $[2/3,1]$ according to the result of a coin toss; then it will be in the upper or lower third of the chosen interval according to a second coin toss; then in the upper or lower third of that according to a third coin toss, and so on.



The c.d.f. of this distribution is continuous because there is no individual point between $0$ and $1$ that gets assigned positive probability.



But notice that there is probability $1$ assigned to a union of two intervals of total length $2/3,$ then probability $1$ assigned to a union of intervals that take up $2/3$ of that union of intervals, thus $4/9$ of $[0,1],$ then there is probability $1$ assigned to a set taking up $2/3$ of that space, thus $(2/3)^3 = 8/27,$ and so on. Thus there is probability $1$ that the random variable lies within a certain set whose measure is $\le (2/3)^n,$ no matter how big an integer $n$ is. The measure of that set must therefore be $0.$ If you integrate any function over a set whose measure is $0,$ you get $0.$ Hence there can be no function $f$ such that for every measurable set $A\subseteq[0,1]$ we have
$$
\Pr(X\in A) = \int_A f(x)\,dx,

$$
i.e. there can be no density function.



Thus the Cantor distribution has no point masses and also no probabilities that can be found by integrating a density function.



Thus existence of a density function is a stronger condition than mere continuity of the c.d.f.
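To make the coin-toss construction concrete, here is a small Python sketch (my illustration of the construction described above) that samples from the Cantor distribution:

import random

def cantor_sample(depth=40):
    # each toss picks the lower or upper third: a base-3 digit of 0 or 2
    return sum(random.choice((0, 2)) / 3**k for k in range(1, depth + 1))

samples = sorted(cantor_sample() for _ in range(100_000))
# plotting samples against rank / len(samples) traces the continuous
# Cantor c.d.f. (the "devil's staircase"); no density function exists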


calculus - Prove that $\int\limits_0^\pi \frac{\sin\left(xt\right)}{t}\, \mathrm dt$ is continuous



How can I prove that this function is continuous? $$
f\left( x \right) = \int\limits_0^\pi {\frac{{\sin \left( {xt} \right)}}
{t} \mathrm dt}

$$

Some hint?
Don't consider the zero at the endpoint of the integration interval; just take it as a limit: $$
f\left( x \right) = \lim_{\varepsilon \to 0^{+}} \int\limits_\varepsilon ^\pi {\frac{{\sin \left( {xt} \right)}}
{t} \mathrm dt}
$$

How can I do it? DX!


Answer



First of all, observe that
$$

\lim_{t\to0}\frac{\sin(x\,t)}{t}=x\ ,
$$
so that the integral exists as a bona fide Riemann integral. Next, given $x,y\in\mathbb{R}$,
$$
|f(x)-f(y)|\le\int_0^{\pi}\frac{|\sin(x\,t)-\sin(y\,t)|}{t}\,dt.
$$
Now use the inequality $|\sin a-\sin b|\le\dots$ to conclude that $f$ is continuous.


Monday 28 January 2019

Functional equation basic

Let $f$ be a function defined on the set $\Bbb N$. If $f(xy)=f(x)+f(y)$ and $f(2)=9$, then find $f(3)$; the given answer is $7$. Please explain how to get it.

number theory - Bezout's Identity proof and the Extended Euclidean Algorithm

I am trying to learn the logic behind the Extended Euclidean Algorithm and I am having a really difficult time understanding all the online tutorials and videos out there. To make it clear, though, I understand the regular Euclidean Algorithm just fine. This is my reasoning for why it works:




If $g = \gcd(a,b)$, then $gi = a$ and $gj = b$ with $\gcd(i,j)=1$. If you subtract the equations, then $g(i-j) = a-b$, which means $g$ divides $a-b$ too.



This implies that $\gcd(a,b) = \gcd(a-kb,b)$, and the largest useful $k$ is the one just before $a - kb$ would go negative, namely $k = \lfloor a/b \rfloor$; the value of $a-\lfloor a/b \rfloor b$ is the same as $a$ mod $b$.



But I do not understand the Extended algorithm or the Identity.




  1. Why is there always an $x,y$ such that $ax + by = \gcd(a,b)$ for nonzero (positive?) $a,b$? I don't understand the intuition behind this claim.


  2. I also don't understand how the Extended Euclidean algorithm works at all. Is it correct to say that the Extended Euclidean algorithm is what solves Bezout's Identity? It returns nonzero (positive?) $x,y$ such that $ax + by = \gcd(a,b)$?



  3. Is the idea behind modular inverses included here too? If I want to solve for the inverse of $a$ modulo $m$ then this is the same as solving for $x$ in $ax = 1 \bmod m$ for known integers $a,m$, the same as $ax = 1 - my$, the same as $ax + my = 1$, so this is like using the Extended algorithm on $a,m$ and checking if the gcd is equal to $1$. If so, then the answer is $x$. Is my understanding correct?
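Here is a minimal Python sketch (my own illustration, not from the original post) of the extended Euclidean algorithm together with the modular-inverse application from item 3; the function names are hypothetical:

def extended_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # from b*x + (a % b)*y = g and a % b = a - (a // b)*b
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("a has no inverse modulo m")
    return x % m

For example, extended_gcd(240, 46) returns (2, -9, 47), and indeed $240\cdot(-9) + 46\cdot 47 = 2 = \gcd(240,46)$.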


real analysis - Calculate: $\lim_{n\to\infty} \int_{0}^{\pi/2}\frac{1}{1+x\tan^{n} x}\,dx$



I'm supposed to work out the following limit:




$$\lim_{n\to\infty} \int_{0}^{\pi/2}\frac{1}{1+x \left( \tan x \right)^{n} }dx$$



I'm searching for some resonable solutions. Any hint, suggestion is very welcome. Thanks.


Answer



Note that the integrand is bounded in $[0,\pi/2]$, so if $$\lim_{n\to \infty} \frac{1}{1+x\tan^nx}$$ exists a.e. then we may apply the Dominated Convergence Theorem to show $$\lim_{n\to \infty} \int_0^{\pi \over 2}\frac{1}{1+x\tan^nx}dx = \int_0^{\pi \over 2}\lim_{n\to \infty} \frac{1}{1+x\tan^nx}dx.$$



If $x<\pi/4$ then the integrand converges to 1, and if $x>\pi/4$ then it converges to 0. Thus we have the integral equals
$$
\int_0^{\pi \over 4} 1dx + \int_{\pi \over 4}^{\pi \over 2} 0dx = \frac{\pi}{4}.
$$



Sunday 27 January 2019

calculus - Simpler way to compute a definite integral without resorting to partial fractions?



I found the method of partial fractions very laborious to solve this definite integral :
$$\int_0^\infty \frac{\sqrt[3]{x}}{1 + x^2}\,dx$$



Is there a simpler way to do this ?


Answer



Perhaps this is simpler.



Make the substitution $\displaystyle x^{2/3} = t$. Giving us




$\displaystyle \frac{2 x^{1/3}}{3 x^{2/3}} dx = dt$, i.e. $\displaystyle x^{1/3} dx = \frac{3}{2} t dt$



This gives us that the integral is



$$I = \frac{3}{2} \int_{0}^{\infty} \frac{t}{1 + t^3} \ \text{d}t$$



Now make the substitution $t = \frac{1}{z}$ (and rename $z$ back to $t$) to get



$$I = \frac{3}{2} \int_{0}^{\infty} \frac{1}{1 + t^3} \ \text{d}t$$




Add them up, cancel the $\displaystyle 1+t$, write the denominator ($\displaystyle t^2 - t + 1$) as $\displaystyle (t+a)^2 + b^2$ and get the answer.
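Carrying that last step out gives $I=\pi/\sqrt{3}$ for the original integral; a quick numerical check (my addition, assuming scipy):

import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.cbrt(x) / (1 + x**2), 0, np.inf)
print(val, np.pi / np.sqrt(3))   # both ~1.8137994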


group theory - Is the statement that $\operatorname{Aut}(\operatorname{Hol}(Z_n)) \cong \operatorname{Hol}(Z_n)$ true for every odd $n$?




Is the statement that $\operatorname{Aut}(\operatorname{Hol}(Z_n)) \cong \operatorname{Hol}(Z_n)$ true for every odd $n$? Here $\operatorname{Hol}$ stands for the holomorph of a group.



This problem appeared, when I stumbled upon the following MO question: https://mathoverflow.net/questions/258886/conditions-for-a-finite-group-to-be-isomorphic-to-its-automorphism-group



OP of that question provided us with the complete list of groups $G$ such that $|G| \leq 506$ and $\operatorname{Aut}(G) \cong G$. Among those groups there are some that look like the holomorphs of all cyclic groups of odd order up to $23$. Does anybody know whether that pattern continues, or is it just a coincidence?


Answer



Here's a proof, which is really an expansion of my comment above.



Let $A=\langle a\rangle$ be a cyclic group of order $n$, with $n$ odd. Let $K=Aut(A)$, and consider the holomorph $G=A\rtimes K$. Because $n$ is odd, $Z(G)$ is trivial: because the inversion map is in $K$, we see that $C_G(a)=A$, and no element of $A$ is central. Also, since $K$ is abelian, we see $A=[G,G]$.




Now let $\phi$ be an automorphism of $G$. Then $\phi(A)=A$, and so there exists $k\in K$ such that $\phi(a)=a^k$. Now $H=\phi(K)$ is a self-centralizing subgroup of $G$ that is a complement to $A$. Because it's a complement, for every $g\in K$, there's a unique element of the form $a^?g\in H$. In particular, consider $\iota\in K$, the inversion map. If $a^r\iota\in H$, then setting $m=-r(n+1)/2$, it is easy to check that $\iota\in a^mHa^{-m}$. By looking at $[\iota,ga^s]$, we see that $C_G(\iota)=K$, and so $a^mHa^{-m}=K$. Thus $\phi$ acts on $G$ exactly like conjugation by $ka^m$, so $\phi$ is inner. Combined with the triviality of $Z(G)$, we see $G$ is complete.



Edit: I might have glossed over one too many details in the end above. Let $\psi\in Aut(G)$ be conjugation by $ka^m$, and let $\alpha=\phi\psi^{-1}$. Then we've shown $\alpha$ fixes $A$ pointwise, and $K$ setwise. But then for any $g\in K$, we have



\begin{align}
a^g &= \alpha(a^g)\\
&= a^{\alpha(g)}
\end{align}
and thus $g$ and $\alpha(g)$ are two automorphisms of $A$ with the same action, meaning $\alpha(g)=g$. Thus $\alpha$ fixes $K$ pointwise, and since $G=AK$, $\alpha$ is the identity map.


algebra precalculus - Splitting / rearranging sigma sums



I am struggling to understand the concept of sigma sum rearrangement. In fact, I don't even know what to call it. That being said, if anybody can recommend sources for me to study this from or let me know what you call this type of problem so that I can study it more, that would be greatly appreciated.



Let $H_{k}$ represent the series of harmonic numbers. Then
\begin{align}

(1&.) \quad \sum\limits_{1 \leq j \leq k} \frac{1}{j} = 1+ \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{k} \\
(2&.) \quad \sum\limits_{1 \leq j \leq k} \frac{1}{j} = H_{k} \\
\end{align}



Now let:
\begin{align}
(3&.) \quad \sum\limits_{1 \leq k \leq n} kH_{k} \\
(4&.) \quad \sum\limits_{1 \leq k \leq n} k \sum\limits_{1 \leq j \leq k} \frac{1}{j} \\
(5&.) \quad \sum\limits_{1 \leq j \leq k \leq n} k \frac{1}{j} \\
(6&.) \quad \sum\limits_{1 \leq j \leq n} \frac{1}{j} \sum\limits_{j \leq k \leq n} k

\end{align}



Equation $(4.)$ is just a simple substitution but then I begin to get confused. I am used to the following notation:
\begin{align}
\sum_{j=1}^{k} \frac{1}{j}
\end{align}



Thus, I would represent $(4.)$ as:
\begin{align}
\sum_{k=1}^{n} k \sum_{j=1}^{k} \frac{1}{j}

\end{align}



I am thus confused by equation $(5.)$ as I don't know how to represent it in my usual way.



Now to my real question. Equation $(6.)$ contains two visual changes:
1. $\frac{1}{j}$ and $k$ swap places.
2. The indices change from $1 \leq k \leq n$ and $1 \leq j \leq k$ to $1 \leq j \leq n$ and $j \leq k \leq n$.



I'm not sure how these indices are changed? Can anyone explain this to me, please? Is there a general method for this?



Thanks!


Answer




If you look closely, the change from $(5.)$ to $(6.)$ is the reverse of what happens from $(4.)$ to $(5.)$. Let's take a look at what's happening to the indices with an example.



Let $n=3$, and we'll figure out what pairs of $(j,k)$ are valid for the index set in each equation.



In equation $4$, going through in order, we have $(1,1),(1,2),(2,2),(1,3),(2,3),(3,3)$. As long as we have those six pairs, all the sums will be the same.



Going from equation $4$ to equation $5$ is an application of the distributive law. If you wrote it out, it would look like: $$1(\frac{1}{1})+2(\frac{1}{1}+\frac{1}{2})+3(\frac{1}{1}+\frac{1}{2}+\frac{1}{3}) = 1\cdot\frac{1}{1} +2\cdot\frac{1}{1}+2\cdot\frac{1}{2} +3\cdot\frac{1}{1}+3\cdot\frac{1}{2}+3\cdot\frac{1}{3}$$



In equation $6$, the indices are the same again, just in a different order: $(1,1),(1,2),(1,3),(2,2),(2,3),(3,3)$




And everything is distributed by fractions instead of coefficients:



$$1\cdot\frac{1}{1} +2\cdot\frac{1}{1}+2\cdot\frac{1}{2} +3\cdot\frac{1}{1}+3\cdot\frac{1}{2}+3\cdot\frac{1}{3}= \frac{1}{1}(1 +2+3) +\frac{1}{2}(2+3) +\frac{1}{3}(3)$$



You can generalize what's happening for any $n$, but it's the same operations. As a general strategy, when notation gets difficult, expand it by trying small numbers or simple examples.
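If it helps, the interchange can also be checked mechanically for a small $n$ (a throwaway Python sketch using exact rationals):

from fractions import Fraction

n = 6
lhs = sum(k * sum(Fraction(1, j) for j in range(1, k + 1))
          for k in range(1, n + 1))          # equation (4)
rhs = sum(Fraction(1, j) * sum(range(j, n + 1))
          for j in range(1, n + 1))          # equation (6)
assert lhs == rhs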


calculus - Finding the limit of $\frac{n}{\sqrt[n]{n!}}$



I'm trying to find
$$\lim_{n\to\infty}\frac{n}{\sqrt[n]{n!}} .$$



I tried a couple of methods: Stolz, squeeze, d'Alembert.




Thanks!



Edit: I can't use Stirling.


Answer



Let $\displaystyle{a_n=\frac{n^n}{n!}}$. Then the power series $\displaystyle{\sum_{n=1}^\infty a_n x^n}$ has radius of convergence $R$ satisfying $\displaystyle{\frac{1}{R}=\lim_{n\to \infty} \sqrt[n]{a_n}=\lim_{n\to\infty}\frac{a_{n+1}}{a_n}}$, provided these limits exist. The first limit is what you're looking for, and the second limit is $\displaystyle{\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n}$.



Added: I just happened upon a good reference for the equality of limits above, which gives a more general result which is proved directly without reference to power series. Theorem 3.37 of Rudin's Principles of mathematical analysis, 3rd Ed., says:




For any sequence $\{c_n\}$ of positive numbers,

$$\liminf_{n\to\infty}\frac{c_{n+1}}{c_n}\leq\liminf_{n\to\infty}\sqrt[n]{c_n},$$
$$\limsup_{n\to\infty}\sqrt[n]{c_n}\leq\limsup_{n\to\infty}\frac{c_{n+1}}{c_n}.$$




In the present context, this shows that $$\liminf_{n\to\infty}\left(1+\frac{1}{n}\right)^n\leq\liminf_{n\to\infty}\frac{n}{\sqrt[n]{n!}}\leq\limsup_{n\to\infty}\frac{n}{\sqrt[n]{n!}}\leq\limsup_{n\to\infty}\left(1+\frac{1}{n}\right)^n.$$
Assuming you know what $\displaystyle{\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n}$ is, this shows both that the limit in question exists (in case you didn't already know by other means) and what it is.






From the comments: User9176 has pointed out that the case of the theorem above where $\displaystyle{\lim_{n\to\infty}\frac{c_{n+1}}{c_n}}$ exists follows from the Stolz–Cesàro theorem applied to finding the limit of $\displaystyle{\frac{\ln(c_n)}{n}}$. Explicitly,

$$\lim_{n\to\infty}\ln(\sqrt[n]{c_n})=\lim_{n\to\infty}\frac{\ln(c_n)}{n}=\lim_{n\to\infty}\frac{\ln(c_{n+1})-\ln(c_n)}{(n+1)-n}=\lim_{n\to\infty}\ln\left(\frac{c_{n+1}}{c_n}\right),$$
provided the latter limit exists, where the second equality is by the Stolz–Cesàro theorem.
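Numerically the convergence is easy to observe (my sketch; computing via lgamma avoids overflowing $n!$):

from math import exp, lgamma, log

a = lambda n: exp(log(n) - lgamma(n + 1) / n)    # n / (n!)^(1/n)
print([round(a(n), 5) for n in (10, 100, 10_000, 10**6)])
# approaches e = 2.71828...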


Saturday 26 January 2019

Proof about real numbers



Question is from Apostol's Vol. 1 One-variable calculus with introduction to linear algebra textbook.



Page 28, Exercise 1. If $x$ and $y$ are arbitrary real numbers with $x < y$, prove that there is at least one real $z$ satisfying $x < z < y$.

Any hints on how to approach the problem would be appreciated; other exercises seem to be similar, so if I solve this one I should be able to solve the others as well. Thank you in advance.


Answer



Consider $z=\frac{x+y}{2}$. Then show that $x \lt z \lt y$.



elementary set theory - existence of an injection from countable sets to uncountable sets



While reading the section on countability in Ralph Boas's book 'A Primer of Real Functions', I came across this sentence: 'Sets that cannot be counted can be thought of as "bigger" than those that can be counted.' This made me wonder whether the following is true: for any countable set $A$ and any uncountable set $B$, does there exist an injection from $A$ to $B$?




Now my conjecture obviously reduces to the existence of an injection from $\mathbb{N}$ to $B$ (as defined above), since any countable set has a bijection with the set of natural numbers. But I am unable to proceed.
Please help. Thanks.


Answer



We define $f\colon \Bbb N\to B$ by recursion - and a bit of choice:



Let $n\in\Bbb N$. Assume we have already defined $f(k)$ for all $k\in\Bbb N$ with $k<n$. Since the set $\left\{f(k) : k<n\right\}$ is finite while $B$ is uncountable, the difference $B\setminus\{f(k):k<n\}$ is nonempty, and we choose $f(n)$ to be one of its elements.

In the end, this defines an injective map $f\colon \Bbb N\to B$. (This map is certainly not surjective as $B$ was assumed uncountable)


Friday 25 January 2019

sequences and series - How to replace addition with multiplication to find the next integer value?



Sorry in advance for my lack of mathematical knowledge, I am very new to it.






Yesterday, I posed this question to myself:




"In a world without addition or subtraction, how could we derive the next value in the sequence of natural numbers from $1\to\infty$ with a step size of $1$?"



This led me to the idea of using multiplication to find the next value in a sequence. After analyzing the multipliers between consecutive natural values using:
$$
\frac{(n+1)}{n}
$$



I noticed that this sequence of multipliers starts at the high values $2$ and $1.5$, then converges to $1$.







My two questions:




  • Is it right to expect that the sequence of multipliers should follow a more predictable pattern?

  • Are there more elegant ways of producing the next natural number without addition or subtraction?


Answer



With the function $2^n$ and its inverse $\log_2$ available,
$n+1=\log_2(2\cdot2^n)$.
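As a toy check of this formula (my addition; the result is exact in floating point because powers of two are represented exactly):

from math import log2

succ = lambda n: log2(2 * 2**n)          # the "successor" without addition
print([succ(n) for n in range(1, 6)])    # [2.0, 3.0, 4.0, 5.0, 6.0]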



real and imaginary part of square root of any complex number




Let $k\in\mathbb{C}$. What are the real and imaginary parts of any complex number $x$ such that $x^2=k$?



My first idea was writing $k$ and $x$ in polar form: $x=r(\cos{\phi}+i\sin{\phi})$;$k=r'(\cos{\psi}+i\sin{\psi})$. Then use De Moivre's formula such that: $x^2=r^2(\cos{2\phi}+i\sin{2\phi})=r'(\cos{\psi}+i\sin{\psi})$.



Any hints on how to go on?



Another idea could be using roots of unity: we know what $x$ looks like when $x^n=1$.


Answer




Well, you just answered yourself.



If $r_1 e^{i\theta_1} = r_2 e^{i \theta_2} $ then $r_1=r_2 , \theta_1 = \theta_2 + 2\pi n $. That means in this case that



$$ r^2 = r' \Rightarrow r=\sqrt{r'}$$
$$ 2\phi = \psi + 2\pi n \Rightarrow \phi = \frac{\psi}{2} , \frac{\psi}{2} + \pi $$
Meaning the solutions will be $z= \pm \sqrt{r'}\,\left(\cos \frac{\psi}{2} + i \sin \frac{\psi}{2}\right)$


A number is prime if and only if ...



Prove that a number $p$ is prime if and only if the $\gcd(\text{numerator},\text{denominator})$ of all fractions of the form $$\frac{1}{p - 1}, \frac{2}{p - 2}, \frac{3}{p - 3}, \ldots, \frac{k}{p - k}, \ldots, \frac{(p - 1)/2}{ (p - 1) / 2 + 1}$$ equals $1$.




The proof in the forward direction by contradiction is simple, because it leads to a fraction with $\gcd \neq 1$. In the reverse direction I have (by contrapositive): $p$ composite (odd or even) implies there exists a fraction whose $\gcd$ is not $1$. It's true for even composites, because any even $k$ gives a $\gcd$ not equal to $1$, but for an odd composite I'm having trouble seeing how to prove it.



Anyone have any ideas about the reverse direction?


Answer



Suppose that $p\gt 4$ is not prime. Then there exist positive integers $a$ and $b$, with $1\lt a\le b\lt p$ such that $ab=p$. In particular, we have $a\le \frac{p-1}{2}$. To show this, we only need to show that $ab-1\ge 2a$, or equivalently that $a(b-2)\ge 1$. This is clear, since $b\gt 2$.



Finally, observe that $\gcd(a,p-a)=\gcd(a,ab-a)=a\ne 1$.



Remark: If $p$ is even, then $\frac{p-1}{2}$ is not an integer. So it seems likely that we are to assume that $p$ is odd. However, the argument goes through for even $p\gt 4$, using the fractions with numerator up to $\left\lfloor\frac{p-1}{2}\right\rfloor$. Note that if we interpret the product as being up to $\left\lfloor\frac{p-1}{2}\right\rfloor$, then the result is not correct for the composite number $p=4$.


real analysis - Closed-form of $\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}\Psi_3(n+1)=-\int_0^1\frac{\ln(1+x)\ln^3 x}{1-x}\,dx$



Does the following series or integral have a closed-form




\begin{equation}
\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}\Psi_3(n+1)=-\int_0^1\frac{\ln(1+x)\ln^3 x}{1-x}\,dx
\end{equation}





where $\Psi_3(x)$ is the polygamma function of order $3$.






Here is my attempt. Using equation (11) from Wolfram MathWorld:
\begin{equation}
\Psi_n(z)=(-1)^{n+1} n!\left(\zeta(n+1)-H_{z-1}^{(n+1)}\right)
\end{equation}
I got
\begin{equation}

\Psi_3(n+1)=6\left(\zeta(4)-H_{n}^{(4)}\right)
\end{equation}
then
\begin{align}
\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}\Psi_3(n+1)&=6\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}\left(\zeta(4)-H_{n}^{(4)}\right)\\
&=6\zeta(4)\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}-6\sum_{n=1}^\infty\frac{(-1)^{n+1}H_{n}^{(4)}}{n}\\
&=\frac{\pi^4}{15}\ln2-6\sum_{n=1}^\infty\frac{(-1)^{n+1}H_{n}^{(4)}}{n}\\
\end{align}
From the answers of this OP, the integral representation of the latter Euler sum is
\begin{align}

\sum_{n=1}^\infty\frac{(-1)^{n+1}H_{n}^{(4)}}{n}&=\int_0^1\int_0^1\int_0^1\int_0^1\int_0^1\frac{dx_1\,dx_2\,dx_3\,dx_4\,dx_5}{(1-x_1)(1+x_1x_2x_3x_4x_5)}
\end{align}
or another simpler form
\begin{align}
\sum_{n=1}^\infty\frac{(-1)^{n+1}H_{n}^{(4)}}{n}&=-\int_0^1\frac{\text{Li}_4(-x)}{x(1+x)}dx\\
&=-\int_0^1\frac{\text{Li}_4(-x)}{x}dx+\int_0^1\frac{\text{Li}_4(-x)}{1+x}dx\\
&=-\text{Li}_5(-1)-\int_0^{-1}\frac{\text{Li}_4(x)}{1-x}dx\\
\end{align}
I don't know how to continue; I am stuck. Could anyone here please help me find the closed form of the series, preferably by elementary means? Any help would be greatly appreciated. Thank you.







Edit :



Using the integral representation of polygamma function
\begin{equation}
\Psi_m(z)=(-1)^m\int_0^1\frac{x^{z-1}}{1-x}\ln^m x\,dx
\end{equation}
then we have
\begin{align}

\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}\Psi_3(n+1)&=-\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}\int_0^1\frac{x^{n}}{1-x}\ln^3 x\,dx\\
&=-\int_0^1\sum_{n=1}^\infty\frac{(-1)^{n+1}x^{n}}{n}\cdot\frac{\ln^3 x}{1-x}\,dx\\
&=-\int_0^1\frac{\ln(1+x)\ln^3 x}{1-x}\,dx\\
\end{align}
I am looking for an approach to evaluate the above integral without using residue method or double summation.


Answer



Edited: I have changed the approach as I realised that the use of summation is quite redundant (since the resulting sums have to be converted back to integrals). I feel that this new method is slightly cleaner and more systematic.




We can break up the integral into
\begin{align}

-&\int^1_0\frac{\ln^3{x}\ln(1+x)}{1-x}{\rm d}x\\
=&\int^1_0\frac{\ln^3{x}\ln(1-x)}{1-x}{\rm d}x-\int^1_0\frac{(1+x)\ln^3{x}\ln(1-x^2)}{(1+x)(1-x)}{\rm d}x\\
=&\int^1_0\frac{\ln^3{x}\ln(1-x)}{1-x}{\rm d}x-\int^1_0\frac{\ln^3{x}\ln(1-x^2)}{1-x^2}{\rm d}x-\int^1_0\frac{x\ln^3{x}\ln(1-x^2)}{1-x^2}{\rm d}x\\
=&\frac{15}{16}\int^1_0\frac{\ln^3{x}\ln(1-x)}{1-x}{\rm d}x-\frac{1}{16}\int^1_0\frac{x^{-1/2}\ln^3{x}\ln(1-x)}{1-x}{\rm d}x\\
=&\frac{15}{16}\frac{\partial^4\beta}{\partial a^3 \partial b}(1,0^{+})-\frac{1}{16}\frac{\partial^4\beta}{\partial a^3 \partial b}(0.5,0^{+})
\end{align}
After differentiating and expanding at $b=0$ (with the help of Mathematica),
\begin{align}
&\frac{\partial^4\beta}{\partial a^3 \partial b}(a,0^{+})\\
=&\left[\frac{\Gamma(a)}{\Gamma(a+b)}\left(\frac{1}{b}+\mathcal{O}(1)\right)\left(\left(-\frac{\psi_4(a)}{2}+(\gamma+\psi_0(a))\psi_3(a)+3\psi_1(a)\psi_2(a)\right)b+\mathcal{O}(b^2)\right)\right]_{b=0}\\

=&-\frac{1}{2}\psi_4(a)+(\gamma+\psi_0(a))\psi_3(a)+3\psi_1(a)\psi_2(a)
\end{align}
Therefore,
\begin{align}
-&\int^1_0\frac{\ln^3{x}\ln(1+x)}{1-x}{\rm d}x\\
=&-\frac{15}{32}\psi_4(1)+\frac{45}{16}\psi_1(1)\psi_2(1)+\frac{1}{32}\psi_4(0.5)+\frac{1}{8}\psi_3(0.5)\ln{2}-\frac{3}{16}\psi_1(0.5)\psi_2(0.5)\\
=&-12\zeta(5)+\frac{3\pi^2}{8}\zeta(3)+\frac{\pi^4}{8}\ln{2}
\end{align}
The relation between $\psi_{m}(1)$, $\psi_m(0.5)$ and $\zeta(m+1)$ is established easily using the series representation of the polygamma function.


Thursday 24 January 2019

combinatorics - Hockey-Stick Theorem for Multinomial Coefficients



Pascal's triangle has this famous hockey stick identity.
$$ \binom{n+k+1}{k}=\sum_{j=0}^k \binom{n+j}{j}$$

Wonder what would be the form for multinomial coefficients?


Answer



$$\binom{a_1+a_2+\cdots+a_t}{a_1,a_2,\cdots,a_t}=\sum_{i=2}^t \sum_{j=1}^{i-1} \sum_{k=1}^{a_i} \binom{ a_1+a_2+\cdots+a_{i-1}+k }{a_1,a_2,\cdots,a_j-1,\cdots,a_{i-1},k }$$


real analysis - Showing $\int_E f=\lim_{n\to\infty}\int_E f_n$ for all measurable $E$



The following is an exercise from Carothers' Real Analysis:




Suppose $f$ and $f_n$ are nonnegative, measurable functions, that $f=\lim_{n\to\infty} f_n$ and that $\int f=\lim_{n\to\infty}\int f_n<\infty$. Prove that $\int_E f=\lim_{n\to\infty}\int_E f_n$ for any measurable set $E$. (Hint: Consider both $\int_E f$ and $\int_{E^c} f$.) Give an example showing that this need not be true if $\int f=\lim_{n\to\infty}\int f_n=\infty$.





Attempt:



Using Fatou's Lemma, I can get that



$$\int_E f=\int_E \lim_{n\to\infty} f_n=\int_E\liminf_{n\to\infty} f_n\leq \liminf_{n\to\infty}\int_E f_n=\lim_{n\to\infty}\int_E f_n$$



However, I'm doubtful that this is correct since the hint in the book said to consider both $E^c$ and $E$. I know that if $E$ is measurable, then its complement is measurable as well. A small push in the right direction would be appreciated. Thanks.


Answer




Consider the following:



$$\int f = \int_E f + \int_{E^c} f \le \varliminf \int_E f_n + \varliminf \int_{E^c} f_n \le \varliminf\left(\int_E f_n + \int_{E^c} f_n\right) = \varliminf \int f_n = \int f$$


linear algebra - Inverse of an Integer Matrix

I found a problem on the
Open Problem Garden which asks about the conditions on a rectangular, full-rank, integer matrix such that its right inverse (given by: $A^T (AA^T)^{-1}$ ) is also an integer matrix. The rectangular matrix is constructed in the following way :



Let D be a square diagonal matrix (size $m \times m$) with integer elements $\geq$ 2 along the main diagonal (in order to ensure full rank and thus existence of a right inverse)



Let X be an integer matrix (size $m \times n$) with $n\geq m$.



Now, concatenate the matrices to make a new rectangular matrix M = [D X], giving it dimension $m \times (m+n)$. I am interested in the right inverse of this matrix M.




I have written code to test some matrices, and I have yet to find even one integral element, let alone an entire matrix. I've done some algebraic analysis on a general 2x4 matrix, and intuitively it looks as though some elements will never be a non-zero integer, but it is difficult to prove. If anyone has any advice on how to proceed or any insight, that would be great.



Edits: Clarified the characterization of the matrices in question. Renamed matrices for consistency.
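For reference, a small numpy sketch of the kind of experiment described (hypothetical code, not the poster's; it builds M = [D X] and tests whether the right inverse is integral):

import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
D = np.diag(rng.integers(2, 10, size=m))   # diagonal entries >= 2, full rank
X = rng.integers(-5, 6, size=(m, n))
M = np.hstack([D, X])                      # shape m x (m + n)

R = M.T @ np.linalg.inv(M @ M.T)           # right inverse: M @ R = I
print(np.allclose(M @ R, np.eye(m)))       # True
print(np.allclose(R, np.round(R)))         # integrality test (False in random trials like this)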

Wednesday 23 January 2019

real analysis - Question regarding Lebesgue Integrability in $sigma$ -finite spaces

I'm taking a course in measure theory and we defined integrability in a $\sigma$-finite space as follows: Suppose $\left(X,\mathcal{F},\mu\right)$ is a $\sigma$-finite measure space. A measurable function $f:X\to\mathbb{R}$ is said to be integrable on $X$ (denoted $f\in L^{1}\left(X,\mathcal{F},\mu\right)$) if for every collection $\left\{ X_{m}\right\} _{m=1}^{\infty}$ such that $X_{m}\uparrow X$, $X_{m}\in\mathcal{F}$ and $\mu\left(X_{m}\right)<\infty$, the following apply:




  1. $f$ is integrable on every set $A\subseteq X$ such that $\mu\left(A\right)<\infty$.


  2. The limit $\lim\limits_{m\to\infty}\int_{X_{m}}\left|f\right|d\mu$ exists and does not depend on the choice of $\left\{ X_{m}\right\} _{m=1}^{\infty}$.


  3. The limit $\lim\limits_{m\to\infty}\int_{X_{m}}f\,d\mu$ does not depend on the choice of $\left\{ X_{m}\right\} _{m=1}^{\infty}$.




If said conditions apply, then we define $\int_{X}f\,d\mu=\lim\limits_{m\to\infty}\int_{X_{m}}f\,d\mu$.



Now suppose $\mathcal{G}\subseteq\mathcal{F}$ is a $\sigma$-algebra on $X$. Let $f:X\to\mathbb{R}$ be a $\mathcal{G}$-measurable function such that $f\in L^{1}\left(X,\mathcal{G},\mu\right)$; is $f$ necessarily in $L^{1}\left(X,\mathcal{F},\mu\right)$?
Obviously $\mathcal{G}$-measurability implies $\mathcal{F}$-measurability, but what about integrability?




EDIT: It seems the construction of the integral we did is quite unorthodox, I'll elaborate further on the definitions: Suppose $\left(X,\mathcal{F},\mu\right)$ is a measure space and let $A\subseteq X$ be a subset of finite measure. We define a simple function $f:X\to\mathbb{R}$ to be any function taking a countable collection of real values $\left\{ y_{n}\right\} _{n=1}^{\infty}$. Denote $A_{n}=\left\{ x\in A\,|\, f\left(x\right)=y_{n}\right\}$. Assuming $f$ is measurable we say that $f$ is integrable on $A$ if the series ${\sum_{n=1}^{\infty}{\displaystyle y_{n}\mu\left(A_{n}\right)}}$ is absolutely convergent in which case we define: $$\int_{A}fd\mu={\displaystyle \sum_{n=1}^{\infty}}y_{n}\mu\left(A_{n}\right)$$
Furthermore, given any measurable function $f:X\to\mathbb{R}$ we say $f$ is integrable on $A$ if there is a sequence of simple functions (as defined) which are integrable on $A$ and converging uniformly to $f$ on $A$. In which case we define: $$\int_{A}fd\mu=\lim_{n\to\infty}\int_{A}f_{n}d\mu$$



Thanks in advance.

Tuesday 22 January 2019

calculus - Need some advice to solve this integral $\int\frac{\sin^2x}{1+\sin^2x}\,\mathrm dx$




I'm trying to use the substitution $t=\tan(x/2)$, but I don't get anywhere. I've tried $t=\tan(x)$ too. I appreciate your help.



$$\int\dfrac{\sin^2x}{1+\sin^2x}\mathrm dx$$


Answer



You can use the substitution $x=\arctan(t/2)$ and you will need the identity




$$ \sin( \arctan(t/2) ) = \frac{t}{\sqrt{t^2+4}} $$





to reach the form




$$ I= \int \frac{t^2}{(t^2+2)(t^2+4)}dt. $$




I think you can finish it now!


Monday 21 January 2019

calculus - Calculating Limits with the use of L'Hospital

I can't figure out the following limit. The answer is negative infinity, but I don't know how to get there using L'Hospital.
The limit is:



$$\lim_{x\to 0} \frac{x-\sin(2x)}{x^3}$$
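A hedged sketch of one route (my addition, since no answer was recorded here): a single application of L'Hospital already exposes the divergence,
$$\lim_{x\to 0} \frac{x-\sin(2x)}{x^3} = \lim_{x\to 0} \frac{1-2\cos(2x)}{3x^2},$$
where the numerator tends to $1-2=-1$ while the denominator tends to $0^+$, so the quotient diverges to $-\infty$.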

real analysis - Antiderivative of discontinuous function



I am having some confusion regarding the antiderivative of a function.



$$f(x) =
\left\{\begin{array}{ll}
-\frac{x^2}{2} + 4 & x \le 0 \\
\phantom{-} \frac{x^2}{2} + 2 & x > 0
\end{array} \right.
$$




Consider the domain $[-1, 2]$.
Clearly the function is Riemann integrable, as it is discontinuous at only a finite number of points. However, is there a function $g(x)$ such that $g'(x) = f(x)$ for all $x \in [-1,2]$?


Answer



There is no such function, because by Darboux's theorem (cf. http://en.wikipedia.org/wiki/Darboux's_theorem_(analysis) ), every derivative has to satisfy the intermediate value property, but $f$ does not.


calculus - Prove that $\arcsin(z)=\frac{\pi}{2}+i\log(z+i\sqrt{1-z^2})$



I'm stuck here:




Prove that $w=\arcsin(z)=\frac{\pi}{2}+i\log(z+i\sqrt{1-z^2})$




By using the definition of $z=\sin(w)$, I found that




$$\arcsin(z)=\frac{1}{i}\log(iz+\sqrt{1-z^2})$$



But where does that $\frac{\pi}{2}$ come from? How can I rearrange my expression to obtain $\frac{\pi}{2}+i\log(z+i\sqrt{1-z^2})$?



Thanks for your time.


Answer



In order to get to the final expression you need to use the log identity:



$$

\ln(a+b)=\ln(a)+\ln(1+\frac{b}{a})
$$



So in your case:



$$
\frac{1}{i}\left[\ln(iz+\sqrt{1-z^2})\right] = \frac{1}{i}\left[\ln(iz) + \ln\left(1+\frac{\sqrt{1-z^2}}{iz}\right)\right]
$$



We know that when $z>0$,




$$
\ln(iz)=\frac{i\pi}{2}+\ln(z)
$$



Also, by doing some algebra:



$
\ln\left(1+\frac{\sqrt{1-z^2}}{iz}\right) = \ln\left(1+\frac{i\sqrt{1-z^2}}{(-1)z}\right) = \ln\left(\frac{z}{z}-\frac{i\sqrt{1-z^2}}{z}\right)
$




$
= \ln\left(\frac{z-i\sqrt{1-z^2}}{z}\right)=\ln\left(z-i\sqrt{1-z^2}\right) - \ln(z)
$



Putting it all together:



$
\frac{1}{i}\left[\ln(z)+\frac{i\pi}{2}+\ln(z-i\sqrt{1-z^2}) -\ln(z)\right]
$




$
=\frac{\pi}{2}+\frac{i}{i^2}\ln(z-i\sqrt{1-z^2})
=\frac{\pi}{2}-i\ln(z-i\sqrt{1-z^2})
$



Doing some more algebra:



$
-i\ln(z-i\sqrt{1-z^2}) = i \ln\left(\left(\frac{1}{z-i\sqrt{1-z^2}}\right)\left(\frac{z+i\sqrt{1-z^2}}{z+i\sqrt{1-z^2}}\right)\right)

$



$
= i\ln\left(\frac{z+i\sqrt{1-z^2}}{z^2-i^2(1-z^2)}\right)=i\ln(z+i\sqrt{1-z^2})
$



Hence the expression is true:



$$
\arcsin(z)=\frac{\pi}{2}+i\ln(z+i\sqrt{1-z^2})

$$


Sunday 20 January 2019

sequences and series - Why does $\lim\limits_{n\to\infty} e^{-n}\sum_{i=1}^{n}\frac{n^i}{i!} = \frac{1}{2}$ and not $1$?




The limit

$$\lim_{n\to\infty} e^{-n}\sum_{i=1}^{n}\frac{n^i}{i!}$$
can be seen to be $\frac{1}{2}$, yet isn't the sum in this expression just going to be $\lim\limits_{n\to\infty}e^{n}$, making the limit 1?



I'm having trouble wrapping my head around why this isn't the case. Any help would be much appreciated.


Answer




  1. The problem with your reasoning is that the two terms, $e^{-n}$ and $\sum_{i=1}^n \frac{n^i}{i!}$, can't be analyzed separately. Notice that $e^{-n}$ approaches $0$, and the second term approaches $\infty$, so the limit of the product would be $\boldsymbol{0 \cdot \infty}$, an indeterminate form. A limit of the form $0 \cdot \infty$ might equal any real number, or might even equal $\infty$.


  2. It may be instructive to consider a different expression where some $n$s are replaced by $m$. The following limit can be evaluated as you say (I also made the sum start from $i = 0$ for simplicity):
    $$
    \lim_{n \to \infty} e^{-\color{red}{m}} \sum_{i=0}^{\color{blue}{n}} \frac{{\color{red}{m}}^i}{i!} = 1,

    $$

    because it is the product of limits, $e^{-m} \cdot e^m = 1$. And if we instead take the limit as $m \to \infty$, then we get
    $$
    \lim_{m \to \infty} e^{-m} \sum_{i=0}^n \frac{m^i}{i!} = 0,
    $$

    because the exponential beats the polynomial, and goes to $0$. In your problem, essentially, $m$ and $n$ are both going to $\infty$ at the same time, so we might imagine that the two possible results ($0$ and $1$) are "competing"; we don't know which one will win (and it turns out that the result is $\frac12$, somewhere in the middle).


  3. How can we show that your limit is $\frac12$? This is a difficult result; please take a look at this question for several proofs (thanks to TheSilverDoe for posting).



    In that question, the summation starts from $i=0$ instead of $i=1$. However, note that we can add $e^{-n}$ to your limit and it will not change (since $\lim_{n \to \infty} e^{-n} = 0$). So this gives
    $$

    \lim_{n \to \infty} \left(\left( e^{-n} \sum_{i=1}^n \frac{n^i}{i!} \right) + e^{-n} \right)
    = \lim_{n \to \infty} e^{-n} \sum_{i=\color{red}{0}}^n \frac{n^i}{i!}.
    $$



Saturday 19 January 2019

real analysis - Is there a bijection between $\mathbb{R}^2$ and $(0,1)$?




True or false: there exists a bijection between $\mathbb{R}^2$ and the open interval $(0, 1)$.




I think this is false, because $\mathbb{R}^2 \setminus \{0\}$ is connected, but if we identify $(0,1)$ with $\mathbb{R}$, then $\mathbb{R}\setminus\{0\}$ is not connected, and the continuous image of a connected set is connected.


Answer




The answer is True.



At this time I cannot provide the explicit bijection BUT the idea is this:




Schroder–Bernstein Theorem:



Assume there exists a $1–1$ function $f : X \rightarrow Y$ and another $1–1$ function $g : Y \rightarrow X$. Then there exists a $1–1$, onto function $h : X \rightarrow Y$ and hence $X ∼ Y$ .





Define $f:(0,1) \rightarrow (0,1)\times (0,1)$ by $$f(x)=(x,1/3)$$
Then $f$ is injective.



Define $g:(0,1)\times (0,1) \rightarrow (0,1)$ by $$g(x,y)=0.x_1y_1x_2y_2....$$
where $x=0.x_1x_2x_3....$ and $y=0.y_1y_2y_3....$ and where we make the convention that we always use the terminating form over the repeating $9's$ form when the situation arises.



Then $g$ is injective. (Prove this!)



Hence $$(0,1) \sim (0,1) \times (0,1)$$




We know $(0,1)$ and $\Bbb{R}$ are equivalent,via $$x \mapsto \tan \pi (2x-1)/2 $$



So, next map $\Bbb{R}^2$ bijectively onto the open unit square $(0, 1)\times (0,1)$ by mapping each $\Bbb{R}$ bijectively onto the open interval $(0, 1)$



Hence $$ (0, 1)\times (0,1) \sim \Bbb{R}^2 $$




Summary:



$$(0,1) \sim (0,1) \times (0,1) \sim \Bbb{R}^2$$





In addition, if you require the bijection to be continuous, then this is not true!


Friday 18 January 2019

Thursday 17 January 2019

combinatorics - Can the Basel problem be solved by Leibniz today?



It is well known that Leibniz derived the series
$$\begin{align}
\frac{\pi}{4}&=\sum_{i=0}^\infty \frac{(-1)^i}{2i+1},\tag{1}
\end{align}$$
but apparently he did not prove that
$$\begin{align}
\frac{\pi^2}{6}&=\sum_{i=1}^\infty \frac{1}{i^2}.\tag{2}

\end{align}$$
Euler did, in 1741 (unfortunately, after the demise of Leibniz). Note that this was also before the time of Fourier.



My question: do we now have the tools to prove (2) using solely (1) as the definition of $\pi$? Any positive/negative results would be much appreciated. Thanks!



Clarification: I am not looking for a full-fledged rigorous proof of (1)$\Rightarrow$(2). An estimate that (2) should hold, given (1), would qualify as an answer.


Answer



Note that proving $$\sum_{k=1}^{\infty} \dfrac1{k^2} = \dfrac{\pi^2}6 \,\,\,\,\,\, (\spadesuit)$$ is equivalent to proving $$\sum_{k=0}^{\infty} \dfrac1{(2k+1)^2} = \dfrac{\pi^2}{8} \,\,\,\,\,\, (\clubsuit)$$ The hope for proving $(\clubsuit)$ instead of $(\spadesuit)$ is that squaring $\dfrac{\pi}4$ gives $\dfrac{\pi^2}{16}$ and adding this twice gives us $\dfrac{\pi^2}8$. We will in fact prove that $$\sum_{k=-\infty}^{\infty} \dfrac1{(2k+1)^2} = \left(\sum_{k=-\infty}^{\infty} \dfrac{(-1)^k}{(2k+1)} \right)^2$$
Since we know $$\sum_{k=0}^{\infty} \dfrac{(-1)^k}{(2k+1)} = \dfrac{\pi}4$$ and $$\sum_{k=0}^{N} \dfrac{(-1)^k}{(2k+1)} = \sum_{k=-(N+1)}^{-1} \dfrac{(-1)^k}{(2k+1)},$$
we have that $$\sum_{k=-\infty}^{\infty} \dfrac{(-1)^k}{(2k+1)} = \dfrac{\pi}2$$

Square the above to get
\begin{align}
\left(\sum_{k=-N-1}^{k=N} \dfrac{(-1)^k}{(2k+1)} \right)^2 & = \left(\sum_{k=-N-1}^{k=N} \dfrac{(-1)^k}{(2k+1)} \right) \left( \sum_{j=-N-1}^{j=N} \dfrac{(-1)^j}{(2j+1)} \right)\\
& = \sum_{k=-N-1}^{k=N} \sum_{j=-N-1}^{j=N} \dfrac{(-1)^{j+k}}{(2k+1)(2j+1)}\\
& = \sum_{\overset{j,k=-N-1}{j \neq k}}^{k=N} \dfrac{(-1)^{j+k}}{2(j-k)} \left(\dfrac1{(2k+1)} - \dfrac1{(2j+1)} \right) + \sum_{k=-N-1}^{k=N} \dfrac{(-1)^{2k}}{(2k+1)(2k+1)}
\end{align}
Hence,
$$\left(\sum_{k=-N-1}^{k=N} \dfrac{(-1)^k}{(2k+1)} \right)^2 - \sum_{k=-N-1}^{k=N} \dfrac{1}{(2k+1)(2k+1)} = \sum_{\overset{j,k=-N-1}{j \neq k}}^{k=N} \dfrac{(-1)^{j+k}}{2(j-k)} \left(\dfrac1{(2k+1)} - \dfrac1{(2j+1)} \right)$$
Let us now show that
$$\overbrace{\sum_{\overset{j,k=-N-1}{j \neq k}}^{k=N} \dfrac{(-1)^{j+k}}{2(j-k)} \left(\dfrac1{(2k+1)} - \dfrac1{(2j+1)} \right)}^{(\heartsuit)} \to 0 \text{ as } N \to \infty$$

We have
\begin{align}
(\heartsuit) & = \sum_{\overset{j,k=-N-1}{j \neq k}}^{k=N} \dfrac{(-1)^{j+k}}{2(j-k)} \dfrac1{(2k+1)} - \sum_{\overset{j,k=-N-1}{j \neq k}}^{k=N} \dfrac{(-1)^{j+k}}{2(j-k)} \dfrac1{(2j+1)}\\
& = \sum_{\overset{j,k=-N-1}{j \neq k}}^{k=N} \dfrac{(-1)^{j+k}}{2(j-k)} \dfrac1{(2k+1)} + \sum_{\overset{j,k=-N-1}{j \neq k}}^{k=N} \dfrac{(-1)^{j+k}}{2(k-j)} \dfrac1{(2j+1)}\\
& = 2 \times \left(\sum_{\overset{j,k=-N-1}{j \neq k}}^{k=N} \dfrac{(-1)^{j+k}}{2(j-k)} \dfrac1{(2k+1)} \right)\\
& = \sum_{\overset{j,k=-N-1}{j \neq k}}^{k=N} \dfrac{(-1)^{j+k}}{(j-k)} \dfrac1{(2k+1)} = \sum_{k=-N-1}^N \dfrac{(-1)^k}{(2k+1)} \underbrace{\left(\sum_{\overset{j=-N-1}{j \neq k}}^N \dfrac{(-1)^{j}}{(j-k)}\right)}_{(\diamondsuit_{k})}
\end{align}
Let us simplify $(\diamondsuit_{k})$ a bit. Assuming $k \neq -N-1$, we have
\begin{align}
\sum_{\overset{j=-N-1}{j \neq k}}^N \dfrac{(-1)^{j}}{(j-k)} & = \sum_{j=k+1}^N \dfrac{(-1)^{j}}{(j-k)} + \sum_{j=-N-1}^{k-1} \dfrac{(-1)^{j}}{(j-k)}\\

& = \left(\dfrac{(-1)^{k+1}}{1} + \dfrac{(-1)^{k+2}}{2} + \cdots + \dfrac{(-1)^{N}}{N-k}\right)\\
& + \left(\dfrac{(-1)^{k-1}}{(-1)} + \dfrac{(-1)^{k-2}}{(-2)} + \cdots + \dfrac{(-1)^{-N-1}}{-N-1-k}\right)\\
& = (-1)^{k+1} \sum_{j=N-\vert k \vert +1}^{N+\vert k \vert +1} \dfrac{(-1)^j}{j}
\end{align}
If $k = -N-1$, we have
$$\sum_{\overset{j=-N-1}{j \neq k}}^N \dfrac{(-1)^{j}}{(j-k)} = \sum_{j=-N}^N \dfrac{(-1)^{j}}{(j+N+1)} = (-1)^{N-1} \sum_{j=1}^{2N+1} \dfrac{(-1)^j}{j}$$
We now have
$$(\heartsuit) = \sum_{k=0}^N \dfrac{(-1)^k \diamondsuit_k + (-1)^{-k-1} \diamondsuit_{-k-1}}{2k+1} = \sum_{k=0}^N \dfrac{(-1)^k \left(\diamondsuit_k - \diamondsuit_{-k-1} \right)}{2k+1}$$
Now for $k \geq 0$
\begin{align}

\left(\diamondsuit_k - \diamondsuit_{-k-1} \right) & = (-1)^{k+1} \sum_{j=N-k +1}^{N+k +1} \dfrac{(-1)^j}{j} - (-1)^{-k} \sum_{j=N-(k+1) +1}^{N+(k+1) +1} \dfrac{(-1)^j}{j}\\
& = 2 \cdot (-1)^{k+1} \cdot \sum_{j=N-k+1}^{N+ k +1} \dfrac{(-1)^j}{j} + (-1)^{N+1} \left( \dfrac1{N+k+2} + \dfrac1{N-k}\right)
\end{align}
Hence,
$$\left \vert \diamondsuit_k - \diamondsuit_{-k-1} \right \vert = \mathcal{O} \left( \dfrac1N\right)$$
$$\left \vert (\heartsuit) \right \vert \leq \sum_{k=0}^N \dfrac1{2k+1} \mathcal{O}(1/N) = \mathcal{O}(\log(2N+1)/N) \to 0$$
Hence,
$$\left(\sum_{k=-N-1}^{k=N} \dfrac{(-1)^k}{(2k+1)} \right)^2 - \sum_{k=-N-1}^{k=N} \dfrac{1}{(2k+1)(2k+1)} \to 0$$
Hence,
$$\left(\sum_{k=-\infty}^{\infty} \dfrac{(-1)^k}{(2k+1)} \right)^2 = \sum_{k=-\infty}^{\infty} \dfrac{1}{(2k+1)(2k+1)} = 2 \sum_{k=0}^{\infty} \dfrac{1}{(2k+1)^2} = 2 \cdot \dfrac34 \cdot \zeta(2)$$

Hence,$$\boxed{\zeta(2) = \dfrac23 \cdot \dfrac{\pi^2}4 = \dfrac{\pi^2}6}$$
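The key identity above, that the square of the symmetric Leibniz sum matches the symmetric sum of $1/(2k+1)^2$, is easy to test numerically (my sketch, in Python):

N = 20_000
leibniz = sum((-1) ** k / (2 * k + 1) for k in range(-N - 1, N + 1))
squares = sum(1 / (2 * k + 1) ** 2 for k in range(-N - 1, N + 1))
print(leibniz ** 2, squares)   # both ~ pi^2 / 4 = 2.4674...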


complex numbers - Show: $\cos \left( \frac{ 3\pi }{ 8 } \right) = \frac{1}{\sqrt{ 4 + 2 \sqrt{2} }}$



I'm having trouble showing that:
$$\cos\left(\frac{3\pi}{8}\right)=\frac{1}{\sqrt{4+2\sqrt2}}$$
The previous parts of the question required me to find the modulus and argument of $z+i$ where $z=\operatorname{cis}\theta$. Hence, I found the modulus to be $2\cos{\left(\frac{\pi}{4}-\frac{\theta}{2}\right)}$ units and that the argument would be $\operatorname{arg}(z+i)=\frac{\pi}{4}+\frac{\theta}{2}$.




Now, the next step that I took was that I replaced every theta with $\frac{\pi}{4}$ in the polar form of the complex number $z+i$ (so that the argument becomes $\frac{3\pi}{8}$). So now it would look like this:
$$z+i=\left[2\cos{\left(\frac{\pi}{8}\right)}\right]\operatorname{cis}{\left(\frac{3\pi}{8}\right)}$$
Then, I expanded the $\operatorname{cis}{\left(\frac{3\pi}{8}\right)}$ part to become $\cos{\left(\frac{3\pi}{8}\right)}+i\sin{\left({\frac{3\pi}{8}}\right)}$. So now I've got the $\cos\left({\frac{3\pi}{8}}\right)$ part but I don't really know what to do next. I've tried to split the angle up so that there would be two angles so I can use an identity, however, it would end up with a difficult fraction instead. So if the rest of the answer or a hint would be given to finish the question, that would be great!!



Thanks!!


Answer



As $\frac{3\pi}{8}$ and $\frac{\pi}{8}$ are complementary angles, we get
$$\begin{align}
\cos\frac{3\pi}{8}&=\sin\frac{\pi}{8}\\
&=\sin\frac{\pi/4}{2}\\

&=\sqrt{\frac{1-\cos(\pi/4)}{2}}\\
&=\sqrt{\frac{1-(1/\sqrt{2})}{2}}\\
&=\sqrt{\frac{\sqrt{2}-1}{2\sqrt{2}}}\\
&=\sqrt{\frac{\sqrt{2}-1}{2\sqrt{2}}\cdot \frac{\sqrt{2}+1}{\sqrt{2}+1}}\\
&=\sqrt{\frac{1}{4+2\sqrt{2}}}\\
&=\frac{1}{\sqrt{4+2\sqrt{2}}}
\end{align}$$
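A one-line numerical confirmation (my addition):

from math import cos, pi, sqrt
print(cos(3 * pi / 8), 1 / sqrt(4 + 2 * sqrt(2)))   # both 0.38268343...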


integration - Integral $\int_0^1 \frac{dx}{\sqrt[3]{x(1-x)}(1-x(1-x))}$




Prove, using elementary methods, that
$$\int_0^1 \frac{dx}{\sqrt[3]{x(1-x)}(1-x(1-x))}=\frac{4\pi}{3\sqrt 3}$$





I have seen this integral in the following post; however, all the answers presented exploit complex analysis or heavy series.



But according to mickep's answer, even the indefinite integral possesses a primitive in terms of elementary functions. I'm not insane enough to try to find that by hand; however, it gives me great hope that we can find an elementary approach for the definite integral.



Although I have kept coming back to it over the past months, I still have had no success or relevant progress, and I would appreciate some help.


Answer



Given the function $f : (0,\,1) \to \mathbb{R}$
$$
f(x) := \frac{1}{\sqrt[3]{x\,(1 - x)}\left(1 - x(1 - x)\right)}\,,
$$

we are interested in the calculation of
$$
I := \int_0^1 f(x)\,\text{d}x\,.
$$

First of all it is good to observe that:
$$
f(1 - x) = f(x), \quad \forall \, x \in (0,\,1)
$$

then:

$$
I = 2 \int_0^{\frac{1}{2}} f(x)\,\text{d}x\,.
$$

At this point, since:
$$
f\left(\frac{1 - \sqrt{4\,t^3 + 1}}{2}\right) = -\frac{1}{t\left(t^3 + 1\right)}\,, \quad \quad \frac{\text{d}}{\text{d}t}\left(\frac{1 - \sqrt{4\,t^3 + 1}}{2}\right) = -\frac{3\,t^2}{\sqrt{4\,t^3 + 1}}
$$

it follows that:
$$
I = -6 \int_{-\frac{1}{\sqrt[3]{4}}}^0 \frac{t}{t^3 + 1}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}}\,.
$$
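
Before reaching for a CAS, one can at least confirm numerically that the substitution preserved the value (a sketch of mine; NIntegrate may warn about the integrable endpoint singularities):

(* both forms should agree with 4 Pi/(3 Sqrt[3]) = 2.4184... *)
NIntegrate[1/((x (1 - x))^(1/3) (1 - x (1 - x))), {x, 0, 1}]
NIntegrate[-6 t/((t^3 + 1) Sqrt[4 t^3 + 1]), {t, -4^(-1/3), 0}]
N[4 Pi/(3 Sqrt[3])]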

Now we can take advantage of the power of Rubi in Wolfram Mathematica:



PacletInstall["https://github.com/RuleBasedIntegration/Rubi/releases/download/4.16.1.0/Rubi-4.16.1.0.paclet"];
<< Rubi`
Steps@Int[t/((t^3 + 1) Sqrt[4 t^3 + 1]), t]


enter image description here




enter image description here



from which:
$$
I =
-6 \int_{-\frac{1}{\sqrt[3]{4}}}^0 \left[
\frac{2\,t - 1}{6\,(t + 1)\,\sqrt{4\,t^3 + 1}} +
\frac{t^2}{2\left(t^3 + 1\right)\sqrt{4\,t^3 + 1}} -
\frac{2\,t^3 - 3\,t^2 - 1}{6\,t\left(t^2 - t + 1\right)\sqrt{4\,t^3 + 1}} -
\frac{1}{6\,t\,\sqrt{4\,t^3 + 1}}
\right]\text{d}t\,.
$$

Hence a primitive exists in terms of elementary functions, and evaluating it between the bounds gives:
$$
I =
-6\left[
- \frac{\arctan\left(\frac{\sqrt{3}\,(1 + 2\,t)}{\sqrt{4\,t^3 + 1}}\right)}{3\sqrt{3}} +
\frac{\arctan\left(\frac{\sqrt{4\,t^3 + 1}}{\sqrt{3}}\right)}{3\sqrt{3}} -
\frac{1}{3}\,\text{arctanh}\left(\frac{1 - 2\,t}{\sqrt{4\,t^3 + 1}}\right) +
\frac{1}{9}\,\text{arctanh}\left(\sqrt{4\,t^3 + 1}\right)
\right]_{t = -\frac{1}{\sqrt[3]{4}}}^{t = 0}
$$

and therefore as desired:
$$
I =
\frac{4\pi}{3\sqrt{3}}\,.
$$







Like any other CAS, Rubi follows rules written by its programmers, so it is always possible to verify by hand everything it executes. Specifically, the theory behind the rule introduced by Martin Welz above can be studied in E. GOURSAT, Note sur quelques intégrales pseudo-elliptiques (1887); based on what is written on page 114 there, the technique for resolving the integral under consideration can be studied in S. GÜNTHER, Sur l'évaluation de certaines intégrales pseudo-elliptiques (1882).



In this case:
$$
S \equiv \int \frac{t}{t^3 + 1}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}}
$$

then imposing:
$$
\frac{t}{t^3 + 1} =
\frac{\alpha\,t^2}{t^3 + 1} +
\frac{\alpha_1\,t^2 + \beta_1\,t + \gamma_1}{t + 1} +
\frac{\alpha_2\,t^2 + \beta_2\,t + \gamma_2}{t^2 - t + 1}
$$

the identification gives the values:
$$
\alpha = \frac{1}{2}\,, \quad
\alpha_1 = 0\,, \quad
\beta_1 = \frac{1}{3}\,, \quad
\gamma_1 = -\frac{1}{6}\,, \quad
\alpha_2 = -\frac{1}{3}\,, \quad
\beta_2 = \frac{1}{3}\,, \quad
\gamma_2 = \frac{1}{6}
$$

ie:
$$
\frac{t}{t^3 + 1} =
\frac{t^2}{2\left(t^3 + 1\right)} +
\frac{2\,t - 1}{6\left(t + 1\right)} -
\frac{2\,t^2 - 2\,t - 1}{6\left(t^2 - t + 1\right)}
$$
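
(This decomposition is easy to verify mechanically; a one-line check of mine in Wolfram Language:)

(* the difference should simplify to 0 *)
Simplify[t/(t^3 + 1) - (t^2/(2 (t^3 + 1)) + (2 t - 1)/(6 (t + 1)) - (2 t^2 - 2 t - 1)/(6 (t^2 - t + 1)))]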

from which:
$$
S =
\int \frac{t^2}{2\left(t^3 + 1\right)}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}} +
\int \frac{2\,t - 1}{6\left(t + 1\right)}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}} +
\int \frac{2\,t^2 - 2\,t - 1}{-6\left(t^2 - t + 1\right)}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}} \,.
$$

Now, for the first integral:
$$
S_1 \equiv \int \frac{t^2}{2\left(t^3 + 1\right)}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}}
$$

according to the method described in the paper:
$$
u = \frac{\alpha\,t^3 + \beta\,t^2 + \gamma\,t + \delta}{\sqrt{4\,t^3 + 1}}
$$

then imposing:
$$
\frac{\text{d}u}{m\,u^2 + n} = \frac{t^2}{2\left(t^3 + 1\right)}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}}
$$


ie:
$$
\frac{\text{d}u}{\text{d}t}\,\frac{2\left(t + 1/t^2\right)\sqrt{4\,t^3 + 1}}{m\,u^2 + n} = 1
$$

the identification gives the values:
$$
\alpha = 0\,, \quad
\beta = 0\,, \quad
\gamma = 0\,, \quad
\delta = - \frac{1}{3}\,, \quad
m = 27\,, \quad
n = 1
$$

ie:
$$
S_1
= \int \frac{\text{d}u}{27\,u^2 + 1}
= \frac{\arctan\left(3\sqrt{3}\,u\right)}{3\sqrt{3}} + c_1
= -\frac{\arctan\left(\frac{\sqrt{3}}{\sqrt{4\,t^3 + 1}}\right)}{3\sqrt{3}} + c_1\,.
$$


Similarly, for the second integral:
$$
S_2 \equiv \int \frac{2\,t - 1}{6\left(t + 1\right)}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}}
$$

putting:
$$
u = \frac{\alpha\,t^2 + \beta\,t + \gamma}{\sqrt{4\,t^3 + 1}}
$$

then imposing:
$$
\frac{\text{d}u}{m\,u^2 + n} = \frac{2\,t - 1}{6\left(t + 1\right)}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}}
$$

ie:
$$
\frac{\text{d}u}{\text{d}t}\,\frac{\frac{6\left(t + 1\right)}{2\,t - 1}\sqrt{4\,t^3 + 1}}{m\,u^2 + n} = 1
$$

the identification gives the values:
$$
\alpha = 0\,, \quad
\beta = -\frac{2}{3}\,, \quad
\gamma = -\frac{1}{3}\,, \quad
m = 27\,, \quad
n = 1
$$

ie:
$$
S_2
= \int \frac{\text{d}u}{27\,u^2 + 1}
= \frac{\arctan\left(3\sqrt{3}\,u\right)}{3\sqrt{3}} + c_2
= -\frac{\arctan\left(\frac{\sqrt{3}\left(2\,t + 1\right)}{\sqrt{4\,t^3 + 1}}\right)}{3\sqrt{3}} + c_2\,.
$$

For the third integral
$$
S_3 \equiv \int \frac{2\,t^2 - 2\,t - 1}{-6\left(t^2 - t + 1\right)}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}}
$$

this transformation fails, and the only remaining hope for the pseudo-ellipticity of the integral is to decompose the rational fraction further; in particular, imposing:
$$
\frac{2\,t^2 - 2\,t - 1}{-6\left(t^2 - t + 1\right)}
= \frac{\alpha}{-6\,t} + \frac{\alpha_1\,t^3 + \beta_1\,t^2 + \gamma_1\,t + \delta_1}{-6\,t\left(t^2 - t + 1\right)}
$$


the identification gives the values:
$$
\alpha = 1\,, \quad
\alpha_1 = 2\,, \quad
\beta_1 = -3\,, \quad
\gamma_1 = 0\,, \quad
\delta_1 = -1
$$

ie:
$$
\frac{2\,t^2 - 2\,t - 1}{-6\left(t^2 - t + 1\right)} =
\frac{1}{-6\,t} + \frac{2\,t^3 - 3\,t^2 - 1}{-6\,t\left(t^2 - t + 1\right)}
$$

from which:
$$
S_3 =
\int \frac{1}{-6\,t}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}} +
\int \frac{2\,t^3 - 3\,t^2 - 1}{-6\,t\left(t^2 - t + 1\right)}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}} \,.
$$

Now, again, for the first integral:

$$
S_{3,1} \equiv \int \frac{1}{-6\,t}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}}
$$

putting:
$$
u = \frac{\alpha\,t^3 + \beta\,t^2 + \gamma\,t + \delta}{\sqrt{4\,t^3 + 1}}
$$

then imposing:
$$
\frac{\text{d}u}{m\,u^2 + n} = \frac{1}{-6\,t}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}}
$$

ie:
$$
\frac{\text{d}u}{\text{d}t}\,\frac{-6\,t\,\sqrt{4\,t^3 + 1}}{m\,u^2 + n} = 1
$$

the identification gives the values:
$$
\alpha = 0\,, \quad
\beta = 0\,, \quad
\gamma = 0\,, \quad
\delta = \frac{1}{9}\,, \quad
m = -81\,, \quad
n = 1
$$

ie:
$$
S_{3,1}
= \int \frac{\text{d}u}{-81\,u^2 + 1}
= \frac{1}{9}\,\text{arctanh}(9\,u) + c_{3,1}
= \frac{1}{9}\,\text{arctanh}\left(\frac{1}{\sqrt{4\,t^3 + 1}}\right) + c_{3,1}\,.
$$

Finally, for the second integral
$$
S_{3,2} \equiv \int \frac{2\,t^3 - 3\,t^2 - 1}{-6\,t\left(t^2 - t + 1\right)}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}}
$$

putting:
$$
u = \frac{\alpha\,t^2 + \beta\,t + \gamma}{\sqrt{4\,t^3 + 1}}
$$

then imposing:

$$
\frac{\text{d}u}{m\,u^2 + n} = \frac{2\,t^3 - 3\,t^2 - 1}{-6\,t\left(t^2 - t + 1\right)}\,\frac{\text{d}t}{\sqrt{4\,t^3 + 1}}
$$

ie:
$$
\frac{\text{d}u}{\text{d}t}\,\frac{\frac{-6\,t\left(t^2-t+1\right)}{2\,t^3-3\,t^2-1}\,\sqrt{4\,t^3 + 1}}{m\,u^2 + n} = 1
$$

the identification gives the values:
$$
\alpha = 0\,, \quad
\beta = \frac{2}{3}\,, \quad
\gamma = -\frac{1}{3}\,, \quad
m = -9\,, \quad
n = 1
$$

ie:
$$
S_{3,2}
= \int \frac{\text{d}u}{-9\,u^2 + 1}
= \frac{1}{3}\,\text{arctanh}(3\,u) + c_{3,2}
= \frac{1}{3}\,\text{arctanh}\left(\frac{2\,t - 1}{\sqrt{4\,t^3 + 1}}\right) + c_{3,2}\,.
$$
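
Each of the four candidate primitives can be confirmed by differentiation; the following Wolfram Language sketch of mine pairs each primitive with its target integrand and should return {0, 0, 0, 0}:

(* differentiate each primitive and subtract the corresponding integrand *)
r = Sqrt[4 t^3 + 1];
pairs = {
   {-ArcTan[Sqrt[3]/r]/(3 Sqrt[3]), t^2/(2 (t^3 + 1) r)},
   {-ArcTan[Sqrt[3] (2 t + 1)/r]/(3 Sqrt[3]), (2 t - 1)/(6 (t + 1) r)},
   {ArcTanh[1/r]/9, -1/(6 t r)},
   {ArcTanh[(2 t - 1)/r]/3, (2 t^3 - 3 t^2 - 1)/(-6 t (t^2 - t + 1) r)}};
Simplify[D[#1, t] - #2] & @@@ pairs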

In conclusion, the sought family of primitives is
$$
S = S_1 + S_2 + S_{3,1} + S_{3,2}\,,
$$

which is completely equivalent to the one returned by Rubi; evaluating it at the endpoints therefore yields what we wanted to prove.



An elementary alternative that avoids determining the primitive is differentiation and integration under the integral sign with respect to a parameter (also known as Feynman's trick), but unless a winning strategy can be identified it is impractical, much like the method presented here.


algebra precalculus - An incorrect method to sum the first $n$ squares which nevertheless works



Start with the identity



$\sum_{i=1}^n i^3 = \left( \sum_{i = 1}^n i \right)^2 = \left(\frac{n(n+1)}{2}\right)^2$.



Differentiate the left-most term with respect to $i$ to get



$\frac{d}{di} \sum_{i=1}^n i^3 = 3 \sum_{i = 1}^n i^2$.




Differentiate the right-most term with respect to $n$ to get



$\frac{d}{dn} \left(\frac{n(n+1)}{2}\right)^2 = \frac{1}{2}n(n+1)(2n+1)$.



Equate the derivatives, obtaining



$\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1)(2n+1)$,



which is known to be correct.




Is there any neat reason why this method happens to get lucky and work for this case?


Answer



Let $f_k(n)=\sum_{i=1}^n i^k$. We all know that $f_k$ is actually
a polynomial of degree $k+1$. Also $f_k$ can be characterised by the two
conditions:
$$f_k(x)-f_k(x-1)=x^k$$
and
$$f_k(0)=0.$$
Differentiating the first condition gives

$$f_k'(x)-f_k'(x-1)=k x^{k-1}.$$
Therefore the polynomial $(1/k)f_k'$ satisfies the first of the two
conditions that $f_{k-1}$ does. But it may not satisfy the second. But then
$(1/k)(f_k'(x)-f_k'(0))$ does. So
$$f_{k-1}(x)=\frac{f_k'(x)-f_k'(0)}k.$$



The mysterious numbers $f_k'(0)$ are related to the Bernoulli numbers,
and when $k\ge3$ is odd they obligingly vanish...
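
A concrete instance of the identity $f_{k-1}(x)=\frac{f_k'(x)-f_k'(0)}{k}$ for $k=3$, checked symbolically (my own sketch using the standard closed forms):

(* verify f_2(n) == (f_3'(n) - f_3'(0))/3; should return True *)
f3[n_] := (n (n + 1)/2)^2;
f2[n_] := n (n + 1) (2 n + 1)/6;
Simplify[(f3'[n] - f3'[0])/3 == f2[n]]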


Wednesday 16 January 2019

real analysis - Why is $x^2$ used here? Why not $x$?




Does there exist a function $f : \mathbb{R}\rightarrow \mathbb{R}$ which is differentiable only at the point $0$?




My attempt: I found the answer here: Is there a function $f: \mathbb R \to \mathbb R$ that has only one point differentiable?




But I didn't understand the answer; my doubts are given below.



enter image description here


Answer



Because while $x\,p(x)$ is continuous at $0$, it is not differentiable there.



In particular, the fraction
$$
\frac{(0+h)p(0+h)-0p(0)}{h}
$$


has value $p(h)$, which takes each of the values $0$ and $1$ in every neighbourhood of $0$ (at rationals and at irrationals). So it has no limit as $h\to 0$, which by the definition of the derivative means that $xp(x)$ has no derivative at $0$.



On the other hand, the fraction
$$
\frac{(0+h)^2p(0+h)-0^2p(0)}{h}
$$

has value $h\,p(h)$, which is either $h$ or $0$; in either case its absolute value is at most $|h|\to 0$. Thus it does have a limit ($0$) as $h\to0$, which is to say that $x^2p(x)$ has derivative $0$ at $x=0$.


calculus - What is the correct approach for $\int\frac{x^{4}+1}{x^{6}+1}\,dx$?



I was looking for tricky integrals to give something more challenging a try, and I stumbled upon [this] (Ignoring the definite part since I'm just interested in solving the integral):



$$\int\frac{x^{4}+1}{x^{6}+1}dx$$



My first reaction was to try substituting $[t=x^{2}; \frac{dx}{dt}=\frac{1}{2\sqrt{t}}]$, and everything went off the rails from there:




$$\int\frac{t^2}{t^3+1}\frac{1}{2\sqrt{t}}dt+\int\frac{1}{t^3+1}\frac{1}{2\sqrt{t}}dt$$



After that I tried producing $3t^2$ in the first integral, but it's pointless since it's a product and not a sum. I have also tried integration by parts, but I get things like $-\frac{2}{(2\sqrt{t})^3}$, which make everything worse than before. There are no trigonometric identities involved, and I'm not sure I can apply rational-function integration, since $x^6+1$ doesn't have any real roots as far as I know. I have also tried other substitutions, like $t=x^3$, but I haven't been able to get further with those.



I'm totally out of ideas. I've checked all the books I have available for clues or methods I could have missed, but I didn't find a thing.



What am I missing? There is obviously an approach I haven't seen; I really don't think that substitution was the way to go. Any clues about which method to use? (I'm not looking for the full solution.)


Answer



As $\displaystyle x^6+1=(x^2+1)(x^4-x^2+1)$




and $x^4-x^2+1=(x^2+1)^2-3x^2=(x^2+1-\sqrt3x)(x^2+1+\sqrt3x)$



Using
Partial Fraction Decomposition,



we can write $$\frac{x^4+1}{x^6+1}=\frac{Ax+B}{x^2+1}+\frac{Cx+D}{x^2-\sqrt3x+1}+\frac{Ex+F}{x^2+\sqrt3x+1}$$ where $A,B,C,D,E,F$ are arbitrary constants



Now multiply both sides by $x^6+1$ and compare the coefficients of the different powers of $x$ to find $A,B,C,D,E,F$.
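
For reference, carrying out that comparison gives $A=C=E=0$, $B=\frac23$, $D=F=\frac16$ (my own computation, worth re-deriving); a quick check in Wolfram Language:

(* verify the decomposition with A=C=E=0, B=2/3, D=F=1/6; should return True *)
Simplify[(x^4 + 1)/(x^6 + 1) ==
  (2/3)/(x^2 + 1) + (1/6)/(x^2 - Sqrt[3] x + 1) + (1/6)/(x^2 + Sqrt[3] x + 1)]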



Again as $x^2-\sqrt3x+1=\left(x-\frac{\sqrt3}2\right)^2+\left(\frac12\right)^2$




using Trigonometric substitution, set $x-\frac{\sqrt3}2=\frac12\tan\phi$



Similarly for $x^2+\sqrt3x+1$


sequences and series - Evaluate the expression $\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdots}}}$

Is there a way to check whether the expression below converges to a specific number?
$$
\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdots}}}
$$
or, in other words, does the sequence defined by $x_0=\sqrt{2}$ and $x_n=\sqrt{2}^{x_{n-1}}$ converge?


Evaluating a tower of eight $\sqrt{2}$'s with a calculator I got $1.9656648865173187$, but I couldn't yet confirm convergence.
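
A minimal numeric sketch (my addition) that iterates the map and suggests the limit is $2$:

(* iterate x -> Sqrt[2]^x starting from Sqrt[2]; show the last few iterates *)
NestList[Sqrt[2]^# &, N[Sqrt[2]], 40][[-5 ;;]]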

Tuesday 15 January 2019

sequences and series - Convergent sum of reciprocals?

Let $n$ denote a positive integer and let $\sigma(n)$ denote the sum
of all divisors of $n$, so that $\sigma(n)$ is larger than $n$ (for $n > 1$)
but not by much, since it is bounded above by $c\,n\log \log n$.



This invites comparison between situations in which the two are
interchanged. As an example, consider the following:



Let $A$ be a subset of the positive integers. Suppose that
the sum of $1/n$, for all $n$ in $A$, converges. Then the sum
of $1/\sigma(n)$, over all $n$ in $A$, clearly converges as well.



Is the converse true?

real analysis - "Strong" derivative of a monotone function



It is well known that if a function $f\colon \mathbb{R} \to \mathbb{R}$ is monotone then $f'$ exists almost everywhere.



Is it true that if $f$ is monotone then there exists (edit: I mean exists a.e.) a "strong" derivative
$$
f'_s(x) = \lim_{(y,z)\to (x,x)} \frac{f(y) - f(z)}{y-z}?
$$



(If this is not true, what about the case when $y \nearrow x$ and $z \nearrow x$?)



In general (if $f$ is not monotone) it is not true of course. For example $x \mapsto x^2 \sin{\frac{1}{x}}$ has derivative at $0$ but does not have "strong" derivative at $0$.



Moreover if $f'$ is continuous at $x$ then the strong derivative $f'_s(x)$ exists and is of course equal to $f'(x)$. This follows easily from mean value theorem.


Answer



I believe there exists a function $f:[0,1] \rightarrow {\mathbb R}$ that is both strictly increasing and Lipschitz such that $f_{S}'(x)$ does not exist for each $x \in [0,1].$ According to some notes that I happen to have with me, an example is given in 14.1.4 (pp. 317-318) of Garg's book (cited below; may have to add the identity function $x$ to Garg's function to get strictly increasing). To obtain the example, Garg makes use of a measure dense $F_{\sigma}$ set that has a dense complement, and thus it would provide another application in my answer to the recent StackExchange question Importance of a result in measure theory . Unfortunately, my copy of Garg's book is not where I'm presently at, so I can't verify this right now. According to my notes, you have to read closely because Garg uses the phrase strongly differentiable for finitely strongly differentiable in your sense and the phrase strongly derivable for finitely-or-infinitely strongly differentiable in your sense.




Krishna M. Garg, Theory of Differentiation: A Unified Theory of Differentiation via New Derivate Theorems and New Derivatives, John Wiley and Sons, 1998.



(ADDED TWO DAYS LATER)



Here is Garg’s example. In what follows, “measure” is “Lebesgue measure”. Let $E$ be a measurable subset of $[0,1]$ such that both $E$ and $[0,1] - E$ intersect every subinterval of $[0,1]$ in a set of positive measure. (Note: The standard methods for obtaining such sets produce $F_{\sigma}$ sets.) Let $f(x) = \int_{0}^{x} \; g(t) \; dt,$ where $g$ is the characteristic function of $E.$ Since $g$ is nonnegative, it follows that $f$ is nondecreasing. Also, it is not difficult to see that all difference quotients of $f$ are bounded between $-1$ and $1,$ so $f$ is Lipschitz continuous with Lipschitz constant $1.$ By a standard theorem in Lebesgue integration theory, there exists $E’ \subseteq [0,1]$ such that $[0,1] - E’$ has measure zero and $f’(x) = g(x)$ for each $x \in E’$. (This last part only requires that $g$ is integrable.) Since $E’$ also has the property that both $E’$ and $[0,1] - E’$ intersect every subinterval of $[0,1]$ in a set of positive measure, it follows (as a fairly weak implication) that $\{x \in [0,1]: \; f’(x) = 0 \}$ is (topologically) dense in $[0,1]$ and $\{x \in [0,1]: \; f’(x) = 1 \}$ is (topologically) dense in $[0,1].$ Hence (this is immediate), it follows that the lower strong derivative of $f$ (the $\liminf$ of the “strong difference quotients of $f$” as one approaches $x$) is equal to $0$ on a set that is dense in $[0,1]$ and the upper strong derivative of $f$ is equal to $1$ on a set that is dense in $[0,1].$ Therefore, by a result that Garg had previously proved (lower semicontinuity of the of the lower strong derivative and upper semicontinuity of the upper strong derivative), it follows that the lower strong derivative of $f$ is equal to $0$ at each $x \in [0,1]$ and the upper strong derivative of $f$ is equal to $1$ at each $x \in [0,1].$ In particular, the strong derivative of $f$ does not exist at each $x \in [0,1].$



As to your question about “unstraddled difference quotient” approaches, I don’t know the answer off-hand, but the following ideas may be of interest to you. Consider the $2$-variable function $D(x,y) \; = \; \frac{f(y) - f(x)}{y - x}$ defined in the upper half plane (i.e. the region lying above the line $y = x$). We can view various differentiation notions of $f(x)$ at $x = c$ as the limiting behavior of $D(x,y)$ as we approach the point $(c,c)$ along various paths in the upper half plane: The left derivative of $f(x)$ at $x = c$, if it exists, will be the limit of $D(x,y)$ as $(x,y) \rightarrow (c,c)$ along the horizontal path $y = c$ in the upper half plane. The right derivative of $f(x)$ at $x = c$, if it exists, will be the limit of $D(x,y)$ as $(x,y) \rightarrow (c,c)$ along the vertical path $x = c$ in the upper half plane. The symmetric derivative of $f(x)$ at $x = c$, if it exists, will be the limit of $D(x,y)$ as $(x,y) \rightarrow (c,c)$ along the path normal to $y = x$ in the upper half plane. When viewed in this way, the strong derivative shows its true colors as being a limit that seems much less likely to exist than the left and right limits that characterize the existence of the ordinary derivative. In order for the strong derivative of $f(x)$ to exist at $x = c$, we have to have the limit of $D(x,y)$ exist and be the same for every possible manner of approach to $(c,c)$ lying in the upper half plane. There are a lot of theorems involving tangential and non-tangential approaches to a boundary that may be relevant (google “non-tangential approach” or “non-tangential limit”, each with and without the additional phrase “cluster set”), but the results I know about have to do with how they differ from point to point. Also of possible interest is Note A. A Property of Differential Coefficients on pp. 104-105 of the following book. [Fowler’s porosity condition can also be found in Charles Chapman Pugh’s Real Mathematical Analysis (2002 edition, Chapter 3, Exercise 11 on p. 187) and Edward D. Gaughan’s Introduction to Analysis (1975 2nd edition, Exercise 4.1 on p. 127).]



Ralph Howard Fowler, The elementary differential geometry of plane curves, Cambridge University Press, 1920, vii + 105 pages.




http://books.google.com/books?id=CV07AQAAIAAJ



http://www.archive.org/details/elementarydiffer00fowlrich



Finally, here are some (not all!) references I happen to know about that may be of use. Besides strong derivative, three other names I’ve encountered in the literature are unstraddled derivative (Andrew M. Bruckner), strict derivative (Ludek Zajicek and several people who work in the area of higher and infinite dimensional convex analysis and nonlinear analysis), and sharp derivative (Brian S. Thomson and Vasile Ene).



Charles Leonard Belna, Michael Jon Evans, and Paul Daniel Humke, Symmetric and strong differentiation, American Mathematical Monthly 86 #2 (February 1979), 121-123.



Henry Blumberg, A theorem on arbitrary functions of two variables with applications, Fundamenta Mathematicae 16 (1930), 17-24.




http://matwbn.icm.edu.pl/ksiazki/fm/fm16/fm1613.pdf



Andrew Michael Bruckner and Casper Goffman, The boundary behavior of real functions in the upper half plane, Revue Roumaine de Mathematiques Pures et Appliquees 11 (1966), 507-518.



Andrew Michael Bruckner and John Leonard Leonard, Derivatives, American Mathematical Monthly 73 #4 (Part 2) (April 1966), 24-56. [See p. 35.]



A freely available .pdf file for this paper is at: http://www.mediafire.com/?4obbdtnzwob



Charles L. Belna, G. T. Cargo, Michael Jon Evans, and Paul Daniel Humke, Analogues of the Denjoy-Young-Saks theorem, Transactions of the American Mathematical Society 271 #1 (May 1982), 253-260.




Hung Yuan Chen, A theorem on the difference quotient $(f(b) - f(a))/(b - a)$, Tohoku Mathematical Journal 42 (1936), 86-89.



http://tinyurl.com/2g5whdp



Szymon Dolecki and Gabriele H. Greco, Tangency vis-a-vis differentiability by Peano, Severi and Guareschi, to appear (has appeared?) in Journal of Convex Analysis.



arXiv:1003.1332v1 version, 5 March 2010, 35 pages:



http://arxiv.org/abs/1003.1332




Paul [Paulus] Petrus Bernardus Eggermont, Noncentral difference quotients and the derivative, American Mathematical Monthly 95 #6 (June-July 1988), 551-553.



Martinus Esser and Oved Shisha, A modified differentiation, American Mathematical Monthly 71 #8 (October 1964), 904-906.



Michael Jon Evans and Paul Daniel Humke, Parametric differentiation, Colloquium Mathematicum 45 (1981), 125-131.



Russel A. Gordon, The integrals of Lebesgue, Denjoy, Perron, and Henstock, American Mathematical Society, 1994.



Earle Raymond Hedrick, On a function which occurs in the law of the mean, Annals of Mathematics (2) 7 (1906), 177-192.




Paul Daniel Humke and Tibor Salat, Remarks on strong and symmetric differentiability of real functions, Acta Mathematica Universitatis Comenianae 52-53 (1987), 235-241.



Vojtech Jarnik, O funkcich prvni tridy Baireovy [The functions in Baire's first class], Rozpravy II. Tridy Ceske Akademie 35 #2 (1926), 13 pages.



Vojtech Jarnik, Sur les fonctions de la premiere classe de Baire [On functions in the first class of Baire], Bulletin International de l'Academie des Sciences de Boheme 27 (1926), 350-360. [French version of the previous Czech paper. The French version omits a few (but very few) details present in the Czech version.]



Benoy Kumar Lahiri, On nonuniform differentiability, American Mathematical Monthly 67 #7 (Aug.-Sept. 1960), 649-652.



S. N. Mukhopadhyay, On differentiability, Bulletin of the Calcutta Mathematical Society 59 (1967), 181-183.




Albert Nijenhuis, Strong derivatives and inverse mappings, American Mathematical Monthly 81 #9 (November 1974), 969-980.



Giuseppe Peano, Sur la definition de la derivee, Mathesis Recueil Mathematique (2) 2 (1892), 12-14.



http://books.google.com/books?id=kKAKAAAAYAAJ&pg=PA12



Gustave Robin, Theorie Nouvelle des Fonctions, Exclusivement Fondee sur l'Idee de Nombre [New Theory of Functions, Exclusively Founded on the Idea of Number], Scientific Works [of Robin] gathered and published by Louis Raffy, Gauthier-Villars (Paris), 1903, vi + 215 pages. [Uniform differentiation widely used throughout, especially on pp. 104-132.]



http://books.google.com/books?id=qjEPAAAAIAAJ




Ludwig Scheeffer, Zur theorie der stetigen funktionen einer reellen veranderlichen [On the theory of continuous functions of one real variable], Acta Mathematica 5 (1884), 279-296.



http://books.google.com/books?id=PQbyAAAAMAAJ&pg=PA295



Paul Houston Schuette, A question of limits, Mathematics Magazine 77 #1 (February 2004), 61-68. [See pp. 65-66.]



William Henry Young, The Fundamental Theorems of the Differential Calculus, Hafner Publishing Company, 1910, ix + 72 pages.



http://www.archive.org/details/cu31924001536311




Ludek Zajicek, On the symmetry of Dini derivates of arbitrary functions, Commentationes Mathematicae Universitatis Carolinae 22 (1981), 195-209.



http://dml.cz/handle/10338.dmlcz/106064



Ludek Zajicek, Strict differentiability via differentiability, Acta Universitatis Carolinae 28 (1987), 157-159.



http://dml.cz/dmlcz/701936



Ludek Zajicek, Frechet differentiability, strict differentiability and subdifferentiability, Czechoslovak Mathematical Journal 41 (1991), 471-489.




http://dml.cz/dmlcz/102482


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How can I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without using L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...