Thursday 9 January 2020

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$



How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule?

I know that when I use L'Hôpital's rule I easily get

$$ \lim_{h\rightarrow 0}\frac{\cos(ah)a}{1} = a,$$ but I don't know how to proceed without it.


Answer



Hint:



$$\frac{\sin(ha)}{h} = a\cdot\frac{\sin(ha)}{ha}$$



Also, remember what $$\lim_{x\to 0}\frac{\sin(x)}{x}$$ is equal to?
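A quick numeric sanity check of the hint (my own addition, not part of the original answer; the value $a = 3.7$ is an arbitrary choice):

```python
import math

# As h -> 0, sin(h*a)/h should approach a, matching the hint
# sin(ha)/h = a * sin(ha)/(ha) together with lim_{x->0} sin(x)/x = 1.
a = 3.7
for h in (1e-1, 1e-3, 1e-6):
    print(h, math.sin(h * a) / h)
```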


summation - Equality of the sums $\sum\limits_{v=0}^k \frac{k^v}{v!}$ and $\sum\limits_{v=0}^k \frac{v^v (k-v)^{k-v}}{v!(k-v)!}$



How can one prove the equality

$$\sum\limits_{v=0}^k \frac{k^v}{v!}=\sum\limits_{v=0}^k \frac{v^v (k-v)^{k-v}}{v!(k-v)!}$$
for $k\in\mathbb{N}_0$?



Induction and generating functions don't seem to be useful.



The generating function of the right sum is simply $f^2(x)$ with $\displaystyle f(x):=\sum\limits_{k=0}^\infty \frac{(xk)^k}{k!}$,



but for the left sum I still don't know.



It is $\displaystyle f(x)=\frac{1}{1-\ln g(x)}$ with $\ln g(x)=xg(x)$ for $\displaystyle |x|<\frac{1}{e}$.



Answer



Recall the combinatorial class of labeled trees which is



$$\def\textsc#1{\dosc#1\csod}
\def\dosc#1#2\csod{{\rm #1{\small #2}}}\mathcal{T} = \mathcal{Z}\times \textsc{SET}(\mathcal{T})$$



which immediately produces the functional equation



$$T(z) = z \exp T(z)
\quad\text{or}\quad
z = T(z) \exp(-T(z)).$$



By Cayley's formula we have



$$T(z) = \sum_{q\ge 1} q^{q-1} \frac{z^q}{q!}.$$



This yields



$$T'(z) = \sum_{q\ge 1} q^{q-1} \frac{z^{q-1}}{(q-1)!}
= \frac{1}{z} \sum_{q\ge 1} q^{q-1} \frac{z^{q}}{(q-1)!}
= \frac{1}{z} \sum_{q\ge 1} q^{q} \frac{z^{q}}{q!}.$$



The functional equation yields



$$T'(z) = \exp T(z) + z \exp T(z) T'(z)
= \frac{1}{z} T(z) + T(z) T'(z)$$



which in turn yields



$$T'(z) = \frac{1}{z} \frac{T(z)}{1-T(z)}$$




so that



$$\sum_{q\ge 1} q^{q} \frac{z^{q}}{q!}
= \frac{T(z)}{1-T(z)}.$$
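A formal power-series check of this identity (my own sketch, not part of the answer), computing both sides as truncated series with exact rationals:

```python
from fractions import Fraction
from math import factorial

# T(z) = sum_{q>=1} q^{q-1} z^q / q!; the coefficients of T/(1-T)
# should come out as q^q / q!.  We truncate at order N.
N = 8
T = [Fraction(0)] + [Fraction(q**(q - 1), factorial(q)) for q in range(1, N)]

def mul(a, b):
    """Multiply two truncated power series given as coefficient lists."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# 1/(1-T) = sum_{m>=0} T^m; T has no constant term, so m < N suffices.
geo = [Fraction(1)] + [Fraction(0)] * (N - 1)
power = geo[:]
for _ in range(1, N):
    power = mul(power, T)
    geo = [g + p for g, p in zip(geo, power)]

TG = mul(T, geo)  # truncated series of T/(1-T)
for q in range(1, N):
    assert TG[q] == Fraction(q**q, factorial(q))
print("coefficients of T/(1-T) match q^q/q! up to order", N - 1)
```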



Now we are trying to show that



$$\sum_{v=0}^k \frac{v^v (k-v)^{k-v}}{v! (k-v)!}
= \sum_{v=0}^k \frac{k^v}{v!}.$$




Multiply by $k!$ to get



$$\sum_{v=0}^k {k\choose v} v^v (k-v)^{k-v}
= k! \sum_{v=0}^k \frac{k^v}{v!}.$$



Start by evaluating the LHS.

Observe that when we multiply two
exponential generating functions of the sequences $\{a_n\}$ and
$\{b_n\}$ we get that




$$ A(z) B(z) = \sum_{n\ge 0} a_n \frac{z^n}{n!}
\sum_{n\ge 0} b_n \frac{z^n}{n!}
= \sum_{n\ge 0}
\sum_{k=0}^n \frac{1}{k!}\frac{1}{(n-k)!} a_k b_{n-k} z^n\\
= \sum_{n\ge 0}
\sum_{k=0}^n \frac{n!}{k!(n-k)!} a_k b_{n-k} \frac{z^n}{n!}
= \sum_{n\ge 0}
\left(\sum_{k=0}^n {n\choose k} a_k b_{n-k}\right)\frac{z^n}{n!}$$



i.e. the product of the two generating functions is the generating

function of $$\sum_{k=0}^n {n\choose k} a_k b_{n-k}.$$
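A toy instance of this EGF product rule (my own check, not from the answer): with $a_n = b_n = 1$ we have $A(z) = B(z) = e^z$ and $A(z)B(z) = e^{2z}$, so the binomial convolution must equal $2^n$.

```python
from math import comb

# Binomial convolution of the all-ones sequences: sum_k C(n,k) * 1 * 1
# is the n-th coefficient (times n!) of e^z * e^z = e^{2z}, i.e. 2^n.
for n in range(10):
    assert sum(comb(n, k) for k in range(n + 1)) == 2**n
print("binomial convolution of 1's equals 2^n for n = 0..9")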



In the present case we have
$$A(z) = B(z) = 1 + \frac{T(z)}{1-T(z)}
= \frac{1}{1-T(z)} $$
by inspection.




We added the constant term to account for the fact that $v^v=1$ when
$v=0$ in the convolution. We thus have




$$\sum_{v=0}^k {k\choose v} v^v (k-v)^{k-v}
= k! [z^k] \frac{1}{(1-T(z))^2}.$$



To compute this introduce



$$\frac{k!}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{k+1}} \frac{1}{(1-T(z))^2} \; dz$$



Using the functional equation we put $z=w\exp(-w)$ so that $dz = (\exp(-w)-w\exp(-w)) \; dw$ and obtain



$$\frac{k!}{2\pi i}
\int_{|w|=\gamma}
\frac{\exp((k+1)w)}{w^{k+1}} \frac{1}{(1-w)^2}
(\exp(-w)-w\exp(-w)) \; dw
\\ = \frac{k!}{2\pi i}
\int_{|w|=\gamma}
\frac{\exp(kw)}{w^{k+1}} \frac{1}{1-w} \; dw$$




Extracting the coefficient we get



$$k! \sum_{v=0}^k [w^v] \exp(kw) [w^{k-v}] \frac{1}{1-w}
= k! \sum_{v=0}^k \frac{k^v}{v!}$$



as claimed.
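An exact-arithmetic check of the identity just proved (my own addition), using the convention $0^0 = 1$ (which is also what Python's `0**0` returns):

```python
from fractions import Fraction
from math import factorial

def lhs(k):
    # sum_{v=0}^k v^v (k-v)^{k-v} / (v! (k-v)!)
    return sum(Fraction(v**v * (k - v)**(k - v),
                        factorial(v) * factorial(k - v))
               for v in range(k + 1))

def rhs(k):
    # sum_{v=0}^k k^v / v!
    return sum(Fraction(k**v, factorial(v)) for v in range(k + 1))

for k in range(10):
    assert lhs(k) == rhs(k)
print("identity verified for k = 0..9")
```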


Remark. This all looks very familiar but I am unable to locate the
duplicate among my papers at this time.


elementary number theory - How does one show that for $k \in \mathbb{Z_+}, 3\mid 2^{2^k} +5$ and $7\mid 2^{2^k} + 3, \forall \space k$ odd.




Show that for $k \in \mathbb{Z_+}$, $3\mid 2^{2^k} +5$, and that $7\mid 2^{2^k} + 3$ for all odd $k$.




Firstly,




$k \geq 1$



I can see induction is the best idea:



Show for $k=1$:



$2^{2^1} + 5 = 9$ and $2^{2^1} + 3 = 7$



Assume for $k = \mu$




so: $3\mid2^{2^\mu} + 5 , \space 7\mid2^{2^\mu} + 3$



Show for $\mu +2$



Now can anyone give me a hint on where to go from here? My problem is showing that $2^{2^{\mu+2}}+5$ is divisible by 3; I can't think of how to begin.


Answer



You have already shown that the base cases hold.



Assume $3\mid 2^{2^k}+5$. Then $2^{2^k}\equiv 1$ mod $3$. Hence:
$$2^{2^{k+1}}=2^{2^k\cdot 2}=\left(2^{2^k}\right)^2\equiv 1 \text{ mod } 3$$

Hence $3\mid 2^{2^{k+1}}+5$.



In the same way:



Assume $7\mid 2^{2^{k}}+3$. Then $2^{2^{k}}\equiv 4$ mod $7$. Hence:
$$2^{2^{k+2}}=\left(2^{2^k}\right)^4\equiv 4^4 \text{ mod } 7$$
And since $4^4=256=36\cdot 7+4$, we see that $256\equiv 4\text{ mod }7$. So $7\mid 2^{2^{k+2}}+3$.
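A direct check of both claims for small $k$ (my own sketch, complementing the induction): three-argument `pow` keeps the double exponential $2^{2^k}$ manageable by reducing modulo $21 = 3\cdot 7$.

```python
# t = 2^(2^k) mod 21 determines the residues mod 3 and mod 7.
for k in range(1, 12):
    t = pow(2, 2**k, 21)
    assert (t + 5) % 3 == 0       # 3 | 2^(2^k) + 5 for every k >= 1
    if k % 2 == 1:
        assert (t + 3) % 7 == 0   # 7 | 2^(2^k) + 3 for odd k
print("both divisibility claims hold for k = 1..11")
```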


summation - How can you prove that $1+ 5+ 9 + \cdots +(4n-3) = 2n^{2} - n$ without using induction?

Using mathematical induction, I have proved that



$$1+ 5+ 9 + \cdots +(4n-3) = 2n^{2} - n$$



for every integer $n > 0$.



I would like to know if there is another way of proving this result without using PMI. Is there a geometric proof of this identity? Also, are there problems where PMI is our only option?



Here is the way I have solved this using PMI.




Base Case: since $1 = 2 \cdot 1^2 - 1$, the formula holds for $n = 1$.



Assuming that the
formula holds for some integer $k \geq 1$, that is,



$$1 + 5 + 9 + \dots + (4k - 3) = 2k^2 - k$$



I show that



$$1 + 5 + 9 + \dots + [4(k + 1) - 3] = 2(k + 1)^2 - (k + 1).$$




Now, using the induction hypothesis, I observe:



$$
\begin{align}
1 + 5 + 9 + \dots + [4(k + 1) - 3]
& = [1 + 5 + 9 + \dots + (4k - 3)] + 4(k + 1) - 3 \\
& = (2k^2 - k) + (4k + 1) \\
& = 2k^2 + 3k + 1 \\
& = 2(k + 1)^2 - (k + 1)
\end{align}
$$



$\diamond$
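One induction-free route (my own addition, not from the post): the $k$-th term is $4k-3$, so pairing the sum with its reversal Gauss-style gives $n \cdot \frac{1 + (4n-3)}{2} = \frac{n(4n-2)}{2} = 2n^2 - n$. A direct numeric check:

```python
# Verify 1 + 5 + ... + (4n-3) = 2n^2 - n against the Gauss pairing formula.
for n in range(1, 50):
    s = sum(4 * k - 3 for k in range(1, n + 1))
    assert s == n * (1 + (4 * n - 3)) // 2 == 2 * n**2 - n
print("formula verified for n = 1..49")
```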

calculus - What is wrong with treating $\dfrac{dy}{dx}$ as a fraction?




If you think about the limit definition of the derivative, $dy$ represents $$\lim_{h\rightarrow 0}\left(f(x+h)-f(x)\right)$$ and $dx$ represents $$\lim_{h\rightarrow 0} h.$$ So you have a $\dfrac{\text{number}}{\text{another number}} = \text{a fraction}$, so why can't you treat it as one? Thanks! (By the way, if possible please keep the answers at a Calc AB level.)


Answer



The derivative, when it exists, is a real number (I'm restricting here to real valued functions only for simplicity). Not every real number is a fraction (e.g., $\pi$ is not a fraction), but every real number is trivially a quotient of two real numbers (namely, $x=\frac{x}{1}$). So, in which sense is the derivative a fraction? Answer: it's not. And now, in which sense is the derivative a quotient of two numbers? Ahhh, let's try to answer that then: By definition $f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$. Well, that is not a quotient of two numbers, but rather it is a limit. A limit, when it exists, is a number. This particular limit is a limit of quotients of a particular form (still, not of fractions in general, but quotients of real numbers).



The meaning of the derivative $f'(x)$ is the instantaneous rate of change of the value of $f$ at the point $x$. It is defined as a certain limit. If you now intuitively think of $h$ as an infinitesimal number (warning: infinitesimals do not exist in $\mathbb R$, but they exist in certain extensions of the reals) then you can consider the single expression $\frac{f(x+h)-f(x)}{h}$. In systems where infinitesimals really do exist one can show that this single expression, when the derivative exists, is actually infinitesimally close to the actual derivative $f'(x)$. That is, when $h\ne 0$ is infinitesimal, $f'(x)-\frac{f(x+h)-f(x)}{h}$ is itself infinitesimal. One can then compute with this expression as if it were the derivative (with some care). This can be done informally, and to some extent this is how the creators of early calculus (prior to Cauchy) argued, or it can be done rigorously using any one of a number of different techniques to introduce infinitesimals into calculus. However, getting infinitesimals into the picture comes with a price. There are logical/set-theoretical issues with such models, rendering all of them not very explicit.


Wednesday 8 January 2020

real analysis - Sin(n) and cos(n) dense in $[-1,1]$

We know that $\sin(x)$ and $\cos(x)$ are two functions with values in the closed set $[-1,1]$. How can I prove that $X=\{\sin(n)\mid n\in\mathbb{N}\}$ and $Y=\{\cos(n)\mid n\in\mathbb{N}\}$ are or are not dense in $[-1,1]$?
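Not an answer, but a numeric illustration (my own addition) of why density is plausible; the standard proof uses the equidistribution of $n \bmod 2\pi$:

```python
import math

# Bin [-1, 1] into 40 subintervals of width 0.05 and record which bins
# some sin(n) lands in, for n up to 100000.  Density would mean every
# bin eventually gets hit; this check only suggests it, it proves nothing.
hits = {int((math.sin(n) + 1) / 0.05) for n in range(1, 100001)}
print(sorted(hits) == list(range(40)))
```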

linear algebra - Given a Characteristic Polynomial of a Matrix...



This question contains three parts. I have already answered the first two. The last part is confusing me.




Suppose $A$ is a $4 \times 4$ matrix whose characteristic polynomial is $p(x) = (x - 1)(x + 2)^2(x - 3)$.



Part (a): Show that $A$ is invertible. Find the characteristic polynomial of $A^{-1}$.



We have that the roots of a characteristic polynomial are the eigenvalues of $A$. That is, $\lambda = -2, -2, 1, 3$ are our eigenvalues. The determinant of an $n \times n$ matrix is the product of its eigenvalues. Hence, $\det A = 12$. An $n \times n$ matrix is invertible if and only if its determinant is nonzero. Therefore, $A$ is invertible.



Since none of the eigenvalues are zero, we have that $\lambda$ is an eigenvalue of $A$ if and only if $\frac{1}{\lambda}$ is an eigenvalue of $A^{-1}$. Then, the characteristic polynomial for $A^{-1}$ is $q(x) = (x - 1)(x + 1/2)^2(x - 1/3)$.



Part (b): Find the determinant and trace of $A$ and $A^{-1}$.




This is easy since the determinant of an $n \times n$ matrix is the product of its eigenvalues and the trace of an $n \times n$ matrix is the sum of its eigenvalues.



Part (c): Express $A^{-1}$ as a polynomial in $A$. Explain your answer.



Not really sure what part (c) is getting at.


Answer



By the Cayley-Hamilton theorem, we have $(A-I)(A+2I)^2(A-3I)=0$, that is,
$A^4-9A^2-4A+12I=0$.
Multiply both sides by $A^{-1}$, and be amazed!
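Spelling out the "be amazed" step (my own addition): multiplying $A^4-9A^2-4A+12I=0$ by $A^{-1}$ gives $A^3-9A-4I+12A^{-1}=0$, i.e. $A^{-1}=\frac{1}{12}(-A^3+9A+4I)$. A check on a hypothetical matrix with the required characteristic polynomial (the diagonal matrix with eigenvalues $1, -2, -2, 3$):

```python
import numpy as np

# Any matrix with char. poly. (x-1)(x+2)^2(x-3) works; a diagonal one
# with those eigenvalues is the simplest test case.
A = np.diag([1.0, -2.0, -2.0, 3.0])
I = np.eye(4)
A_inv = (-np.linalg.matrix_power(A, 3) + 9 * A + 4 * I) / 12
print(np.allclose(A_inv, np.linalg.inv(A)))  # the two inverses agree
```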

