Sunday 30 June 2019

What algorithms exist to quickly compute the inverse factorial?

I'm interested in algorithms to quickly compute the inverse factorial.
I've noted that sufficiently large factorials each have a distinct number of digits. How can I use this fact to quickly compute the inverse factorial? Is there a formula,
n = f(n!) = #digits( (n!) )?



I'm mostly interested in the case where we know our input is correct. But, error checking for values that are not factorials would be a bonus. (Perhaps, someone has thought of a way to do the inverse gamma function quickly?)



I'm interested in inputs that have over a million digits, so simply dividing 1,2,3,...,n will not work.
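For what it's worth, the digit-count idea does work: by Stirling's approximation, $\log_{10} n!$ grows strictly and quickly, so for large inputs the length of $n!$ pins down $n$. A minimal sketch in Python (names are mine; it uses `math.lgamma` so no big-integer arithmetic is needed, and it assumes the input really is a factorial):

```python
import math

def digits_of_factorial(n):
    # decimal digit count of n!, from log10(n!) = lgamma(n+1) / ln(10)
    return math.floor(math.lgamma(n + 1) / math.log(10)) + 1

def inverse_factorial_by_digits(d):
    # binary search for the smallest n with digits_of_factorial(n) >= d;
    # for large d this pins down n uniquely, since consecutive factorials
    # differ in length once they are big enough
    lo, hi = 1, 10**8
    while lo < hi:
        mid = (lo + hi) // 2
        if digits_of_factorial(mid) < d:
            lo = mid + 1
        else:
            hi = mid
    return lo
```

A full solution would still verify the candidate, e.g. by comparing $\ln$ of the input against $\operatorname{lgamma}(n+1)$, which also gives the bonus error check for inputs that are not factorials.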

calculus - A limit without using L'Hôpital's rule

Define the number $e$ by $e=\lim_{x\to 0} (1+x)^{1/x}$.



Then, I can prove $\lim_{x\to 0} \frac{e^x-1}{x}=1$.




Let $z=e^x-1$. Then, $x=\ln(z+1)$ and $$\lim_{x\to 0} \frac{e^x-1}{x}=\lim_{z\to 0} \frac{z}{\ln(z+1)}=\frac{1}{\ln e}=1\text{.}$$



Using a similar trick (without L'Hôpital's rule), can we prove $\lim_{x \to 0}\frac{e^x-1-x}{x^2}=\frac{1}{2}$?

inequality - Find the minimum value of $f(x,y,z)=\frac{x^2}{(x+y)(x+z)}+\frac{y^2}{(y+z)(y+x)}+\frac{z^2}{(z+x)(z+y)}$.



Find the minimum value of $f(x,y,z)=\frac{x^2}{(x+y)(x+z)}+\frac{y^2}{(y+z)(y+x)}+\frac{z^2}{(z+x)(z+y)}$ for all nonnegative values of $x,y,z$.



I think that the minimum value is $\frac{3}{4}$, attained when $x=y=z$, but I have no proof.



Answer



By C-S $$\sum_{cyc}\frac{x^2}{(x+y)(x+z)}\geq\frac{(x+y+z)^2}{\sum\limits_{cyc}(x+y)(x+z)}=\frac{\sum\limits_{cyc}(x^2+2xy)}{\sum\limits_{cyc}(x^2+3xy)}\geq\frac{3}{4},$$
where the last inequality reduces to $$\sum_{cyc}(x-y)^2\geq0.$$
The equality occurs for $x=y=z$, which says that we got a minimal value.



Another way:



We need to prove that:
$$4\sum_{cyc}x^2(y+z)\geq3\prod_{cyc}(x+y)$$ or
$$\sum_{cyc}(x^2y+x^2z-2xyz)\geq0$$ or

$$\sum_{cyc}z(x-y)^2\geq0.$$


Saturday 29 June 2019

probability - Doubt with a Question on Linearity of Expectation

Question:
With each purchase at SlurpeeShack, you receive one random piece of the puzzle seen at right.
Once you collect all 12 pieces, you get a free Slurpee!
What is the expected value for the number of purchases you will need to make in order to collect all 12 pieces?



My solution:
The probability of getting any particular piece on a single purchase is $p=1/12$.




The expected number of purchases to collect a given piece $i$ is $E(i)=1/p=12$.



By linearity of expectation: the expected value of the sum of random variables is equal to the sum of their individual expected values, regardless of whether they are independent.



So, why can't we write
$E(X) = E(1) + E(2) + E(3) + \cdots + E(12) = 144$?



The answer is 37.



Can anyone explain where I am going wrong in my approach?
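This is the coupon-collector problem; the stated answer comes from $E = 12\left(\frac{1}{12}+\frac{1}{11}+\cdots+\frac{1}{1}\right) = 12H_{12}\approx 37.24$. The waiting times for the 1st, 2nd, ..., 12th *new* piece are $\frac{12}{12}, \frac{12}{11}, \dots, \frac{12}{1}$, and linearity of expectation applies to those, not to twelve independent 12-purchase waits. A minimal sketch checking this numerically (names mine):

```python
import random
from fractions import Fraction

# exact coupon-collector expectation: with k distinct pieces in hand,
# a new piece arrives with probability (12-k)/12, costing 12/(12-k) draws
exact = sum(Fraction(12, 12 - k) for k in range(12))  # = 12 * H_12

def draws_to_complete(rng=random):
    # simulate purchases until all 12 pieces have appeared
    seen, draws = set(), 0
    while len(seen) < 12:
        seen.add(rng.randrange(12))
        draws += 1
    return draws
```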

limits - Sequence of norms

Suppose that we have a sequence of norms $(\|\cdot\|_n)_{n\geq 1}$ on a finite dimensional space $X$ such that, as functions from $X$ to $\mathbb{R}_+$, they point-wise converge to a limit norm $\|\cdot\|_\infty$. Since we are in a finite-dimensional setting, all of the norms are equivalent among them, and, in particular, all of the norms in the sequence are equivalent to $\|\cdot\|_\infty$. An implication of this fact is that there is a sequence of constants $c_n$ such that
$$ \forall n \geq 1, \forall x\in X, \quad \|x\|_n \geq c_n\|x\|_\infty. $$



Now, the equivalence constants $c_n$ are not necessarily unique, but, since the norms $\|\cdot\|_n$ approach $\|\cdot\|_\infty$ in the limit, I am wondering how can one prove that they can be chosen to have $1$ as a limit (i.e. $\lim_{n\to\infty} c_n = 1$) ?

calculus - Derive the formula for the cosine of the difference of two angles from the dot product formula



My book states that:




The formula for the cosine of the difference of two angles is deduced
as an application of the scalar product of two vectors:




$$\cos(\alpha - \beta) = \cos\alpha\cos\beta+\sin\alpha\sin\beta$$



From this formula, we can deduce the formula for the sine of the
difference:



$$\sin(\alpha-\beta)=\cos\left[\frac{\pi}{2}-(\alpha-\beta)\right]=\cos\left[\left(\frac{\pi}{2}-\alpha\right)-(-\beta)\right]=\cos\left(\frac{\pi}{2}-\alpha\right)\cos(-\beta)+\sin\left(\frac{\pi}{2}-\alpha\right)\sin(-\beta)$$

$$\sin(\alpha-\beta)=\sin\alpha\cos\beta-\cos\alpha\sin\beta$$



Deduce the following expression starting from the formulas above:




  • $\cos(\alpha+\beta)=\cos\alpha\cos\beta-\sin\alpha\sin\beta$




I have two questions:





  1. I don't understand what my book means by




The formula for the cosine of the difference of two angles is deduced as an application of the scalar product of two vectors:




Could you explain to me what this means?





  2. How do I solve the given problem?


Answer



Draw two position vectors, $\mathbf{v}_1$ and $\mathbf{v}_2$ with unit magnitude and at angles $\alpha, \beta$ to the positive $x$-axis. Then the angle between the two is $\alpha - \beta$ (assuming $\alpha > \beta$ w.l.o.g). But $\mathbf{v}_1 \cdot \mathbf{v}_2$ is the cosine of the angle between them. So $\cos (\alpha - \beta) = \mathbf{v}_1 \cdot \mathbf{v}_2$.



But remember that the two vectors lie on the unit circle and have components $\mathbf{v}_1 = \cos \alpha \mathbf{i} + \sin \alpha \mathbf{j}$ and $\mathbf{v}_2 = \cos \beta \mathbf{i} + \sin \beta \mathbf{j}$. By the definition of the dot product $$\cos (\alpha - \beta) = \cos \alpha \cos \beta + \sin \alpha \sin \beta.$$







To solve the given problem: note that $\beta \mapsto -\beta$ gives $$\cos (\alpha - (-\beta)) = \cos \alpha \cos (-\beta) + \sin \alpha \sin (-\beta) = \cos \alpha \cos \beta - \sin \alpha \sin \beta = \cos(\alpha + \beta)$$ using the oddness of $\sin$ and the evenness of $\cos$.


Friday 28 June 2019

number theory - Multiplicative group $(\mathbb Z/p^r)^\times$



I am trying to show that the multiplicative group $(\mathbb Z/p^r)^\times$ is cyclic. I have established that the order of this group is $p^{r-1}(p-1)$. So, to show that it is cyclic, it suffices to produce an element of order $p^{r-1}$ and an element of order $p-1$, for then the group contains cyclic subgroups of these two orders, and since $p^{r-1}$ and $p-1$ are relatively prime, it follows that the group is cyclic. But I am having trouble finding elements with these orders. I tried computing the order of several elements using the binomial formula, but it got pretty messy. Any suggestions for which elements to try and how to prove that they have the desired orders, or for another way to do the proof?


Answer



Here is an outline of a possible proof (provided $p$ is odd):




  • Let $x,y\in \left(\mathbb Z/N\mathbb Z\right)^\times$ be of respective orders $n$ and $m$, with $\gcd(n,m)=1$. Then $xy$ has order $nm$ (modulo $N$).

  • From the previous result, and the fact that $\left(\mathbb Z/p\mathbb Z\right)^\times$ is cyclic, prove that $\left(\mathbb Z/p^2\mathbb Z\right)^\times$ is cyclic.


  • Use induction.


number theory - $3^2+2=11$, $33^2+22=1111$, $333^2+222=111111$, and so on.

$3^2+2=11$



$33^2+22=1111$



$333^2+222=111111$




$3333^2+2222=11111111$



$\vdots$



The pattern here is obvious, but I do not have a proof.




Prove that $\underset{n\text{ }{3}\text{'s}}{\underbrace{333\dots3}}^2+\underset{n\text{ }{2}\text{'s}}{\underbrace{222\dots2}}=\underset{2n\text{ }{1}\text{'s}}{\underbrace{111\dots1}}$ for any natural number $n$.





To be clear, I am not asking for a full proof; I just want a hint on how to start. Thanks.
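A numerical sanity check (not a proof), with the standard hint tucked into the comment:

```python
def identity_holds(n):
    # hint: write the repdigits as fractions of powers of ten:
    # 33...3 (n threes) = 3*(10^n - 1)/9,  22...2 = 2*(10^n - 1)/9,
    # 11...1 (2n ones)  = (10^(2n) - 1)/9, then compare both sides
    threes = int("3" * n)
    twos = int("2" * n)
    ones = int("1" * (2 * n))
    return threes ** 2 + twos == ones
```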

Thursday 27 June 2019

algebra precalculus - Distributive Property Theory question



I've been wondering about the theory behind the distributive property lately.
For example: 2(pi * r^2) is just 2 * pi * r^2.
However, when you add a positive number like +3, you get 2(pi * r^2 + 3). But that isn't just 2 * pi * r^2 + 3.
It's: 2 * pi * r^2 + 2 * 3.
So I was wondering why that is. Why do you only have to multiply once over the whole product (pi * r^2), instead of having to multiply 2 by both pi and r^2?
So then I thought: isn't it just (pi * r^2 + 3) + (pi * r^2 + 3)? Then I tried to simplify that, thinking it would help me understand why... but all that did was make me more confused than when I started. Could someone help me understand, please?


Answer



The distributive property, and most basic properties of real numbers, comes from geometry.

A non-negative value corresponds to the length of a line segment.

Adding values corresponds to placing two segments together and measuring their total length. Since the order in which the segments are placed does not change the total length, addition is commutative. Looking at three segments, the length is the same whether the first two are placed and then the third, or the last two are placed and then the first. Therefore addition is associative.



Multiplying two segments corresponds to taking the area of a rectangle whose sides have the lengths of the two segments. Since swapping the two segments just rotates the rectangle by 90 degrees, which does not change the area, multiplication is commutative.



Consider two rectangles with a common height $h$ and bases $a$ and $b$. Their areas are $ah$ and $bh$, and the sum of the areas is $ah+bh$. Place these two rectangles together so their common heights line up. They now form a single rectangle with base $a+b$ and height $h$, and the area of this rectangle is $(a+b)h$. This means that $ah+bh = (a+b)h$, which is the distributive law.



These laws were, of course, extended to other types of numbers (real, complex, fields, ...), but they all started with geometry.




Off topic but interesting: try to prove that $\sqrt{2}$ is irrational using only geometric concepts and proofs, with no algebra allowed. (I think I'll even propose this as a question.)


complex analysis - Imaginary Part of log(f(z))

Let



$f(z)=z^2+4$



My question is what the set

$M=\{z\in\mathbb C:\operatorname{Im}(\log(f(z))) > 0\}$ looks like.



My attempt is that
$\operatorname{Im}(\log(f(z)))=\arg(f(z)),$
which lets me guess that $\operatorname{Im}(f(z))$ and $\operatorname{Re}(f(z))$ both have to be greater than zero or both smaller than zero, since $\arctan(x) > 0$ if $x>0$.
Can anyone help me with this issue?
Thanks a lot.
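One quick way to experiment (a sketch; `in_M` is my own helper, using the principal branch of `cmath.log`): since $\operatorname{Im}(\log w)=\arg w$, membership in $M$ is just a sign test on the argument of $f(z)$.

```python
import cmath

def in_M(z):
    # Im(log(f(z))) > 0, i.e. the principal argument of z^2 + 4 is positive
    return cmath.log(z * z + 4).imag > 0
```

For example, real $z$ gives $f(z)>0$ with argument $0$, so the real axis is not in $M$, while $z=3i$ gives $f(z)=-5$ with argument $\pi$, which is.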

Wednesday 26 June 2019

abstract algebra - Constructing Finite Field Tables



Could someone please walk me through how to construct finite field tables? My biggest confusion is how to get the elements of the respective fields.




For example, I'm asked to construct the tables with 8, 9, and 16 elements.
My first thought is to write the orders as prime powers: $2^3, 3^2, 2^4$ for each of the tables. However, I don't know how to come up with the polynomials.



Terras makes this statement in her book: "If $\alpha$ is a root of $f(x)$, the field obtained by adjoining $\alpha$ to $\mathbb{F}_p$ is $\mathbb{F}_q\cong \mathbb{F}_p[x]/f(x)$...Example: $\mathbb{F}_4\cong \mathbb{F}_2[x]/(x^2+x+1)=\mathbb{F}_2(\alpha)=\{0,1,\alpha,\alpha+1\}$"



How does she get that set of polynomials? Where does the polynomial that the cosets are formed from come from? For $\mathbb{F}_3$, what elements would I have in my set? I'm new to field theory, and this book makes a lot of jumps.


Answer



To combine and maybe condense the content of the comments, let’s say this:



If you want a (the) field with $p^m$ elements, you start with the field $\Bbb F_p=\Bbb Z/p\Bbb Z$. For simplicity, let’s call this field $k$. Then you find and fix a monic polynomial $f(x)$ of degree $m$ that’s irreducible over $k$.




Now your elements of $\Bbb F_{p^m}$, I’ll call this bigger field $K$, may be represented as polynomials of degree less than $m$, say $a_0+a_1x+\cdots+a_{m-1}x^{m-1}$, with coefficients in $k$. You add two of these things in the obvious way; for multiplication, you first do the standard obvious thing, then take your product $P(x)$, apply Euclidean division of polynomials to it, dividing by $f(x)$, and use the remainder (necessarily of degree less than $m$) as the product in $K$.

Example: $p^m=9$, $k=\Bbb F_3$ with elements $\{0,1,2\}$. Since $-1=2$ is not a square there, you can use $x^2+1$ as your irreducible polynomial, and call your basic element now $i$ instead of $x$ — makes things very transparent. So now $(2+i)+(1+i)=2i$, but $(2+i)(1+i)=1$. Easy now to get the reciprocal of an element, too, same as in high-school. For higher degree than $2$, it’s a bit more of a pain to get reciprocal, but there are tricks that I won’t go into here.
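The $\Bbb F_9$ example is easy to play with in code; here is a minimal sketch (function names mine), representing $a+bi$ as a pair $(a,b)$ of residues mod $3$ with $i^2=-1$:

```python
# arithmetic in F_9 = F_3[x]/(x^2 + 1), elements written a + b*i
def f9_add(u, v):
    return ((u[0] + v[0]) % 3, (u[1] + v[1]) % 3)

def f9_mul(u, v):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc) i, using i^2 = -1, reduced mod 3
    a, b = u
    c, d = v
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)
```

This reproduces the examples above: `f9_add((2,1),(1,1))` gives $(0,2)$, i.e. $2i$, and `f9_mul((2,1),(1,1))` gives $(1,0)$, i.e. $1$.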


algebra precalculus - What equation intersects only once with $f(x)=sqrt{1-(x-2)^2}$




Given $f(x)=\sqrt{1-(x-2)^2}$, I have to find which line touches the circle only once (exactly one intersection) and passes through $P(0,0)$.



So the linear equation must be $y=mx$ because $n=0$.



I have a system of 2 equations:
\begin{align}
y&=\sqrt{1-(x-2)^2}\\
y&=mx
\end{align}




So I equal both equations and I get \begin{align}
mx&=\sqrt{1-(x-2)^2}\\
m&=\frac{\sqrt{1-(x-2)^2}}{x}
\end{align}



$m$ can be put in the $y=mx$ equation, which equals to:
\begin{align}
y&=\left(\frac{\sqrt{1-(x-2)^2}}{x}\right)x\\
&=\sqrt{1-(x-2)^2}
\end{align}




But that equation has infinitely many solutions, and I want only the line that has exactly $1$ intersection.




What is the right way to find this? And how can it be calculated?



Answer



As you have in your post, we have $y = mx$ as the straight line. For this line to touch the semi-circle, we need that $y = mx$ and $y = \sqrt{1 - (x-2)^2}$ must have only one solution. This means that the equation $$mx = \sqrt{1 - (x-2)^2}$$ must have only one solution.



Hence, we need to find $m$ such that $m^2x^2 = 1 - (x-2)^2$ has only one solution.




$(m^2 + 1)x^2 -4x +4 = 1 \implies (m^2+1) x^2 - 4x + 3 = 0$.



A quadratic equation always has two solutions. The two solutions collapse to a single solution exactly when the discriminant of the quadratic equation is $0$. This is seen from the following reasoning.



For instance, if we have $ax^2 + bx+c = 0$, then we get that $$x = \frac{-b \pm \sqrt{b^2 -4ac}}{2a}$$
i.e. $\displaystyle x_1 = \frac{-b + \sqrt{b^2 -4ac}}{2a}$ and $\displaystyle x_2 = \frac{-b - \sqrt{b^2 -4ac}}{2a}$ are the two solutions. If the two solutions to collapse into a single solution i.e. if $x_1 = x_2$, we get that $$ \frac{-b + \sqrt{b^2 -4ac}}{2a} = \frac{-b - \sqrt{b^2 -4ac}}{2a}$$ This gives us that $\displaystyle \sqrt{b^2 -4ac} = 0$. $D = b^2 - 4ac$ is called the discriminant of the quadratic.



The discriminant of the quadratic equation, $(m^2+1) x^2 - 4x + 3 = 0$ is $D = (-4)^2 - 4 \times 3 \times (m^2+1)$.




Setting the discriminant to zero gives us that $(-4)^2 - 4 \times 3 \times (m^2 + 1) =0$ which gives us $ \displaystyle m^2 + 1 = \frac43 \implies m = \pm \frac1{\sqrt{3}}$.



Hence, the two lines from origin that touch the circle are $y = \pm \dfrac{x}{\sqrt{3}}$.



Since you have a semi-circle, the only line that touches the circle is $\displaystyle y = \frac{x}{\sqrt{3}}$. (Thanks to @Joe Johnson 126 for pointing this out).
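A quick numerical check of the discriminant computation (a minimal sketch; variable names mine):

```python
import math

m = 1 / math.sqrt(3)
# quadratic from the answer: (m^2 + 1) x^2 - 4x + 3 = 0
disc = (-4) ** 2 - 4 * (m ** 2 + 1) * 3

# at tangency the quadratic has a double root x0 = -b / (2a)
x0 = 4 / (2 * (m ** 2 + 1))
y0 = m * x0
# the tangency point should lie on the circle (x-2)^2 + y^2 = 1
on_circle = (x0 - 2) ** 2 + y0 ** 2
```

The double root is at $x_0=\frac{3}{2}$, and the tangency point indeed lies on the circle $(x-2)^2+y^2=1$.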


Tuesday 25 June 2019

integration - References on Breaking Integrals into Logarithms

I've seen that (tough) integrals may be broken into answers in logarithmic form. In other words, many integrals have an alternate answer that is in the form of a function involving logarithms. An example is this question, which gives an alternate answer in terms of logarithms.



I'd like to know much more about breaking integrals into logarithms. Is there a systematic method that can accomplish this without guesswork? I've read a reference (actually pictures of a book, I believe) that stated something like: any such integral can be broken into this logarithmic form. I'd like to know what is known about this, and I'd be delighted if someone could point me to this research.



I'm looking into an algorithm to do very tough integration, and wonder whether this technique is anywhere close to feasible.

Monday 24 June 2019

algebra precalculus - How to determine linear function from "at a price (p) of 220 the demand is 180"




In my math book I can look up the answers to the exercises, and as I do, I have no idea how I would solve the following example. Probably my mind is stuck, as I can't find a new way to think about the issue.



"The supply function of a good is a linear function. At a price (p) of 220 the demand is 180 units. At a price of 160 the demand is 240 units."




  1. Determine the demand function.



"Also the supply function is linear. At a price of 150 the supply is 100 units and at a price of 250 the supply is 300 units".





  2. Determine the supply function.



Could someone explain to me how I would approach to solve these two questions as the book doesn't provide the explanation but only the answers? Thank you.


Answer



You know that the demand function is a linear function of price $p$, say $D(p)=\alpha\cdot p+\beta$ for suitable parameters $\alpha,\beta\in\mathbb R$. From the conditions given in your problem, you know that



$$
D(220)=\boldsymbol{220\alpha+\beta=180}\qquad\text{and}\qquad

D(160)=\boldsymbol{160\alpha+\beta=240}.
$$



From the bold equations (a system of two linear equations in two variables), one simply obtains the coefficients $\alpha=-1$, $\beta=400$, which enables you to write down the demand function as $D(p)=400-p$.



In fact, the same can be done for the supply function.
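The two-point computation can be packaged as a tiny helper (a sketch; the function name is mine):

```python
def line_through(p1, q1, p2, q2):
    # slope and intercept of the line q = slope * p + intercept
    # passing through the points (p1, q1) and (p2, q2)
    slope = (q2 - q1) / (p2 - p1)
    return slope, q1 - slope * p1

demand = line_through(220, 180, 160, 240)   # D(p) = 400 - p
supply = line_through(150, 100, 250, 300)   # S(p) = 2p - 200
```

So $D(p)=400-p$ and, from the second pair of points, $S(p)=2p-200$.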


linear algebra - Work out the values of $a$ and $b$ (in the question below):

$1^2 = 1$



$1^2 + 2^2 = 5$



$1^2 + 2^2 + 3^2 = 14$



$1^2 + 2^2 + 3^2 + 4^2 = 30$




$1^2 + 2^2 + 3^2 + 4^2 + \cdots + n^2 = an^3 + bn^2 + \frac{n}{6}$



Work out the values of $a$ and $b$.
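One way to start: plug $n=1$ and $n=2$ into the conjectured identity to get two linear equations in $a$ and $b$. A minimal sketch with exact rationals (names mine):

```python
from fractions import Fraction

# n=1:  a + b + 1/6 = 1       (since 1^2 = 1)
# n=2:  8a + 4b + 2/6 = 5     (since 1^2 + 2^2 = 5)
# eliminate b: (second) - 4*(first rearranged) gives 4a = 4/3
a = (Fraction(5) - Fraction(2, 6) - 4 * (Fraction(1) - Fraction(1, 6))) / 4
b = Fraction(1) - Fraction(1, 6) - a

def matches(n):
    # check a*n^3 + b*n^2 + n/6 against the actual sum of squares
    return a * n**3 + b * n**2 + Fraction(n, 6) == sum(k * k for k in range(1, n + 1))
```

This yields $a=\frac{1}{3}$, $b=\frac{1}{2}$, matching the known formula $\sum_{k=1}^n k^2 = \frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}$.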

summation - How Are the Solutions for Finite Sums of Natural Numbers Derived?




So, I've been learning set theory on my own (Lin, Shwu-Yeng T., and You-Feng Lin. Set Theory: An Intuitive Approach. Houghton Mifflin Co., 1974.) and have come across finite sums of natural numbers. Since I took Algebra II many years ago, I've known the results of these sums for the purpose of evaluating summations. (I also know of the formula, and its flaws, which states that the sum of all the natural numbers is $-1/12$.) Just for reference, I've listed six finite series of natural numbers below (they are the six listed in the 44-year-old textbook I'm using):



$$\sum_{k=1}^{n}k=\frac{n(n+1)}{2}$$
$$\sum_{k=1}^{n}k(k+1)=\frac{n(n+1)(n+2)}{3}$$
$$\sum_{k=1}^{n}k^2=\frac{n(n+1)(2n+1)}{6}=\frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}$$
$$\sum_{k=1}^{n}k^3=\frac{n^2(n+1)^2}{4}=\frac{n^4}{4}+\frac{n^3}{2}+\frac{n^2}{4}$$
$$\sum_{k=1}^{n}(2k-1)=n^2$$
$$\sum_{k=1}^{n}\frac{1}{k(k+1)}=\frac{n}{n+1}$$




Now that I've started learning set theory, I know how to prove these results using mathematical induction (which, admittedly, I had a lot of fun doing). However, I still have a few questions. Firstly, through my own research I found a list of mathematical series on Wikipedia, but it does not include all the series listed in the textbook. Is there a list elsewhere of such series of natural numbers, and if so, where? (Now that I think of it, there may well be infinitely many such series; even so, not all of them would be practical, as many could be simplified into general cases.) Second, and most important: although I know how to prove these results using mathematical induction, I do not know how to derive them. How would one actually go about deriving such a formula? The method could not possibly be trial and error, applying induction to random expressions. I cannot think of a method myself at this time, but I know there must be some way of doing this. And lastly, if you can think of a better title for the question, please let me know, since I had trouble coming up with a suitable one. Thank you in advance to whoever is able to help!


Answer



Note that



$$\sum_{k=1}^{n}k=\frac{n(n+1)}{2}$$



is a classical result which can be easily proved by the following trick



(figure from the original post omitted)




and also



$$\sum_{k=1}^{n}k^2=\frac{n(n+1)(2n+1)}{6}=\frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}$$



can be derived by a similar trick in 3D



(figure from the original post omitted)



Note that




$$\sum_{k=1}^{n}k(k+1)=\frac{n(n+1)(n+2)}{3}=\frac{n^3}{3}+n^2+\frac{2n}{3}$$



is simply



$$\sum_{k=1}^{n}k(k+1)=\sum_{k=1}^{n}k^2+\sum_{k=1}^{n}k$$



and



$$\sum_{k=1}^{n}(2k-1)=n^2$$




is



$$\sum_{k=1}^{n}(2k-1)=2\sum_{k=1}^{n} k -\sum_{k=1}^{n} 1=2\left(\sum_{k=1}^{n} k\right) - n$$



More generally, this kind of sum can be computed by Faulhaber's formula, and each can be derived from the previous ones by a nice telescoping trick.



For example for $\sum k^2$ note that



$$(k+1)^3-k^3=3k^2+3k+1 \implies (n+1)^3-1=3\sum_{k=1}^{n} k^2+3 \sum_{k=1}^{n} k +n $$




from which $\sum_{k=1}^{n} k^2$ can be derived.



The latter argument proves that $\sum_{k=1}^{n} k^m$ is given by a polynomial in $n$ of degree $m+1$.



For the last sum $\sum_{k=1}^{n}\frac{1}{k(k+1)}=\frac{n}{n+1}$ refer to the discussion by Ross Millikan.
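All six closed forms quoted in the question are easy to spot-check numerically (a minimal sketch):

```python
from fractions import Fraction

def check_sums(n):
    # verify each closed form from the question against a direct sum
    ks = range(1, n + 1)
    assert sum(ks) == Fraction(n * (n + 1), 2)
    assert sum(k * (k + 1) for k in ks) == Fraction(n * (n + 1) * (n + 2), 3)
    assert sum(k ** 2 for k in ks) == Fraction(n * (n + 1) * (2 * n + 1), 6)
    assert sum(k ** 3 for k in ks) == Fraction(n ** 2 * (n + 1) ** 2, 4)
    assert sum(2 * k - 1 for k in ks) == n ** 2
    assert sum(Fraction(1, k * (k + 1)) for k in ks) == Fraction(n, n + 1)
```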


Sunday 23 June 2019

calculus - Evaluating $int_0^infty frac {cos {pi x}} {e^{2pi sqrt x} - 1} mathrm d x$



I am trying to show that$$\displaystyle \int_0^\infty \frac {\cos {\pi x}} {e^{2\pi \sqrt x} - 1} \mathrm d x = \dfrac {2 - \sqrt 2} {8}$$



I have verified this numerically on Mathematica.




I have tried substituting $u=2\pi\sqrt x$ then using the cosine Maclaurin series and then the $\zeta \left({s}\right) \Gamma \left({s}\right)$ integral formula but this doesn't work because interchanging the sum and the integral isn't valid, and results in a divergent series.



I am guessing it is easy with complex analysis, but I am looking for an elementary way if possible.


Answer



This integral is one of Ramanujan's in his Collected Papers where he also shows the connection with the sin case.



Define $$\int_{0}^{\infty}\frac{\cos(\frac{a\pi x}{b})}{e^{2\pi \sqrt{x}}-1}dx$$ where $a$ and $b$ are both odd; in this case, $a=b=1$.




Then, $$\displaystyle \frac{1}{4}\sum_{k=1}^{b}(b-2k)\cos\left(\frac{k^{2}\pi a}{b}\right)-\frac{b}{4a}\sqrt{b/a}\sum_{k=1}^{a}(a-2k)\sin\left(\frac{\pi}{4}+\frac{k^{2}\pi b}{a}\right)$$



Letting $a=b=1$ recovers your posted result.


Saturday 22 June 2019

probability - Finding the mean number of times a die is thrown if it is thrown until a number at least as high as the result of the first throw is obtained



A die is thrown and the number is noted. Then the die is thrown again repeatedly until a number at least as high as the number obtained on the first throw is thrown. Find the mean number of times the die is thrown, including the first throw.







The answer is $3.45$, but I am getting $2.53$.


Answer



If your first roll was a $4$, then for each roll thereafter there is a $\frac{3}{6}$ chance to roll a number at least as large as $4$. In this specific case, the expected number of rolls until doing so is $\frac{6}{3}=2$.



In general, if you have chance $p$ for success, it will take on average $\frac{1}{p}$ many independent attempts to get your first success.



Noting that having rolled a four as your first roll only accounts for $\frac{1}{6}$ of the time and calculating the rest of the related probabilities and finally accounting for the initial roll we get the final answer.




$1+\frac{1}{6}(\frac{6}{1}+\frac{6}{2}+\frac{6}{3}+\frac{6}{4}+\frac{6}{5}+\frac{6}{6})=3.45$ The $1$ comes from the initial roll, the $\frac{1}{6}$ comes from the chance to be in each respective case, and each $\frac{6}{k}$ comes from the expected number of rolls until rolling a $7-k$ or greater.
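Both the exact computation and a simulation agree (a minimal sketch; names mine):

```python
import random
from fractions import Fraction

# exact: 1 initial throw + (1/6) * sum over first-roll values of
# the expected wait 6/k for a roll of at least 7-k
exact = 1 + Fraction(1, 6) * sum(Fraction(6, k) for k in range(1, 7))

def throws_needed(rng=random):
    # simulate one round: first throw, then throw until >= first
    first = rng.randint(1, 6)
    throws = 1
    while True:
        throws += 1
        if rng.randint(1, 6) >= first:
            return throws
```

The exact value is $\frac{69}{20}=3.45$.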




Friday 21 June 2019

trigonometry - Sides of a triangle are in Arithmetic Progression, then find $\tan (\alpha+ \frac{\beta}{2})$

The sides of a triangle are in Arithmetic Progression. If the smallest angle of the triangle is $\alpha$ and largest angle of the triangle exceeds the smallest angle by $\beta$, then find the value of $\tan (\alpha+ \frac{\beta}{2})$



Would it be correct to assume the sides of the triangle are $1,2,3$ and then apply the cosine rule to find the angles? Or could someone propose a better approach?

calculus - Power series solution for ODE



The ODE I have is $$y'(x)+e^{y(x)}+\frac{e^x-e^{-x}}{4}=0, \hspace{0.2cm} y(0)=0$$

I want to determine the first five terms (coefficients $a_0,\ldots, a_5$) of the power series solution $$y(x)=\sum_{k=0}^{\infty} a_kx^k$$ So far, I know that $$y'(x)=\sum_{k=1}^{\infty} a_kkx^{k-1}$$
Now I plug these back into the equation and get:
$$\sum_{k=1}^{\infty} a_kkx^{k-1} + e^{\sum_{k=0}^{\infty} a_kx^k} + \frac{e^x-e^{-x}}{4}=0.$$ Now I'm not sure how to continue with this. Please help.


Answer



Since you just need a few terms, setting $$y=\sum_{i=1}^6 a_i x^i$$ (because of the condition $y(0)=0$) you could develop $e^y$ as a Taylor series around $x=0$ and get
$$e^y=1+a_1 x+\frac{1}{2} \left(a_1^2+2 a_2\right) x^2+\frac{1}{6} \left(a_1^3+6 a_2 a_1+6
a_3\right) x^3+$$ $$\frac{1}{24} \left(a_1^4+12 a_2 a_1^2+24 a_3 a_1+12 a_2^2+24
a_4\right) x^4+$$ $$\frac{1}{120} \left(a_1^5+20 a_2 a_1^3+60 a_3 a_1^2+60 a_2^2
a_1+120 a_4 a_1+120 a_2 a_3+120 a_5\right) x^5+$$ $$\frac{1}{720} \left(a_1^6+30 a_2
a_1^4+120 a_3 a_1^3+180 a_2^2 a_1^2+360 a_4 a_1^2+720 a_2 a_3 a_1+720 a_5

a_1+120 a_2^3+360 a_3^2+720 a_2 a_4+720 a_6\right) x^6$$



Expanding the term $\frac{e^x-e^{-x}}{4}$ as a Taylor series too, the differential equation then becomes
$$(a_1+1)+\left(a_1+2 a_2+\frac{1}{2}\right) x+\frac{1}{2} \left(a_1^2+2 a_2+6
a_3\right) x^2+\frac{1}{12} \left(2 a_1^3+12 a_2 a_1+12 a_3+48 a_4+1\right)
x^3+$$ $$\frac{1}{24} \left(a_1^4+12 a_2 a_1^2+24 a_3 a_1+12 a_2^2+24 a_4+120
a_5\right) x^4+$$ $$\frac{1}{240} \left(2 a_1^5+40 a_2 a_1^3+120 a_3 a_1^2+120 a_2^2
a_1+240 a_4 a_1+240 a_2 a_3+240 a_5+1440 a_6+1\right) x^5+$$ $$\frac{1}{720}
\left(a_1^6+30 a_2 a_1^4+120 a_3 a_1^3+180 a_2^2 a_1^2+360 a_4 a_1^2+720 a_2
a_3 a_1+720 a_5 a_1+120 a_2^3+360 a_3^2+720 a_2 a_4+720 a_6\right)

x^6=0$$



Cancelling the coefficients lead to $$a_1=-1\qquad a_2=\frac{1}{4}\qquad a_3=-\frac{1}{4}\qquad a_4=\frac{7}{48}\qquad a_5=-\frac{19}{160}$$



I hope I did not make any mistake since my results do not coincide with DaveNine's answer.


algebra precalculus - How can $\frac{4}{3} \times 3=4$ if $\frac{4}{3}$ is $1.3$?

OK, use your closest calculator and type $\frac{4}{3}$, which gives $1.3333333333$; then multiply it by $3$, which gives $3.9999999999$. But then type $\frac{4}{3} \times 3$ and the calculator says $4$. How? How can it be $4$ if $\frac{4}{3}$ is $1.3333333333$ and multiplying that by $3$ gives $3.9999999999$?

elementary number theory - Exponent of Prime in a Factorial

I was just trying to work out the exponent of $7$ in the number $343!$. I think the right technique is $$\frac{343}{7}+\frac{343}{7^2}+\frac{343}{7^3}=57.$$ If this is right, can the technique be generalized: for $p$ a prime number and $n$ any positive integer, is the exponent of $p$ in $n!$ $$\sum_{k=1}^\infty\left\lfloor\frac{n}{p^k}\right\rfloor\quad ?$$ Here, $\lfloor\cdot\rfloor$ denotes the greatest integer less than or equal to the argument.




Obviously the sum is finite, but I didn't know if it was correct (since its veracity depends on my first solution anyway).
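Yes, this is Legendre's formula. A minimal implementation (name mine):

```python
def legendre_exponent(n, p):
    # exponent of the prime p in n!: sum of floor(n / p^k) over k >= 1,
    # stopping once p^k exceeds n (all further terms are zero)
    total, power = 0, p
    while power <= n:
        total += n // power
        power *= p
    return total
```

`legendre_exponent(343, 7)` returns $49+7+1=57$, matching the computation above.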

Proof involving functional equation



I'm trying to prove that if $$f(x+n)=f(x)f(n)$$ for all $x\in \Bbb R$ and $n \in \Bbb N$, then it also holds for $x,n \in \Bbb R$. One "argument" I came up with was regarding the symmetry. There's no reason why one should be constrained to the integers, while the other can be any real number, but that's not a proper argument.






Another thing I thought of is this: If we set $n=1$ then we get $$\begin{align} f(x+1) &= f(1)f(x) \tag{1} \end{align}$$ which is true for all $x \in \Bbb R$, now if we instead set $x=1$ then we get $f(n+1)=f(n)f(1)$ which must also be true for $n \in \Bbb R$ because $(1)$ is. What keeps me from being satisfied is that $n\in \Bbb R$ under the assumption that $x=1$.




Is my reasoning valid or not?



Edit: Forgot an important bit of information: $f$ is a continuous function.


Answer



Your proof is not valid; you are noticing that if $x$ is an integer then $n$ could vary anywhere in the reals, but this is just stating that the relation $f(x+n)=f(x)f(n)$ has a symmetry where we swap $x$ and $n$ and doesn't tell us anything about whether $f(x+y)=f(x)f(y)$ when both are real.



More directly, the statement you're trying to prove is false, so obviously your proof of it is not valid. Let $f$ be the function
$$f(x)=a^{x+g(x)}$$
where $g(x)$ is a function such that $g(x+1)=g(x)$ and $g(0)=0$; clearly, for any integer $n$, it holds that $g(x+n)=g(x)$ and $g(n)=0$. Then, for integer $n$ and real $x$, it holds that
$$f(x+n)=a^{x+n+g(x+n)}$$

but we can write $x+n+g(x+n)=(x+g(x))+(n+g(n))$ so we have



$$f(x+n)=a^{x+g(x)}a^{n+g(n)}=f(x)f(n).$$
However, it's easy to choose reals for which $f(x+y)=f(x)f(y)$ does not hold; for instance, choosing $x=y=\frac{1}2$ and letting $k=g(\frac{1}2)+\frac{1}2$, we get
$$f(1)=f\left(\frac{1}2\right)f\left(\frac{1}2\right)$$
$$a = a^k\cdot a^k = a^{2k}$$
which does not hold if $a\neq 1$ and $g(\frac{1}2)\neq 0$.
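A concrete instance of this counterexample (a sketch; I choose $a=2$ and $g(x)=\sin(2\pi x)$, which has period $1$ and $g(0)=0$; since this particular $g$ also vanishes at $\tfrac12$, the failure is exhibited at $x=y=\tfrac14$ instead):

```python
import math

a = 2.0

def g(x):
    # period 1, g(0) = 0, and g(n) = 0 for every integer n
    return math.sin(2 * math.pi * x)

def f(x):
    return a ** (x + g(x))

# f(x + n) = f(x) f(n) holds when n is an integer ...
lhs, rhs = f(0.3 + 5), f(0.3) * f(5)
# ... but f(x + y) = f(x) f(y) fails for non-integers, e.g. x = y = 1/4
bad_lhs, bad_rhs = f(0.25 + 0.25), f(0.25) * f(0.25)
```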


Thursday 20 June 2019

real analysis - Proving that $f(x)=\frac{1}{x^2 \ln x}$ is Lebesgue measurable on $(2, + \infty)$

I have that a set $E$ is Lebesgue measurable if the outer measure:
$$\mu^*(E)=\inf\left\{\sum_{i} \mu (I_i) : E \subseteq I_1 \cup I_2 \cup \cdots \cup I_n,\ I_i\ \text{intervals}\right\}$$



satisfy the three properties of measure.
Proving that $$f(x)=\frac{1}{x^2 \ln x} $$ is Lebesgue measurable would suggest that I must prove that the set $f((2,+\infty))$ satisfies these properties. Is this correct, and how do I go about proving this? Maybe I can use the fact that the function is a continuous bijection, so the image of an open set would also be open, and this would lie in the Borel $\sigma$-algebra. Help?

calculus - How to show that $\sqrt{x}$ grows faster than $\ln{x}$.




So I have the limit $$\lim_{x\rightarrow \infty}\left(\frac{1}{2-\frac{3\ln{x}}{\sqrt{x}}}\right)=\frac{1}2,$$ and I now want to motivate why $\frac{3\ln{x}}{\sqrt{x}}\rightarrow0$ as $x\rightarrow\infty.$ I came up with two possibilities:




  1. Algebraically it follows that $$\frac{3\ln{x}}{\sqrt{x}}=\frac{3\ln{x}}{\frac{x}{\sqrt{x}}}=\frac{3\sqrt{x}\ln{x}}{x}=3\sqrt{x}\cdot\frac{\ln{x}}{x},$$
    Now, since the last factor is a standard limit equal to zero as $x$ approaches infinity, the limit of the entire thing should be $0$. However, isn't it a problem that $\sqrt{x}\rightarrow\infty$ as $x\rightarrow \infty$, giving us the indeterminate form $\infty\cdot 0$?


  2. One can, without having to do the arithmetic above, directly argue that the function $f_1:x\mapsto \sqrt{x}$ increases faster than the function $f_2:x\mapsto\ln{x}$. Is this motivation sufficient? And is the proof below correct?




We have that $D(f_1)=\frac{1}{2\sqrt{x}}$ and $D(f_2)=\frac{1}{x}$. In order to compare these two derivatives, we look at the interval $(0,\infty).$ Since $D(f_1)\geq D(f_2)$ for $x\geq4$, and $f_1(4)=2>\ln 4=f_2(4)$, it follows that $f_1>f_2$ for $x>4.$



Answer




  1. This is a standard result from high school

  2. If you nevertheless want to deduce it from the limit of $\dfrac{\ln x}x$, use the properties of logarithm:
    $$\frac{\ln x}{\sqrt x}=\frac{2\ln(\sqrt x)}{\sqrt x}\xrightarrow[\sqrt x\to\infty]{}2\cdot 0=0$$


Wednesday 19 June 2019

analysis - Does specific function exist?




Check whether there exists a function $f:\mathbb{R}^2\to\mathbb{R}$ such that $f$ has directional derivatives at $(0,0)$ in every direction, and $(0,0)$ is a point of discontinuity.


Answer



Put $f$ equal to zero everywhere except on the curve $y=x^2$ minus the origin, where it is $1$. Try to fill in the details.



Tuesday 18 June 2019

Evaluating limit $\lim_{x\rightarrow\infty}2x(a+x(e^{-a/x}-1))$



I'm stumped by the following limit: $\lim_{x\rightarrow\infty}2x(a+x(e^{-a/x}-1))$



Mathematica gives the answer as $a^2$, but I'd like to know the evaluation steps. I've been staring at it for a while but can't figure it out. It seems like you can't use L'Hôpital's rule on this one directly. I've tried the substitution $y=1/x$, but that didn't help. I am probably not seeing something simple. Any hints?



Answer



HINT:



$$
\lim\limits_{x\to\infty} 2x \left(a + x \left(\mathrm{e}^{-a/x}-1\right)\right) = 2 a^2 \lim\limits_{x\to\infty} \frac{ \exp\left(-\frac{a}{x}\right) - 1 + \frac{a}{x} }{\left(\frac{a}{x}\right)^2} = 2 a^2 \lim\limits_{y\to 0^+} \frac{ \exp\left(-y\right) - 1 + y }{y^2}
$$
Now use l'Hospital's rule twice.
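For readers who want to see the convergence concretely, here is a small numerical check of the hint's conclusion; this is evidence, not a proof, and the value $a=3$ and the sample points are arbitrary choices:

```python
import math

# Evaluate 2x(a + x(exp(-a/x) - 1)) for growing x; the hint predicts the
# limit a^2. Using expm1 avoids catastrophic cancellation once a/x is tiny.
a = 3.0

def g(x):
    return 2 * x * (a + x * math.expm1(-a / x))

values = [g(10.0 ** k) for k in range(2, 7)]
```

The values settle near $a^2=9$, and each sample is closer than the previous one.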


trigonometry - Understanding Euler's Identity



I would like to understand one specific moment in Euler's Identity, namely




$$e^{j\theta}=\cos(\theta)+j\sin(\theta)$$



where $j=\sqrt{-1}$. We also know that



$$e^{j2(\pi)}=\cos(2\pi)+j\sin(2\pi)$$



but $\sin(2\pi)=0$ and $\cos(2\pi)=1$, and $1=e^{0}$, so we get that $e^{j2(\pi)}=e^{0}$.



But then $j2\pi=0$, which would mean $j=0$, while on the other hand $j=\sqrt{-1}$. My question is: why are we allowed to manipulate these symbols in the identity, when doing so seems to lead to such a strange equality?


Answer




The mistake is that $\exp: \mathbb C \to \mathbb C$ is no longer a bijection. Thus
$$e^{2\pi i} = e^{0} \not\Rightarrow 2\pi i = 0$$
In general
$$\exp(z) = \exp(z+2\pi i) \qquad \forall\ z\in\mathbb C$$
because of the periodicity of $\sin$ and $\cos$ and the definition
$$\exp(z) = \underbrace{\exp(\Re z)}_{\exp: \mathbb R\to\mathbb R} \cdot (\cos(\Im z) + i\sin(\Im z))$$
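The non-injectivity can also be observed numerically; the sketch below checks the $2\pi i$-periodicity at a few arbitrary sample points:

```python
import cmath
import math

# exp on C is 2*pi*i-periodic, so equal values of exp never force equal
# arguments. The sample points are arbitrary.
period = 2j * math.pi
samples = [0, 1 + 2j, -3.5 + 0.25j]
diffs = [abs(cmath.exp(z) - cmath.exp(z + period)) for z in samples]
```

Each difference is zero up to rounding, even though $z$ and $z+2\pi i$ are distinct.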


calculus - Evaluate $\lim_{x \to 0} \left( \frac{1}{\sin^2 x} - \frac{1}{x^2} \right)$



I tried l'Hospital but that will require a lot (and I mean A LOT!!!) of differentiating




Is there a shortcut?
$$\lim_{x \to 0} \left( \frac{1}{\sin^2 x} - \frac{1}{x^2} \right)$$



Thanks in advance


Answer



Of course there is!



$$\sin x \sim x - \frac{x^3}{6}$$




$$\sin^2 x \sim x^2 - \frac{x^4}{3}$$



So $$\lim_{x \to 0} \left( \frac{1}{\sin^2 x} - \frac{1}{x^2} \right)$$
$$= \lim_{x \to 0} \frac{x^2 - \sin^2 x}{x^2 \cdot \sin^2 x} = \lim_{x \to 0} \frac{\frac{x^4}{3}}{x^4 - \frac{x^6}{3}} = \frac{1}{3}$$



(because $x^4 - \frac{x^6}{3} \sim x^4$ as $x \to 0$)


real analysis - Are there any functions that are (always) continuous yet not differentiable? Or vice-versa?



It seems to me that functions that are continuous are always differentiable. I can't imagine one that is not. Are there any examples of functions that are continuous, yet not differentiable?



The other way around seems a bit simpler -- a differentiable function is obviously always going to be continuous. But are there any that do not satisfy this?


Answer



It's easy to find a function which is continuous but not differentiable at a single point, e.g. $f(x) = |x|$ is continuous but not differentiable at $0$.



Moreover, there are functions which are continuous but nowhere differentiable, such as the Weierstrass function.




On the other hand, continuity follows from differentiability, so there are no differentiable functions which aren't also continuous. If a function is differentiable at $x$, then the limit $(f(x+h)-f(x))/h$ must exist (and be finite) as $h$ tends to 0, which means $f(x+h)$ must tend to $f(x)$ as $h$ tends to 0, which means $f$ is continuous at $x$.
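The failure of differentiability for $f(x)=|x|$ at $0$ can be seen directly from one-sided difference quotients; a minimal sketch:

```python
# One-sided difference quotients of f(x) = |x| at 0: the right-hand ones
# are all +1 and the left-hand ones all -1, so the two-sided limit (the
# derivative) cannot exist, even though f is continuous at 0.
def quotient(h):
    return (abs(h) - abs(0)) / h

right = [quotient(10.0 ** -k) for k in range(1, 8)]
left = [quotient(-(10.0 ** -k)) for k in range(1, 8)]
```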


Monday 17 June 2019

calculus - Evaluate $\lim_{x \to 0} \frac{1-\cos(\sin(4x))}{\sin^2(\sin(3x))}$ without L'Hospital




$$\lim_{x \to 0} \frac{1-\cos(\sin(4x))}{\sin^2(\sin(3x))}$$



How can I evaluate this limit without using the L'Hospital Rule? I've expanded $\sin(4x)$ as $\sin(2x+2x)$, $\sin(3x) = \sin(2x + x)$, but none of these things worked.


Answer



The simplest way is to note $\sin x \simeq x$ for small $x$ and $\cos x \simeq 1-\frac{x^2}{2}$ for small $x$. Then, you can obtain
$$\frac{1-\cos\sin 4x}{\sin^2\sin 3x} \simeq \frac{1-\cos 4x}{\sin^2 3x} \simeq \frac{8x^2}{9x^2} = \frac{8}{9}.$$
You need to use Taylor series to formalize this type of argument.
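As a quick numerical cross-check of the claimed value $8/9$ (the Taylor-series argument remains the actual justification):

```python
import math

# Evaluate the quotient at shrinking x; the approximations above predict 8/9.
def q(x):
    return (1 - math.cos(math.sin(4 * x))) / math.sin(math.sin(3 * x)) ** 2

values = [q(10.0 ** -k) for k in range(1, 6)]
```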


probability - Keep the value of an 8-sided die roll, or gamble by taking a 12-sided die roll. What's the best strategy?



Consider a dice game in which you try to obtain the largest number (e.g. you win a prize based on the final dice roll).




  1. You roll an 8-sided die, with numbers 1–8 on the sides.

  2. You may either keep the value you rolled, or choose to roll a 12-sided die, with numbers 1–12 on the sides.




What's the best strategy for choosing what to do in step #2?



I know the 8-sided die has expected payoff of 4.5, and the 12-sided die has expected payoff of 6.5. So I think relying on the 12-sided die is better — but how do I show the probability of this?


Answer



If you are trying to maximise the expected score, then since the expected value of the 12-sided die is $6.5$, it makes sense to stop when the 8-sided die shows greater than $6.5$, i.e. when it shows $7$ or $8$, each with probability $\frac18$. So with probability $\frac34$ you throw the 12-sided die.



The expected score is then $$7 \times \frac18 + 8 \times \frac18 + 6.5 \times \frac34 = 6.75.$$
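The arithmetic above can be reproduced in exact rational arithmetic; the values for the two trivial strategies are included for contrast:

```python
from fractions import Fraction

# Expected score of the strategy: keep a first roll of 7 or 8 (prob 1/8
# each), otherwise take the 12-sided die (prob 6/8).
d12_mean = Fraction(sum(range(1, 13)), 12)        # 13/2
expected = Fraction(7, 8) + Fraction(8, 8) + Fraction(6, 8) * d12_mean

# For comparison: never rerolling and always rerolling.
never_reroll = Fraction(sum(range(1, 9)), 8)      # 9/2
always_reroll = d12_mean
```

The threshold strategy gives $27/4 = 6.75$ and beats both fixed choices.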


calculus - Elegant way to make a bijection from the set of the complex numbers to the set of the real numbers




Make a bijection that shows $|\mathbb C| = |\mathbb R| $



First I thought of dividing the complex numbers in the real parts and the complex parts and then define a formula that maps those parts to the real numbers. But I don't get anywhere with that. By this I mean that it's not a good enough defined bijection.





Can someone give me a hint on how to do this?




Maybe I need to read more about complex numbers.


Answer



You can represent every complex number as $z=a+ib$, so let us denote this complex number by $(a,b)$, $a,b \in \mathbb R$. Hence the set of complex numbers has the same cardinality as $\mathbb R^2$.



So finally, we need a bijection between $\mathbb R$ and $\mathbb R^2$.




This can be shown using the argument used here.




Note that since there is a bijection from $[0,1]\to\Bbb R$ (see appendix), it is enough to find a bijection from the unit square $[0,1]^2$ to the unit interval $[0,1]$. By constructions in the appendix, it does not really matter whether we consider $[0,1]$, $(0,1]$, or $(0,1)$, since there are easy bijections between all of these.




Mapping the unit square to the unit interval





There are a number of ways to proceed in finding a bijection from the unit square to the unit interval. One approach is to fix up the "interleaving" technique I mentioned in the comments, writing $\langle 0.a_1a_2a_3\ldots, 0.b_1b_2b_3\ldots\rangle$ to $0.a_1b_1a_2b_2a_3b_3\ldots$. This doesn't quite work, as I noted in the comments, because there is a question of whether to represent $\frac12$ as $0.5000\ldots$ or as $0.4999\ldots$. We can't use both, since then $\left\langle\frac12,0\right\rangle$ goes to both $\frac12 = 0.5000\ldots$ and to $\frac9{22} = 0.40909\ldots$ and we don't even have a function, much less a bijection. But if we arbitrarily choose the second representation, then there is no element of $[0,1]^2$ that is mapped to $\frac12$, and if we choose the first there is no element that is mapped to $\frac9{22}$, so either way we fail to have a bijection.



This problem can be fixed.



First, we will deal with $(0,1]$ rather than with $[0,1]$; bijections between these two sets are well-known, or see the appendix. For real numbers with two decimal expansions, such as $\frac12$, we will agree to choose the one that ends with nines rather than with zeroes. So for example we represent $\frac12$ as $0.4999\ldots$.



Now instead of interleaving single digits, we will break each input number into chunks, where each chunk consists of some number of zeroes (possibly none) followed by a single non-zero digit. For example, $\frac1{200} = 0.00499\ldots$ is broken up as $004\ 9\ 9\ 9\ldots$, and $0.01003430901111\ldots$ is broken up as $01\ 003\ 4\ 3\ 09\ 01\ 1\ 1\ldots$.



This is well-defined since we are ignoring representations that contain infinite sequences of zeroes.




Now instead of interleaving digits, we interleave chunks. To interleave $0.004999\ldots$ and $0.01003430901111\ldots$, we get $0.004\ 01\ 9\ 003\ 9\ 4\ 9\ldots$. This is obviously reversible. It can never produce a result that ends with an infinite sequence of zeroes, and similarly the reverse mapping can never produce a number with an infinite sequence of trailing zeroes, so we win. A problem example similar to the one from a few paragraphs ago is resolved as follows: $\frac12 = 0.4999\ldots$ is the unique image of $\langle 0.4999\ldots, 0.999\ldots\rangle$ and $\frac9{22} = 0.40909\ldots$ is the unique image of $\langle 0.40909\ldots, 0.0909\ldots\rangle$.
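The chunk-splitting and interleaving just described can be sketched on finite digit-string prefixes (the strings below are truncations of the two expansions used in the example):

```python
# Split a digit string into chunks: runs of zeros (possibly none) followed
# by one nonzero digit, exactly as in the construction above.
def chunks(digits):
    out, block = [], ""
    for d in digits:
        block += d
        if d != "0":
            out.append(block)
            block = ""
    return out

# Interleave the chunk sequences of two expansions (truncated prefixes here,
# standing in for the infinite, zero-tail-free decimal expansions).
def interleave(a, b):
    merged = []
    for x, y in zip(chunks(a), chunks(b)):
        merged.extend((x, y))
    return "".join(merged)

prefix = interleave("004999999", "010034309011111")
```

The output begins $004\ 01\ 9\ 003\ 9\ 4\ 9\ldots$, matching the worked example, and the process is clearly reversible chunk by chunk.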



Sunday 16 June 2019

calculus - Integral $\int\frac{(\sin(x))^2}{x^2+1}\,dx$

I have no idea which change of variables or other technique to use to calculate this integral:



$$
\int_0^{\infty}\frac{(\sin(x))^2}{x^2+1}\,dx
$$



Wolfram alpha gives me the result but really no idea ...



I am looking for a computation that avoids the residue theorem and other big theorems like it, because I haven't learned them yet; so just integration by parts or a change of variables, if possible.



Thanks in advance,



Shadock

Saturday 15 June 2019

complex numbers - How to simplify $\sqrt{-8}$

How would I go about simplifying square root of $-8$?



I know I can rewrite that as $\sqrt{(-1)(8)}$, and then I would get $i\sqrt{8}$, but how do I simplify that $8$ further?




Thanks for your help.

real analysis - Question about a functional equation



We are looking at a theorem which characterizes the affine term structure (ats) models in interest rate theory. What follows is from "Filipović, D. (2009): "Term-structure models: A graduate course", Springer-Verlag".




We denote by $F(t,r,T)$ the bond price and say it is of (ats) if and only if



$$F(t,r,T)=e^{-A(t,T)-B(t,T)r}$$



for smooth functions $A,B$. $r$ denotes the interest rate and is a stochastic process. Then the theorem states




a short rate model of the form
$$dr(t)=b(t,r)dt+\sigma(t,r)dW(t)\tag{*}$$

for continuous $b,\sigma$ provides ats if and only if
$$\sigma^2(t,r)=a(t)+\alpha(t)r \mbox{ and } b(t,r)=b(t)+\beta(t)r$$
for continuous function $a,\alpha,b,\beta$, and the functions $A,B$ satisfy the system of ODE, for all $t\le T$:
$$\partial_tA(t,T)=\frac{1}{2}a(t)B^2(t,T)-b(t)B(t,T), \mbox{ } A(T,T)=0$$
$$\partial_tB(t,T)=\frac{1}{2}\alpha(t)B^2(t,T)-\beta(t)B(t,T)-1, \mbox{ } B(T,T)=0$$




The key point of the proof is that $F$ should satisfy the following equation



$$ \partial_t F+b\partial_rF+\frac{1}{2}\sigma^2\partial_{rr}F-rF=0\tag{1}$$




where $b,\sigma$ are from $(*)$. For the proof, you put the explicit formula of $F$ into $(1)$, we see that the short rate model provides an ats if and only if



$$\frac{1}{2}\sigma^2B^2-bB=\partial_tA+(\partial_tB+1)r \tag{2}$$



where I wrote $B$ for $B(t,T)$ and the same for $A$. Looking about the equation above the direction $"\Leftarrow"$ is proved. For the direction $"\Rightarrow"$, they first assume that $B$ and $B^2$ are linearly independent for fixed $t$ and show the claim. After that the only case which we now have to look at, is
$$B(t,T)=c(t)B^2(t,T)\tag{3}$$
for some constant $c(t)$. I guess we also fix here $t$. Then they conclude the following things, which I do not understand: $(3)$ should imply that $B(t,\cdot)=B(t,t)=0$. Why is that true?
From there they say, well then $(2)$ implies that $\partial_tB(t,T)=-1$. I also do not get that conclusion.




After all they conclude that the set of elements $t$, for which $B(t,\cdot)$ and $B^2(t,\cdot)$ are linearly independent is open and dense in $\mathbb{R}_+$



I have no idea how one can conclude all these things. Some help would really be appreciated.


Answer



Assume $B(t,T) = c(t)B^2(t,T)$ for some $t$. Since this must hold for all $T \geq t$, we have that $B(t,T) = B(t)$ is a constant independent of $T$. If you look into the book you mentioned, you'll also notice there that $B(T,T) = 0$ for all $T$ (as a consequence of the normalization of $F$). Therefore we actually have $B(t,T) = 0$ for all $T$. Inspecting the $r$-part of $(2)$, it follows that $\partial_t B(t,T) = -1$ for all $T$.



Putting this together, if $B$ and $B^2$ are linearly dependent for some $t$, the function $B(t,T)$ has an isolated zero at $t$ for all $T$. Therefore the set of $t$ where $B$ and $B^2$ are linearly independent is open (it is a union of open intervals between isolated zeros) and dense (since zeros are at the boundaries of those open intervals). Finally, since everything in sight is continuous, you can continue the results from the case where $B$ and $B^2$ are independent to all $t$.







EDIT: As for why the zeros are isolated. Let's illustrate this on the simple example of the function $f(t) = -t$. We have $f(0) = 0$ and $\partial_t f(t) = -1$. The zero in this example is trivially isolated, since it is the only zero of $f$. But the situation also applies, for every $T$, to $B(\cdot, T)$ from the above discussion: near $t$ it will look like $f$ does around $0$, i.e. like a straight line with downward slope. So it should be clear that there is no other zero in some $\epsilon$-neighborhood.



Maybe it is also useful to look at a counterexample, a function with a non-isolated zero: for example $f(x) = x\sin(1/x)$ for $x \neq 0$ and $f(0) = 0$. But such a function necessarily isn't smooth (which $B$ is, so $B$ can't have a non-isolated zero).



As for the denseness. We say that subset $A \subset B$ is dense if the closure $\bar A = B$. In our case $B$ is a closed interval of $\mathbb R$ and $A = B \setminus Z$ where $Z$ is the set of zeros. So it is enough to show that every zero is in the closure of $A$. But this is immediate from the above discussion, since if $t$ is a zero then there is an $s$ such that the interval $(s,t) \subset A$ doesn't contain a zero. Moreover the closure of $ {(s,t)}$ is $[s,t] $, i.e. $t$ belongs to the closure of $A$. Therefore the closure of $A$ contains $Z$ and so is equal to $B$, as was to be proved.


Friday 14 June 2019

gcd and lcm - Suppose $gcd(a,y)=s$ and $gcd(b,y)=t$. Prove that $gcd(gcd(a,b),y)=gcd(s,t)$.



All I have so far is that $$s|a, s|y, t|b, \text{ and } t|y.$$ I also know



$$\gcd(\gcd(a,b),y)=\gcd(a,b,y)=\gcd(a,\gcd(b,y))$$



by the associative property of gcd. It would suffice to show $$\gcd(a,b,y)=\gcd(\gcd(a,y),\gcd(b,y)).$$
I'm just not sure how to prove it. Thanks for your help.


Answer



I would approach it a bit differently. Let $d=\gcd(\gcd(a,b),y)$. Then $d\mid\gcd(a,b)$, and $d\mid y$. Since $d\mid\gcd(a,b)$, we also know that $d\mid a$ and $d\mid b$. Since $d\mid a$ and $d\mid y$, we know that $d\mid s$; similarly, $d\mid t$, so $d\mid\gcd(s,t)$.




Now let $e=\gcd(s,t)$ and make a similar argument to show that $e\mid d$. Since $d,e\ge 1$, $d\mid e$, and $e\mid d$, it must be the case that $d=e$.
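The identity being proved is easy to spot-check by machine; a brute-force sketch over a small grid (a check, of course, not a substitute for the argument above):

```python
from itertools import product
from math import gcd

# Exhaustively verify gcd(gcd(a,b), y) == gcd(gcd(a,y), gcd(b,y)),
# i.e. gcd(gcd(a,b), y) == gcd(s, t) in the question's notation,
# for all triples in a small range.
ok = all(
    gcd(gcd(a, b), y) == gcd(gcd(a, y), gcd(b, y))
    for a, b, y in product(range(1, 31), repeat=3)
)
```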


logarithms - Sieve of Eratosthenes Time Complexity Clarification



I've found plenty of sources claiming that the time complexity of the prime sieving algorithm Sieve of Eratosthenes is $O(n\log(\log n))$ where $n$ is the input. However, is this $\log_{10}$ or $\ln$? I assume it's $\ln$ but according to some notational conventions, just $\log$ refers to $\log_{10}$ and I can't find a source that clarifies this problem.



Does anyone know?




EDIT: I know that in a case where you have only one logarithm, you can scale between different bases using constants. However, this is not true when you have several nested logarithms; i.e., the ratio $\log_{10}(\log_{10} n)/\ln(\ln n)$ is not constant in $n$. Because of this, the logarithmic base does matter here. (I think; correct me if I'm wrong.)


Answer



Big $O$ complexity in terms of nested logarithms is base independent for the same reason it is in the case of single logarithms. For example, $\log_{10} x = c \log_2 x$ for the constant $c=\log_{10} 2$, as you have noted.



Likewise, using a specific example, $\log_{10} \log_{10} x = d \log_2 \log_2 x$ for some $d$ that approaches (but does not equal) $\log_{10} 2$. This is because $\log_{10} \log_{10} x = \log_{10}( c \log_2 x ) = \log_{10} c + \log_{10} \log_2 x = \log_{10} c + c \log_2 \log_2 x$. The factor by which the $c$ in the second term is multiplied, $\log_2 \log_2 x$, approaches infinity, so $c$ can be replaced with a slightly larger $d$, and eventually that will increase the value of the second term enough to account for the constant added to it, $\log_{10} c$.
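The constant-factor relationship in this computation can be watched numerically: with $c=\log_{10}2$, the ratio equals $c+\log_{10}(c)/\log_2\log_2 x$, so it climbs toward $c$ from below (the exponents below are arbitrary):

```python
import math

# The ratio log10(log10 x) / log2(log2 x) increases toward c = log10(2)
# as x grows, matching the argument above.
c = math.log10(2)
xs = [10.0 ** e for e in (10, 100, 300)]
ratios = [math.log10(math.log10(x)) / math.log2(math.log2(x)) for x in xs]
```

The convergence is slow (the correction term decays only like $1/\log\log x$), which is exactly why the "slightly larger $d$" in the argument is needed.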



For this reason, most logarithmic expressions (but not all) that typically show up in big $O$ notation are base independent, so no base need be specified.



That being said, if no base is specified, and there is no obvious reason it should be $10$ or something else, you can assume a base of $e$. It is on the author if that assumption is false when no other base was specified.



real analysis - Convergence from $L^p$ to $L^\infty$





If $f$ is a function such that $f \in L^\infty \cap L^{p_0}$, where $L^\infty$ is the space of essentially bounded functions and $0 < p_0 < \infty$, show that $\|f\|_{L^p} \to \|f\|_{L^\infty}$ as $p \to \infty$, where $\|f\|_{L^\infty}$ is the least $M \in \mathbb R$ such that $|f(x)| \le M$ for almost every $x \in X$.



The hint says to use the monotone convergence theorem, but I can't even see any pointwise convergence of functions.
Any help is appreciated.


Answer



Hint: Let $M\lt\|f\|_{L^\infty}$ and consider
$$
\int_{E_M}\left|\frac{f(x)}{M}\right|^p\,\mathrm{d}x
$$
where $E_M=\{x:|f(x)|\gt M\}$. I believe the Monotone Convergence Theorem works here.




Further Hint: $M\lt\|f\|_{L^\infty}$ implies $E_M$ has positive measure. On $E_M$, $\left|\frac{f(x)}{M}\right|^p$ tends to $\infty$ pointwise. MCT says that for some $p$, the integral above exceeds $1$.
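For a concrete instance of the statement, take $f(x)=x$ on $[0,1]$, where $\|f\|_{L^p}=(1/(p+1))^{1/p}$ in closed form and $\|f\|_{L^\infty}=1$:

```python
# For f(x) = x on [0,1]: ||f||_p = (integral of x^p dx)^(1/p)
# = (1/(p+1))^(1/p), which climbs toward ||f||_inf = 1 as p grows.
norms = [(1.0 / (p + 1)) ** (1.0 / p) for p in (1, 10, 100, 1000)]
```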


Thursday 13 June 2019

Find a bijective function between two sets

I want to find a bijective function from $(\frac{1}{2},1]$ onto $[0,1]$. So, what is a bijective function $f:(\frac{1}{2},1]\to[0,1]$?

calculus - Does $\sum_{n=1}^\infty \left(\frac{1}{n} - 1\right)^n$ and $\sum_{n=1}^\infty \left(\frac{1}{n} - 1\right)^{n^2}$ converge?




My task is this:



Determine whether $\sum_{n=1}^\infty \left(\frac{1}{n} - 1\right)^n$ and $\sum_{n=1}^\infty \left(\frac{1}{n} - 1\right)^{n^2}$ converge or diverge.



My thoughts:



For large $n$ one should expect $n^{-1} - 1 \to -1 \neq 0$; raising that to the $n$th power should yield values near $\{-1,1\}$, again for big $n$. But I'm probably way off here and need some hints, tips, or a better approach to this.



Thanks in advance!


Answer




Let $a_n = ( \frac{1}{n} - 1)^{n^2}$



Then $|a_n| = (1- \frac{1}{n})^{n^2} =\exp(n^2 \ln (1-\frac{1}{n})) \sim e^{-n-\frac{1}{2}}$, since $n^2\ln(1-\frac{1}{n})=-n-\frac{1}{2}+O(\frac{1}{n})$.



$\sum e^{-n-\frac{1}{2}}$ converges, so $\sum |a_n|$ converges, and then $\sum a_n$ also converges.



So the second sum does converge.



For the first sum, as already mentioned by H Potter, the general term does not converge to 0 so the sum does not converge.


Wednesday 12 June 2019

probability - Conditional Expected value of number of rolls in a die



A die is rolled repeatedly. Let $X$ be the random variable that denotes the number of rolls to get a 4 and $Y$ be the random variable that denotes the number of rolls to get a 1. What is $E[X \mid Y=7]$?



My thoughts were $\dfrac{1}{\dfrac{1}{6}} + 7$, since the expected number of rolls to get a 4 is 6 and we are given that we rolled 7 times (and we know that on the 7th roll we did not get a 4). But I know that answer is not right, since we must factor in the probabilities of rolling a 4 in the first 6 rolls. How do I do this?


Answer



Hint: Note that $X$ is a geometric random variable. $Y=7$ implies that rolls one through six were not a $1$. So we can consider two cases: $X \le 6$ and $X\gt 7$




By definition $$\begin{align*} E(X \, | \, Y = 7) & = \sum_{k=1}^{\infty} \, k \,P(X = k \, | \, Y = 7)\\
& = E(X \, | \, Y = 7, X\lt 7) \cdot P(X \le 6 \, | \, Y = 7) \\
&\,\,\,\,\,\,\,\,\,\,\,+ E(X \, | \, Y = 7, X\gt 7) \cdot P(X \gt 7 \, | \, Y = 7)
\end{align*}$$



Can you take it from here?
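Under the natural reading of the conditioning (fair die, independent rolls, so given $Y=7$ the first six rolls are uniform on $\{2,\dots,6\}$ and everything after roll 7 is unconditioned), the value can be sketched exactly and cross-checked by simulation; the case split mirrors the hint's $X\le 6$ versus $X\gt 7$:

```python
import random

# Exact value under the assumed model: each of rolls 1-6 is a 4 with
# probability 1/5 given Y = 7; if no 4 appears by roll 7 (roll 7 is the 1),
# the remaining wait from roll 8 on is geometric with p = 1/6, i.e. 6 more
# rolls in expectation.
p = 1 / 5
exact = sum(k * p * (1 - p) ** (k - 1) for k in range(1, 7)) \
    + (1 - p) ** 6 * (7 + 6)

# Monte Carlo cross-check by rejection sampling on the event Y == 7.
rng = random.Random(1)
total = hits = 0
while hits < 10000:
    rolls = [rng.randint(1, 6) for _ in range(7)]
    if 1 in rolls[:6] or rolls[6] != 1:
        continue                      # not the event Y = 7
    x = next((i for i, r in enumerate(rolls, 1) if r == 4), 0)
    if x == 0:                        # no 4 yet: keep rolling from roll 8
        x = 8
        while rng.randint(1, 6) != 4:
            x += 1
    total += x
    hits += 1
estimate = total / hits
```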


Monday 10 June 2019

sequences and series - How to solve this multiple summation?




How to solve this summation ?



$$\sum_{0\le x_1\le x_2\le\cdots\le x_n \le n}\binom{k+x_1-1}{x_1}\binom{k+x_2-1}{x_2}\cdots\binom{k+x_n-1}{x_n}$$
where $k$ , $n$ are known.



Due to hockey-stick identity ,
$$\sum_{i=0}^n\binom{i+k-1}{i}=\binom{n+k}{k}$$


Answer



Suppose we seek to evaluate
$$\sum_{0\le x_1\le x_2\le\cdots \le x_n \le n}
{k+x_1-1\choose x_1}
{k+x_2-1\choose x_2}
\cdots
{k+x_n-1\choose x_n}.$$



Using the Polya Enumeration Theorem and the cycle index of the
symmetric group this becomes
$$Z(S_n)
\left(Q_0+Q_1+Q_2+\cdots +Q_n\right)$$




evaluated at
$$Q_m = {k-1+m\choose m}.$$



Now the OGF of the cycle index $Z(S_n)$ of the symmetric group is
$$G(z) = \exp
\left(a_1 \frac{z}{1}
+ a_2 \frac{z^2}{2}
+ a_3 \frac{z^3}{3}
+ \cdots \right).$$




The substituted generating function becomes
$$H(z) =
\exp
\left(\sum_{p\ge 1} \frac{z^p}{p}
\sum_{m=0}^n {k-1+m\choose m}^p\right)
= \exp
\left(\sum_{m=0}^n
\sum_{p\ge 1} \frac{z^p}{p}
{k-1+m\choose m}^p\right)
\\ = \exp
\left(\sum_{m=0}^n
\log\frac{1}{1-{k-1+m\choose m} z}\right)
= \prod_{m=0}^n
\frac{1}{1-{k-1+m\choose m} z}.$$



Some thought shows that this could have been obtained by inspection.


We use partial fractions by residues on this function which we
re-write as follows:
$$(-1)^{n+1} \prod_{m=0}^n {k-1+m\choose m}^{-1}
\prod_{m=0}^n
\frac{1}{z-1/{k-1+m\choose m}}.$$



Switching to residues we obtain
$$(-1)^{n+1} \prod_{m=0}^n {k-1+m\choose m}^{-1}
\sum_{m=0}^n
\frac{1}{z-1/{k-1+m\choose m}}
\\ \times \prod_{p=0, \; p\ne m}^n
\frac{1}{1/{k-1+m\choose m}-1/{k-1+p\choose p}}.$$




Preparing to extract coefficients we get
$$(-1)^{n} \prod_{m=0}^n {k-1+m\choose m}^{-1}
\sum_{m=0}^n
\frac{{k-1+m\choose m}}{1-z{k-1+m\choose m}}
\\ \times \prod_{p=0, \; p\ne m}^n
\frac{1}{1/{k-1+m\choose m}-1/{k-1+p\choose p}}.$$



Doing the coefficient extraction we obtain
$$(-1)^{n} \prod_{m=0}^n {k-1+m\choose m}^{-1}
\sum_{m=0}^n
{k-1+m\choose m}^{n+1}
\\ \times \prod_{p=0, \; p\ne m}^n
\frac{1}{1/{k-1+m\choose m}-1/{k-1+p\choose p}}
\\ = (-1)^{n} \prod_{m=0}^n {k-1+m\choose m}^{-1}
\sum_{m=0}^n
{k-1+m\choose m}^{2n+1}
\\ \times \prod_{p=0, \; p\ne m}^n
\frac{1}{1-{k-1+m\choose m}/{k-1+p\choose p}}
\\ = (-1)^{n}
\sum_{m=0}^n
{k-1+m\choose m}^{2n}
\prod_{p=0, \; p\ne m}^n
\frac{1}{{k-1+p\choose p}-{k-1+m\choose m}}.$$



The complexity here is good since the formula has a quadratic number
of terms in $n.$ The number of partitions that a total enumeration
would have to consider is given by



$$Z(S_n)
\left(Q_0+Q_1+Q_2+\cdots +Q_n\right)$$




evaluated at $Q_0 = Q_1 = Q_2 = \cdots = Q_n = 1$ which gives the
substituted generating function



$$A(z) = \exp\left((n+1)\log\frac{1}{1-z}\right)
= \frac{1}{(1-z)^{n+1}}.$$



This yields for the total number of partitions
$${n+n\choose n} = {2n\choose n}$$
which by Stirling has asymptotic

(consult OEIS A000984)
$$\frac{4^n}{\sqrt{\pi n}} \quad\text{and}\quad
n^2\in o\left(\frac{4^n}{\sqrt{\pi n}}\right).$$



For example, when $n=24$ and $k=5$ we would have to consider
${48\choose 24} = 32{,}247{,}603{,}683{,}100$ partitions, but the formula
readily yields
$$424283851839410438109261697709077430045882514844\\
665327684062172306602549601581316037895634544256\\
47212676100.$$




Additional exploration of these formulae may be undertaken using the
following Maple code which contrasts total enumeration and the closed
formula.




A :=
proc(n, k)
option remember;
local iter;


iter :=
proc(l)
if nops(l) = 0 then
add(iter([q]), q=0..n)
elif nops(l) < n then
add(iter([op(l), q]), q=op(-1, l)..n)
else
mul(binomial(k-1+l[q], l[q]), q=1..n);
fi;

end;

iter([]);
end;

EX :=
proc(n, k)
option remember;

(-1)^n*add(binomial(k-1+m,m)^(2*n)*
mul(1/(binomial(k-1+p,p)-binomial(k-1+m,m)), p=0..m-1)*
mul(1/(binomial(k-1+p,p)-binomial(k-1+m,m)), p=m+1..n),
m=0..n);
end;
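The closed formula can also be cross-checked against brute-force enumeration in a Python sketch with exact rational arithmetic (it assumes $k\ge 2$, so that the binomials ${k-1+m\choose m}$, $m=0,\dots,n$, are pairwise distinct and the denominators in the formula are nonzero):

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import comb

def brute(n, k):
    # Direct sum over nondecreasing tuples 0 <= x_1 <= ... <= x_n <= n.
    total = 0
    for xs in combinations_with_replacement(range(n + 1), n):
        term = 1
        for x in xs:
            term *= comb(k - 1 + x, x)
        total += term
    return total

def closed(n, k):
    # (-1)^n * sum_m b_m^(2n) * prod_{p != m} 1/(b_p - b_m),
    # with b_m = C(k-1+m, m), as derived above.
    b = [comb(k - 1 + m, m) for m in range(n + 1)]
    s = Fraction(0)
    for m in range(n + 1):
        term = Fraction(b[m]) ** (2 * n)
        for q in range(n + 1):
            if q != m:
                term /= b[q] - b[m]
        s += term
    return (-1) ** n * s
```

For instance, `closed(2, 2)` and `brute(2, 2)` both give $25$, in line with the claim that the formula has only quadratically many terms while the enumeration has ${2n\choose n}$.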

calculus - Proving Schwarz derivative $\frac{f''(0)}{2} =\lim\limits_{x\to 0} \frac{\frac{f(x) -f(0)}{x}-f'(0)}{x}$ without Taylor expansion or L'Hopital rule?



Assume that $f$ is $C^2$ near 0. I would like to show the following Schwarz derivative identity
$$
\frac{f''(0)}{2} =\lim_{x\to 0} \frac{\frac{f(x) -f(0)}{x}-f'(0)}{x}
$$







I am able to do this by using the Taylor expansion and L'Hopital rule. I am wondering how one can prove it without using Taylor expansion or L'Hopital rule.



Answer



Suppose $x>0$. Note
\begin{eqnarray}
&&\frac{\frac{f(x)-f(0)}{x}-f'(0)}{x}\\
&=&\frac{f(x)-f(0)-xf'(0)}{x^2} \\
&=&\frac{\int_0^xf'(t)dt-\int_0^xf'(0)dt}{x^2} \\
&=&\frac{\int_0^x[f'(t)-f'(0)]dt}{x^2} \\
&=&\frac{\int_0^x\bigg[\int_0^tf''(s)ds\bigg]dt}{x^2} \\
&=&\frac{\int_0^x\bigg[\int_s^xf''(s)dt\bigg]ds}{x^2} \\
&=&\frac{\int_0^x(x-s)f''(s)ds}{x^2}
\end{eqnarray}
and
$$ \int_0^x(x-s)ds=\frac12x^2. $$

So
\begin{eqnarray}
&&\bigg|\frac{\frac{f(x)-f(0)}{x}-f'(0)}{x}-\frac12f''(0)\bigg|\\
&=&\bigg|\frac{\int_0^x(x-s)[f''(s)-f''(0)]ds}{x^2}\bigg|\\
&\le&\bigg|\frac{\int_0^x(x-s)|f''(s)-f''(0)|ds}{x^2}\bigg|
\end{eqnarray}
Since $f\in C^2$, for every $\varepsilon>0$ there exists $\delta>0$ such that
$$ |f''(x)-f''(0)|<\varepsilon \quad \forall x\in(0,\delta). $$
Thus for $x\in(0,\delta)$,
\begin{eqnarray}
&&\bigg|\frac{\frac{f(x)-f(0)}{x}-f'(0)}{x}-\frac12f''(0)\bigg|\\
&\le&\frac{\int_0^x(x-s)|f''(s)-f''(0)|ds}{x^2}\\
&\le&\bigg|\frac{\int_0^x(x-s)\varepsilon ds}{x^2}\bigg|\\
&=&\frac12\varepsilon.
\end{eqnarray}
So
$$ \lim_{x\to0}\frac{\frac{f(x)-f(0)}{x}-f'(0)}{x}=\frac12f''(0). $$
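A quick numerical check of the conclusion with $f=\exp$, where $f''(0)/2=1/2$ and the quotient reduces to $(e^x-1-x)/x^2$:

```python
import math

# Second-order difference quotient for f = exp at 0; expm1 keeps the
# subtraction free of catastrophic cancellation for small x.
def schwarz_quotient(x):
    return (math.expm1(x) - x) / (x * x)

values = [schwarz_quotient(10.0 ** -k) for k in range(1, 6)]
```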


Sunday 9 June 2019

probability - Proof: $X\ge 0, r>0\Rightarrow E(X^r)=r\int_0^{\infty}x^{r-1}P(X>x)dx$




As the title states, the problem at hand is proving the following:



$X\ge 0, r>0\Rightarrow E(X^r)=r\int_0^{\infty}x^{r-1}P(X>x)dx$






Attempt/thoughts on a solution



I am guessing this is an application of Fubini's Theorem, but wouldn't that require writing $P(X>x)$ as an expectation? If so, how is this accomplished?




Thoughts and help are appreciated.


Answer



Proof: Consider the expectation of the identity
$$
X^r=r\int_0^{X}x^{r-1}\,\mathrm dx=r\int_0^{+\infty}x^{r-1}\mathbf 1_{X>x}\,\mathrm dx.
$$


Probability - die - The number of throws until a $5$ and a $6$ have been obtained.

An unbiased die is thrown repeatedly until a 5 and a 6 have been obtained. The random variable $M$ denotes the number of throws required. For example, for the sequence of results 6,3,2,3,6,6,5, the value of $M$ is 7. Calculate $P(M=r)$.

Saturday 8 June 2019

functions - Explicit bijection between $[0,1)$ and $(0,1)$





Proving that $[0,1)$ and $(0,1)$ have the same cardinality (without assuming any previous knowledge) can be done easily using Cantor-Bernstein theorem.



However I'm wondering if someone can build an explicit bijection between these sets.



It's easy to build a bijection between $(0,1)$ and $\mathbb R$, so a bijection from $[0,1)$ to $\mathbb R$ will also fit the bill.


Answer




Let us partition $(0,1)$ into a countable number of disjoint subsets of the form $[\frac{1}{n+1},\frac{1}{n})$ for $n=1,2,3,\ldots$.



These half-open intervals may then be positioned in reverse order to form a half-open interval of equal length. Whether this construction is sufficiently explicit is open to question, but it does allow the relocation of any $x\in (0,1)$ to $[0,1)$ to be computed in short order.



A more succinct construction is to define $f:[0,1) \to (0,1)$ by $f(0) = 1/2$, $f(1/n) = 1/(n+1)$ for integer $n \ge 2$, and $f(x) = x$ otherwise.


algebra precalculus - Integration by partial fractions; how and why does it work?




Could someone take me through the steps of decomposing
$$\frac{2x^2+11x}{x^2+11x+30}$$
into partial fractions?



More generally, how does one use partial fractions to compute integrals
$$\int\frac{P(x)}{Q(x)}\,dx$$
of rational functions ($P(x)$ and $Q(x)$ are polynomials) ?







This question is asked in an effort to cut down on duplicates. See Coping with *abstract* duplicate questions.



Also see List of Generalizations of Common Questions.


Answer



What you are trying to do is a partial fraction decomposition.



Idea. Imagine your calculator is broken and a bunch of hoodlums stop you on the street, and demand at knifepoint that you compute $\frac{191}{105}$ as a decimal (part of their initiation into the Mathies Gang, you see), or else they will slit your throat. Unfortunately, since your calculator is broken, you can really only do divisions if the divisor is a single digit number (so you can use your fingers to do the operations; you're very good with those, because you know the multiplication tables of single digit numbers...). Luckily, you do notice that $105 = 3\times 5 \times 7$. Is there some way you can save your neck using this observation?



Well, you do know that $191 = 105 + 86$, so at least you know that $\frac{191}{105} = 1 + \frac{86}{105}$, so that takes care of the integer part of the fraction. What about $\frac{86}{105}$? Aha! Here's a clever idea: maybe $\frac{86}{105}$ is really the result of a sum of fractions! If you had a sum of fractions of the form

$$\frac{A}{3} + \frac{B}{5} + \frac{C}{7}$$
then to write it as a single fraction you would find the common denominator, $105$, and then do a bunch of operations, and end up with a fraction $\frac{\mathrm{something}}{105}$. If you can find an $A$, $B$, and $C$ so that the something is $86$, then instead of computing $\frac{86}{105}$ you can do $\frac{A}{3}$, $\frac{B}{5}$, and $\frac{C}{7}$ (which you can do, since the denominators are single digit numbers), and then add those decimals to get the answer. Can we? We do a bit of algebra:
$$\frac{A}{3} + \frac{B}{5} + \frac{C}{7} = \frac{35A + 21B + 15C}{105}$$
so you want $35A + 21B + 15C = 86$. As luck would have it, $A=B=1$ and $C=2$ works, so
$$\frac{86}{105} = \frac{1}{3} + \frac{1}{5} + \frac{2}{7}.$$
And now, all is well:
\begin{align*}
\frac{191}{105} &= 1 + \frac{86}{105}\\\
&= 1 + \frac{1}{3} + \frac{1}{5} + \frac{2}{7}\\\
&= 1 + (0.3333\ldots) + (0.2) + (0.285714285714\overline{285714}\ldots)
\end{align*}
and you can give those dangerous hoodlums their answer, and live to derive another day.





Your problem. You want to do something similar with the polynomial quotient, with denominators that are "easy"; in this case, degree $1$. The first task is to make the fraction "less than $1$", by making sure the numerator has degree less than the denominator. You do this with polynomial long division (see also this recent answer). Doing the long division mentally, we have: to get $2x^2$ from $x^2+11x+30$ we multiply by $2$:
$$2x^2 + 11x = (x^2+11x+30)(2+\cdots)$$
that produces unwanted $11x + 60$, (well, you get $22x + 60$, but you do want $11x$ of those, so you only have a leftover of $11x+60$); nothing to do about them, except cancel them out after the product is done. So you have
$$2x^2 + 11x = (x^2+11x+30)(2) - (11x+60).$$
So you can write
$$\frac{2x^2+11x}{x^2+11x+30} = 2 + \frac{-(11x+60)}{x^2+11x+30}.$$

Now we got the "integer part", and we work on the "fraction part". The denominator factors as $(x+5)(x+6)$, so we want to think of that fraction as the end result of doing a sum of the form
$$\frac{A}{x+5} + \frac{B}{x+6}.$$
Because the sum is "smaller than $1$" (numerator of degree smaller than the denominator) each of these fractions should also be "smaller than one". So both the $A$ and $B$ will be constants.

So we have:
\begin{align*}
\frac{-11x - 60}{x^2+11x+30} &= \frac{A}{x+5} + \frac{B}{x+6} \\\
&= \frac{A(x+6)}{(x+5)(x+6)} + \frac{B(x+5)}{(x+6)(x+5)}\\\
&= \frac{A(x+6) + B(x+5)}{(x+5)(x+6)}\\\
&= \frac{Ax + 6A + Bx + 5B}{x^2+11x+30} = \frac{(A+B)x + (6A+5B)}{x^2+11x+30}.
\end{align*}
For this to work out, you need $(A+B)x + (6A+5B) = -11x-60$. That means we need $A+B=-11$ (so the $x$s agree) and $6A+5B = -60$ (so the constant terms agree).



That means $A=-11-B$ (from the first equation). Plugging into the second equation, we get
$$-60 = 6A+5B = 6(-11-B)+5B = -66 -6B + 5B = -66-B.$$
So that means that $B=60-66 = -6$. And since $A+B=-11$, then $A=-5$.



(An alternative method for finding the values of $A$ and $B$ is the Heaviside cover-up method; from the fact that
$$-11x - 60 = A(x+6)+B(x+5)$$
we know that the two sides must take the same value for every value of $x$; if we plug in $x=-6$, this will "cover up" the $A$ on the right and we simply get $B(-6+5) = -B$; this must equal $-11(-6)-60 = 6$; so $6 = -B$, hence $B=-6$. Then plugging in $x=-5$ "covers up" the $B$ to give us $-11(-5)-60 = A(-5+6)$, or $-5=A$. So we obtain $A=-5$, $B=-6$, same as before.)




That is,
$$\frac{-11x-60}{x^2+11x+30} = \frac{-5}{x+5} + \frac{-6}{x+6}.$$



Putting it all together, we have:
$$\frac{2x^2+11x}{x^2+11x+30} = 2 + \frac{-11x-60}{x^2+11x+30} = 2 - \frac{5}{x+5} - \frac{6}{x+6}.$$
And ta da! Your task is done. You've decomposed the quotient into partial fractions.
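As a cross-check, sympy's `apart` (assuming sympy is available) produces the same decomposition:

```python
# Sanity check of the partial fraction decomposition with sympy
# (assumes sympy is installed).
from sympy import symbols, apart, simplify

x = symbols('x')
expr = (2*x**2 + 11*x) / (x**2 + 11*x + 30)
decomposed = apart(expr, x)
# The result is equivalent to 2 - 5/(x+5) - 6/(x+6), as derived above.
assert simplify(decomposed - (2 - 5/(x + 5) - 6/(x + 6))) == 0
```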



Caveat. You need to be careful if the denominator has repeated factors, or has factors that are of degree $2$ and cannot be further factored (e.g., $x^2+1$); I talk about that below in the general discussion.




Now, let's hope the Mathies Gang doesn't ask for a proof of Goldbach's Conjecture from its next batch of pledges...






General Idea.



So let's discuss this idea for the general problem of integrating a rational function; that is, a function of the form
$$\frac{P(x)}{Q(x)}$$
where $P(x)=a_nx^n+\cdots +a_1x+a_0$ and $Q(x)=b_mx^m+\cdots + b_1x+b_0$ are polynomials.




Integrating polynomials is easy, so the first task is to do long division in order to write the fraction as a polynomial plus a rational function in which the numerator has degree smaller than the denominator. So we will consider only the case where $n\lt m$ going forward.



First, let us make sure we can actually do those fractions with "easy denominators." To me, "easy denominator" means (i) linear, $ax+b$; (ii) power of a linear polynomial, $(ax+b)^n$; (iii) irreducible quadratic; (iv) power of an irreducible quadratic. So let's talk about how to integrate those:




  1. When $Q(x)$ has degree $1$ and $P(x)$ is constant. For example, something like
    $$\int \frac{3}{2x-5}\,dx.$$
    These integrals are very easy to do: we do a change of variable $u=2x-5$, so $du=2dx$. We simply get
    $$\int\frac{3}{2x-5}\,dx = 3\int\frac{dx}{2x-5} = 3\int\frac{\frac{1}{2}du}{u} = \frac{3}{2}\ln|u|+C = \frac{3}{2}\ln|2x-5|+C.$$


  2. In fact, the idea above works whenever the denominator is a power of a degree $1$ polynomial and the numerator is a constant. If we had something like

    $$\int\frac{3}{(2x-5)^6}\,dx$$
    then we can let $u=2x-5$, $du=2dx$ and we get
    $$\begin{align*}
    \int\frac{3}{(2x-5)^6}\,dx &= 3\int\frac{\frac{1}{2}du}{u^6} = \frac{3}{2}\int u^{-6}\,du \\&= \frac{3}{2}\left(\frac{1}{-5}u^{-5}\right)+C\\ &= -\frac{3}{10}(2x-5)^{-5} + C.\end{align*}$$


  3. What if the denominator is an irreducible quadratic? Things get a little more complicated. The simplest example of an irreducible quadratic is $x^2+1$, and the easiest numerator is $1$. That integral can be done directly:
    $$\int\frac{1}{x^2+1}\,dx = \arctan(x)+C.$$
    If we have an irreducible quadratic of the form $x^2+a$, with $a\gt 0$, then we can always write it as $x^2+b^2$ with $b=\sqrt{a}$; then we can do the following: factor out $b^2$ from the denominator,
    $$\int\frac{dx}{x^2+b^2} = \int\frac{dx}{b^2((\frac{x}{b})^2 + 1)};$$
    and now setting $u=\frac{x}{b}$, so $du = \frac{1}{b}\,dx$, we get:
    $$\int\frac{dx}{x^2+b^2} = \frac{1}{b}\int\frac{1}{(\frac{x}{b})^2+1}\left(\frac{1}{b}\right)\,dx = \frac{1}{b}\int\frac{1}{u^2+1}\,du,$$

    and now we can do the integral easily as before.
    What if we have a more general irreducible quadratic denominator? Something like $x^2+x+1$, or something else with an $x$ term?



    The magic phrase here is "completing the square". We can write $x^2+x+1$ as
    $$x^2 + x + 1 = \left(x^2 + x + \frac{1}{4}\right) + \frac{3}{4} = \left(x+\frac{1}{2}\right)^2 + \frac{3}{4}.$$
    Then setting $w=x+\frac{1}{2}$, we end up with an integral that looks just like the previous case! For instance,
    $$\int\frac{dx}{x^2+x+1} = \int\frac{dx}{(x+\frac{1}{2})^2+\frac{3}{4}} = \int\frac{dw}{w^2+\frac{3}{4}},$$
    and we know how to deal with these.



    So: if the denominator is an irreducible quadratic, and the numerator is a constant, we can do the integral.



  4. What if the denominator is an irreducible quadratic, but the numerator is not constant? Since we can always do the long division, then we can take the numerator to be of degree $1$. If we are lucky, it's possible we can do it with a simple substitution; for example, to do
    $$\int\frac{2x+3}{x^2+3x+4}\,dx$$
    (note the denominator is irreducible quadratic), we can just let $u=x^2+3x+4$, since $du = (2x+3)\,dx$, exactly what we have in the numerator, so
    $$\int\frac{2x+3}{x^2+3x+4}\,dx = \int\frac{du}{u} = \ln|u|+C = \ln|x^2+3x+4|+C = \ln(x^2+3x+4) + C.$$
    What if we are not lucky? Well, we can always make our own luck. For instance, if we had
    $$\int\frac{3x}{x^2+3x+4}\,dx$$
    then we can't just make the substitution $u=x^2+3x+4$; but if we wanted to do that anyway, we would need $2x+3$ in the numerator; so we play a little algebra game:
    $$\begin{align*}
    \frac{3x}{x^2+3x+4} &= 3\left(\frac{x}{x^2+3x+4}\right)\\
    &= 3\left(\frac{\frac{1}{2}(2x)}{x^2+3x+4}\right)\\
    &=\frac{3}{2}\left(\frac{2x}{x^2+3x+4}\right) &\text{(still not quite right)}\\
    &= \frac{3}{2}\left(\frac{2x+3-3}{x^2+3x+4}\right)\\
    &= \frac{3}{2}\left(\frac{2x+3}{x^2+3x+4} - \frac{3}{x^2+3x+4}\right).
    \end{align*}$$
    What have we accomplished? The first summand, $\frac{2x+3}{x^2+3x+4}$, is an integral we can do with a simple substitution; and the second summand, $\frac{3}{x^2+3x+4}$, is an integral that we just saw how to do! So in this way, we can solve this kind of integral. We can always rewrite the integral as a sum of an integral that we can do with a substitution, and an integral that is as in case 3 above.


  5. What if the denominator is a power of an irreducible quadratic? Something like $(x^2+3x+5)^4$? If the numerator is of degree at most $1$, then we can play the same game as we just did to end up with a sum of two fractions; the first one will have numerator which is exactly the derivative of $x^2+3x+5$, and the second will have a numerator that is constant. So we just need to figure out how to do an integral like
    $$\int\frac{2dx}{(x^2+3x+5)^5}$$
    with constant numerator and a power of an irreducible quadratic in the denominator.



    By completing the square as we did in 3, we can rewrite it so that it looks like

    $$\int\frac{dw}{(w^2+b^2)^5}.$$
    Turns out that if you do integration by parts, then you get a Reduction formula that says:
    $$\int\frac{dw}{(w^2+b^2)^n} = \frac{1}{2b^2(n-1)}\left(\frac{w}{(w^2+b^2)^{n-1}} + (2n-3)\int\frac{dw}{(w^2+b^2)^{n-1}}\right)$$
    By using this formula repeatedly, we will eventually end up in an integral where the denominator is just $w^2+b^2$... and we already know how to do those. So all is well with the world (with a lot of work, at least).
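The reduction formula is easy to verify in any particular instance by differentiating the right-hand side; here is a sympy check (sympy assumed available) for $n=2$, $b=1$:

```python
# Verifying the n = 2, b = 1 instance of the reduction formula by
# differentiating the proposed antiderivative (assumes sympy is installed).
from sympy import symbols, atan, diff, simplify

w = symbols('w')
integrand = 1 / (w**2 + 1)**2
# Reduction formula with n = 2, b = 1:
#   int dw/(w^2+1)^2 = (1/2) * ( w/(w^2+1) + int dw/(w^2+1) )
antiderivative = (w / (w**2 + 1) + atan(w)) / 2
# The derivative of the right-hand side must equal the integrand.
assert simplify(diff(antiderivative, w) - integrand) == 0
```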




Okay. Do we now need to go and discuss what to do when the denominator is a cubic polynomial, then a fourth degree polynomial, then a fifth degree polynomial, etc.?



No! We can play the same game we did with fractions above, and take an arbitrary rational function and rewrite it as a sum of fractions, with each fraction a power of a degree 1 or an irreducible quadratic polynomial. The key to this is the Fundamental Theorem of Algebra, which says that every polynomial with real coefficients can be written as a product of linear and irreducible quadratic polynomials. In fact, one of the major reasons why people wanted to prove the Fundamental Theorem of Algebra was to make sure that we could do integrals of rational functions in the manner we are discussing.




So here is a method for integrating a rational function:




To compute
$$\int\frac{P(x)}{Q(x)}\,dx$$
where $P(x)$ and $Q(x)$ are polynomials:




  1. If $\deg(P)$ is equal to or greater than $\deg(Q)$, perform long division and rewrite the fraction as a polynomial plus a proper fraction (with numerator of degree strictly smaller than the denominator). Integrate the polynomial (easy).


  2. Completely factor the denominator $Q(x)$ into linear terms and irreducible quadratics. This can be very hard to do in practice. In fact, this step is the only obstacle to really being able to do these integrals always, easily. Factoring a polynomial completely and exactly can be very hard. The Fundamental Theorem of Algebra says that there is a way of writing $Q(x)$ that way, but it doesn't tell us how to find it.



  3. Rewrite the fraction as a sum of fractions, each of which has a denominator which is either a power of linear polynomial or of an irreducible quadratic. (More about this below.)


  4. Do the integral of each of the fractions as discussed above.





How do we rewrite as a sum?



Write $Q(x)$ as a product of powers of distinct polynomials, each linear or irreducible quadratic; e.g., $Q(x) = x(2x-1)(x+2)^3(x^2+1)(x^2+2)^2$.



For each power $(ax+b)^n$, use $n$ fractions of the form:

$$\frac{A_1}{ax+b} + \frac{A_2}{(ax+b)^2} + \cdots+\frac{A_n}{(ax+b)^n},$$
where $A_1,\ldots,A_n$ are constants-to-be-determined-later.



For each power of an irreducible quadratic, $(ax^2+bx+c)^m$, use $m$ fractions of the form:
$$\frac{C_1x+D_1}{ax^2+bx+c} + \frac{C_2x+D_2}{(ax^2+bx+c)^2} + \cdots + \frac{C_mx+D_m}{(ax^2+bx+c)^m},$$
where $C_1,D_1,\ldots,C_m,D_m$ are constants-to-be-determined-later.



So in the example above, with $Q(x) = x(2x-1)(x+2)^3(x^2+1)(x^2+2)^2$, we would get:
$$\begin{align*}
&\quad+\frac{A}{x} &\text{(corresponding to the factor }x\text{)}\\
&\quad+\frac{B}{2x-1} &\text{(corresponding to the factor }2x-1\text{)}\\
&\quad+\frac{C}{x+2} + \frac{D}{(x+2)^2}+\frac{E}{(x+2)^3} &\text{(corresponding to the factor }(x+2)^3\text{)}\\
&\quad+\frac{Gx+H}{x^2+1} &\text{(corresponding to the factor }x^2+1\text{)}\\
&\quad+\frac{Jx+K}{x^2+2} + \frac{Lx+M}{(x^2+2)^2}&\text{(corresponding to the factor }(x^2+2)^2\text{)}
\end{align*}$$



And now, the final step: how do we figure out what all those constants-to-be-determined-later are? We do the operation and compare it to the original! Let's say we are trying to calculate
$$\int\frac{3x-2}{x(x+2)^2(x^2+1)}\,dx$$
(not the same as above, but I want to do something small enough that we can do it).




We set it up as above; then we do the algebra. We have:
$$\begin{align*}
\frac{3x-2}{x(x+2)^2(x^2+1)} &= \frac{A}{x} + \frac{B}{x+2} + \frac{C}{(x+2)^2} + \frac{Dx+E}{x^2+1}\\
&= \frac{\small A(x+2)^2(x^2+1) + Bx(x+2)(x^2+1) + Cx(x^2+1) + (Dx+E)x(x+2)^2}{x(x+2)^2(x^2+1)}.
\end{align*}$$
Now we have two options: we can do the algebra in the numerator and write it as a polynomial. Then it has to be identical to $3x-2$. For example, the coefficient of $x^4$ in the numerator would be $A+B+D$, so we would need $A+B+D=0$; the constant term would be $4A$, so $4A=-2$; and so on.



The other method is the Heaviside cover-up method. Since the two expressions have to be the same, the numerator has to be the same as $3x-2$ when evaluated at every value of $x$. If we pick $x=0$ and plug it into the left numerator, we get $-2$; if we plug it into the right hand side, we get $A(2)^2(1) = 4A$, so $4A=-2$, which tells us what $A$ is. If we plug in $x=-2$, the left hand side is $3(-2)-2 = -8$, the right hand side is $C(-2)((-2)^2+1) = -10C$, so $-10C = -8$, which tells us what $C$ is; we can then simplify and continue doing this (you'll note I selected points where a lot of the summands on the right simply evaluate to $0$) until we get the value of all the coefficients.



And once we've found all the coefficients, we just break up the integral into a sum of integrals, each of which we already know how to do. And so, after a fair amount of work (but mainly just busy-work), we are done.
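For this example, sympy's `apart` (assuming sympy is available) finds all the constants at once; recombining with `together` must return the original quotient:

```python
# Letting sympy find the partial fraction constants for the example above
# (assumes sympy is installed).
from sympy import symbols, apart, together, simplify

x = symbols('x')
expr = (3*x - 2) / (x * (x + 2)**2 * (x**2 + 1))
decomposed = apart(expr, x)
# Recombining the partial fractions must give back the original quotient.
assert simplify(together(decomposed) - expr) == 0
```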



Friday 7 June 2019

real analysis - Are $l_{p} \cap k$ and $l_{p} \cap k_{0}$ complete in $\| \cdot \|_{\infty}$? Are they complete in $l_{p}$ norms?

Let the space $k$ be the space of all convergent sequences of real numbers, and let $k_{0}$ be the space of all sequences which converge to zero, both with the $l_{\infty}$ norm. Are $l_{p} \cap k$ and $l_{p} \cap k_{0}$ complete in $\| \cdot \|_{\infty}$? Are they complete in the $l_{p}$ norms?




Note $k \subseteq l_{\infty}$: for a sequence $\{x_{n}\} \in k$, the limit $\displaystyle\lim x_{n}$ exists. And $k_{0} \subseteq l_{\infty}$: for a sequence $\{x_{n}\} \in k_{0}$, we have $\displaystyle\lim_{n \to \infty}x_{n} = 0$.



The help would be appreciated!

modular arithmetic - Proving identities (mod $pq$) using Fermat's little theorem?

I have come across this question, which reminded me of Fermat's little theorem; I don't know whether Fermat's theorem is actually in use in the following mathematical statements.



An integer $a$ is coprime with $p$ and coprime with $q$ ($p$ and $q$ are different prime numbers); then prove that



$$a^{(p-1)(q-1)} \equiv 1 \pmod{pq}$$



$$p^{q-1}+q^{p-1} \equiv 1 \pmod{pq}$$



any help would be appreciated, thanks in advance.
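For what it's worth, both congruences are easy to spot-check numerically before proving them; a quick sketch (standard library only; $p=5$, $q=7$ chosen arbitrarily):

```python
# Numerically spot-checking both congruences for small primes
# (pure standard library; this is evidence, not a proof).
from math import gcd

p, q = 5, 7
n = p * q
for a in range(1, n):
    if gcd(a, n) == 1:
        # a^((p-1)(q-1)) == 1 (mod pq) for every a coprime to pq
        assert pow(a, (p - 1) * (q - 1), n) == 1
# p^(q-1) + q^(p-1) == 1 (mod pq)
assert (pow(p, q - 1, n) + pow(q, p - 1, n)) % n == 1
```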

general topology - 1-1 correspondence between [0,1] and [0,1)

I wonder how to build a 1-1 correspondence between [0,1] and [0,1). My professor offers an example such that 1 in the first set corresponds to 1/2 in the second set, and 1/2 in the first set corresponds to 1/4 in the second. I don't quite understand it. Does it mean every element in the first set corresponds to its half value in the second set? Wouldn't that leave some elements of the second set out? Does it still count as a 1-1 correspondence? Does it connect to the Schroder-Bernstein theorem?
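The professor's map can be made completely explicit. Only the points $1, 1/2, 1/4, \dots$ move, each to its half; every other point is fixed. Nothing in $[0,1)$ is left out, since $1/2^{k}$ is the image of $1/2^{k-1}$. Here is a sketch in Python (exact arithmetic via `Fraction`; the function name is mine):

```python
# A concrete version of the bijection [0,1] -> [0,1): the points
# 1, 1/2, 1/4, ... are each sent to their half, every other point is
# fixed (a Hilbert-hotel style shift; exact arithmetic via Fraction).
from fractions import Fraction

def f(x):
    if x == 0:
        return x
    y = Fraction(1)
    while y > x:          # find the largest power 1/2^k with 1/2^k <= x
        y /= 2
    return x / 2 if x == y else x

# f(1) = 1/2, f(1/2) = 1/4, f(1/3) = 1/3, and 1 is never a value,
# so f maps [0,1] one-to-one onto [0,1).
```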

Thursday 6 June 2019

complex analysis - How to calculate $\int_{0}^{\infty} \frac{ x^2 \log(x) }{1 + x^4}\,dx$?




I would like to calculate $$\int_{0}^{\infty} \frac{ x^2 \log(x) }{1 + x^4}\,dx$$ by means of the Residue Theorem. This is what I tried so far: We can define a path $\alpha$ that consists of a half-circle part $\alpha_r$ (with radius $r$) and the segment connecting the endpoints of that half circle, so that we have $$ \int_{-r}^{r} f(x)\, dx + \int_{\alpha_r} f(z)\, dz = \int_{\alpha} f(z)\, dz = 2 \pi i \sum_{v = 1}^{k} \operatorname{Res}(f;a_v), $$ where the $a_v$ are the poles of $f(z)=\frac{z^2 \log(z)}{1+z^4}$ enclosed by $\alpha$, i.e. the zeros of $1+z^4$ in the upper half-plane.



If we know $$\lim_{r \to \infty} \int_{\alpha_r} f(z) dz = 0 \tag{*} $$ then we know that $$\lim_{r \to \infty} \int_{-r}^{r} f(x) dx = \int_{-\infty}^{\infty} f(x) dx = 2 \pi i \sum_{v=1}^{k} \text{Res}(f;a_v) $$ and it becomes 'easy'.



Q: How do we know (*) is true?


Answer



It's a bit more tricky than what you describe, but the general idea is correct. Instead of integrating from $0$ to $\infty$, one can integrate from $-\infty$ to $+\infty$ slightly above the real axis. Because of the logarithm, the integral from $-\infty$ to $0$ will give a possibly non-zero imaginary part, but the real part will be an even function of $x$. So we can write:
\begin{align}
\int_0^{\infty}\frac{x^2\ln x}{1+x^4}dx&=\frac12\mathrm{Re}\,\int_{-\infty+i0}^{\infty+i0}\frac{x^2\ln x}{1+x^4}dx=\\&=\pi\cdot \mathrm{Re}\left[ i\left(\mathrm{res}_{x=e^{i\pi/4}}\frac{x^2\ln x}{1+x^4}+\mathrm{res}_{x=e^{3i\pi/4}}\frac{x^2\ln x}{1+x^4}\right)\right]=\\
&=\pi\cdot \mathrm{Re}\left[ i\left(\frac{\pi e^{i\pi/4}}{16}-
\frac{3\pi e^{3i\pi/4}}{16}\right)\right]=\\
&=\pi\cdot\mathrm{Re}\frac{(1+2i)\pi}{8\sqrt{2}}=\frac{\pi^2}{8\sqrt{2}}.
\end{align}



Now as far as I understand the question was about how can one justify the vanishing of the integral over the half-circle $C$ which in its turn justifies the application of residue theorem. Parameterizing that circle as $x=Re^{i\varphi}$, $\varphi\in(0,\pi)$, we see that
\begin{align}
\int_C \frac{x^2\ln x}{1+x^4}dx=\int_0^{\pi}\frac{iR^3e^{3i\varphi}\left(i\varphi+\ln R\right)}{1+R^4e^{4i\varphi}}d\varphi=O\left(\frac{\ln R}{R}\right),
\end{align}

which indeed vanishes as $R\rightarrow \infty$.
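As a quick independent sanity check, the value $\frac{\pi^2}{8\sqrt{2}}$ can be confirmed numerically with mpmath quadrature (mpmath comes bundled with sympy; this is evidence, not a proof):

```python
# Numerically confirming the value pi^2 / (8*sqrt(2)) with mpmath
# quadrature; splitting the interval at 1 helps with the log singularity.
from mpmath import mp, quad, log, pi, sqrt, inf

mp.dps = 30
value = quad(lambda t: t**2 * log(t) / (1 + t**4), [0, 1, inf])
expected = pi**2 / (8 * sqrt(2))
assert abs(value - expected) < mp.mpf('1e-15')
```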


Wednesday 5 June 2019

Limits of sequences connected with real and complex exponential

Let us denote $S_{n}(x)=1+\frac{x}{1 !}+\frac{x^{2}}{2!}+ ... + \frac{x^{n}}{n!}$.





  1. How could be calculated the limit
    $$L(x)=\lim_{n\to \infty}\frac{S_{n}(n x)}{e^{n x}}=\lim_{n\to \infty}\frac{1+\frac{nx}{1 !}+\frac{(nx)^{2}}{2!}+ ... + \frac{(nx)^{n}}{n!}}{e^{n x}}, x\ge 0,\,\,\,\ ?$$


  2. Similar question, when $x\ge 0$ above is replaced by $z\in \mathbb{C}$ ?


  3. More generally, if $r_{n}$ is a sequence of real numbers with $r_{n}\searrow 1$, what are the limits $\lim_{n\to \infty}\frac{S_{n}(n r_{n}x)}{e^{n x}}$ for $x\ge 0$ and $\lim_{n\to \infty}\frac{S_{n}(n r_{n}z)}{e^{n z}}$ for $z\in \mathbb{C}$? (Are they equal to the limits from points 1) and 2) above?)




Initially, my intuition told me that the limit $L(x)$ is probably equal to one for any $x\ge 0$. But on further thought, my opinion is that $L(x)=0$ for all $x\ge 0$. In support of this guess, for $x=1$ I calculated $\frac{S_{n}(n)}{e^{n}}$ for several consecutive values of $n$, and it appeared to form a decreasing sequence. In the general case, I tried to apply the Stolz-Cesaro lemma to the ratios $\frac{S_{n}(nx)}{e^{n x}}$ and $\frac{e^{nx}}{S_{n}(nx)}$, but it did not work. I also tried to estimate $|S_{n}(nx)-e^{nx}|$ by using the Lagrange form of the remainder for Taylor series, but again I could not reach any conclusion.

In the complex case, the situation seems to be more intricate. Indeed, for $z=i$, we get $$\frac{S_{n}(n i)}{e^{n i}}=\frac{S^{(\cos)}_{n}(n)+iS^{(\sin)}_{n}(n)}{\cos(n)+i\sin(n)},$$ where $S^{(\cos)}_{n}$ and $S^{(\sin)}_{n}$ represent the partial sums of order $n$ of the series expansions of the cosine and sine functions. The limit as $n\to \infty$ looks more tricky in this case, since the limits $\lim_{n\to \infty}\cos(n)$ and $\lim_{n\to \infty}\sin(n)$ do not exist.

The infinite integral of $\frac{\sin x}{x}$ using complex analysis

The problem I came across is the evaluation of $$\int_0^\infty\frac{\sin x}{x}\,dx$$ I chose the function $f(z) = \dfrac{e^{iz}}{z}$ and took a contour consisting of the segments $[\varepsilon , R ] + [R , R+iy] + [R+iy , -R+iy] + [-R+iy, -R]+[-R, -\varepsilon]$. The problem is: how do I now find the integrals along each of these segments?

linear algebra - How can I use an $n$-dimensional all-ones square matrix $J$ to represent its powers?



$J$ is the matrix described above, and I want to obtain $${ J }^{ 2 }, { J }^{ 3 },\ldots,{ J }^{ n }.$$ I got some insight from the answer to: What are the eigenvalues of a matrix whose elements all equal 1? However, I still cannot figure out how to do it, although the characteristic equation is $${ \lambda }^{ n }=n{ \lambda }^{ n-1 },$$ and with the help of the Cayley-Hamilton theorem I can get $${ J }^{ n }=nJ^{ n-1 }.$$ But I am confused: why can I not iterate as $${ J }^{ n }=n{ J }^{ n-1 }=n(n-1){ J }^{ n-2 }=\cdots=n!\,{ J }?$$ Can anyone explain this?


Answer



If $\def\tr{\operatorname{tr}}A$ is any rank$~1$ matrix of size $n\times n$ (for instance your all-entries-$1$ matrix$~J$), then its characteristic polynomial is $X^{n-1}(X-\tr A)$, where the factor $X^{n-1}$ is deduced from the $n-1$-dimensional eigenspace for eigenvalue $0$, and the final factor is there to make the sum of the roots (with multiplicity) of the characteristic polynomial equal to the trace of $A$, as it should be. But since the factor $X^{n-1}$ came from an actual eigenspace (as opposed to a generalised eigenspace), one only gets a single factor $X$ in the minimal polynomial. So if $n>1$ the minimal polynomial of any $n\times n$ matrix of rank$~1$ is $X(X-\tr A)$. (Check that even in case $\tr A=0$, the minimal polynomial is $X(X-\tr A)=X^2$ rather than $X$.)




In case of your matrix $J$, the trace is$~n$, so the minimal polynomial is $X(X-n)=X^2-nX$; your matrix satisfies $J^2=nJ$. From this it follows easily that $J^{k+1}=n^kJ$ for all $k\in\mathbf N$.
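A quick numerical sanity check of $J^2=nJ$ (and hence $J^{k+1}=n^kJ$) for $n=4$, sketched with plain nested lists:

```python
# Checking J^2 = nJ and J^3 = n^2 J for the 4x4 all-ones matrix,
# using plain nested lists (no external libraries).
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 4
J = [[1] * n for _ in range(n)]
J2 = matmul(J, J)
assert J2 == [[n] * n for _ in range(n)]        # J^2 = nJ
J3 = matmul(J2, J)
assert J3 == [[n**2] * n for _ in range(n)]     # J^3 = n^2 J, not 4*3*J
```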


Tuesday 4 June 2019

real analysis - Prove that if $a_n$ is a non-negative sequence, $\lim a_n = 0 \implies \lim \sqrt{a_n} = 0$

The book I am using for my Advance Calculus course is Introduction to Analysis by Arthur Mattuck.



Prove that if $a_n$ is a non-negative sequence, then $\lim a_n = 0 \implies \lim \sqrt{a_n} = 0$.



This is my rough proof for this question. I was wondering if anybody could look it over and see if I made a mistake, or if there is a simpler way of doing this problem. I want to thank you ahead of time; it is greatly appreciated. So let's begin:



Proof:




[proof posted as an image]

algebra precalculus - Inequality $\frac{b}{ab+b+1} + \frac{c}{bc+c+1} + \frac{a}{ac+a+1} \ge \frac{3m}{m^2+m+1}$




Let $m=(abc)^{\frac{1}{3}}$, where $a,b,c \in \mathbb{R^{+}}$. Then prove that



$\frac{b}{ab+b+1} + \frac{c}{bc+c+1} + \frac{a}{ac+a+1} \ge \frac{3m}{m^2+m+1}$



In this inequality I first applied Titu's lemma; then the RHS comes out as $9/(\text{some terms})$. To maximise the RHS I tried to minimise the denominator by applying the AM-GM inequality, but then the reverse inequality comes out.
Please help.


Answer



By Holder:
$$\sum_{cyc}\frac{b}{ab+b+1}=1-\frac{(abc-1)^2}{\prod\limits_{cyc}(ab+b+1)}\geq1-\frac{(m^3-1)^2}{(\sqrt[3]{a^2b^2c^2}+\sqrt[3]{abc}+1)^3}=$$

$$=1-\frac{(m-1)^2(m^2+m+1)^2}{(m^2+m+1)^3}=1-\frac{(m-1)^2}{m^2+m+1}=\frac{3m}{m^2+m+1}.$$
Done!


Monday 3 June 2019

sequences and series - How to compute the Riemann zeta function at negative integers?

There already questions such as $1 + 1 + 1 +\cdots = -\frac{1}{2}$ and Why does $1+2+3+\cdots = -\frac{1}{12}$? which show how $\zeta(0)$ and $\zeta(-1)$ can be calculated.



What are some ways to evaluate the Riemann zeta function at any negative integer that appears to have a direct correlation to $\sum_{n=1}^\infty\frac1{n^s}$? That is to say, results that can be attained by manipulating this series somewhat directly. So not things such as the reflection formula or the Bernoulli numbers which don't seem to relate to the above series.
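Whatever series manipulation one finds, the target values are pinned down by the analytic continuation; sympy (assumed installed) evaluates $\zeta$ exactly at non-positive integers, which is useful as a check on any derivation:

```python
# Exact values of the analytically continued zeta function at
# non-positive integers, via sympy (assumed installed).
from sympy import zeta, Rational

assert zeta(0) == Rational(-1, 2)    # "1 + 1 + 1 + ... = -1/2"
assert zeta(-1) == Rational(-1, 12)  # "1 + 2 + 3 + ... = -1/12"
assert zeta(-2) == 0                 # a trivial zero
assert zeta(-3) == Rational(1, 120)
```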

finite fields - Understanding Primitive Polynomials in GF(2)?

This is an entire field over my head right now, but my research into LFSRs has brought me here.



It's my understanding that a primitive polynomial in $GF(2)$ of degree $n$ indicates which taps will create an LFSR. For example, $x^4+x^3+1$ is primitive in $GF(2)$ and has degree $4$, so a 4-bit LFSR will have taps on bits 4 and 3.




Let's say I didn't have tables telling me what taps work for what sizes of LFSRs. What process can I go through to determine that $x^4+x^3+1$ is primitive and also show that $x^4+x^2+x+1$ is not (I made that equation up off the top of my head from what I understand about LFSRs, I think it's not primitive)?



Several pages online say that you should divide $x^e+1$ (where $e$ is $2^n-1$ and $n$ is the degree of the polynomial) by the polynomial, e.g. for $x^4+x^3+1$, you do $(x^{15}+1)/(x^4+x^3+1)$. I can divide polynomials, but I don't know what the result of that division will tell me. Am I looking for something that divides evenly? Does that mean it's not primitive?
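To make the division test concrete: $p(x)$ of degree $n$ is primitive over $GF(2)$ exactly when $p(x)$ divides $x^e+1$ for $e=2^n-1$ but for no proper divisor of $e$; equivalently, when $x$ has order $2^n-1$ modulo $p(x)$. A sketch in Python (the bitmask representation and function names are my own):

```python
def gf2_mulmod(a, b, p, n):
    """Multiply polynomials a*b over GF(2), reducing modulo p (degree n).

    Polynomials are bitmasks: bit i holds the coefficient of x^i,
    e.g. 0b11001 stands for x^4 + x^3 + 1.
    """
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:   # degree reached n: subtract (xor) p
            a ^= p
    return r

def gf2_powmod(a, e, p, n):
    """Compute a^e modulo p over GF(2) by square-and-multiply."""
    r = 1
    while e:
        if e & 1:
            r = gf2_mulmod(r, a, p, n)
        a = gf2_mulmod(a, a, p, n)
        e >>= 1
    return r

def prime_factors(m):
    """Distinct prime factors of m, by trial division."""
    fs, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

def is_primitive(poly, n):
    """True iff x has order 2^n - 1 modulo poly, i.e. poly is primitive."""
    m = (1 << n) - 1
    if gf2_powmod(0b10, m, poly, n) != 1:      # x^(2^n - 1) must be 1 ...
        return False
    # ... and x^(m/q) must NOT be 1 for any prime q dividing m
    return all(gf2_powmod(0b10, m // q, poly, n) != 1 for q in prime_factors(m))

assert is_primitive(0b11001, 4)      # x^4 + x^3 + 1: primitive
assert not is_primitive(0b10111, 4)  # x^4 + x^2 + x + 1: not primitive
assert not is_primitive(0b11111, 4)  # irreducible, but x only has order 5
```

This settles both examples: $x^4+x^3+1$ passes, while $x^4+x^2+x+1$ fails (it factors as $(x+1)(x^3+x^2+1)$, so the remainder of $(x^{15}+1)/(x^4+x^2+x+1)$ is nonzero).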

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How can I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hopital's rule? I know that when I use L'Hopital I easily get $$ \lim_{h\rightarrow 0}...