Thursday 31 March 2016

complex analysis - Compute the Integral via Residue Theorem



My goal is to compute $$I=\int_{0}^{+\infty}\frac{\cos{ax}}{1+x^2}dx$$ where $a>0$.




$$I=\frac{1}{2}\int_{-\infty}^{+\infty}\frac{\cos{ax}}{1+x^2}dx=\frac{1}{2}\operatorname{Re}\bigg(\int_{-\infty}^{+\infty}\frac{e^{iax}}{1+x^2}dx\bigg).$$



Let $f(z)=\frac{e^{iaz}}{1+z^2}$.



By the Residue Theorem, $\int_{-R}^{R}\frac{e^{iax}}{1+x^2}dx+\int_{\gamma_R}\frac{e^{iaz}}{1+z^2}dz=2\pi i\operatorname{Res}(f,i)=2\pi i\cdot\frac{e^{-a}}{2i}=\pi e^{-a}$, where $\gamma_R$ denotes the upper semicircle centered at $O$ with radius $R$.



As $R\to +\infty$,



$\int_{-R}^{R}\frac{e^{iax}}{1+x^2}dx \to \int_{-\infty}^{+\infty}\frac{e^{iax}}{1+x^2}dx.$




Now, I am stuck on how to prove $\int_{\gamma_R}\frac{e^{iaz}}{1+z^2}dz$ goes to $0$ as $R$ goes to infinity.



Anyone know how to do it? Many thanks.


Answer



Note that for $z=R(\cos(t)+i\sin(t))$ with $R>1$ and $t\in [0,\pi]$
$$\left|\frac{e^{iaz}}{1+z^2}\right|=\frac{e^{-aR\sin(t)}}{R^2-1} \leq \frac{1}{R^2-1}.$$
Hence, as $R\to +\infty$,
$$\left|\int_{\gamma_R}\frac{e^{iaz}}{1+z^2}dz\right|\leq \frac{|\gamma_R|}{R^2-1}=\frac{\pi R}{R^2-1}\to 0.$$
P.S. This is a particular case of the Jordan Lemma.
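As a sanity check (a rough numeric sketch; the cutoff `R` and grid size are arbitrary choices), the full-line integral can be approximated with a plain trapezoid rule and compared against the value $\pi e^{-a}$ that the residue computation produces:

```python
import math

def cos_integral(a, R=200.0, n=400_000):
    # composite trapezoid rule for the integral of cos(a x)/(1 + x^2)
    # over [-R, R]; the integrand decays like 1/x^2, so the tail is tiny
    h = 2 * R / n
    s = 0.5 * (math.cos(-a * R) + math.cos(a * R)) / (1 + R * R)
    for k in range(1, n):
        x = -R + k * h
        s += math.cos(a * x) / (1 + x * x)
    return s * h

a = 1.0
approx = cos_integral(a)
exact = math.pi * math.exp(-a)   # value predicted by the residue calculus
print(approx, exact)
```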



How can I get the result of this limits



I found a limits equation




$$\lim_{n \to \infty}\left(1-\frac{\lambda}{n}\right)^n=e^{-\lambda}$$



How can I get the result of $e^{-\lambda}$?



Normally, we can use



$$\lim_{x \to \infty}\left(1+\frac{n}{x}\right)^x=e^n$$



And how can I get $e^n$?


Answer




You may know that (sometimes this is used as definition of $e$)
$$\lim_{n\to\infty}\left(1+\frac1n\right)^n=e $$
Taking $k$th powers, $k\in\Bbb N$, we obtain
$$e^k=\lim_{n\to\infty}\left(1+\frac1{n}\right)^{nk}=\lim_{n\to\infty}\left(1+\frac k{nk}\right)^{nk}.$$
The latter limit is the limit of a subsequence of $\lim_{n\to\infty}\left(1+\frac k{n}\right)^{n}$, hence this also converges to $e^k$, once we know it converges at all. In fact, the same method shows that more generally
$$\lim_{n\to\infty}\left(1+\frac {ak}n\right)^n =\left(\lim_{n\to\infty}\left(1+\frac {a}n\right)^n\right)^k$$
for $k\in\Bbb N$ and arbitrary $a$ (provided both limits exist).
As a consequence, $$\lim_{n\to\infty}\left(1+\frac {a}n\right)^n=e^a\qquad \text{for all }a\in\Bbb Q_{\ge0}.$$
Finally, using $(1-\frac1n)^n(1+\frac1n)^n=(1-\frac1{n^2})^n$, you can show that the same also holds for $a=-1$ and hence also for all $a\in\Bbb Q$.
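Both limits are easy to probe numerically; a small sketch (the value $\lambda = 2.5$ is just a sample choice):

```python
import math

lam = 2.5                              # sample value of lambda
for n in (10**3, 10**5, 10**7):
    print(n, (1 - lam / n) ** n)       # approaches exp(-lam)

approx = (1 - lam / 10**7) ** 10**7
print(abs(approx - math.exp(-lam)))    # very small
```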


real analysis - A sequence defined by setting $a_{n+1}=\frac{a_{n}^{p-1}+\frac{a}{a_{n}^{p-1}}}{p}$. Find $\lim_{n\rightarrow\infty} a_{n}$




Question: For $p\in\mathbb{N}$, $a>0$ and $a_{1}>0$, define

the sequence $\left\{ a_{n}\right\}$ by setting



$a_{n+1}=\frac{a_{n}^{p-1}+\frac{a}{a_{n}^{p-1}}}{p}$. Find $\lim_{n\rightarrow\infty}a_{n}$.




My approach: Actually I can prove that the sequence is monotonically
decreasing and bounded below by $\sqrt[p]{a}$.



Book mentions the answer $\sqrt[p]{a}$




but I cannot say that every monotonically decreasing sequence converges
to its lower bound. There can exist $l$ such that $l\geq \sqrt[p]{a}$




and $\lim_{n\rightarrow\infty}a_{n}= l$; we cannot be sure
about it,



so it would be wrong if I simply said $l={\sqrt[p]{a}}$.



My proof of the lower bound uses the AM $\geq$ GM inequality:




$\frac{A+B}{p}\geq$$\sqrt[p]{A.B}$



$a_{n+1}\geq\sqrt[p]{a_{n}^{p-1}\cdot\frac{a}{a_{n}^{p-1}}}\Longrightarrow a_{n+1}\geq\sqrt[p]{a}$



Edit Actually Real question in the book was $\Longrightarrow$ This



And its answer $\Longrightarrow$ This



I thought there was some misprint in the book, so I corrected it and asked the question.


Answer




Thought it'd be a good idea to write the comment as an answer since I wasn't descriptive enough:



$1)$ Use the Monotone Convergence Theorem: it says that a bounded increasing (decreasing) sequence converges to its supremum (infimum).



$2)$ You've shown that the sequence is monotone and bounded; hence the infimum exists, hence the limit exists.



$3)$ Let that limit be $l$. Since the limit of any subsequence of a convergent sequence is that same as the limit of the sequence itself, we get the following:



$$\lim_{n\rightarrow\infty}a_n=\lim_{n\rightarrow\infty}a_{n+1}$$




Hence, using the definition of the sequence and taking limits of both sides, we get the equation:



Note: This equation is valid only for the formulation of the problem before the edit
$$l=\frac{l^{p-1}+\frac{a}{l^{p-1}}}{p}$$



I suppose you can solve this; if not, I can be of assistance.



The problem is your AM-GM: you have the sum of only $2$ things:



Hence $$a_{n+1}p=a_n^{p-1}+\frac{a}{a_n^{p-1}} \geq 2\sqrt{a}$$

hence $a_{n+1}\geq \frac{2\sqrt{a}}{p}$
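If, as the question's edit suggests, the intended recursion was $a_{n+1}=\frac{(p-1)a_{n}+\frac{a}{a_{n}^{p-1}}}{p}$ (which is exactly Newton's method for $x^p=a$), then the book's answer $\sqrt[p]{a}$ checks out numerically. A sketch under that assumption, with sample values $p=3$, $a=8$:

```python
p, a = 3, 8.0        # sample values; the p-th root of a is 2
x = 5.0              # arbitrary starting point a_1 > 0
for _ in range(50):
    # conjectured corrected recursion (Newton's method for x^p = a)
    x = ((p - 1) * x + a / x ** (p - 1)) / p
print(x)             # converges to 2, the cube root of 8
```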


calculus - Solving inequality involving square root and division by logarithm: $\sqrt n<\frac{n}{\log(n)}-2$



I would like to solve the inequality $\sqrt n<\frac{n}{\log(n)}-2$. For some reason I had never done this before. This is clearly the same as $\frac{n}{\log(n)}-\frac{n}{\sqrt{n}}>2$, which is the same as $\frac{\sqrt n(\sqrt n-\log(n))}{\log(n)}>2$. But here I get stuck. Thanks in advance.


Answer



It doesn’t look like it has a closed-form solution, other than in terms of the two solutions of $\frac{\sqrt n(\sqrt n-\log(n))}{\log(n)}{\color{blue}=}2$.



Solving numerically with Mathematica, the inequality holds between $n=1$ and the smaller of the two solutions, and again for $n>18.138863\ldots$.







The calculations here show that Mathematica finds no closed form for the two intersection points and denotes them simply as the points where $\frac{\sqrt n(\sqrt n-\log(n))}{\log(n)}=2$.



I don’t know what numerical technique Mathematica used, but the function is well behaved near its two solutions, so no fancy method beyond guessing and checking is needed. For the larger solution, for example, you could try $n=18$, based on the graph, see that the curve lies below $y=2$, then try $n=18.5$, see that it lies above, then $n=18.2$, and so on.
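That guess-and-check loop is just a crude bisection; a small sketch that pins down the larger solution (the bracket $[18, 19]$ is read off the graph):

```python
import math

def f(n):
    # difference between the two sides: n/log(n) - 2 - sqrt(n)
    return n / math.log(n) - 2 - math.sqrt(n)

lo, hi = 18.0, 19.0        # f(18) < 0 < f(19), so a root lies between
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:  # sign change in the left half
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2
print(root)                # close to 18.1388...
```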





Wednesday 30 March 2016

real analysis - Is this a necessary and sufficient condition for the derivative to exist at $C$?

Suppose we want to prove that the derivative of a function across an interval exists at $C$, but the derivative at $C$ cannot be found. We know the function must be continuous. Can we take the limit of derivative from the negative and positive direction of $C$ and show that if they are equal, the derivative at $C$ exists and is equal to the limit obtained? Is this a necessary and sufficient condition?




EDIT:



Sufficiency - If a function is a derivative along some interval, it does not have a removable singularity at $C$.



Necessity - There is no interval of a derivative of some function in which a jump or essential discontinuity occurs.



There are two cases in which the condition is met if this is a necessary and sufficient condition. One is where the derivative is continuous, the other is where there is a removable discontinuity in the derivative. Is the latter possible?

Can someone explain the following modular arithmetic?

$7^{-1} \bmod 120 = 103$



I would like to know how $7^{-1} \bmod 120$ results in $103$.
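$7^{-1} \bmod 120$ means the residue $x$ with $7x \equiv 1 \pmod{120}$, and the extended Euclidean algorithm produces it; a short sketch:

```python
def egcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, _ = egcd(7, 120)     # g == 1 since gcd(7, 120) = 1
inv = x % 120
print(inv)                 # 103, since 7*103 = 721 = 6*120 + 1
print(pow(7, -1, 120))     # same result via Python 3.8+'s built-in modular inverse
```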

Nature of primes as the building blocks of integers?




It is considered standard in mathematics that all integers can be expressed as the product of primes:
$$n=p_1^{a_1}p_2^{a_2}...p_k^{a_k}$$
where $p_i$ is prime and $p_{i-1}<p_i$.

Answer



Yes, of course your assumption is correct.



Consider for example, $\large{n=3^{56}\cdot 5^{48}}.$



Let us colour exponents that are composite, and indent for each level of tetration for which the action must be repeated:




\begin{align}
&\quad n&=&\quad 3^{\color\red {56}}\cdot 5^{\color\red {48}}\\
\text{tetration level 1:}&\quad \color\red {56}&=&\quad 2^3\cdot 7\\
&\quad \color\red {48}&=&\quad 3\cdot 2^{\color\red {4}}\\
\text{tetration level 2:}&\quad \color\red {4}&=&\quad 2^2\\
\Rightarrow&\quad \large{n}&=&\quad \large{3^{2^{3}\cdot 7}\cdot 5^{3\cdot 2^{2^2}}}\\
\end{align}



If it were not true (i.e., if there were some stage at some level of tetration at which factorisation yielded neither a prime nor a composite), it would of course contradict the fundamental theorem of arithmetic.



linear algebra - Maximum eigenvalue of a special $m$-matrix

I have a set of matrix, which is:





  1. Real symmetric positive definite. Very sparse.

  2. Diagonal elements are positive while off-diagonal elements are negative.

  3. $\displaystyle a_{ii}=-\sum^{n}_{{j=1}\atop{j\ne i}} a_{ij}$

  4. $a_{ii} \in (0,1]$

  5. $a_{ij} \in (-1,0]$ when $ i \neq j$



My experiments show that the largest eigenvalue of each of the matrices I have is larger than 1. Can someone help me prove that $ \lambda_{max} >1 $ for such matrices?




My first thought was to prove that $Ax=\lambda x \leq x$ cannot hold. But I couldn't get any breakthrough. Thanks!

algebra precalculus - Understanding a proof that $17\mid (2x+3y)$ iff $17\mid(9x +5y)$




I was studying number theory and came across this question.



Example 1.1. Let $x$ and $y$ be integers. Prove that $2x + 3y$ is divisible by 17 if and only if $9x + 5y$ is divisible by 17.



Solution. $17 \mid (2x + 3y) \implies 17 | [13(2x + 3y)]$, or $17 \mid (26x + 39y) \implies 17 \mid (9x + 5y)$. Conversely, $17 \mid (9x + 5y) \implies 17 \mid [4(9x + 5y)]$, or $17 \mid (36x + 20y) \implies 17 \mid (2x + 3y)$.



I have a difficulty understanding how $$17\mid(26x+39y)$$ implies $$17\mid (9x+5y)$$ and vice versa.
I know this question as already been asked here but I didn't understand from that answer and since I'm new to Math SE, I don't have enough points to add a comment to that post to clarify that answer.


Answer




I think the "if and only if" (abbreviated "iff" in the title of this question and in the textbook you're studying) is confusing you. It kind of suggests that one requires the other but the other does not necessarily require the one, when in fact the two conditions are mutually dependent: one requires the other and the other requires the one.



Do you remember how to add and subtract binomials from Algebra 101? Align the like terms in columns and then perform arithmetic as usual.



$$\begin{array}{rrrr}
& 26x & + & 39y \\
- & 9x & + & 5y \\
\hline
= & 17x & + & 34y \\
\end{array}$$




(okay, that's not properly aligned because in the time it takes me to figure out how to do it, twenty other answers will get posted and they'll be like "you copied from me").



Then it's easy to see that $$\frac{17x + 34y}{17} = x + 2y.$$ Since $17x + 34y$ is clearly a multiple of 17, it follows that if you subtract it from another multiple of 17, you will get yet another multiple of 17.






Bonus: if $17 \mid 9x + 5y$, then either $x$ or $y$ may be a square, but not both.
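Since everything here lives modulo a small number, the equivalence itself can also be brute-force checked over a sample grid of integers — a quick sketch:

```python
# check 17 | (2x + 3y)  <=>  17 | (9x + 5y) on a sample range
ok = all(
    ((2 * x + 3 * y) % 17 == 0) == ((9 * x + 5 * y) % 17 == 0)
    for x in range(-50, 51)
    for y in range(-50, 51)
)
print(ok)  # True
```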


fractions - Expression for binomial coefficient denominator




I'm trying to find an analytical expression for the denominator of $\pmatrix{-1/2\\k}$ in terms of $k$ when the fraction is fully reduced.



E.g., the first several such denominators, starting with $k=0$, are $1,2,8,16,128,256,1024,2048,32768$, so there are various power-of-$2$ jumps, but I haven't been able to figure out the overall pattern so that I can nail down the expression.



Does anyone know of such an expression, or know of a good place to look to try to figure this out? If not, does anyone know if this is a fool's errand?



Thanks for any help.


Answer



This is integer sequence A046161 in the On-Line Encyclopedia of Integer Sequences.




You can find several formulas and details there.


calculus - How to find the following limits



$$\lim_{x\to \infty} \frac{x^2(1+\sin^2x)}{(x+\sin x)^2}$$




I can't figure out how to manipulate the algebra so as to get the limit I want. Any hint?


Answer



The limit doesn't exist. Note that, dividing by $x^2$ the numerator and denominator, we arrive at the equality



$$\lim_{x\to \infty} \frac{x^2(1+\sin^2 x)}{(x+\sin x)^2}=\lim_{x\to \infty} \frac{1+\sin^2x}{\left(1+\frac{\sin x}{x}\right)^2}.$$ But, since $\left(1+\frac{\sin x}{x}\right)^2\to 1$ as $x\to \infty$ and $\lim_{x\to \infty}(1+\sin^2x)$ doesn't exist, we conclude that the original one doesn't exist.


Tuesday 29 March 2016

inequality - Prove that $\frac{1}{n+1} + \frac{1}{n+3}+\cdots+\frac{1}{3n-1}>\frac{1}{2}$



Without using Mathematical Induction, prove that $$\frac{1}{n+1} + \frac{1}{n+3}+\cdots+\frac{1}{3n-1}>\frac{1}{2}$$




I am unable to solve this problem and don't know where to start. Please help me solve it using the laws of inequalities.



Edit: $n$ is a positive integer such that $n>1$.


Answer



The sum can be written as
\begin{align}
\frac{1}{n+1} + \frac{1}{n+3} + \ldots + \frac{1}{3n - 1} & = \sum_{i=1}^n \frac{1}{n + 2i - 1}.
\end{align}
Now recall the AM-HM inequality:

$$
\frac 1n\sum_{i=1}^n(n + 2i - 1) > \frac{n}{\sum_{i=1}^n \frac{1}{n + 2i - 1}}.
$$
(The requirement that $n > 1$ guarantees that the inequality is strict.)



Rearrange to get
\begin{align}
\sum_{i=1}^n \frac{1}{n + 2i - 1} & > \frac{n^2}{\sum_{i=1}^n(n + 2i - 1)} = \frac 12.
\end{align}
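A quick numeric check of the inequality (the limiting value $\frac12\ln 3 \approx 0.549$, while not needed for the proof, shows how tight the bound is):

```python
import math

def s(n):
    # 1/(n+1) + 1/(n+3) + ... + 1/(3n-1)
    return sum(1 / (n + 2 * i - 1) for i in range(1, n + 1))

for n in (2, 5, 50, 1000):
    print(n, s(n))        # each value exceeds 1/2
print(math.log(3) / 2)    # the limit of s(n) as n grows
```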


Prove this binomial sum by induction

Can someone help me with this one?
Prove by mathematical induction



For $$n\geq1$$
$$\displaystyle{\sum^n_ {k=0} k^n\binom{n}{k}(-1)^k= (-1)^nn!}$$




It's easy to see that for $$n=1$$
$$\displaystyle{0^1\binom{1}{0}(-1)^0+1^1\binom{1}{1}(-1)^1= -1}$$ and $$\displaystyle{(-1)^11!=-1}$$



My problem is how to use the induction hypothesis.
I'm trying to solve it this way:



Perhaps I just have to add this to my sum:
$${\sum_{k={n+1}}^{n+1}} (n+1)^{n+1}\binom {n+1}{n+1}(-1)^{n+1} $$
And get:
$$\displaystyle{\sum^n_ {k=0} \biggl[k^n\binom{n}{k}(-1)^k}\biggr]+(n+1)^{n+1}\binom {n+1}{n+1}(-1)^{n+1}$$

And as:
$$\displaystyle{\sum^n_ {k=0} k^n\binom{n}{k}(-1)^k= (-1)^nn!}$$
Then I get:
$$\displaystyle{(-1)^nn!+(n+1)^{n+1}(-1)^{n+1}}$$
I tried this:
$$\left(-1\right)^{n}\,n!+\left(-n-1\right)\,\left(n+1\right)^{n}\,
\left(-1\right)^{n}$$
And this:
$$\left(-1\right)^{n}\,\left(n!-n\,\left(n+1\right)^{n}-\left(n+1
\right)^{n}\right)$$

But I can't figure out how to get to this:
$$\displaystyle{ (-1)^{n+1}(n+1)!}$$
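The identity itself is easy to spot-check numerically before wrestling with the induction step — a quick sketch:

```python
from math import comb, factorial

def lhs(n):
    # sum_{k=0}^{n} k^n * C(n, k) * (-1)^k
    return sum(k**n * comb(n, k) * (-1)**k for k in range(n + 1))

for n in range(1, 9):
    print(n, lhs(n), (-1)**n * factorial(n))   # the two columns agree
```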

Monday 28 March 2016

indefinite integrals - Flip side of Feynman's trick for Integration

If I differentiate the integral:
$$\int_{-a+2}^{a-2} \ (a-x) \, da$$



then I get $4-2a$.




1) Is it possible to get back to the integral in the form $ \int_{-a+2}^{a-2} \ (a-x) \, da$?



The application would be to find a way to use the 'flip side of Feynman's trick' described on pages 90 and 91 of Inside Interesting Integrals by Paul J. Nahin. The author appears to find the integral that, when integrated again (double integration), leads to the solution. So I thought that if one differentiated the original definite integral, one could find what the inner integral should be; otherwise the integral seems to have to be guessed.



To illustrate what I'm getting at, so he finds:



$$\int_{0}^{1} \frac{x^a-1}{\ln(x)}\,dx\,=\ln(a+1),\qquad a \ge 0$$



by using:




$$\int_{0}^{a} \ x^y dy\, = \frac{x^a-1}{\ln(x)}\,$$



So I thought could one differentiate



$$\int_{0}^{1} \frac{x^a-1}{\ln(x)} \, dx,\qquad a \ge 0$$



to get:



$$\int_{0}^{a} \ x^y dy\,$$




because without him saying so I cannot see how one could guess this integral.



Hence the question 1.

number theory - Simplifying Linear Congruences

I want to solve the given linear congruence



$$ 5x+1\equiv 2 \mod 6 $$




My Approach:



\begin{align}&\Rightarrow& 5x -1&\equiv 0\mod 6 \\
&\Rightarrow& 6x -x-1&\equiv 0\mod6 \tag{ as $6x\equiv 0\mod6$}\\
&\Rightarrow& 0 -x-1&\equiv 0\mod6 \\
&\Rightarrow& x&\equiv -1\mod 6 \end{align}




Am I correct so far? If so, how do I proceed further?
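The remaining step is just reading off the residue, $x\equiv -1\equiv 5\pmod 6$; with such a small modulus this is easy to confirm by brute force — a quick sketch:

```python
# all residues x mod 6 satisfying 5x + 1 ≡ 2 (mod 6)
sols = [x for x in range(6) if (5 * x + 1) % 6 == 2]
print(sols)   # [5], i.e. x ≡ -1 (mod 6)
```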

Sunday 27 March 2016

geometry - Question about solved problem

I am struggling to understand a step in the accepted solution of
How Can One Prove $\cos(\pi/7) + \cos(3 \pi/7) + \cos(5 \pi/7) = 1/2$.



I am new to this site and could not figure out where to ask this question.





Can someone please enlighten me why $u^5+u^3+u=a$ is equivalent to $ua+1=a$?




With the algebra, I just get stuck. I also can't find any property that establishes the equivalence.



Thanks in advance.

logic - What "real numbers" (elements in $\mathbb R$) are people referring to?



To define mathematical objects, it seems one defines them in terms of other mathematical objects.



However various mathematical objects have different "definitions".



E.g. it seems people "construct" the real numbers (they use objects other than the real numbers, to create an algebraic structure isomorphic to what others consider the real numbers) and then define those as the real numbers. Yet this appears ambiguous to me.



If one were to refer to the "real numbers" would they then be referring to Dedekind Cuts?




Or would they be referring to equivalence classes of Cauchy Sequences?



Or to some other "construction" entirely?






Here are my best two guesses as to what people mean:



$1.$ They are simply referring to any structure isomorphic to the (unique) complete ordered field.




$2.$ They are referring to an equivalence-class$^{*}$ of all structures that are isomorphic to the (unique) complete ordered field.



$*\small(\text{Not an equivalence class in the proper sense, since such an object can't be a set, by Russell's paradox})$



The reason $1$ would not result in ambiguity is that we almost never need to make use of the original "definitions" per se, because we can embed the rationals, integers, etc. inside the real numbers. Option $2$, however, would avoid this problem entirely, as we would have only a single definition.


Answer



Your question is essentially a philosophical one. When ordinary people talk about number, they have a very definite concept in mind. Mathematicians are ordinary people, but have a professional obligation to give logical justifications for their use of the number concept. So when they are concerned about logical foundations, mathematicians give existence proofs for the various number systems by constructing explicit witnesses for systems satisfying the required properties. When they go back to their day-to-day mathematical work, mathematicians forget about the details of these existence proofs and happily work as if $\Bbb{R}$ were a subset of $\Bbb{C}$ and $\sqrt{2}$ were an object that belongs to $\Bbb{R}$, satisfies $x^2 = 2$, but isn't itself a set or a sequence.



There is a seminal paper by Benacerraf What Numbers Could Not Be that anyone interested in the philosophical foundations of mathematics should read.



limits - Is $\frac{\sin(x)}{x}$ continuous at $x=0$? What's the value at $x=0$?

Is $\dfrac{\sin(x)}{x}$ continuous at $x = 0$?
What's the value at $x=0~?$

real analysis - Is it possible for an increasing continuous function on $[0,\infty)$ not to be Hölder continuous at some point?




Let $f$ be a real-valued function defined on the domain $[0,\infty)$, such that $f$ is strictly increasing and continuous on $[0,\infty)$. It seems to me that, at any point $a$, in a neighborhood $U$, $f(x)-f(a)$ has to be dominated by something like $(x-a)^\alpha$ for some $\alpha>0$, i.e. I suspect that this kind of function will be Hölder continuous at any given point (maybe with different $\alpha$'s). But I cannot make this intuition precise using formal mathematical techniques.




Is it possible for $f$ not to be Hölder continuous at some point? If it is not possible, can anyone give me a hint about the techniques that might be helpful for addressing this kind of question?




In case such an example does not exist, please do not give a full answer, but hint at possible methods that I have to learn so that I can address this question myself. Any help is appreciated.



Edit: By the way, note that at any point where the function is differentiable and the derivative is finite, the function is automatically Lipschitz continuous at that point. Also, I suppose, though I am not sure, that at points where the function has one of its Dini derivatives finite, it is Lipschitz at that point. Thus the main points of interest are the points where the function is not differentiable and the Dini derivatives blow up, which I think leaves us the only possibility, the point $0$.


Answer




Hint: Try something like



$$f(x) = \sum_{n=1}^{\infty}c_n x^{1/n}$$



for suitable positive constants $c_n.$


Saturday 26 March 2016

geometry - Expanding (or reducing) a shape




If I want to expand or reduce a shape, what mathematical methods are there to do this?



I'd like to understand scaling which seems simple enough. Using my limited knowledge I would do this by measuring the angle and distance of each point from a given anchor point, and then re-plot them by multiplying the distance against a scaling factor.



[figure: scaling a shape from an anchor point]



I have no idea how optimal this method is, or how to express it using mathematical notation, so I'd like to know.



I'd also very much like to understand how to make a shape expand or reduce around its interior. For example, I thought it would be something like this...




[figure: attempted expansion of shape A into shape B]



I use a circle of a given radius on each point, create a bisecting line at each point, and then construct the larger or smaller shape using the intersection points of the circles. However, as you can see, this method has numerous errors: shape B obviously has angles that differ from shape A. What's the correct way to expand / shrink a shape in this manner?



Plain english and mathematical notation answers are gratefully requested, I'm still learning a lot of notation.





I'm not sure that the second example is clear enough, so I've made this image to describe what I'm looking for.




[figure: the desired offset shape]



Using this example, it's clear that projection scaling isn't going to produce the shape required. What is this sizing method called, and how is it done mathematically?


Answer



You can follow your anchor point (projection) approach using a point in the interior. That will preserve angles. It is not clear how you got from A to B in the second drawing. The lines between corresponding vertices do not meet in a point, which is why the angles change.


sequences and series - How do I evaluate this: $\sum_{k=1}^{+\infty}\frac{(-1)^k}{\sqrt{k+1}+\sqrt{k}}$?

I have tried to evaluate the sum below using some standard alternating sums, but I did not succeed, so my question is:





Question:

How do i evaluate this sum





$\sum_{k=1}^{+\infty}\frac{{(-1)^k}}{\sqrt{k+1}+\sqrt{k}}$ ?



Note: Wolfram Alpha shows its value here.

complex analysis - To evaluate principal value of $\int\limits_{-\infty}^{\infty} \frac{\cos z}{z-w} dz$


Evaluate the principal value of the integral $$\int\limits_{-\infty}^{\infty} \frac{\cos z}{z-w} \ dz, \ \ \ \ \ |\text{Im} \ w|>0. $$




I could not solve this problem during the tutorial class. Upon looking at the solution sheet that was uploaded the next day, I am still not clear about the method used to solve it. The given solution is:




$$ \int\limits_{-\infty}^{\infty} \frac{\cos z}{z-w} \ dz = \begin{cases} 2\pi i \ \text{Res}_{z=w} \left(\frac{e^{iz}}{2(z-w)}\right) = \pi i e^{iw}, & \text{if} \ \text{Im} \ w >0 \\
-2\pi i \ \text{Res}_{z=w} \left(\frac{e^{-iz}}{2(z-w)}\right) = -\pi i e^{-iw}, & \text{if} \ \text{Im} \ w <0 \end{cases}$$



The part which I don't understand is the beginning part of the solution where they claim (without giving any reason) that $\int\limits_{-\infty}^{\infty} \frac{\cos z}{z-w} \ dz = \int\limits_{-\infty}^{\infty} \frac{e^{iz}}{z-w} \ dz$ $\text{if}$ $\text{Im} \ w>0$ and $\int\limits_{-\infty}^{\infty} \frac{\cos z}{z-w} \ dz = \int\limits_{-\infty}^{\infty} \frac{e^{-iz}}{z-w} \ dz$ if $\text{Im} \ w<0$.



Evaluating the integrals with the exponential terms is clear to me. But what is the reasoning behind replacing the cosine with the exponential function? Need help understanding this.

calculus - Show equality of two integrals: $\int_0^\infty\frac{\cos x}{1+x}dx = \int_0^\infty\frac{\sin x}{(1+x)^2}dx$



I need to prove that:



$$\int_0^\infty \frac{\cos x}{1+x} \ dx = \int_0^\infty \frac{\sin x}{(1+x)^2}\ dx$$




but one converges absolutely whereas the other does not.



I've tried a few things like substitution and integration by parts. I've also tried to use linearity and subtract one from the other, hoping the result would be zero.
Yet none of those worked for me. I guess it probably demands some creative substitution I can't see, but I'm not sure.


Answer



You may integrate by parts, obtaining
$$
\int_0^M \frac{\cos x}{1+x} \ dx =\frac{\sin M}{1+M} + \int_0^M \frac{\sin x}{(1+x)^2}\ dx,
$$ then let $M \to \infty$ to get the announced result.



Non-Standard analysis and infinitesimal

Can someone please explain how Non-Standard Analysis is used to justify infinitesimals?



I am not very clear about this, but apparently it has something to do with the hyperreals.

Friday 25 March 2016

probability - Bus arrival times and minimum of exponential random variables



I came across a question that is supposed to show us how the properties of the exponential distribution can be used.



I know and have shown that $$P(X_i<X_j)=\frac{\lambda_i}{\lambda_i+\lambda_j},$$ where each $X_i$ is an independent exponential random variable with parameter $\lambda_i$.



And I also assume I have to use the memoryless property for the question too.





Buses numbered 1, 2 and 3 arrive at a bus stop. The time in minutes between consecutive arrivals of buses 1,2 and 3 follow an exponential distribution with parameters $0.1$, $0.2$ and $0.4$ respectively. The time that a bus arrives is independent of the time that any other bus arrives.




  • Find the mean time between arrivals of buses numbered 1.

  • You are currently at the stop waiting for Bus 2. Find the
    probability that a bus numbered 1 arrives before a bus numbered 2.

  • You are currently at the bus stop waiting for a bus numbered 2. Find the probability that a bus numbered 2 will turn up before one numbered 1 or 3.





So far, I calculated that for question 1, $$\min\{ X_1,X_2,X_3\} \sim\mathrm{Exp}(0.7),$$ which means that the mean is equal to $$\dfrac{1}{0.7}.$$



For question 2, I find that $$P(X_1<X_2)=\dfrac{0.1}{0.3}.$$



And finally for question 3, I get $$ \dfrac{0.2}{0.7}$$



I am not sure if I approached this correctly. I guess the setting of the problem makes it harder to understand what I am supposed to be doing.


Answer



Buses numbered 1 have $\mathrm{Exp}(\lambda_1)$ interarrival times, so the expected waiting time is simply

$$\mathbf E [X_1] = \frac{1}{\lambda_1} = 10$$



For question 2 you can either use your result or use the law of iterated expectation
$$P(X_1 < X_2) = \mathbf E\big[ P(X_1 < X_2 | X_2)\big] = \mathbf E\big[ 1- \mathrm e^{-\lambda_1 X_2}\big ] = \frac{\lambda_1}{\lambda_1 + \lambda_2} =\frac{0.1}{0.3}$$



For question 3 you can use your result to say that



$$P\big(X_2 < \min\{X_1, X_3\}\big) = \frac{\lambda_2}{\lambda_1+\lambda_2 + \lambda_3}=\frac{0.2}{0.7}$$



Below you find a short Julia simulation to check that the results are actually correct (note that Julia uses the mean instead of the rate as parameter of the exponential distribution)




julia> using Distributions

julia> λ1, λ2, λ3 = 0.1, 0.2, 0.4;

julia> B1 = Distributions.Exponential(1/λ1)
Exponential{Float64}(θ=10.0)

julia> B2 = Distributions.Exponential(1/λ2)
Exponential{Float64}(θ=5.0)


julia> B3 = Distributions.Exponential(1/λ3)
Exponential{Float64}(θ=2.5)

julia> mean([rand(B1) for i in 1:1000000])
10.001321659735758

julia> 1/λ1
10.0


julia> mean([rand(B1) < rand(B2) for i in 1:1000000])
0.333354

julia> λ1 /(λ1+λ2)
0.3333333333333333

julia> mean([rand(B2) < min(rand(B1), rand(B3)) for i in 1:1000000])
0.28588

julia> λ2 /(λ1+λ2+λ3)

0.2857142857142857

complex analysis - find $\lim_{z\rightarrow 0}\left(\frac{1}{z^{2}}-\frac{1}{\sin^{2}z}\right)$

Find $\lim_{z\rightarrow 0}\left(\frac{1}{z^{2}}-\frac{1}{\sin^{2}z}\right)$.
First I let $z=x+iy$, substituted into the limit, and set $y=0$, so it becomes a limit as $x\rightarrow 0$. Then I used L'Hôpital, but I didn't get the answer.
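The Laurent expansion $\frac{1}{\sin^{2}z}=\frac{1}{z^{2}}+\frac{1}{3}+O(z^{2})$ suggests the limit is $-\frac{1}{3}$; a rough numeric probe (not a proof) is consistent with that:

```python
import math

for z in (0.1, 0.01, 0.001):
    val = 1 / z**2 - 1 / math.sin(z) ** 2
    print(z, val)   # approaches -1/3
```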

elementary number theory - Solving Non-Linear Congruences




I'm trying to solve an exercise from Niven I.M "An Introduction to Theory of Numbers" on page 106, problem 10. The problem wants you to find all the solutions to the congruence \begin{equation*}
x^{12} \equiv 16 \quad (\text{mod }17)
\end{equation*}

Here is my attempt;



First I found that $3$ is a primitive root $\pmod{17}$; in particular, $3^{16} \equiv 1 \pmod{17}$.



This means that we can write $16 \equiv 3^{8} \quad (\text{mod }17)$. So we have \begin{equation*}
x^{12} \equiv 3^{8} \quad (\text{mod }17)
\end{equation*}

Then multiplying the congruence by $3^{16}$ we see that
\begin{equation*}
x^{12} \equiv 3^{24} \quad (\text{mod } 17)
\end{equation*}

We see that $x=9$ is a solution because $9=3^2$.



To find the remaining solution I think we need to have \begin{equation*}
x^{12} \equiv 3^{8+16k} \quad (\text{mod }17)
\end{equation*}


for $k \in \mathbb{Z}/17\mathbb{Z}$.



So we need $12|(8+16k)$.
However, I'm not sure about my last argument that $12|(8+16k)$. Is it right or wrong?
Any help is appreciated.


Answer



$k$ would be in $\mathbb{Z}$. You can also note that both $12$ and $16k+8$ are divisible by $4$; this means $3$ would need to divide $4k+2$. Working mod $3$, we get $k$ congruent to $1$ mod $3$. $k=1$ gives a cube root of $9$; $k=4$ gives $15$, $k=7$ gives $8$, and $k=10$ gives $2$. Your intuition works, but your reduction could have gone further.
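With a modulus as small as $17$, the full solution set is easy to confirm by brute force; the four residues below match the values $9$, $15$, $8$ and $2$ obtained above:

```python
# all x in (Z/17Z)* with x^12 ≡ 16 (mod 17)
sols = sorted(x for x in range(1, 17) if pow(x, 12, 17) == 16)
print(sols)   # [2, 8, 9, 15]
```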


linear algebra - Prove transpose of pseudoinverse commutes



How can I show that $(A^T)^+=(A^+)^T$, where
$A^+$ is Moore-Penrose Inverse?




I know there are 4 properties of the Moore-Penrose generalized inverse, for example:
$$A^+AA^+=A^+. $$



To prove it, could I take the transpose of the above,
$$(A^+AA^+)^T=(A^+)^T,$$
and somehow simplify the LHS so it looks like the LHS of the original statement? Is this on the right track at all? I can get the LHS to be:
$$(A^+)^T A^T (A^+)^T,$$ but this does not seem to help me.



Any hints please?


Answer




You are on the right track; we just need to show both matrices satisfy the same 4 properties.



Let $Z=A^{+}$, so $AZA=A, \;ZAZ=Z, \;(AZ)^{T}=AZ, \;(ZA)^{T}=ZA$.



Let $Y=(A^{+})^{T}=Z^{T}$. Taking transposes in the 4 properties above gives



$\;\;\;A^{T}YA^{T}=A^{T},\;YA^{T}Y=Y, \;AY^{T}=YA^{T},\;Y^{T}A=A^{T}Y$.



Now show that if $X=(A^{T})^{+}$, so $A^{T}XA^{T}=A^{T}, \;XA^{T}X=X, \;(A^{T}X)^{T}=A^{T}X,\;(XA^{T})^{T}=XA^{T}$,




then $Y=X$ since $Y$ satisfies these same 4 properties of the Moore-Penrose inverse.


How many sequences of consecutive integers are there where the sum equals the length



I am really sad and I noticed that the sequence:




$0 , 1 , 2$



Has its sum equal to its length.



I was wondering how many these existed.



e.g:



$ 1$




$-3 , -2 , -1 , 0 , 1 , 2 , 3 , 4 , 5$ $(= 9)$



I got so far and got stuck, I reduced it down to finding out how many solutions there are to the equation:



$m^2 - n^2 + m + n = 0$ , $0 < n < m$



Can anyone tell me how to find this out?


Answer



The sum of $0 + 1 + 2 + \ldots + (n-1)$ is $\frac{n(n-1)}{2}$ (and has length $n$) so the sum of any length $n$ sequence of consecutive integers starting at $m$ is $nm + \frac{n(n-1)}{2}$ (since such sequences are of the form $m+0,m+1,m+2,\ldots,m+(n-1)$) and we need to solve the diophantine equation $$nm + \frac{n(n-1)}{2} = n.$$




We can cancel $n$ and then double to get $2m + n = 3.$ This equation has integer solutions only for odd $n$, which tells us there are no even-length sequences with that property. On the other hand, if $n$ is odd, there is exactly one such sequence.






n | m  | 2m+n  | sequence
-------------------------
1 | 1 | 3 | 1
2 | impossible...
3 | 0 | 3 | 0 1 2

4 | impossible...
5 | -1 | 3 | -1 0 1 2 3
6 | impossible...
...
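The table's pattern is easy to reproduce programmatically from the relation $2m+n=3$ derived above — a small sketch:

```python
def sequence_with_sum_equal_length(n):
    # returns the unique length-n run of consecutive integers whose sum
    # is n, or None when none exists (2m + n = 3 requires odd n)
    if (3 - n) % 2 != 0:
        return None
    m = (3 - n) // 2
    return list(range(m, m + n))

for n in range(1, 10):
    seq = sequence_with_sum_equal_length(n)
    if seq is not None:
        assert sum(seq) == n and len(seq) == n
    print(n, seq)
```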

Thursday 24 March 2016

Prove that $(\sum_{i=1}^n i)^2$ = $\sum_{i=1}^n i^3$ by induction

Prove that: $(\sum_{i=1}^n i)^2$ = $\sum_{i=1}^n i^3$




I can use the fact that $\sum_{i=1}^n i$ = $n(n+1)/2$ after the inductive hypothesis is invoked.
I'm not sure where to start; I would usually break down one side, but there aren't usually two sums, so I'm not sure.

real analysis - Find a one-to-one correspondence between $[0,1]$ and $(0,1)$.







Establish a one-to-one correspondence between the closed interval $[0,1]$ and the open interval $(0,1)$.



this is a problem in real analysis.




Thanks

Is there a constant that reverses Jensen's inequality?



The general Jensen's inequality states: $\varphi\left(\mathbb{E}[X]\right) \leq \mathbb{E}\left[\varphi(X)\right]$. I'm wondering if there is a constant $c$ (function of $\varphi$), such that $c\varphi\left(\mathbb{E}[X]\right) \geq \mathbb{E}\left[\varphi(X)\right]$?



More specifically, I want to show $\log\mathbb{E}[e^X]\leq(e-1)\mathbb{E}[X]$. Here $(e-1)$ would be the $c$ mentioned above.



I've noticed that many other common inequalities have such 'reversing' constants (or at least a corresponding lower bound). Like $\log(1+x)\leq x$, and $\frac{x}{x+1}\leq \log(1+x)$.


Answer



For the particular problem about the exponential function, let $X=-100$, $0$, or $100$ each with probability $\frac{1}{3}$. Then $E(X)=0$, but $E(e^X)$ is large.


algebra precalculus - Quadratic factor to complex numbers




How to convert this quadratic factor to complex number form? (With steps please)
Reference: $Z = a + bi$, $i = \sqrt{-1}$



$$-3 + \frac{\sqrt{-12}}{2}$$



Thanks!


Answer



Hint: $$-3+\frac{\sqrt{-12}}{2} = -3 + \frac{\sqrt{12}\cdot\sqrt{-1}}{2}$$


Computing the limit of a function containing a power series.

Prove that if the sequence $a_{n}$ of real numbers converges to a finite limit;
\begin{align}
\lim_{n \rightarrow \infty} a_{n} = g,
\end{align}

then
\begin{align}
\lim_{x \to \infty}
\left({\rm e}^{-x}\sum_{n = 0}^{\infty}a_{n}\,{x^{n} \over n!}\right) = g.
\end{align}
The initial observation is the power series of $e^{x}$ is given by
\begin{align}
e^{x} = \sum_{n = 0}^{\infty} \frac{x^{n}}{n!}.
\end{align}
I want to use summation by parts somehow while using some sort of telescoping technique. Is this the right technique? How do I get started with this?

order theory - $\mathbb{N}\times\mathbb{Q}$ isomorphic to $\mathbb{Q}\times\mathbb{N}$

Consider $\mathbb{N}\times\mathbb{Q}$ and $\mathbb{Q}\times\mathbb{N}$, both with the ordering given by $(a,b)\leq(c,d)$ iff $a<c$, or $a=c$ and $b\leq d$.

Are $\mathbb{N}\times\mathbb{Q}$ and $\mathbb{Q}\times\mathbb{N}$ isomorphic as totally ordered sets?



I think that they aren't so, I need to find a function $f:\mathbb{Q}\times\mathbb{N}\to \mathbb{N}\times\mathbb{Q}$ in order to do that with the use of the following definition:



Definition of isomorphic: Let $(X,≤_X)$ and $(Y,≤_Y)$ be posets. $Y$ is isomorphic to $X$ as a poset if there exists an isomorphism $f:X→Y$ of posets.



By





$(a,b)\leq(c,d)$ iff $a<c$, or $a=c$ and $b\leq d$



I meant the left lexicographic order

Wednesday 23 March 2016

elementary number theory - Do such unique representations of positive integers exist?




It is well known that every positive integer $n>0$ can be represented uniquely in the form
$$
n=2^k(2m+1),
$$

for integers $k,m\geq0$. Does there exist one or more constants $c>1$ such that
$$
2^k(2m+c)
$$

is a unique representation for positive integers greater than some lower bound?


Answer




For $c=3$ the expression is unique, though you can not get a lot of numbers this way. For example, no power of $2$ can be written this way.



To see that, for the numbers which can be expressed this way, the expression is unique: Just remark that $$2^{k_1}(2m_1+3)=2^{k_2}(2m_2+3)\implies k_1=k_2\implies 2m_1+3=2m_2+3\implies m_1=m_2$$



A similar argument goes through for any odd $c$. For even $c$ the argument fails since we can not conclude that $k_1=k_2$.
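Both claims for $c=3$ (uniqueness where a representation exists, and the gaps at powers of $2$) are easy to confirm by brute force. A sketch, with my own choice of names and bounds:

```python
def rep(k, m, c=3):
    return 2**k * (2*m + c)

# collect all representations 2^k (2m + 3) up to a bound
N = 2000
seen = {}
for k in range(12):
    for m in range(N):
        v = rep(k, m)
        if v <= N:
            seen.setdefault(v, []).append((k, m))

# uniqueness: no value arises from two different (k, m) pairs
assert all(len(pairs) == 1 for pairs in seen.values())

# coverage gaps: powers of 2 are never of this form (their odd part is 1 < 3)
assert all(2**j not in seen for j in range(11))
```

The uniqueness is exactly the argument above: $k$ is forced to be the $2$-adic valuation, and then $m$ is forced by the odd part.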


calculus - Does the series $\sum\limits_{n=2}^{\infty }\frac{n^{\log n}}{(\log n)^{n}}$ converge?



I'm trying to find out now whether the series
$\sum_{n=2}^{\infty } a_{n}$ converges or not when
$$a_n = \frac{n^{\log n}}{(\log n)^{n}}$$




Again, I tried d'Alembert $\frac{a_{n+1}}{a_{n}}$, Cauchy condensation test $\sum \limits_{n=2}^{\infty } 2^{n}a_{2^n}$, and they both didn't work for me.



I can't use Stirling, nor the integral test.



Edit: I'm searching for a solution which uses sequences theorems and doesn't involve functions.



Thank you


Answer



A comparison test will work here; the key is to write both numerator and denominator in terms of exponentials with bases not involving $n$. Note that the numerator is $e^{\log^2 n}$, which is less than $e^{n/2}$ for sufficiently large $n$. The denominator is $e^{n \log \log n}$, which is greater than $e^n$ for sufficiently large $n$; so for all sufficiently large $n$ the terms are less than $e^{-n/2}$ and thus the series converges.



real analysis - Change of Variables Theorem



I am searching for a proof of the following theorem:




THEOREM



Suppose $(X_1, \ldots, X_n)$ is a random vector with joint density function $f_{X_1, \ldots, X_n}(x_1, \ldots , x_n)$ and $g$ is a smooth transformation on the domain of $(X_1, \ldots, X_n)$. Then the joint density of $(Y_1, \ldots, Y_n)= g(X_1, \ldots, X_n)$ is
$$ f_{Y_1, \ldots, Y_n }(y_1, \ldots, y_n) = f_{X_1, \ldots, X_n}(g^{-1}(y_1, \ldots, y_n)) \cdot |\det \mathcal{J}(g^{-1}(y_1, \ldots , y_n))|.$$





Maybe someone can give me some hints or references to prove this theorem.


Answer



This is straightforward using the change of variables formula for multiple integrals, together with the characterization
of the law of a random vector $X$ by the map



$$
f \mapsto E[f(X)], \qquad \text{for measurable } f \ge 0.
$$
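The theorem can also be sanity-checked by simulation. The sketch below (my own choice of example, not from the answer) takes $X$ uniform on $(0,1)$ and $g(x)=x^2$; then $g^{-1}(y)=\sqrt{y}$, $|\det\mathcal{J}(g^{-1}(y))| = \frac{1}{2\sqrt{y}}$, so the formula predicts $f_Y(y)=\frac{1}{2\sqrt{y}}$ on $(0,1)$, and hence $P(a<Y<b)=\sqrt{b}-\sqrt{a}$:

```python
import math
import random

random.seed(0)

# predicted density of Y = X^2 when X ~ Uniform(0,1):
# f_Y(y) = f_X(sqrt(y)) * |d/dy sqrt(y)| = 1/(2*sqrt(y)) on (0,1)
def predicted_density(y):
    return 1.0 / (2.0 * math.sqrt(y))

# Monte Carlo estimate of P(a < Y < b) versus the integral of f_Y
a, b = 0.25, 0.64
n = 200_000
hits = sum(1 for _ in range(n) if a < random.random() ** 2 < b)
empirical = hits / n
exact = math.sqrt(b) - math.sqrt(a)   # = 0.8 - 0.5 = 0.3
assert abs(empirical - exact) < 0.01
```

The agreement is exactly what the density-transformation formula guarantees for any smooth monotone $g$.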



calculus - Short question about the proof of Cantor nested sets theorem



I have a question about a part of this proof I have:



We have two sequences such that:




$\forall n : a_n\le a_{n+1}\le b_{n+1}$ $a_n$ is a monotone increasing sequence bounded above by $b_{n+1}.$



$\forall n : b_n\ge b_{n+1}> a_{n+1}$ $b_n$ is a monotone decreasing sequence bounded below by $a_{n+1}.$



We know that every monotone and bounded sequence converges, so $b_n\to c_b$ and $a_n\to c_a$; that is, for all sufficiently large $n$, $|c_b-b_n|<\frac \epsilon 2$ and $|c_a-a_n|<\frac \epsilon 2$.



Now we're left with showing that the limit is the same for both sequences: $b_n\to c_a$



$\lim(b_n-c_b)=\lim(b_n-a_n+a_n-c_b)\le \color{blue}{|\lim(b_n-a_n)|}+|\lim(c_b-a_n)|\le \frac \epsilon 2+\frac \epsilon 2=\epsilon$




My question is why the part marked in blue is $|\lim(b_n-a_n)|\le\frac \epsilon 2$ ?



NOTE: I may be wrong about the name of this proof, we call it just Cantor theorem and it's related to BW theorem.


Answer



The conclusion of Cantor's theorem is that the infinite intersection $\bigcap_{n=1}^\infty [a_n,b_n]$ is not empty (in fact, this intersection is the interval $[c_a,c_b]$). As the example by @LuizCordeiro demonstrates, it is not necessary that $c_a = c_b$. One of the proofs of the Bolzano-Weierstraß theorem constructs a convergent subsequence of a given bounded sequence $(x_k)_{k=1}^\infty \subset [a,b]$ by successively choosing an interval $[a_n,b_n]$ containing infinitely many terms of $(x_k)_{k=1}^\infty$ so that $[a_n,b_n]$ is one of the two halves of the previous interval $[a_{n-1},b_{n-1}]$. This construction explicitly ensures that $b_n-a_n = 2^{-n}(b-a) \to 0$ as $n \to \infty$, so that we can later conclude that $c_a = c_b$ is the limit of the constructed subsequence. Below is a proof of Cantor's theorem.



The sequence $(a_n)_{n=1}^\infty$ is monotonically increasing and bounded from above by $b_1$, as $a_n \leq b_n \leq b_1$. Hence, $(a_n)_{n=1}^\infty$ converges to a finite limit $c_a \in \mathbb{R}$. Recall that $c_a:=\lim_{n\to\infty}a_n$ is in fact the least upper bound $\sup_n a_n$ of the set $\{a_n\;|\;n\geq1\}\subset \mathbb{R}$. Indeed, if we assume that $a_n>c_a$ for some $n$, then putting $\varepsilon=\frac12(a_n−c_a)>0$, we observe that for all $m \geq n$, we would have $a_m \geq a_n > c_a+ \varepsilon$, a contradiction. Also, if we assume that some $s<c_a$ were an upper bound, then putting $\varepsilon=\frac12(c_a-s)>0$, we obtain $a_n\leq s < c_a−\varepsilon$ for all $n$; again a contradiction. Similarly, we have $c_b := \lim_{n\to\infty}b_n = \inf_n b_n$, the greatest lower bound of the set $\{b_n\;|\;n\geq 1\}$.

Now, observe that $a_m \leq b_n$ for arbitrary $m$ and $n$, not only when $m=n$. Indeed, for $p = \max\{m,n\}$ we have $a_m \leq a_p \leq b_p \leq b_n$. Fix an arbitrary index $n\geq1$ and consider the inequalities $a_m \leq b_n$ for all $m \geq 1$. They imply that $b_n$ is an upper bound for $(a_m)_{m=1}^\infty$. Hence, $c_a = \sup_m a_m \leq b_n$. Since $n$ was chosen arbitrarily, the last inequality holds for all $n \geq 1$. Thus $c_a$ is a lower bound for $(b_n)_{n=1}^\infty$, and hence, $c_a \leq \inf_n b_n = c_b$. Thus, the interval $[c_a,c_b]$ is not empty. Since $a_n \leq c_a \leq c_b \leq b_n$, we have $[c_a,c_b] \subset [a_n,b_n]$ for all $n$, or in other words, $[c_a,c_b] \subset \bigcap_{n=1}^\infty [a_n,b_n]$. Therefore, the latter intersection is non-empty, Q.E.D.

In fact, we have also the inclusion $\bigcap_{n=1}^\infty [a_n,b_n] \subset [c_a,c_b]$, as every element $x$ of the intersection on the left is an upper bound for $(a_n)_{n=1}^\infty$ and simultaneously a lower bound for $(b_n)_{n=1}^\infty$, and hence $c_a \leq x \leq c_b$. Thus, $\bigcap_{n=1}^\infty [a_n,b_n] = [c_a,c_b].$


Proving that a sequence is increasing




Problem: A sequence $(a_n)$ is defined recursively as follows, where $0<\alpha\leqslant 2$:
$$
a_1=\alpha,\quad a_{n+1}=\frac{6(1+a_n)}{7+a_n}.
$$
Prove that this sequence is increasing and bounded above by $2$. What is its limit?






Ideas: How should I go about starting this proof? If the value of $a_1$ were given, I could show numerically and by induction that the sequence is increasing. But no exact value of $\alpha$ is given.


Answer




First, $0\leq a_1\leq2$.



Second, $a_{n+1}=\frac{6(1+a_n)}{7+a_n}=6-\frac{36}{7+a_n}$.



So, if $0\leq a_n\leq2$,



$\qquad$then $7\leq 7+a_n\leq 9$,



$\qquad$then $\frac{1}{7}\geq\frac{1}{7+a_n}\geq\frac{1}{9}$,




$\qquad$then $\frac{36}{7}\geq\frac{36}{7+a_n}\geq\frac{36}{9}$,



$\qquad$then $-\frac{36}{7}\leq-\frac{36}{7+a_n}\leq-\frac{36}{9}$,



$\qquad$then $0\leq6-\frac{36}{7}\leq6-\frac{36}{7+a_n}\leq6-\frac{36}{9}=2$,



then $0\leq a_{n+1}\leq2$.



By induction $0\leq a_n\leq2$ for all $n$.




Also $a_{n+1}-a_n=\frac{6(1+a_n)}{7+a_n}-a_n=\frac{6-a_n-a_n^2}{7+a_n}=\frac{(a_n+3)(2-a_n)}{7+a_n}\geq0$, as long as $0\leq a_n\leq2$.



Therefore $a_{n+1}\geq a_n$.
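For the last part of the question: the limit $L$ must satisfy the fixed-point equation $L=\frac{6(1+L)}{7+L}$, i.e. $L^2+L-6=0$, whose roots are $2$ and $-3$; since $0\leq a_n\leq2$, the limit is $2$. A quick numerical sketch confirming both the monotonicity and the limit:

```python
a = 0.5  # any starting value alpha in (0, 2]
seq = [a]
for _ in range(60):
    a = 6 * (1 + a) / (7 + a)
    seq.append(a)

# increasing (up to floating-point plateau) and bounded above by 2
assert all(x < y or abs(x - y) < 1e-15 for x, y in zip(seq, seq[1:]))
assert all(x <= 2 for x in seq)

# fixed point: a = 6(1+a)/(7+a)  =>  a^2 + a - 6 = 0  =>  a = 2
assert abs(seq[-1] - 2) < 1e-9
```

The convergence is geometric: near $L=2$ the iteration map has derivative $\frac{36}{(7+a)^2}=\frac49<1$.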


Tuesday 22 March 2016

calculus - Is exponential growth and decay faster than polynomial growth and decay?

I know the answer is yes for growth conditions, but I don't see how it's obvious that exponential decay is faster than polynomial decay, say for a polynomial $x^2$.
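One concrete way to see the decay claim: a single term of the exponential series gives $e^x \geq x^3/6$ for $x>0$, hence $x^2e^{-x}\leq 6/x\to0$ (and the same trick with a higher series term handles any polynomial). A numeric sketch of that bound:

```python
import math

# since e^x >= x^3/6 for x > 0 (one term of the exponential series),
# x^2 * e^(-x) <= 6/x, which forces the product to 0 as x grows
for x in [10, 50, 100, 500]:
    assert x**2 * math.exp(-x) <= 6 / x

# the decay is dramatic: at x = 500 the product is already below 1e-200
assert 500**2 * math.exp(-500) < 1e-200
```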

calculus - example of a discontinuous function having directional derivatives




Is there a function (not defined piecewise, unlike the one below) which is discontinuous but has directional derivatives at a particular point? I have a manual that says the following function has directional derivatives at $(0,0)$ but is not continuous at $(0,0)$.
$$f(x,y) = \begin{cases}
\frac{xy^2}{x^2+y^4} & \text{ if } x \neq 0\\
0 & \text{ if } x= 0
\end{cases}$$



Can anyone give me few examples which is not defined piece wise as above?


Answer



$$f(x,y)=\lim_{u\to0}\frac{xy^2+u^2}{x^2+y^4+u^2}$$


abstract algebra - How to show that any field extension $K/\mathbb{Q}$ of degree 4 that is not Galois has a quadratic extension $L$ that is Galois over $\mathbb{Q}$.

$\newcommand{\Q}{\mathbb{Q}}$Let $K/\Q$ be a field extension of degree $4$ that is not Galois. How to show that there exists an extension $L\supseteq K$ such that $[L:K]=2$ and $L/\Q$ is Galois?



I know the example of $\Q(\sqrt[4]{2})$ which is not Galois but is contained in the splitting field of $x^4-2$ which is Galois and of degree $8$, and I am trying to generalize this. But I am not even sure if we can write $K=\Q(\alpha)$ for some $\alpha$. Anyway, if this is the case, then the splitting field $L$ of the minimal polynomial of $\alpha$ would be Galois and of degree $8$, $12$ or $24$ since $\mathrm{Gal}(L/\Q)$ would be a subgroup of $S_4$. But how to rule out $12$ and $24$?

combinatorics - Probability that n people collectively occupy all 365 birthdays



The problem is quite simple to formulate. If you have a large group of people (n > 365), and their birthdays are uniformly distributed over the year (365 days), what's the probability that every day of the year is someone's birthday?



I am thinking that the problem should be equivalent to finding the number of ways to place n unlabeled balls into k labeled boxes, such that all boxes are non-empty, but C((n-k)+k-1, (n-k))/C(n+k-1, n) (C(n,k) being the binomial coefficient) does not yield the correct answer.


Answer



Birthday Coverage is basically a Coupon Collector's problem.




You have $n$ people who drew birthdays with repetition, and wish to find the probability that all $365$ different days were drawn among all $n$ people. ($n\geq 365$)



$$\mathsf P(T\leq n)= 365!\; \left\lbrace\begin{matrix}n\\365\end{matrix}\right\rbrace\; 365^{-n} $$



Where, the braces indicate a Stirling number of the second kind.   Also represented as $\mathrm S(n, 365)$.
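The formula is easy to validate on a small "calendar" by exhaustive enumeration. A sketch (function names are mine) using the standard recurrence for Stirling numbers of the second kind:

```python
from itertools import product
from math import factorial

def stirling2(n, k):
    # S(n, k) via the recurrence S(n, k) = k*S(n-1, k) + S(n-1, k-1)
    table = [[0] * (k + 1) for _ in range(n + 1)]
    table[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            table[i][j] = j * table[i - 1][j] + table[i - 1][j - 1]
    return table[n][k]

def coverage_probability(n, d):
    # P(all d days hit by n people) = d! * S(n, d) / d^n
    return factorial(d) * stirling2(n, d) / d ** n

# brute-force check on a tiny calendar: d = 4 "days", n = 6 "people"
d, n = 4, 6
brute = sum(1 for ys in product(range(d), repeat=n) if len(set(ys)) == d) / d ** n
assert abs(coverage_probability(n, d) - brute) < 1e-12
```

For the real problem one would call `coverage_probability(n, 365)`; Python's exact integer arithmetic handles the huge intermediate values.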


Monday 21 March 2016

elementary number theory - How can I prove by induction that $9^k - 5^k$ is divisible by 4?




Recently had this on a discrete math test, which sadly I think I failed. But the question asked:




Prove that $9^k - 5^k$ is divisible by $4$.




Using the only approach I learned in the class, I assumed the statement for $n = k$ and tried to prove it for $k+1$ like this:



$$9^{k+1} - 5^{k+1},$$




which just factors to $9 \cdot 9^k - 5 \cdot 5^k$.



But I cannot factor out $9^k - 5^k$, so I'm totally stuck.


Answer



$$\begin{align} 9\cdot 9^k - 5\cdot 5^k & = (4 + 5)\cdot 9^k - 5\cdot 5^k \\ \\ & = 4\cdot 9^k + 5 \cdot 9^k - 5\cdot 5^k \\ \\ & = 4\cdot 9^k + 5(9^k - 5^k)\\ \\ & \quad \text{ use inductive hypothesis}\quad\cdots\end{align}$$
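Both the claim and the key algebraic step of the inductive argument can be checked directly:

```python
# direct check of the divisibility claim for many k
assert all((9**k - 5**k) % 4 == 0 for k in range(200))

# the inductive step: 9^(k+1) - 5^(k+1) = 4*9^k + 5*(9^k - 5^k),
# a sum of a multiple of 4 and (by hypothesis) another multiple of 4
k = 17
assert 9**(k + 1) - 5**(k + 1) == 4 * 9**k + 5 * (9**k - 5**k)
```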


integration - Contour Integral of $\log(z)/(1+z^a)$ where $a\gt1$



I am asked to prove that:
$$ \int_{0}^{+\infty}\frac{\log z}{1+z^{\alpha}}\,dz = -\frac{\pi^2}{\alpha^2}\cdot\frac{\cos\frac{\pi}{\alpha}}{\sin^2\frac{\pi}{\alpha}},$$
provided that $\alpha > 1$, with a complex analytic method, i.e. contour integration.



However, I was not able to find a good candidate as a meromorphic function to integrate, neither a proper contour. Would you mind giving me a hand?



Answer



To do the contour integration, use a circular wedge of radius $R$ and angle $2 \pi/\alpha$ in the complex plane. This wedge encloses the pole at $z=e^{i \pi/\alpha}$. The integral about the arc vanishes as $R \to \infty$. (We technically should have a small cutout of radius $\epsilon$ about the origin, but we may ignore that piece as there is no contribution.)






The integrals that remain are over the real axis and over the ray in the complex plane at angle $2\pi/\alpha$ with respect to the real axis. Thus, by the residue theorem,



$$\int_0^{\infty} dx \frac{\log{x}}{1+x^{\alpha}} - e^{i 2 \pi/\alpha} \int_0^{\infty} dx \frac{\log{x}+i 2 \pi/\alpha}{1+x^{\alpha}} = i 2 \pi \operatorname*{Res}_{z=e^{i \pi/\alpha}} \frac{\log{z}}{1+z^{\alpha}} $$



which becomes




$$\left ( 1-e^{i 2 \pi/\alpha} \right ) \int_0^{\infty} dx \frac{\log{x}}{1+x^{\alpha}} - i \frac{2 \pi}{\alpha} e^{i 2 \pi/\alpha} \int_0^{\infty} \frac{dx}{1+x^{\alpha}} = \frac{2 \pi^2}{\alpha^2} e^{i \pi/\alpha}$$



To evaluate the second integral on the LHS, use the same contour and pole:



$$\left ( 1-e^{i 2 \pi/\alpha} \right )\int_0^{\infty} \frac{dx}{1+x^{\alpha}} = i 2 \pi \operatorname*{Res}_{z=e^{i \pi/\alpha}} \frac{1}{1+z^{\alpha}} = -\frac{i 2 \pi}{\alpha} e^{i \pi/\alpha}$$



so that



$$\int_0^{\infty} \frac{dx}{1+x^{\alpha}} = \frac{\pi}{\alpha \sin{(\pi/\alpha)}}$$




Thus, we now have that



$$-i 2 \sin{(\pi/\alpha)}\int_0^{\infty} dx \frac{\log{x}}{1+x^{\alpha}} - i \frac{2 \pi^2}{\alpha^2} \frac{e^{i \pi/\alpha}}{\sin{(\pi/\alpha)}} = \frac{2 \pi^2}{\alpha^2} $$



Take real and imaginary parts of the second term, and the real part cancels the RHS. Thus, dividing by the factor in front of the integral, we are left with



$$\int_0^{\infty} dx \frac{\log{x}}{1+x^{\alpha}} = -\frac{\pi^2}{\alpha^2} \frac{\cos{(\pi/\alpha)}}{\sin^2{(\pi/\alpha)}} $$



as was to be shown.
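The final formula can be cross-checked numerically. This is a sketch: the substitution $x=t/(1-t)$ to map $(0,\infty)$ onto $(0,1)$ and the midpoint rule (which tolerates the mild integrable $\log$ singularity at $x=0$) are my choices, and the tolerance is loose:

```python
import math

def lhs(alpha, n=200_000):
    # midpoint rule on t in (0,1) with x = t/(1-t), dx = dt/(1-t)^2
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x = t / (1.0 - t)
        total += math.log(x) / (1.0 + x ** alpha) / (1.0 - t) ** 2
    return total * h

def rhs(alpha):
    return -(math.pi**2 / alpha**2) * math.cos(math.pi / alpha) / math.sin(math.pi / alpha) ** 2

# alpha = 2 gives 0 (the classical integral of log x/(1+x^2)); alpha = 3 gives -2*pi^2/27
for alpha in (2.0, 3.0):
    assert abs(lhs(alpha) - rhs(alpha)) < 1e-2
```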



Distinction or rule about $d$ and $\delta$ in differential forms



I have in trouble understanding the differential forms.



For $k$ form $\alpha$ and $l$ form $\beta$ we have
\begin{align}
\alpha \wedge \beta = (-1)^{kl} \beta \wedge \alpha
\end{align}
And for differentiation, we have
\begin{align}
d(\alpha \wedge \beta) = d \alpha \wedge \beta + (-1)^k \alpha \wedge d \beta
\end{align}



Apply this for usual Riemannian case, for 1-form spin connection $w$,
and two form curvature $R= dw+ w\wedge w$
\begin{align}
d R = d^2 w + dw \wedge w - w \wedge dw = d w \wedge w - w \wedge dw
\end{align}



Now I want to take the variation of differential forms. Does the same rule hold for the variation?



In page 10 of lecture note and some computation in https://physics.stackexchange.com/questions/222100/variations-of-actions-of-lie-algebra-valued-differential-forms,
it seems they treat $\delta$ as obeying the usual Leibniz rule, i.e.,
\begin{align}
\delta R = \delta d w+ \delta w \wedge w + w \wedge \delta w
\end{align}



Is their procedure right?



In the case of ordinary differentiation or variation this is not a big problem
(as far as I know, differentiation and variation play similar roles whether they act on functions or functionals), but for differential forms I got confused.



Can you give me some formula for variation $\delta$ acting on $(\alpha \wedge \beta)$?




How about Lie derivatives?



I tried to find references on the variation of differential forms, but they only treat differentiation. Recommendations of any kind of reference are welcome.


Answer



I think what needs to be emphasised is that $d$ and this variational $\delta$ (as opposed to the codifferential $\delta$ that is the adjoint of $d$) are two very different operations, that both happen to be a bit like derivatives:




  • Imprecisely, $d$ talks about the variation in the form $\alpha$ near $x$ as we move around on the manifold. A simple, though coarse, analogue is the derivative $d/dx$.

  • Whereas $\delta$ is talking about what happens to a function of $\alpha$ if we change $\alpha$ by a small amount; I find the physicists' notation is rather lacking here. The (more precise) analogue is the functional derivative, $DF[\alpha](\phi) = \lim_{h \to 0} (F[\alpha+h\phi]-F[\alpha])/h $.




In particular, there is antisymmetry built into the definition of $d$, but $\delta$, while superficially looking the same, is an operation talking about different sorts of variations in a different place. A more mathematical way to write the variation is to expand $F[\alpha+h\phi]$ to first order in $h$, so, for example,
$$ (\alpha+h\phi) \wedge (\alpha+h\phi) = \alpha \wedge \alpha + h(\phi \wedge \alpha + \alpha \wedge \phi) + o(h), $$
and subtracting and taking $h \to 0$ gives Leibniz for the variational derivative.


calculus - Calculate the following: $\lim\limits_{n \to \infty} \sqrt[n]{e^n+(1+\frac{1}{n})^{n^2}}$

So here it is: $$\lim_{n \to \infty} \sqrt[n]{e^n+(1+\frac{1}{n})^{n^2}}~~~(n \in \mathbb{N})$$



I tried to use the Squeeze theorem like so:
$e \leq \sqrt[n]{e^n+(1+\frac{1}{n})^{n^2}} \leq e \sqrt[n]{2}$




And if that is true, the limit is simply $e$.



Only that, for it to be true, I first need to prove that $(1+\frac{1}{n})^{n^2} \leq e^n$. So how do I prove it?



Alternatively, how else can I calculate the limit?
I prefer to hear about the way I tried, though.



Thanks

Sunday 20 March 2016

trigonometry - How to solve $3 - 2\cos\theta - 4\sin\theta - \cos 2\theta + \sin 2\theta = 0$



I have got a bunch of trig equations to solve for tomorrow, and got stuck on this one.




Solve for $\theta$:



$$3 - 2 \cos \theta - 4 \sin \theta - \cos 2\theta + \sin 2\theta = 0$$



I tried using the addition formula, product-to-sum formula, double angle formula and just brute force by expanding all terms on this, but couldn't get it.



I am not supposed to use inverse functions or a calculator to solve this.



Tried using Wolfram|Alpha's step by step function on this, but it couldn't explain things.


Answer




Let $x = \sin(\theta), y = \cos(\theta)$



$$3 - 2 y - 4x - 2y^2+1 + 2xy = 0$$



Simplify, divide by $2$ and replace $y^2$ with $1-x^2$.



$$1 - y - 2x+x^2+ xy = 0$$



Factor




$$(x-1)(x+y-1) = 0$$



Now just solve $\sin(\theta) = 1$ and $\sin(\theta) + \cos(\theta) = 1$.
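Putting the steps together, the factorization asserts the identity $3 - 2\cos\theta - 4\sin\theta - \cos 2\theta + \sin 2\theta = 2(\sin\theta-1)(\sin\theta+\cos\theta-1)$, which can be spot-checked numerically:

```python
import math
import random

random.seed(1)
for _ in range(1000):
    th = random.uniform(-10, 10)
    x, y = math.sin(th), math.cos(th)  # same convention as the answer
    lhs = 3 - 2 * math.cos(th) - 4 * math.sin(th) - math.cos(2 * th) + math.sin(2 * th)
    rhs = 2 * (x - 1) * (x + y - 1)
    assert abs(lhs - rhs) < 1e-9
```

Because the identity holds for every $\theta$, the solution set is exactly $\sin\theta=1$ together with $\sin\theta+\cos\theta=1$.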


functional equations - Solutions of $f(x)\cdot f(y)=f(x\cdot y)$




Can anyone give me a classification of the real functions of one variable such that $f(x)f(y)=f(xy)$? I have searched the web, but I haven't found any text that discusses my question. Answers and/or references to answers would be appreciated.


Answer



There is a classification of the functions $f:\mathbb R\to\mathbb R$ satisfying

$$
f(x+y)=f(x)+f(y), \quad\text{for all $x,y\in\mathbb R$}. \qquad (\star)
$$
These are the linear transformations of the linear space $\mathbb R$ over the field $\mathbb Q$ to itself. They are fully determined once known on a Hamel basis of this linear space (i.e., the linear space $\mathbb R$ over the field $\mathbb Q$).



This in turn provides a classification of all the functions $g:\mathbb R^+\to\mathbb R^+$ satisfying
$$
g(xy)=g(x)g(y), \quad\text{for all $x,y\in\mathbb R^+$},
$$
as they have to be form $g(x)=\mathrm{e}^{f(\log x)}$, where $f$ satisfies $(\star)$. Note that $g(1)=1$, for all such $g$.




Next, we can achieve characterization of functions $g:\mathbb R\to\mathbb R^+$ satisfying
$$
g(xy)=g(x)g(y), \quad\text{for all $x,y\in\mathbb R$},
$$
as $g(-x)=g(-1)g(x)$, which means that the values of $g$ at the negative numbers are determined once $g(-1)$ is known, and as $g(-1)^2=g(1)=1$ with $g(-1)>0$, it has to be $g(-1)=1$. Also, since $g(0)=g(0)g(x)$ for every $x$, either $g(0)=0$ or $g\equiv1$; so for non-constant $g$ the only acceptable value of $g(0)$ is $0$.



Finally, if we are looking for $g:\mathbb R\to\mathbb R$, we observe that, if $g\not\equiv 0$, and $x>0$, then $g(x)=g(\sqrt{x})g(\sqrt{x})>0$. Thus $g$ is fully determined once we specify whether $g(-1)$ is equal to $1$ or $-1$.



Note that if $g: \mathbb R\to\mathbb R$ is continuous, then either $g\equiv 0$ or $g(x)=|x|^r$ or $g(x)=|x|^r\mathrm{sgn}\, x $, for some $r>0$.



analysis - Riemann's thinking on symmetrizing the zeta functional equation

In the translated version of Riemann's classic On the Number of Prime Numbers less than a Given Quantity, he quickly derives the zeta functional equation through contour integration essentially as




$$\zeta(s)=2(2\pi)^{s-1}\Gamma(1-s)\sin(\tfrac12\pi s)\zeta(1-s).$$



and says three lines later that it may be expressed as



$$\xi(s) = \pi^{-s/2}\ \Gamma\left(\frac{s}{2}\right)\ \zeta(s)=\xi(1-s).$$



After reading MO-Q7656, I started wondering whether, as his ideas evolved before he wrote the paper, he first constructed $\xi(s)$ by noticing that multiplying $\zeta(s)$ by $\Gamma(\frac{s}{2})$ introduces a simple pole at $s=0$ thereby reflecting the pole of $\zeta(s)$ at $s=1$ through the line $s=1/2$ and that the other simple poles of $\Gamma(\frac{s}{2})$ are removed by the zeros on the real line of the zeta function. The $\pi^{-s/2}$ can easily be determined as a normalization by an entire function $c^s$ where $c$ is a constant, using the complex conjugate symmetry of the gamma and zeta fct. about the real axis.



Anyone familiar with how his ideas (thinking) evolved?




(Update 5/13/2012) Riemann in his fourth equality in his paper, before he writes down the functional eqn., has



$$2\sin(\pi s)(s-1)!\,\zeta(s)=i\int_{+\infty}^{+\infty}\frac{(-x)^{s-1}}{e^x-1}dx$$



where the contour sandwiches the positive real axis and surrounds the origin in the positive sense.



For $m=0,1,2, ...,$ this gives



$$\zeta(-m)=\frac{(-1)^{m}}{2\pi i}\oint_{|z|=1}\frac{m!}{z^{m+1}}\frac{1}{e^z-1}dz=\frac{(-1)^{m}}{m+1}\frac{1}{2\pi i}\oint_{|z|=1}\frac{(m+1)!}{z^{m+2}}\frac{z}{e^z-1}dz$$




whence you can see, if you are familiar with the exponential generating fct. for the Bernoulli numbers, that the integral vanishes for even $m$. Riemann certainly was familiar with these numbers and states that the integral relation he gives implies the vanishing of $\zeta(s)$ for $m$ even (but gives no explicit proof). Edwards in Riemann's Zeta Function (pg. 12, Dover ed.) even speculates that ".. it may well have been this problem of deriving (2) [Euler's formula for $\zeta(2n)$ for positive $n$] anew which led Riemann to the discovery of the functional equation ...."
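The Bernoulli connection can be made concrete with the evaluation $\zeta(-m)=-\frac{B_{m+1}}{m+1}$ for $m\geq1$: the odd-index Bernoulli numbers beyond $B_1$ vanish, which is exactly the vanishing of $\zeta(-2),\zeta(-4),\dots$ discussed above. A sketch with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    # B_0..B_n via the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0, B_0 = 1
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1)
    return B

B = bernoulli(12)
zeta_neg = lambda m: -B[m + 1] / (m + 1)   # zeta(-m) = -B_{m+1}/(m+1), m >= 1

assert zeta_neg(1) == Fraction(-1, 12)
assert zeta_neg(3) == Fraction(1, 120)
# the trivial zeros: zeta(-2) = zeta(-4) = ... = 0, since odd Bernoulli numbers vanish
assert all(zeta_neg(m) == 0 for m in (2, 4, 6, 8, 10))
```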



Riemann gives two proofs of the fct. eqn.--the first based on contour integration and the singularities of $\frac{1}{e^z-1}$, the second, on the theta function. Edwards wonders:



"Since the second proof renders the first proof wholly unnecessary, one may ask why Riemann included the first proof at all. Perhaps the first proof shows the argument by which he originally discovered the functional equation or perhaps it exhibits some properties which were important in his understanding of it."



In fact, Riemann states on page 3 of his paper, "This property of the function [$\xi(s)=\xi(1-s)$] induced me to introduce, in place of $(s-1)!$, the integral $(s/2-1)!$ into the general term of the series" for zeta. And then he proceeds to introduce the Jacobi theta function.



Edit Dec 18, 2014




In the early 1820s, both Abel and Plana separately published what is now called the Abel-Plana formula (see Wikipedia). In the title of Plana's, he mentions the Bernoulliens. I wonder how much Riemann was influenced by these papers.

Proving a Complex Equality with Powers

Prove that the following identity is valid for all $z$ with $z\neq1$:



$$1+z+z^2+...+z^n=\frac{1-z^{n+1}}{1-z}$$

Need help with complex numbers on an Argand diagram problem

Going through some complex number work for A-Level Further Maths and I have come across a question that I have had a crack at but the mark scheme is very limited so doesn't look at the method I tried to use, and I don't really understand how they tried to approach it.



Question



In an Argand diagram, the complex numbers $ 0, z, ze^{i\pi/6} $ are represented by the points O, A and B respectively.



i) Sketch a possible Argand diagram showing OAB. Show that the triangle is isosceles and state the size of the angle AOB.



(I was okay with this first bit)




ii) Complex numbers $1+i , 5+2i$ are represented by C and D. Complex number $w$ is represented by E such that $CD=CE$ and angle $DCE =\pi /6 $ .



Calculate possible values of $w$, giving answers exactly in form $a+bi$ .



What I attempted to do was to firstly draw the triangle out again, as it was similar to the first part. I then tried to treat C as the origin so worked out that $D= 4+i$ and $E= (a-1) + (b-1)i$.



I worked out the distance $CD= \sqrt{17}$, so tried $\sqrt{(a-1)^{2} +(b-1)^{2} } =\sqrt{17}$.
I then worked out $\tan^{-1}(1/4)$ to find the argument of CD and added $\pi /6$ to find the argument of E, treating C as the origin. Then I substituted $b/a = \tan(\text{ans})$ and tried to solve simultaneously with my previous equation.



This gave me the wrong answer. Is this approach invalid? How would I otherwise go about this problem? Thanks in advance for any advice, I really appreciate it! :)




edit
complete workings



$$
\tan^{-1} (1/4) = 0.2498\\
+\pi/6 = 0.7686\\
\arg E = 0.7686\\
b/a = \tan 0.7686\\
= 0.9667,\ 2.5375\\
b = 0.9667a,\ 2.5475a\\
(0.997b-1)^{2} + (b-1)^{2} = 17
$$



gave up here as the question says exact answers and by this point it looks like something has probably gone wrong.



MARK SCHEME ANSWERS



$$ w= (1+i)+((5+2i)-(1+i))e^{\pm i\pi/6} \\
w = \tfrac{1}{2} +2\sqrt{3} +\left(3+\tfrac{1}{2}\sqrt{3}\right)i\\
\text{or } \tfrac{3}{2} +2\sqrt{3} +\left(-1+\tfrac{1}{2}\sqrt{3}\right)i\\
\text{alternative:} \\
CE=(a,b),\quad CD=(4,1)\\
CE\cdot CD =17\cos(\pi/6),\quad CE^{2} =17 \\
4a +b = 17\sqrt{3}/2,\quad a^2 + b^2 =17 \\
\text{Obtain a 3-term quadratic in one variable and solve} \\
(a,b) =\left(2\sqrt{3} \pm \tfrac{1}{2}\,,\ \tfrac{1}{2}\sqrt{3} \mp 2\right) $$



(also sorry about the mildly dodgy LaTex, I'm not that used to it yet! )
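The mark scheme's rotation approach is easy to verify with complex arithmetic: rotating the vector $D-C$ by $\pm\pi/6$ about $C$ forces $CE=CD$ and angle $DCE=\pi/6$ by construction. A sketch:

```python
import cmath
import math

C = 1 + 1j
D = 5 + 2j
for sign in (+1, -1):
    w = C + (D - C) * cmath.exp(sign * 1j * math.pi / 6)
    # CE = CD and angle DCE = pi/6, as required
    assert math.isclose(abs(w - C), abs(D - C))
    assert math.isclose(abs(cmath.phase((w - C) / (D - C))), math.pi / 6)

# the "+" rotation reproduces the first mark-scheme value 1/2 + 2*sqrt(3) + (3 + sqrt(3)/2)i
w_plus = C + (D - C) * cmath.exp(1j * math.pi / 6)
assert cmath.isclose(w_plus, complex(0.5 + 2 * math.sqrt(3), 3 + math.sqrt(3) / 2))
```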

real analysis - $\lim_{p\to \infty}\Vert f\Vert_{p}=\Vert f\Vert_{\infty}$?








On the $L_p$ spaces, when is $$\lim_{p\to \infty}\| f\|_{p}=\| f\|_{\infty}$$ true? Or what condition is necessary for this?

number theory - Divisibility by 7 or 3

I read a question on mathematics.I have not been able to figure out the answer.



the question is




Given a set $Q=\{1,2,3,4,5,6,7\}$, I can choose a subset $L$ of $Q$. I have been given a number string $Y$ of which I know the first digit and the last digit.



The digits from the second position up to the last-but-one can be any number of digits taken from the subset $L$.



Now the question is: how can I decide whether $Y$ can be made divisible by $3$ or $7$?

Saturday 19 March 2016

polynomials - How to evaluate GF(256) element

I wonder whether there is any easy way to evaluate elements of GF$(256)$: I would like to know what $\alpha^{32}$ or $\alpha^{200}$ is in polynomial form. I am assuming that the primitive polynomial is $D^8+D^4+D^3+D^2+1$. For example, in GF$(8)$ what we do to calculate $\alpha^3$ is reduce it modulo $\alpha^3+\alpha+1$, getting $\alpha+1$; but in GF$(256)$ this is really tedious, so I would like to know whether there is a way to calculate the above expressions or similar ones like $\alpha^{100}$ in GF$(256)$.



Thanks.
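One practical approach: represent elements as bit vectors (bit $i$ is the coefficient of $D^i$), reduce modulo the primitive polynomial after each multiplication, and use square-and-multiply so that $\alpha^{200}$ costs only about eight multiplications rather than two hundred reductions. A sketch (the encoding $D^8+D^4+D^3+D^2+1 \leftrightarrow \texttt{0x11D}$ and the function names are my choices):

```python
MOD = 0x11D  # D^8 + D^4 + D^3 + D^2 + 1

def gf_mul(a, b):
    """Carry-less multiply in GF(2)[D], reducing modulo MOD on the fly."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:          # degree reached 8: XOR away the modulus
            a ^= MOD
    return r

def gf_pow(a, n):
    """Square-and-multiply exponentiation in GF(256)."""
    r = 1
    while n:
        if n & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        n >>= 1
    return r

alpha = 0b10                             # alpha = D
assert gf_pow(alpha, 8) == 0b00011101    # D^8 = D^4 + D^3 + D^2 + 1
assert gf_pow(alpha, 255) == 1           # alpha is primitive: order 255
```

Then `gf_pow(alpha, 32)` gives $\alpha^{32}$, read off as a polynomial from the bit pattern. For many evaluations, building log/antilog tables of all 255 powers once and looking them up is even faster.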

Probability of rolling $T$ or higher on at least $M$ of $R$ dice of $S$ sides



I am trying to figure out how to write an equation to solve the following problem:





  • S = # of sides on a die

  • R = # of dice being rolled

  • T = Minimum result to count as a success event on each die (Aka "Threshold")

  • M = Minimum # of success events desired



For example, if I want to know the probability (P) of getting at least two "5's" or higher when rolling three 6-sided dice, then:
\begin{align}

S &= 6\\
R &= 3\\
T &= 5\\
M &= 2
\end{align}



Is there a formula I could use to determine P?



Thanks!


Answer




Let $A$ be the number of acceptable numbers, that is numbers at or above the threshold. Then $A=S-T+1$. In your example, $A=2$, since $5$ and $6$ are the only acceptable numbers. The probability that a particular die has an acceptable number is $A/S = 1 -(T-1)/S$ and the probability that it is not acceptable is, of course, $(T-1)/S.$



For a given value $k,$ the probability that exactly $k$ rolls out of $R$ are acceptable is $${R\choose k}\left(1-{T-1\over S}\right)^k\left(T-1\over S\right)^{R-k}$$ since there are ${R\choose k}$ ways to choose the $k$ acceptable dice. For a success, we must have $k\ge M$ so the formula you seek is$$P=\sum_{k=M}^R{R\choose k}\left(1-{T-1\over S}\right)^k\left(T-1\over S\right)^{R-k}$$



P.S.



If $M$ is less than half of $R,$ it will be more convenient to compute the probability of success as one minus the probability of failure, since there will be fewer terms in the sum. That is,
$$P=1-\sum_{k=0}^{M-1}{R\choose k}\left(1-{T-1\over S}\right)^k\left(T-1\over S\right)^{R-k}$$
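For the worked example ($S=6, R=3, T=5, M=2$) the formula gives $\binom{3}{2}\left(\frac13\right)^2\frac23+\left(\frac13\right)^3=\frac{7}{27}$. A sketch implementing the formula, plus a Monte Carlo sanity check:

```python
from math import comb
import random

def prob_at_least(S, R, T, M):
    p = 1 - (T - 1) / S          # success chance on one die: 1 - (T-1)/S
    return sum(comb(R, k) * p**k * (1 - p)**(R - k) for k in range(M, R + 1))

# the worked example: at least two results of 5-or-higher on three d6
assert abs(prob_at_least(6, 3, 5, 2) - 7 / 27) < 1e-12

# Monte Carlo sanity check
random.seed(0)
n = 100_000
hits = sum(1 for _ in range(n)
           if sum(random.randint(1, 6) >= 5 for _ in range(3)) >= 2)
assert abs(hits / n - 7 / 27) < 0.01
```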


calculus - How to find $\lim_{x \to 0} \left( \frac{1}{\sin(x)}- \frac{1}{\arcsin(x)}\right)$

I want to do the problem without using L'Hopitals rule, I have
$$\frac{1}{\sin(x)}- \frac{1}{\arcsin(x)} = \frac{x}{\sin(x)}\frac{x}{\arcsin(x)}\frac{\sin(x)-\arcsin(x)}{x^2}$$
and I'm not quite sure about how to deal with the $\dfrac{\sin(x)-\arcsin(x)}{x^2}$, apparently its limit is $0$? In which case the whole limit would be $0$. But how would I show this without using l'Hopitals rule. Thanks for any help.

multivariable calculus - Example of discontinuous function satisfying some conditions




I would like to construct a function $f :\mathbb{R}^2 \to \mathbb{R} $ discontinuous at the origin but satisfying the following:
$$\lim_{x\to 0} f(x,mx) = f(0,0), \quad \lim_{y\to 0} f(my,y)= f(0,0) \qquad \forall m \in \mathbb{R} $$
That is, a function "continuous" along every line through a point but still not continuous there.


Answer



Let $g\colon S^1\to\Bbb R$ be an unbounded function. Then set
$ f(x,y)=rg(\frac xr,\frac yr)$ where $r:=\sqrt{x^2+y^2}$ (and set $f(0,0)=0$). This function is not continuous at the origin, but has the desired property.







A bit more explicitly:
Define $h\colon \Bbb R\to\Bbb R$, $$h(x)=\begin{cases}x+1&x\in\Bbb Q\\x&x\notin\Bbb Q\end{cases}$$
and then
$$ f(x,y)=\begin{cases}0&x=0\\\sqrt{x^2+y^2}\cdot h(\frac yx)&x\ne 0\end{cases}$$
is nowhere continuous and has the desired property.


probability - Plot the cdf and simulate a random variable (rv) with this cdf using the inversion method.




Consider the continuous random variable with pdf given by:



$$f(x) = 2(x − 1)^2;\quad 1 < x ≤ 2$$
$$f(x) = 0;\quad \text{otherwise}$$



Plot the cdf for this random variable.
Show how to simulate a rv with this cdf using the inversion method.



I tried this as




$$F(x) =\int_{1}^{x} 2(t − 1)^2 dt=\frac{2}{3}(x − 1)^3;\quad 1 < x ≤ 2$$



I plot the cdf using R.



 x <- seq(1, 2, length = 10)
plot(x, (2/3)*(x-1)^3)


to simulate a rv with this cdf using the inversion method




let



$$S=\frac{2}{3}(x − 1)^3$$



$$\frac{3S}{2}=(x − 1)^3$$



$$X=1+{(\frac{3S}{2})}^{\frac{1}{3}}$$



Using R




  X.sim <- function(S)1+(3*S/2)^(1/3)
S <- runif(100)

x = X.sim(S)


But I got values of $x$ greater than $2$, even though $1 < x \le 2$.



How can I solve the problem?




I noticed that if I integrate the pdf over the range $1 < x \le 2$, the result is not equal to $1$.


Answer



The issue is that the normalizing constant has been incorrectly specified in the exercise. It says 2 when it should in fact be 3. See below.



$$
\int_1^2c(x-1)^2dx=c\left[\frac{(x-1)^3}{3}\right]_1^2=\frac{c}{3}=1\Leftrightarrow c=3
$$



If you change this and run the following code:




X.sim <- function(S)1+(S)^(1/3)
S <- runif(100000)
x <- X.sim(S)
max(x)


you will see that you get very close to 2, but you won't exceed it. Hope this helps!
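A compact Python version of the corrected inversion sampler (an illustrative translation of the R code above, using the fixed cdf $F(x)=(x-1)^3$):

```python
import random

def sample_inverse_cdf(n, seed=0):
    """Draw n variates with pdf 3(x-1)^2 on (1, 2] by inversion:
    F(x) = (x-1)^3, so the inverse cdf is F^{-1}(u) = 1 + u**(1/3)."""
    rng = random.Random(seed)
    return [1 + rng.random() ** (1 / 3) for _ in range(n)]

xs = sample_inverse_cdf(100_000)
print(min(xs), max(xs))  # everything stays within [1, 2]
```

With the correct normalizing constant the samples never exceed $2$, matching the support of the pdf.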


calculus - The limit of general term in a series

I have the following statement -



If $\sum_{1}^{\infty} a_{n}^2$ converge then $\sum_{1}^{\infty} a_{n}^3$ converge.



I know this statement is true, but can someone explain why



$\lim_{n\rightarrow\infty}(a_{n}^2) = 0$ implies that $\lim_{n\rightarrow\infty}(a_{n}) = 0$




(a fact that helps to prove this statement)? Thanks!
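For reference, the standard justification of that fact, and how it feeds into the statement: by continuity of $t \mapsto \sqrt{t}$ at $0$,

$$|a_n| = \sqrt{a_n^2} \longrightarrow \sqrt{0} = 0.$$

Once $a_n \to 0$, eventually $|a_n| \le 1$, so $|a_n^3| = |a_n|\,a_n^2 \le a_n^2$, and comparison with $\sum a_n^2$ gives (absolute) convergence of $\sum a_n^3$.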

analysis - Prove that $f'$ exists for all $x$ in $R$ if $f(x+y)=f(x)f(y)$ and $f'(0)$ exists

A function $f$ is defined in $R$, and $f'(0)$ exist.
Let $f(x+y)=f(x)f(y)$ then prove that $f'$ exists for all $x$ in $R$.



I think I have to use two facts:
$f'(0)$ exists
$f(x+y)=f(x)f(y)$
How do I combine these to prove the statement?
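One way the two facts combine (a sketch only; it uses that if $f$ is not identically $0$, then $f(0)=f(0)^2$ forces $f(0)=1$):

$$f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}f(x)\,\frac{f(h)-f(0)}{h}=f(x)\,f'(0),$$

where the middle step rewrites $f(x+h)=f(x)f(h)$ and $f(x)=f(x)f(0)$. The limit on the right exists because $f'(0)$ exists, so $f'(x)$ exists for every $x$.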

Friday 18 March 2016

elementary number theory - Modular Arithmetic & Congruences

Show that if $p$ is an odd prime and $a \in \mathbb Z$ is such that $p$ doesn't divide $a$, then $x^2\equiv a \pmod{p}$ has either no solutions or exactly $2$ incongruent solutions.




The only theorem that I thought could help was:



Let $a, b$ and $m$ be integers with $m > 0$ and $\gcd(a,m)=d$. If $d$ doesn't divide $b$, then $ax\equiv b \pmod{m}$ has no solutions. If $d$ divides $b$, then $ax\equiv b \pmod{m}$ has exactly $d$ incongruent solutions modulo $m$.



But I feel that this would be an invalid theorem for this proof since $ax$ and $x^2$ are of different degrees.



Thoughts? Other methods to approach this?

calculus - Is there an easier way of solving this problem?

Take a look at this figure:



[figure: an L-shaped corridor bending at a right angle]



What's the length of the longest one-dimensional rod which can pass through this corridor horizontally? I can solve this problem mathematically using first- and second-derivative tests, but is there an easier way, similar to what Feynman does in many of his lectures? One approach might be to take advantage of the symmetry of the situation.



The longest such rod will make an angle of $\pi/4$ with the original direction when it (almost) touches the inner corner. I need a valid logical explanation for that.
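Here is an illustrative numerical check (Python; the corridor widths are not stated in the text, so I assume both arms have width $1$). A rod pivoting around the inner corner at angle $t$ has room $L(t)=1/\sin t+1/\cos t$, and the longest rod that fits is the minimum of $L$; by the symmetry $t \leftrightarrow \pi/2 - t$, the minimum sits at $t=\pi/4$.

```python
import math

def L(t):
    # room available for a rod touching the inner corner at angle t,
    # assuming both corridor arms have width 1 (assumption: figure unavailable)
    return 1 / math.sin(t) + 1 / math.cos(t)

ts = [i * (math.pi / 2) / 10_000 for i in range(1, 10_000)]
t_best = min(ts, key=L)
print(t_best, L(t_best))  # minimum at t = pi/4, length 2*sqrt(2)
```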

Need help solving this inequality involving factorial



I really have no idea how to solve this inequality, I would be happy with just a hint on how to solve it :)



$$\frac{e^{0.5}0.5^{n+1}}{(n+1)!} \le 10^{-6}$$



Thanks in advance!


Answer



I was going to suggest you to binary search the answer, but after computing the first 11 values I've got this:





  • $f(0) = 0.8243606353500641$

  • $f(1) = 0.20609015883751602$

  • $f(2) = 0.03434835980625267$

  • $f(3) = 0.004293544975781584$

  • $f(4) = 0.0004293544975781584$

  • $f(5) = 3.577954146484653\cdot10^{-5}$

  • $f(6) = 2.5556815332033237\cdot10^{-6}$

  • $f(7) = 1.5973009582520773\cdot10^{-7}$

  • $f(8) = 8.87389421251154\cdot10^{-9}$

  • $f(9) = 4.4369471062557704\cdot10^{-10}$

  • $f(10) = 2.0167941392071684\cdot10^{-11}$



The value of $n$ that suits you is $7$.
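A brute-force confirmation of that answer (Python sketch; the function name is mine):

```python
from math import exp, factorial

def f(n):
    # the bound from the question: e^0.5 * 0.5^(n+1) / (n+1)!
    return exp(0.5) * 0.5 ** (n + 1) / factorial(n + 1)

n = 0
while f(n) > 1e-6:
    n += 1
print(n)  # 7, the smallest n meeting the bound
```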


Show that the sequence $a_n = \frac{\sqrt n}{1+\sqrt n}$ is increasing for all $n$.



Show that the sequence $a_n = \frac{\sqrt n}{1+\sqrt n}$ is increasing for all $n$.



To prove it is increasing I need to show that $a_{n+1} \gt a_n$; however, the algebra is quite tricky and I have not been successful. If anyone could give me a hint, that would be greatly appreciated.



Kind Regards,


Answer



Hint:
Write it as, $$\dfrac{\sqrt{n}}{1+\sqrt{n}}=\dfrac{1+\sqrt{n}-1}{1+\sqrt{n}}=1-\dfrac{1}{1+\sqrt{n}}.$$ Since, $\sqrt{n+1}\gt\sqrt{n}$ it follows that $1+\sqrt{n+1}\gt1+\sqrt{n}$, ... can you proceed further?



Prove by induction that $(1 + x)^n = ...$ (binomial expansion)


  1. First, let $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ for any integers $0 \le k \le n$


  2. Show that $\binom{n-1}{k-1}+\binom{n-1}{k} = \binom{n}{k}$ for any $1 \le k \le n$





(I don't need help with this part, I have worked it out and it is true. This must be a hint for how to use induction, but I can't figure out exactly how to apply it, hmm...)




  3. Now, using point (2) and induction, prove that for any integer $n \ge 1$ and any real number $x$,
    $$
    (1+x)^n = \sum_{k=0}^n x^k \binom{n}{k}
    $$



I'm guessing that the solution will require strong induction, i.e. I'll need to assume that for some integer $a$ the equivalence holds for all $b$ in the range $1 \le b \le a$, and using this assumption show that it holds for $a+1$ as well. Perhaps multiply both sides by $(1+x)$? But that really messes up the binomial terms... Any help would be greatly appreciated! Thank you :)
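For what it's worth, ordinary (weak) induction suffices here, and multiplying by $(1+x)$ is exactly the right move; part (2) is what cleans up the resulting binomial terms. A sketch of the inductive step:

$$(1+x)^{n+1}=(1+x)\sum_{k=0}^{n}\binom{n}{k}x^k=\sum_{k=0}^{n}\binom{n}{k}x^k+\sum_{k=1}^{n+1}\binom{n}{k-1}x^k=\sum_{k=0}^{n+1}\left[\binom{n}{k-1}+\binom{n}{k}\right]x^k=\sum_{k=0}^{n+1}\binom{n+1}{k}x^k,$$

with the convention $\binom{n}{-1}=\binom{n}{n+1}=0$ so the two sums can be merged, and part (2) applied in the last step.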

Thursday 17 March 2016

Can we determine if a complex number is greater than another?




Is it possible to determine if one complex number is greater than another? Or as the question implies is there an "order" to complex numbers (like 1 is before 2 in the real numbers)?



I thought that we could simply use the modulus to determine whether one complex number is greater than another, though I believe this can't be the only way used (what if two complex numbers have the same modulus but are quite different?). So I thought: if a point is in the upper-right quadrant of the complex plane, then both its real and imaginary parts are positive, so it would be greater than any complex number in a different quadrant (you might object about the modulus, but in the reals $1>-2$ even though $|-2|>|1|$). But what if one complex number has a positive real and negative imaginary part, and another has a negative real and positive imaginary part? (And, for argument's sake, they both have the same modulus.)




If we can't, why not? In the real numbers it seems (to me) quite trivial at a basic level to determine whether one real is greater than another, e.g. $2>1$. What is this property of numbers called? Why don't the complex numbers exhibit this property (if indeed they don't)?


Answer



It is possible to order the complex numbers. For instance, one could define $x_1+iy_1 < x_2+iy_2$ whenever $x_1 < x_2$, or $x_1 = x_2$ and $y_1 < y_2$ (the lexicographic order).

However, it's impossible to define a total order on the complex numbers in such a way that it becomes an ordered field. This is because in an ordered field the square of any non-zero number is $>0$. Hence we would have $-1=i^2>0$, and adding $1$ to both sides would imply $0>1=1^2$, which is a contradiction.


real analysis - Show that $f$ is continuous at exactly one point




Let $f:\mathbb{R}\to\mathbb{R}$ be defined by



$$f(x)=
\begin{cases}
5x+7 & \text{ if } x \text{ is rational } \\
x+11 & \text{ if } x \text{ is irrational }

\end{cases}$$



Show that f is continuous at exactly one point.




My attempt:
Let $a \in \mathbb R$, and take a rational sequence $(x_n)$ and an irrational sequence $(y_n)$ such that $x_n \to a$ and $y_n \to a$.
Then $f(x_n) = 5x_n + 7 \to 5a+7$ and $f(y_n) = y_n + 11 \to a + 11$.
We have $\lim_{n \to \infty}f(x_n) = \lim_{n \to \infty}f(y_n) = f(a) \iff 5a+7=a+11 \iff a = 1$.
Therefore $f$ is continuous only at $x = 1$.



Is the above valid? In the above, I have shown that for every sequence $(z_n)$ such that $z_n \to 1$, $\lim_{n \to \infty} f(z_n) = f(1)$. Doesn't that prove that $f$ is continuous at $x = 1$? In the given solutions they also used the $\epsilon$-$\delta$ definition of continuity to prove that $f$ is continuous at $x = 1$; I'm not sure whether that is necessary, or why. Can anyone explain?




Thanks.


Answer



Using the sequential criterion for continuity is fine, but I think you have to make clear why $\bigl(f(z_n)\bigr)$ converges for every sequence $z_n \to 1$, even when the $z_n$ are neither all rational nor all irrational. That almost follows from your argument (just look at the subsequences of rational resp. irrational $z_n$), but you can expand on this a little:




Let $z_n \to 1$ be any sequence. If all but finitely many $z_n$ are rational, then $f(z_n) \to f(1)$, as already shown (the finitely many remaining terms make no difference); the same holds if all but finitely many $z_n$ are irrational. Otherwise (that is, we have infinitely many rational and infinitely many irrational $z_n$), let $(z_{n_k})$ denote the subsequence of rational and $(z_{n'_k})$ the subsequence of irrational terms. Then $(z_{n_k})$ and $(z_{n'_k})$ together cover $(z_n)$, and $f(z_{n_k})$ and $f(z_{n'_k})$ have the same limit $f(1)$ (as shown), hence $f(z_n) \to f(1)$.



complex analysis - Understanding $e$ and $e$ to the power of imaginary number

How did the value of $e$ come out of the compound-interest equation? What does the value of $e$ really mean?



Capacitors and inductors charge and discharge exponentially, radioactive elements decay exponentially, and even bacterial growth is exponential, i.e. $(2.71\ldots)^x$. Why can't it be $2^x$ or something?



Also, $e^2$ means $e\cdot e$ and $e^3$ means $e\cdot e\cdot e$.
But what exactly does $e^{ix}$ mean?




I want to know how to visualise $e^{i \pi} =-1$ in graphs. I know how to get the value of such equations, but I'm not able to understand what they actually mean.
Please help me.
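Two small numerical illustrations (Python; not a substitute for a conceptual answer): $e$ emerging as the limit of ever-finer compounding, and Euler's formula $e^{ix}=\cos x + i\sin x$ giving $e^{i\pi}=-1$.

```python
import cmath
import math

# e emerges from compound interest: (1 + 1/n)^n -> e as n grows.
for n in [1, 100, 1_000_000]:
    print(n, (1 + 1 / n) ** n)
print("e =", math.e)

# Euler's formula e^{ix} = cos x + i sin x makes e^{i*pi} = -1.
print(cmath.exp(1j * math.pi))
```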

Sum of a sequence which is neither arithmetic nor geometric



If you have a sequence which is neither arithmetic, geometric, nor arithmetico-geometric, is there any methodology to follow in order to obtain a formula for its sum?



Take for example the following sequence: $\{0.9^{\frac{1}{2}(n-i+1)(i+n)}\}_{i=1}^n$. It is not a geometric or an arithmetic progression, and I don't see how to split it into sums of sequences that are arithmetic or geometric. Are there any hints I can get to proceed with writing a formula for this sum?



$$S_n = \sum_{i=1}^n 0.9^{\frac{1}{2}(n-i+1)(i+n)}$$


Answer




I hope you’ve played with this, and noticed:
1.$\quad$It’s not a sum, it’s many sums, and each sum is finite.
2.$\quad$The base, $0.9$ in this case, plays no particular role, so that you can use any base $r$.
3.$\quad$The first few values are
\begin{align}
S_0&=0\\
S_1&=r\\
S_2&=r^3+r^2\\
S_3&=r^6+r^5+r^3\\
S_4&=r^{10}+r^9+r^7+r^4\\
S_5&=r^{15}+r^{14}+r^{12}+r^9+r^5\\
S_n&=r^n(S_{n-1}+1)
\end{align}

I see no way of getting a closed-form expression for $S_n$, a polynomial in $r$ of degree $\frac12(n^2+n)$, and most certainly not a numerical value once you evaluate $r$ to, in your case, $r=0.9\,$.



I do wonder where or how you came across this—without context, it seems a most unnatural problem.
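The recurrence $S_n=r^n(S_{n-1}+1)$ is easy to confirm against the direct sum numerically (a Python sketch; function names are mine):

```python
def S_direct(n, r=0.9):
    # the sum exactly as given in the question
    return sum(r ** ((n - i + 1) * (i + n) / 2) for i in range(1, n + 1))

def S_rec(n, r=0.9):
    # the recurrence S_m = r^m (S_{m-1} + 1) from the answer, with S_0 = 0
    s = 0.0
    for m in range(1, n + 1):
        s = r ** m * (s + 1)
    return s

for n in range(8):
    print(n, S_direct(n), S_rec(n))  # the two columns agree
```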


modular arithmetic - How to calculate a Modulo?

I really can't get my head around this "modulo" thing.



Can someone show me a general step-by-step procedure for finding, say, 5 modulo 10 or 10 modulo 5?



Also, what does this mean: $1/17 = 113 \pmod{120}$?




Because when I calculate (using a calculator) 113 modulo 120, the result is 113. But what does the $1/17$ stand for, then?



THANK YOU!
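On the $1/17$ notation: it denotes a multiplicative inverse, i.e. $1/17 = 113 \pmod{120}$ means $17 \cdot 113 \equiv 1 \pmod{120}$. A quick check (Python; the three-argument `pow` with exponent $-1$ needs Python 3.8+):

```python
# 17 * 113 = 1921 = 16 * 120 + 1, so 113 "acts like" 1/17 modulo 120.
print(17 * 113 % 120)    # 1
print(pow(17, -1, 120))  # 113
```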

Wednesday 16 March 2016

calculus - Evaluate $\int_{0}^{+\infty }{\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x}$



Evaluate :
$$\int_{0}^{+\infty }{\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x}$$



Answer



Related technique. Here is a closed form solution of the integral



$$\int_{0}^{+\infty }{\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x} = -\frac{\ln(2)}{2}. $$



Here is the technique, consider the integral



$$ F(s) = \int_{0}^{+\infty }{e^{-sx}\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x}, $$



which implies




$$ F''(s) = \int_{0}^{+\infty }{e^{-sx}\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\text{d}x}. $$



The last integral is the Laplace transform of the function



$$ \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} $$



and equals



$$ F''(s) = \frac{1}{4}\,\psi' \left( \frac{1}{2}+\frac{1}{2}\,s \right) -\frac{1}{2s}. $$




Now you need to integrate the last equation twice and determine the two constants of integration, then take the limit as $s\to 0$ to get the result.


real analysis - Show that $e^{1-n} \leq \frac{n!}{n^n}$



How can I show that for a $n \in \mathbb N$



$$e^{1-n} \leq \frac {n!}{n^n}$$



I tried using the binomial theorem like this




$$n^n \le (1+n)^n = \sum_{k=0}^n \binom nk n^k \le \sum_{k=0}^\infty \binom nk n^k = \sum_{k=0}^\infty \frac{n!}{k!(n-k)!} n^k \le \sum_{k=0}^\infty \frac{n!}{k!} n^k = n! \sum_{k=0}^\infty \frac{n^k}{k!} = n! \cdot e^n$$



which would give me



$$\frac{1}{e^n} \le \frac{n!}{n^n}$$



But I'm missing the factor of $e$ on the left side. Can you give me a hint?


Answer



Using induction we see that for $n=1$, the inequality holds. Assume that it holds for some number $k$.




Then, using $\left(1+\frac1k\right)^k < e$ together with the inductive hypothesis $\frac{k!}{k^k} \ge e^{1-k}$, we have

$$\begin{align}
\frac{(k+1)!}{(k+1)^{k+1}}&=\frac{k!}{k^k\left(1+\frac1k\right)^k}\\\\
&\ge \frac{e^{1-k}}{e}\\\\
&=e^{1-(k+1)}
\end{align}$$



And we are done!
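A numerical spot check of the inequality (Python, illustrative only):

```python
from math import exp, factorial

# e^(1-n) <= n!/n^n for n >= 1, with equality at n = 1.
for n in range(1, 15):
    print(n, exp(1 - n), factorial(n) / n ** n)
```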



irrational numbers - How can $(\sqrt n)^2$ for $n$ not a perfect square be an integer?

For any positive integer $n$, $\sqrt{n^2}=n$. This looks pretty obvious, but if the nature of an irrational number is considered it doesn't look that obvious.




I mean, $\sqrt n$ where $n$ is not a perfect square will be irrational and have an infinite decimal expansion. But consider
$$17=17.00000000\dots$$
No number from 0 to 9 will give 0 as the result. That seems to make it impossible for an irrational number to square to a positive integer, since the only way to get a 0 in every decimal place would be for all the digits of the supposed irrational number to be 0.



I'm not a maths person, so I may have committed lots of errors, but it doesn't look like there's any big one, at least at the level of understanding a non-maths person would have. What's the failure in my reasoning?

trigonometry - What have I done wrong in solving the general solution to $\sec2\theta=\csc2\theta$?




$$\sec2\theta=\csc2\theta$$




My attempt:




$$\begin{align}
\cos2\theta &= \sin2\theta \tag{1}\\
\cos^2\theta+\sin^2\theta-2\cos\theta\sin\theta &=0 \tag{2}\\
(\cos\theta-\sin\theta)^2 &=0 \tag{3}\\
\cos\theta-\sin\theta &=0 \tag{4}\\
\cos\theta &=\sin\theta \tag{5}\\
\tan\theta &=1 \tag{6}\\
\theta &=180^\circ n+45^\circ \quad\text{??} \tag{7}
\end{align}$$




But the answer was $90^\circ n+22.5^\circ$ and I'm not sure why. I've searched up the question online, and someone has proposed a solution where it is not factored; instead, the equation turns into $\tan2\theta=1$ on line $(2)$, and this allows you to get the correct solution.



What's wrong with factoring it though?


Answer



You have committed an error in the very beginning. Kindly note that



\begin{eqnarray}
\cos(2\theta)=\cos^{2}(\theta)-\sin^{2}(\theta).
\end{eqnarray}




You have written $\cos(2\theta)=\cos^{2}(\theta)+\sin^{2}(\theta)$, which is wrong.


real analysis - How discontinuous can a derivative be?



There is a well-known result in elementary analysis due to Darboux which says if $f$ is a differentiable function then $f'$ satisfies the intermediate value property. To my knowledge, not many "highly" discontinuous Darboux functions are known--the only one I am aware of being the Conway base 13 function--and few (none?) of these are derivatives of differentiable functions. In fact they generally cannot be since an application of Baire's theorem gives that the set of continuity points of the derivative is dense $G_\delta$.



Is it known how sharp that last result is? Are there known Darboux functions which are derivatives and are discontinuous on "large" sets in some appropriate sense?


Answer



What follows is taken (mostly) from more extensive discussions in the following sci.math posts:



http://groups.google.com/group/sci.math/msg/814be41b1ea8c024 [23 January 2000]




http://groups.google.com/group/sci.math/msg/3ea26975d010711f [6 November 2006]



http://groups.google.com/group/sci.math/msg/05dbc0ee4c69898e [20 December 2006]



Note: The term interval is restricted to nondegenerate intervals (i.e. intervals containing more than one point).



The continuity set of a derivative on an open interval $J$ is dense in $J.$ In fact, the continuity set has cardinality $c$ in every subinterval of $J.$ On the other hand, the discontinuity set $D$ of a derivative can have the following properties:




  1. $D$ can be dense in $\mathbb R$.



  2. $D$ can have cardinality $c$ in every interval.


  3. $D$ can have positive measure. (Hence, the function can fail to be Riemann integrable.)


  4. $D$ can have positive measure in every interval.


  5. $D$ can have full measure in every interval (i.e. measure zero complement).


  6. $D$ can have a Hausdorff dimension zero complement.


  7. $D$ can have an $h$-Hausdorff measure zero complement for any specified Hausdorff measure function $h.$




More precisely, a subset $D$ of $\mathbb R$ can be the discontinuity set for some derivative if and only if $D$ is an $F_{\sigma}$ first category (i.e. an $F_{\sigma}$ meager) subset of $\mathbb R.$




This characterization of the discontinuity set of a derivative can be found in the following references: Benedetto [1] (Chapter 1.3.2, Proposition, 1.10, p. 30); Bruckner [2] (Chapter 3, Section 2, Theorem 2.1, p. 34); Bruckner/Leonard [3] (Theorem at bottom of p. 27); Goffman [5] (Chapter 9, Exercise 2.3, p. 120 states the result); Klippert/Williams [7].



Regarding this characterization of the discontinuity set of a derivative, Bruckner and Leonard [3] (bottom of p. 27) wrote the following in 1966: Although we imagine that this theorem is known, we have been unable to find a reference. I have found the result stated in Goffman's 1953 text [5], but nowhere else prior to 1966 (including Goffman's Ph.D. Dissertation).



Interestingly, in a certain sense most derivatives have the property that $D$ is large in all of the ways listed above (#1 through #7).



In 1977 Cliff Weil [8] published a proof that, in the space of derivatives with the sup norm, all but a first category set of such functions are discontinuous almost everywhere (in the sense of Lebesgue measure). When Weil's result is paired with the fact that derivatives (being Baire $1$ functions) are continuous almost everywhere in the sense of Baire category, we get the following:



(A) Every derivative is continuous at the Baire-typical point.




(B) The Baire-typical derivative is not continuous at the Lebesgue-typical point.



Note that Weil's result is stronger than simply saying that the Baire-typical derivative fails to be Riemann integrable (i.e. $D$ has positive Lebesgue measure), or even stronger than saying that the Baire-typical derivative fails to be Riemann integrable on every interval. Note also that, for each of these Baire-typical derivatives, $\{D, \; {\mathbb R} - D\}$ gives a partition of $\mathbb R$ into a first category set and a Lebesgue measure zero set.



In 1984 Bruckner/Petruska [4] (Theorem 2.4) strengthened Weil's result by proving the following: Given any finite Borel measure $\mu,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has $\mu$-measure zero.



In 1993 Kirchheim [6] strengthened Weil's result by proving the following: Given any Hausdorff measure function $h,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has Hausdorff $h$-measure zero.



[1] John J. Benedetto, Real Variable and Integration With Historical Notes, Mathematische Leitfäden. Stuttgart: B. G. Teubner, 1976, 278 pages. [MR 58 #28328; Zbl 336.26001]




[2] Andrew M. Bruckner, Differentiation of Real Functions, 2nd edition, CRM Monograph Series #5, American Mathematical Society, 1994, xii + 195 pages. [The first edition was published in 1978 as Springer-Verlag's Lecture Notes in Mathematics #659. The second edition is essentially unchanged from the first edition with the exception of a new chapter on recent developments (23 pages) and 94 additional bibliographic items.] [MR 94m:26001; Zbl 796.26001]



[3] Andrew M. Bruckner and John L. Leonard, Derivatives, American Mathematical Monthly 73 #4 (April 1966) [Part II: Papers in Analysis, Herbert Ellsworth Slaught Memorial Papers #11], 24-56. [MR 33 #5797; Zbl 138.27805]



[4] Andrew M. Bruckner and György Petruska, Some typical results on bounded Baire $1$ functions, Acta Mathematica Hungarica 43 (1984), 325-333. [MR 85h:26004; Zbl 542.26004]



[5] Casper Goffman, Real Functions, Prindle, Weber & Schmidt, 1953/1967, x + 261 pages. [MR 14,855e; Zbl 53.22502]



[6] Bernd Kirchheim, Some further typical results on bounded Baire one functions, Acta Mathematica Hungarica 62 (1993), 119-129. [94k:26008; Zbl 786.26002]




[7] John Clayton Klippert and Geoffrey Williams, On the existence of a derivative continuous on a $G_{\delta}$, International Journal of Mathematical Education in Science and Technology 35 (2004), 91-99.



[8] Clifford Weil, The space of bounded derivatives, Real Analysis Exchange 3 (1977-78), 38-41. [Zbl 377.26005]


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...