Thursday 31 January 2013

real analysis - approximating a riemann integrable function by sequences of step functions and sequences of continuously differentiable functions

Suppose that $f$ is Riemann integrable on $[0,M]$.



How can I show that a) $f$ can be approximated uniformly by a sequence of finite step functions? and b) by a sequence of continuously differentiable functions?



Any hints on how to handle this?



Well, I thought that for a), since $f$ is Riemann integrable, there is a partition such that the sum over the subintervals of (oscillation of $f$ on the subinterval) times (length of the subinterval) is small. And the oscillation on each subinterval is the difference of two step functions: the one which bounds the function above and the one which bounds it below... Help would be appreciated.
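To make the attempt concrete, here is a small Python sketch (the sample function, the interval $[0,3]$, and the sampling resolution are illustrative choices, not part of the question): on a uniform partition, the cell-wise maximum and minimum define the upper and lower step functions that sandwich $f$, and the weighted sum of oscillations shrinks as the partition is refined.

```
import math

def oscillation_sum(f, a, b, cells, samples=200):
    """Sum over the partition of (oscillation of f on the cell) * (cell length).
    The max/min over sample points stand in for the sup/inf on each cell."""
    h = (b - a) / cells
    total = 0.0
    for i in range(cells):
        xs = [a + i * h + j * h / samples for j in range(samples + 1)]
        vals = [f(x) for x in xs]
        # max(vals) is the value of the upper step function on this cell,
        # min(vals) the value of the lower one; their difference is the oscillation.
        total += (max(vals) - min(vals)) * h
    return total

f = lambda x: math.sin(3 * x) * math.exp(-x)   # an arbitrary sample integrand on [0, 3]
for cells in (10, 100, 1000):
    print(cells, oscillation_sum(f, 0.0, 3.0, cells))
```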

algebra precalculus - Tough Logarithm Problem

I was working on this problem:
Prove that: $$ \frac{\log_5(nt)^2}{\log_4\left(\frac{t}{r}\right)}=\frac{2\log(n)\cdot\log(4)+2\log(t)\cdot \log(4)}{\log(5)\cdot \log(t)-\log(5)\cdot\log(r)}$$



I think it has something to do with change of base because it's $\log_{10}$ on the right side and not on the left, but I'm not sure how to go about this.
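For what it's worth, a quick numerical spot-check (reading $\log_5(nt)^2$ as $\log_5\big((nt)^2\big)$, which is what the right-hand side requires; the sample values of $n$, $t$, $r$ are arbitrary) suggests the identity is consistent:

```
import math

# Arbitrary positive sample values for n, t, r (with t/r != 1 so the denominator is nonzero).
for n, t, r in [(2.0, 3.0, 5.0), (7.0, 0.5, 9.0), (1.5, 4.0, 0.25)]:
    lhs = math.log((n * t) ** 2, 5) / math.log(t / r, 4)
    rhs = (2 * math.log(n) * math.log(4) + 2 * math.log(t) * math.log(4)) / \
          (math.log(5) * math.log(t) - math.log(5) * math.log(r))
    print(lhs, rhs)    # the two columns should agree
```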

Tuesday 29 January 2013

discrete mathematics - Proof writing involving propositional logic: (x ∨ y) ≡ ( x ∧ y ) → x ≡ y



Prove by using propositional logic:



(x ∨ y) ≡ ( x ∧ y ) → x ≡ y



I'm a bit lost here proving by propositional logic that the statement is valid. I don't know how to start this problem. Any help? I know the statement is true: when x ≡ y holds, the premise (x ∨ y) ≡ (x ∧ y) does not matter, since the implication is still true according to the → operation. Any ideas? Any help will be greatly appreciated, thanks.




Edit:



Apart from truth tables.


Answer



You want to show that $x \equiv y$ from the premise $(x\vee y) \equiv (x \wedge y)$. I assume it's enough to derive $(x\to y) \wedge (y \to x)$.




  1. $(x\vee y) \equiv (x \wedge y) \qquad \text{(premise)}$

  2. Assume $x \qquad\qquad \text{(assumption)}$


  3. $x \vee y \qquad\qquad\qquad \text{by $p \to (p \vee q)$}$

  4. $x \wedge y\qquad\qquad\qquad \text{by 1.}$

  5. $y \quad\qquad\qquad\qquad \text{by $(p\wedge q) \to q$}$

  6. $x \to y \quad\qquad\qquad \text{by 2. and 5., discharging 2.}$

  7. -- 11. similarly derive $y \to x$.


Monday 28 January 2013

calculus - Show that $\int^{\infty}_{0}\left(\frac{\sin(x)}{x}\right)^2\,dx < 2$



I'm trying to show that this integral converges and is $<2$:

$$\int^{\infty}_{0}\left(\frac{\sin(x)}{x}\right)^2dx < 2$$
What I did is to split it as:
$$\int^{1}_{0}\left(\frac{\sin(x)}{x}\right)^2dx + \int^{\infty}_{1}\left(\frac{\sin(x)}{x}\right)^2 dx$$
For the second expression:
$$\int^{\infty}_{1}\left(\frac{\sin(x)}{x}\right)^2 dx < \int^{\infty}_{1}\frac{1}{x^2}\,dx = \lim\limits_{b\to \infty} \left[-\frac{1}{x}\right]^b_1 = 1 $$
Now for the first expression I need to find an explanation for why it is $<1$, and then I can prove the claim.

I would like to get some advice on the first expression. Thanks!


Answer



Hint: $$\lim_{x\to0}\frac{\sin x}{x}=1.$$
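Independently of the hint, a quick numerical estimate of the whole integral (a hand-rolled Simpson rule; the cutoff $200$ and the step count are arbitrary accuracy choices) shows it sits near $\pi/2$, comfortably below $2$:

```
import math

def simpson(f, a, b, m):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, m // 2))
    return s * h / 3

g = lambda x: (math.sin(x) / x) ** 2 if x != 0 else 1.0
head = simpson(g, 0.0, 200.0, 400000)     # integral over [0, 200]
tail_bound = 1.0 / 200.0                  # (sin x / x)^2 <= 1/x^2, so the tail over [200, inf) is < 1/200
print(head, "+ tail <", head + tail_bound)  # close to pi/2 ~ 1.5708, comfortably below 2
```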


How to prove this limit: $\lim_{x\to 0^{+}}\frac{x\cdot\frac{\log{x}}{\log{(1-x)}}}{\log{\left(\frac{\log{x}}{\log{(1-x)}}\right)}}=1$



Show this limit:
$$\lim_{x\to 0^{+}}\dfrac{x\cdot\dfrac{\log{x}}{\log{(1-x)}}}{\log{\left(\dfrac{\log{x}}{\log{(1-x)}}\right)}}=1$$



I feel this limit is not easy to show.




since
$$\log{(1-x)}=-x+O(x^2)\Longrightarrow x\cdot\dfrac{\log{x}}{\log{(1-x)}}\approx -\log{x}+o(\log{x})$$
and I know
$$\dfrac{\log{x}}{\log{(1-x)}}\to +\infty$$
so I don't know how to deal with this problem.




This problem is from an analysis problem book exercise (Min Hui Xie).



Answer




As $x\to 0^+$ we have
$$
\frac{\log x}{\log(1-x)}\sim -\frac{\log x}{x}=\frac{1}{x}\log\left(\frac{1}{x} \right)
$$
and then
$$
\frac{x\cdot\frac{\log{x}}{\log{(1-x)}}}{\log{\left(\frac{\log{x}}{\log{(1-x)}}\right)}}\sim \frac{\log(\frac{1}{x})}{\log\left(\frac{1}{x}\log(\frac{1}{x})\right)}
$$
Changing $u=\frac{1}{x}$ and using l'Hôpital's rule, we have
$$
\lim_{u\to\infty}\frac{\log u}{\log(u\log u)}=\lim_{u\to\infty}\frac{\log u}{\log u +1}=1.
$$
So your limit is
$$\lim_{x\to 0^{+}}\dfrac{x\cdot\dfrac{\log{x}}{\log{(1-x)}}}{\log{\left(\dfrac{\log{x}}{\log{(1-x)}}\right)}}=1.$$
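A numerical check illustrates this, though the convergence is only logarithmic, so extremely small $x$ is needed to see the ratio approach $1$ (the sample values of $x$ are arbitrary; `log1p` avoids cancellation in $\log(1-x)$):

```
import math

def ratio(x):
    inner = math.log(x) / math.log1p(-x)   # log(x) / log(1 - x), a large positive number for small x
    return x * inner / math.log(inner)

for x in (1e-3, 1e-6, 1e-12, 1e-50, 1e-200):
    print(x, ratio(x))    # creeps toward 1 as x -> 0+
```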


elementary set theory - Category of Sets w/ 17 Elements: There does not exist a direct product? (Lots of questions here)



I'm having a pretty hard time with this. I'm asked to show that, in the category of sets with exactly 17 elements, no two objects have either a direct product nor direct sum. Part of me doesn't even believe this statement—but whenever I try to come up with a direct product, I get snagged.



Let $(C, \alpha, \beta)$ be a [potential] direct product of $A$ and $B$. Fix some object, $C'$, with mappings, $\alpha'$ and $\beta'$, from $C'$ to $A$ and $B$ respectively. We need a unique $\gamma: C' \rightarrow C$ such that $\alpha \circ \gamma = \alpha'$ and $\beta \circ \gamma = \beta'$.





  • $C = A \times B$ just can't work, because $A \times B$ necessarily has more than 17 elements (in fact, no Cartesian-product-like $C$ can work, because the number of elements is fixed). What about some 17-element subset of $A \times B$? (In the category of sets, $\alpha$ and $\beta$ are injective, but not necessarily surjective). But, what if $\alpha'$ and $\beta'$ are both surjective? So, that can't work, because there's no $\gamma$ that could satisfy this (doesn't that mean that, in the general category of sets, $\alpha$ and $\beta$ have to also be surjective? If they don't "touch" every element in both $A$ and $B$, then one can just define a $\alpha'$ or $\beta'$ that touches the elements $\alpha$ or $\beta$ don't—thus making impossible a direct product.)


  • Let $\alpha(c_n) = a_n$ and $\beta(c_n) = b_n$. This contains bijective $\alpha$ and $\beta$, but all we have to do is define a $C'$ such that $\alpha'(c_1) = a_1$ and $\beta'(c_1) = b_2$.




Ok. So, $\alpha$ and $\beta$ have to be bijective. Let's try to prove this via negation: Since these mappings are necessarily bijective, they have to have an inverse. Thus, $\gamma$ must be such that $\gamma = \alpha^{-1} \circ \alpha'$ and $\gamma = \beta^{-1} \circ \beta'$. To show that we can choose an object $C'$ where $\gamma$ can't make the graph commute, just choose $\alpha'$ and $\beta'$ such that $\alpha^{-1} \circ \alpha' \neq \beta^{-1} \circ \beta'$.




  1. Can I assume that such an $\alpha'$ and $\beta'$ will always exist?

  2. Was there no point to the number of elements being $17$ specifically? This all seems to work for any category of sets with a fixed number of elements.

  3. Is there something crucial I'm missing?



Answer



Let $n \geq 2$. If $A,B$ have a product $P$ in the category of sets with $n$ elements, then $\hom(A,P) \cong \hom(A,A) \times \hom(A,B)$ shows $n^n = n^n \cdot n^n$, a contradiction.


Sunday 27 January 2013

integration - How to solve this integral $\int \frac 1{\sqrt { \cos x \sin^3 x }}\, \mathrm dx$




Question : $$\int \frac 1{\sqrt { \cos x \sin^3 x }} \mathrm dx $$





I don’t know where to start. I have tried many methods, but they didn’t work.



Can anyone help me solve this?
Thank you


Answer



HINT: substitute $u:=\tan\left(x\right)$. Then the integrand will change to $\frac{1}{u^{3/2}}$.
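A quick numerical cross-check of the hint (the interval $[0.3, 1.2]\subset(0,\pi/2)$ and the step count are arbitrary choices): after $u=\tan x$ the antiderivative of $u^{-3/2}$ is $-2u^{-1/2}$, so a direct quadrature of the original integrand should match the difference of that antiderivative at the endpoints.

```
import math

def simpson(f, a, b, m=20000):
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, m // 2))
    return s * h / 3

f = lambda x: 1.0 / math.sqrt(math.cos(x) * math.sin(x) ** 3)
a, b = 0.3, 1.2                                 # arbitrary interval inside (0, pi/2)
lhs = simpson(f, a, b)                          # direct quadrature of the original integrand
rhs = -2.0 / math.sqrt(math.tan(b)) + 2.0 / math.sqrt(math.tan(a))   # [-2 u^(-1/2)] between tan(a) and tan(b)
print(lhs, rhs)                                 # the two numbers should agree closely
```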


elementary set theory - Countability of Infinite Sets


Show that $|\mathbb{R}|$=$|[0,1]|$.





If we were to find a function whose domain is $\mathbb{R}$ and range is $[0,1]$ and show it is a bijection, then we can show that this is true. The function that I came up with is $f(x)=\frac{\arctan x+\frac{\pi}{2}}{\pi}$, but this function's range is $(0,1)$. Is there a function that would work?

real analysis - Sequence defined by $\max\left\{a_n, b_n \right\}$. Proving convergence



Assume $(a_n)$ and $(b_n)$ are two real sequences, and define $$ c_n = \text{max}\left\{a_n, b_n\right\} $$ for $n \in \mathbb{N}$. Suppose $(a_n)$ and $(b_n)$ are two convergent sequences. Prove then that $(c_n)$ is also a convergent sequence, and that $$ \lim_{n \to \infty} c_n = \text{max} \left\{ \lim_{n \to \infty} a_n, \lim_{n \to \infty} b_n \right\}. $$



Attempt: $(a_n)$ and $(b_n)$ are both convergent. Hence there exists an $n_0 \in \mathbb{N}$ such that $\forall n \geq n_0: | a_n - L | < \epsilon$. Furthermore, there exists an $n_1 \in \mathbb{N}$ such that $\forall n \geq n_1: | b_n - K | < \epsilon $. Here $L$ and $K$ are the limits of resp. $(a_n)$ and $(b_n)$. Now let $n_2 = \text{max} \left\{n_0, n_1 \right\}$. Let $n \geq n_2$ be arbitrary. Then we have that $L - \epsilon < a_n < L + \epsilon$ and $K - \epsilon < b_n < K + \epsilon$. Since $c_n = \text{max}\left\{a_n, b_n\right\}$, we have that $c_n \geq a_n$ and $c_n \geq b_n$. So $ L - \epsilon < a_n \leq c_n$ and thus $ L - \epsilon < c_n$.




But now I'm stuck. Help would be appreciated!


Answer



Remember that $\max\{x,y\}=\frac{x+y+|x-y|}{2}$.



Thus $c_n=\frac{a_n+b_n+|a_n-b_n|}{2}$. So, if $\lim a_n=a$ and $\lim b_n=b$:



$$\lim c_n=\frac{1}{2}(\lim a_n+\lim b_n+|\lim a_n-\lim b_n|)=\frac{1}{2}(a+b+|a-b|)=\max\{a,b\}$$


Help on Geometric Sequence Problem?

The sum of the first $n$ terms of a geometric series with first term
$a$ and common ratio $r \ne 1$ is given by $ S_n=a\cdot\dfrac{r^n-1}{r-1} $.
The sum of a given infinite geometric series is $S_{\infty}=200 $ and the
common ratio $r$ is $0.15$. What is the second term $a_2$ of this
series?



I'm confused about how to attack this; can someone explain it to me? Thanks.

algebra precalculus - Find $\cos2\theta+\cos2\phi$, given $\sin\theta + \sin\phi = a$ and $\cos\theta+\cos\phi = b$


If

$$\sin\theta + \sin\phi = a \quad\text{and}\quad \cos\theta+\cos\phi = b$$



then find the value of $$\cos2\theta+\cos2\phi$$




My attempt:



Squaring both sides of the second given equation:



$$\cos^2\theta+ \cos^2\phi + 2\cos\theta\cos\phi= b^2$$




Multiplying by 2 and subtracting 2 from both sides we obtain,



$$\cos2\theta+ \cos2\phi = 2b^2-2 - 4\cos\theta\cos\phi$$



How do I continue from here?



PS: I also found the value of $\sin(\theta+\phi)= \dfrac{2ab}{a^2+b^2}$



Edit: I had also tried to use $\cos2\theta + \cos2\phi= 2\cos(\theta+\phi)\cos(\theta-\phi)$ but that didn't seem to be of much use

real analysis - Show pointwise convergence and (potentially) uniform convergence $\sum_{k=1}^\infty\frac{x^k}{k}$

I am looking to show pointwise convergence and (potentially) uniform convergence of the following:




$$\sum_{k=1}^\infty\frac{x^k}{k}$$



I know (from my book) this converges for my given values of $x \in (0,1)$, but I can't figure out how to do this. I tried using the ratio test, but I wasn't able to get an answer that made sense ($x$ is what I kept getting). I also tried the Weierstrass M-Test, but could only think to compare it to $$\sum_{k=1}^\infty\frac{1}{k}$$ which doesn't work either. Wolfram says this can be shown using the ratio test.



Can somebody give me an idea of what to use for the M-Test or maybe do the ratio test so I can see if I am doing something wrong?



Edit: I think my pointwise convergence to $x$ is correct, I just want to double-check that this is not uniform convergence. Am I correct?

Saturday 26 January 2013

sequences and series - Limit $ \sum_{k=0}^\infty \left( \sum_{j=0}^k \binom{k}{j} \left(-\frac{1}{3}\right)^j \right) $



I have to find the limit of the following series:



$$ \sum_{k=0}^∞ \left( \sum_{j=0}^k \binom{k}{j} \left(-\frac{1}{3}\right)^j \right) $$




I don't even know how to approach this... Any help would be very appreciated


Answer



Using the binomial formula and the geometric series formula:
$$\sum_{k=0}^{\infty}\left(\sum_{j=0}^{k}{k\choose j}\left(-\frac13\right)^j\right)=\sum_{k=0}^{\infty}\left(1-\frac13\right)^k=\lim_{k\to\infty}\frac{(2/3)^{k+1}-1}{(2/3)-1}=\frac1{1-(2/3)}=3$$
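A short exact computation of the partial sums (using Python's `fractions` so that the large alternating binomial terms cancel without floating-point error) illustrates both the collapse of the inner sum and the value $3$:

```
from fractions import Fraction
from math import comb

total = Fraction(0)
for k in range(60):
    # inner sum: by the binomial theorem it collapses to (1 - 1/3)^k = (2/3)^k
    inner = sum(Fraction(comb(k, j)) * Fraction(-1, 3) ** j for j in range(k + 1))
    assert inner == Fraction(2, 3) ** k
    total += inner
print(float(total))    # 2.9999999..., the geometric series heading to 3
```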


Friday 25 January 2013

elementary set theory - Is there any name for the set of 'sets which contains only numbers'?

I know that groups, rings and fields are sets which contain only numbers, but they bear additional properties too; or, to put it another way, they are just subsets of the set I'm talking about.



The required set is the set of subsets of the set of real/complex numbers.



PS
The numbers can either be real or complex.







EDIT:



Suppose that I pick an arbitrary set from this set; then do we have a special name for it, to differentiate it from other types of sets which may contain other mathematical objects too?

abstract algebra - When is a field a nontrivial field of fractions?



If we take any integral domain, then we can define a field of fractions by taking equivalence classes of ordered pairs of elements, the same way that the rational numbers are constructed from the integers. My question is:




What fields (of characteristic $0$) are isomorphic to the field of fractions of some integral domain (that's not a field)?





For instance, is the field of constructible real numbers a nontrivial field of fractions? What about the algebraic real numbers? What about arbitrarily real closed fields? And what about if we restrict ourselves to integral domains which are models of Peano arithmetic, or models of Robinson arithmetic? (EDIT: for those less acquainted with logic and model theory, let me ask this: what if we restricted the integral domains to ones that are discretely ordered rings?) I should mention that my motivation for asking these sorts of questions is my MathOverflow question.



Any help would be greatly appreciated.



Thank You in Advance.


Answer



This is a partial answer (it answers your first question).



Every field of characteristic zero is the fraction field of some integral domain which is not a field. Indeed, let $k$ be your field and let $(X_i)$ be a transcendence basis for $k$ over $\mathbb{Q}$. Consider the ring $R$ which is the integral closure of $\mathbb{Z}[\{X_i\}]$ in $k$. Note then that $R\ne k$ (since integral extensions preserve dimension), but it's a common fact that $k=\text{Frac}(R)$.


real analysis - Find a sequence of measurable functions defined on a measurable set $E$ that converges everywhere on $E$, but not almost uniformly on $E$.



Find a sequence of measurable functions defined on a measurable set $E$ such that the sequence converges everywhere on $E$, but the sequence does not converge almost uniformly on $E$.




I'm having troubles understanding this. I thought that a sequence of functions $\{f_n\}_{n=1}^\infty \to f$ converges almost uniformly if and only if $f_n\to f$ converges in measure. But that's wrong?



Any help would be welcome.


Answer



In light of Egorov's theorem, your measure space will have to be infinite. A good sequence to keep in mind to check that the finite measure space assumption of a theorem is required is the "moving block" $f_n(x) = \chi_{[n,n+1]}(x)$. This converges pointwise to zero, but it fails to converge in a bunch of other senses, including almost uniformly. (To see that, note that it can't converge uniformly on any set whose complement has measure smaller than $1$).


Thursday 24 January 2013

calculus - How can one show that $\int_{0}^{\pi/4}{\sqrt{\sin(2x)}\over \cos^2(x)}\mathrm dx=2-\sqrt{2\over \pi}\cdot\Gamma^2\left({3\over 4}\right)?$



Proposed:





$$\int_{0}^{\pi/4}{\sqrt{\sin(2x)}\over \cos^2(x)}\mathrm dx=2-\sqrt{2\over \pi}\cdot\Gamma^2\left({3\over 4}\right)\tag1$$




My try:



Change $(1)$ to



$$\int_{0}^{\pi/4}\sqrt{2\sec^2(x)\tan(x)}\mathrm dx\tag2$$




$$\int_{0}^{\pi/4}\sqrt{2\tan(x)+2\tan^3(x)}\mathrm dx\tag3$$



Not sure what substitution to use



How may we prove $(1)?$


Answer



By substituting $x=\arctan t$ our integral takes the form:



$$ I=\int_{0}^{1}\sqrt{\frac{2t}{1+t^2}}\,dt $$

and by substituting $\frac{2t}{1+t^2}=u$ we get:
$$ I = \int_{0}^{1}\left(-1+\frac{1}{\sqrt{1-u^2}}\right)\frac{du}{u^{3/2}} $$
that is straightforward to compute through the substitution $u^2=s$ and Euler's Beta function:
$$ I = \frac{1}{2} \left(4+\frac{\sqrt{\pi }\,\Gamma\left(-\frac{1}{4}\right)}{\Gamma\left(\frac{1}{4}\right)}\right).$$
The identities $\Gamma(z+1)=z\,\Gamma(z)$ and $\Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin(\pi z)}$ settle OP's $(1)$.
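For reassurance, here is a rough numerical comparison of the original integral with the stated closed form (a simple Simpson quadrature with an arbitrary step count; `math.gamma` supplies $\Gamma(3/4)$):

```
import math

def simpson(f, a, b, m=200000):
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, m // 2))
    return s * h / 3

integrand = lambda x: math.sqrt(math.sin(2 * x)) / math.cos(x) ** 2
numeric = simpson(integrand, 0.0, math.pi / 4)
closed = 2.0 - math.sqrt(2.0 / math.pi) * math.gamma(0.75) ** 2
print(numeric, closed)    # both should be about 0.8018
```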


Tuesday 22 January 2013

analysis - Integral $\int_0^\pi \cot(x/2)\sin(nx)\,dx$



It seems that $$\int_0^\pi \cot(x/2)\sin(nx)\,dx=\pi$$ for all positive integers $n$.



But I have trouble proving it. Anyone?


Answer



Use this famous sum:



$$1+2\cos x+2\cos 2x+\cdots+2\cos nx=\frac{\sin (n+\frac{1}{2})x}{\sin \frac{x}{2}}=\sin nx\cot\left(\frac{x}{2}\right)+\cos nx$$




Hence



$$\int_0^{\pi}\cot \left(\frac{x}{2}\right)\sin n x\,dx=\int_0^{\pi}\left(1+2\cos x+2\cos 2x+\cdots +\cos nx\right)dx$$



All the cosine terms obviously integrate to zero over $[0,\pi]$.
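A quick numerical check of the claimed value for a few $n$ (Simpson quadrature with an arbitrary step count; the integrand is given the value $2n$ at $x=0$, its limit there):

```
import math

def simpson(f, a, b, m=20000):
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, m // 2))
    return s * h / 3

for n in (1, 2, 5, 10):
    # cot(x/2) sin(nx) -> 2n as x -> 0, so give the integrand that value at x = 0
    f = lambda x, n=n: 2.0 * n if x == 0 else math.sin(n * x) / math.tan(x / 2)
    print(n, simpson(f, 0.0, math.pi))    # each result should be close to pi
```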


real analysis - Continuous or Differentiable but Nowhere Lipschitz Continuous Function




  1. What is a real-valued function that is continuous on a closed interval but not Lipschitz continuous on any subinterval?


  2. What is a real-valued function that is differentiable on a closed interval but not Lipschitz continuous on any subinterval?



Answer




Continuous and nowhere Lipschitz



An example is given by the Weierstrass function, which is continuous and nowhere differentiable. This can be justified in two ways:




  • A Lipschitz function is differentiable almost everywhere, by Rademacher's theorem.


  • Direct inspection of the proof that the function is nowhere differentiable; the estimates used in the proof also imply it's nowhere Lipschitz ($|f(x+h)-f(x)|$ is estimated from below by $|h|^\alpha$ with $\alpha<1$, for certain $x,h$).




Differentiable and nowhere Lipschitz




There are no such examples: a differentiable function on an interval must be Lipschitz on some subinterval. The following is an adaptation of a part of PhoemueX's answer.




  1. The function $f'$ is a pointwise limit of continuous functions: namely, $$f'(x) = \lim_{n\to\infty} n(f(x+1/n)-f(x))$$
    where for each $n$, the expression under the limit is continuous in $x$.


  2. Item 1 implies that $f'$ is continuous at some point $x_0$. This is a consequence of a theorem about functions of Baire class 1: see this answer for references.


  3. Continuity at $x_0$ implies $f'$ is bounded on some interval $(x_0-\delta,x_0+\delta)$. The Mean Value Theorem then implies that $f$ is Lipschitz on this interval.



Monday 21 January 2013

limits - Solve $ \lim_{x\to 0} (\sqrt {2x+1} - \sqrt[3]{1-3x})^{x}$ without using L'Hospital



I need to solve $$ \lim_{x\to 0}\ (\sqrt {2x+1}\ -\ \sqrt[3]{1-3x})^{x}$$ Please note that I'm a first-year student and that this can presumably be solved more simply than in the answers. I tried writing $$\lim_{x\to 0} \ e^{x \cdot \ln\Bigl(\sqrt{2x+1}-1-\left(\sqrt[3]{1-3x}-1\right)\Bigr)}$$ and then taking the limit inside the function like this



$$\exp\left\{\lim_{x\to0}x \cdot
\ln\left[\lim_{x \to 0}\Bigl(\sqrt{2x+1}-1\Bigr) \cdot
\lim_{x \to 0} \left(1-
\frac{ \sqrt[3]{1-3x}-1\over x }{ \sqrt{2x+1}-1 \over x }\right)\right] \right\}$$



But the problem is that although I can solve the third limit this way, I get that the second limit is $0$, which puts a $0$ inside the $\ln$, so the attempt is incorrect. Please help; I'm new here, I want to contribute back, and this is from my university math exam.


Answer



We shall only try to find $\lim_{x\to 0^+}$ because for negative $x$ near $0$, the power is not defined (for reasons given below).



Working without little-o stuff:



Note that $$(1+x)^2=1+2x+x^2\ge 1+2x$$ for all $x$

and of course $1<1+2x$ for all $x>0$.
We conclude that
$$1 <\sqrt{1+2x}\le 1+x\qquad\text{for }x>0.$$



Similarly,
$$ (1-x)^3=1-3x+3x^2-x^3>1-3x\qquad \text{for }x<3$$
and
$$(1-2x)^3=1-6x+12x^2-8x^3<1-3x-3x(1-4x)<1-3x \qquad \text{for }0<x<\tfrac14$$
hence
$$1-2x<\sqrt[3]{1-3x}<1-x\qquad\text{for }0<x<\tfrac14$$

and so
$$ x<\sqrt{1+2x}-\sqrt[3]{1-3x}<4x\qquad\text{for }0<x<\tfrac14$$
(One can find similar bounds for negative $x$, showing that $\sqrt{1+2x}-\sqrt[3]{1-3x}\sim x<0$, and therefore $(\sqrt{1+2x}-\sqrt[3]{1-3x})^x$ is undefined for negative $x$ near $0$)



If we already know that $\lim_{x\to 0^+}x^x=1$, it follows that $(\sqrt{1+2x}-\sqrt[3]{1-3x})^x$ is squeezed between $x^x$ and $4^x\cdot x^x$ and, as $\lim_{x\to 0^+}4^x=1$, therefore also
$$ \lim_{x\to0^+}(\sqrt{1+2x}-\sqrt[3]{1-3x})^x=\lim_{x\to0^+}x^x=1.$$






Why is $\lim_{x\to 0^+}x^x=1$?




Perhaps the most important inequality about the exponential is
$$ e^t\ge 1+t\qquad \text{for all }t\in\Bbb R.$$
Therefore, for $t>0$,
$$ e^t=(e^{t/2})^2\ge(1+\tfrac t2)^2=1+t+\frac14t^2>\frac14t^2.$$
It follows that
$$0\le \lim_{t\to +\infty}\frac{t}{e^t}\le \lim_{t\to +\infty}\frac{t}{\frac14t^2}=0.$$
With $x=e^{-t}$ (i.e., $t=-\ln x$), this becomes
$$\lim_{x\to 0^+} x\ln x=0$$
and therefore

$$\lim_{x\to0^+} x^x=\lim_{x\to 0^+} e^{x\ln x}=e^{\lim_{x\to 0^+} x\ln x} =e^0=1.$$
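A small numerical check of the original limit from the right (the sample values of $x$ are arbitrary); as the squeeze earlier in the answer predicts, the value tracks $x^x$ and tends to $1$:

```
import math

def g(x):
    return (math.sqrt(2 * x + 1) - (1 - 3 * x) ** (1.0 / 3.0)) ** x

for x in (1e-1, 1e-3, 1e-6, 1e-9):
    print(x, g(x), x ** x)    # g(x) sits between x**x and 4**x * x**x, and both tend to 1
```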


calculus - Definite Integral of an infinitesimal



I did not study math, but have some foundations in it. I have been looking through some books on nonstandard analysis, and have (what I consider to be) a pretty simple question which I haven't been able to answer through my reading thus far.




Let $\epsilon$ be an infinitesimal as described by Abraham Robinson. Consider the expression:



$\int_{a}^{b} \epsilon$



1) Does this expression even make sense?



2i) If it does make sense, is there a way of calculating what it evaluates to?



2ii) If it doesn't make sense, is there another (rigorous) discipline which can evaluate the quantity?




I would greatly appreciate any direct answers or references to (reasonably easy to read) materials.


Answer



It makes as much sense as, say, $\int_a^b 2$ does — or $\int_a^b 2 \rm{d}x$ if the former looks too weird. As with the example just shown, in $\int_a^b \epsilon$ you're using $\epsilon$ as a shorthand for the constant function $x\mapsto\epsilon\colon[a,b]\to {^*}\Bbb R$. The value of the expression is $(b-a)\epsilon$.


Sunday 20 January 2013

linear algebra - Relating the coefficients of the characteristic polynomial of a symmetric matrix to the determinants of its principal submatrices



I've been thinking about this this problem:



Let $M$ be a symmetric matrix. Recall that the eigenvalues of $M$ are the roots of the characteristic polynomial of M:



$p(x) := \det(xI-M) = \prod\limits_{i=1}^n (x-\mu_i)$



Write




$p(x) = \sum\limits_{k=0}^n x^{n-k} c_k (-1)^k$



Prove that



$c_k = \sum\limits_{S \subseteq [n], |S|=k} \det(M(S,S)). $



Here, we write $[n]$ to denote the set $\{1, 2, \ldots, n\}$, and $M(S,S)$ to denote the submatrix of $M$ with rows and columns indexed by $S$.



I am a little confused on how to relate the coefficients back to the determinants of submatrices of $M$.




I think it's not too tricky using the product formulation for the characteristic polynomial that the coefficients $c_k$ are the sum of all k-wise products of eigenvalues. Each coefficient $c_k$ then corresponds to the sum of all determinants of principal submatrices of size $k$ for the matrix $\Lambda$ if you write $M = U\Lambda U^T$ via the spectral theorem. Since $U^T = U^{-1}$, $M$ is similar to $\Lambda$. Can you then say that since it is true for the diagonal matrix, it is true of the matrix itself because of similarity? I know the eigenvalues don't necessarily correspond to the respective submatrices of $M$ via Cauchy's interlacing theorem, so it's confusing to me how to jump back to talking about submatrices of $M$.



Or do you need to use some formulation of the determinant to show this?



Thanks in advance!


Answer



The detour through the eigenvalues will not help.
Using the notations of your link, recall that $[n]=\{1,\ldots,n\}$ and let $M(S,T)$ denote the matrix obtained from $M$ by only keeping the rows with index in $S$ and columns with index in $T$. The first thing you need is
$$
\frac{\partial}{\partial M_{ij}}{\rm det}(M)=(-1)^{i+j}\,
{\rm det}\big(\, M(\, [n]\backslash\{i\},\ [n]\backslash\{j\}\,)\, \big)
$$

which follows from the expansion with respect to say the $j$-th column.
Then, express the $c_k$'s as derivatives at zero of the characteristic polynomial. Finally, compute these derivatives using the multivariate chain rule and the formula above.
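Here is a small numerical sketch of the statement being proved, assuming `numpy` is available (the random symmetric $5\times5$ matrix and the seed are arbitrary test choices): the coefficients read off from the characteristic polynomial are compared with the sums of determinants of principal submatrices.

```
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
M = (A + A.T) / 2                        # an arbitrary symmetric test matrix

# np.poly(M) returns the coefficients of det(xI - M), highest power first,
# so the coefficient of x^(n-k) is (-1)^k c_k in the notation of the question.
a = np.poly(M)

for k in range(1, n + 1):
    c_k = (-1) ** k * a[k]
    minor_sum = sum(np.linalg.det(M[np.ix_(S, S)])
                    for S in itertools.combinations(range(n), k))
    print(k, c_k, minor_sum)             # the two columns should agree
```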


binomial coefficients - Prove that $sumlimits_{k=0}^rbinom{n+k}k=binom{n+r+1}r$ using combinatoric arguments.





Prove that $\binom{n+0}0 + \binom{n+1}1 +\binom{n+2}2
+\ldots+\binom{n+r}r = \binom{n+r+1}r$ using combinatoric arguments.




(EDITED)



I want to see if I understood Brian M. Scott's approach so I will try again using an analogical approach.




$\binom{n+0}0 + \binom{n+1}1 +\binom{n+2}2
+\ldots+\binom{n+r}r = \binom{n+r+1}r$ can be rewritten as $\binom{n+0}n + \binom{n+1}n +\binom{n+2}n
+\ldots+\binom{n+r}n = \binom{n+r+1}{n+1}$



We can use the analogy of people lining up to buy tickets to see a concert. Let's say there are only $n+1$ tickets available for sale. "Choosing" who gets to attend the concert can be done in two ways.



The first way (the RHS), we have $n+1$ number of tickets for sale but $n+r+1$ people who wants to buy the tickets. Thus, there are $\binom{n+r+1}{n+1}$ ways to "choose" who gets to attend the concert.



The second way (the LHS) is to select the last person in line to buy the first ticket (I think this was the step I missed in my first attempt). Then, we choose $n$ from the remaining $n+r$ people to buy tickets. Or we can ban the last person in line from buying a ticket and choose the second-to-last person in line to buy the first ticket. Then, we have $\binom{n+r-1}n$ ways. This continues until we reach the case where we choose the $(n+1)$-th person in line to buy the first ticket (banning everyone behind him/her from buying a ticket). This can be done in $\binom{n+0}n$ ways.




Therefore, adding up each case on the LHS is equal to the RHS.


Answer



You’re on the right track, but you have a discrepancy between choosing $r$ from $n+r+1$ on the right, and choosing $r$ from $n+r$ on the left, so what you have doesn’t quite work. Here’s an approach that does work and is quite close in spirit to what you’ve tried.



Let $A=\{0,1,\ldots,n+r\}$; clearly $|A|=n+r+1$, so $\binom{n+r+1}r$ is the number of $r$-sized subsets of $A$. Now let’s look at a typical term on the left-hand side. The term $\binom{n+k}k$ is the number of ways to choose a $k$-sized subset of $\{0,1,\ldots,n+k-1\}$; how does that fit in with choosing an $r$-sized subset of $A$?



Let $n+k$ be the largest member of $A$ that we do not pick for our $r$-sized subset; then we’ve chosen all of the $(n+r)-(n+k)=r-k$ members of $A$ that are bigger than $n+k$, so we must fill out our set by choosing $k$ members of $A$ that are smaller than $n+k$, i.e., $k$ members of the set $\{0,1,\ldots,n+k-1\}$. In other words, there are $\binom{n+k}k$ ways to choose our $r$-sized subset of $A$ so that $n+k$ is the largest member of $A$ that is not in our set. And that largest number not in our set cannot be any smaller than $n$, so the choices for it are $n+0,\ldots,n+r$. Thus, $\sum_{k=0}^r\binom{n+k}k$ counts the $r$-sized subsets of $A$ by classifying them according to the largest member of $A$ that they do not contain.







It may be a little easier to see what’s going on if you make use of symmetry to rewrite the identity as



$$\sum_{k=0}^r\binom{n+k}n=\binom{n+r+1}{n+1}\;.\tag{1}$$



Let $A$ be as above; the right-hand side of $(1)$ is clearly the number of $(n+1)$-sized subsets of $A$. Now let $S$ be an arbitrary $(n+1)$-sized subset of $A$. The largest element of $S$ must be one of the numbers $n,n+1,\ldots,n+r$, i.e., one of the numbers $n+k$ for $k=0,\ldots,r$. And if $n+k$ is the largest element of $S$, there are $\binom{n+k}n$ ways to choose the $n$ smaller members of $S$. Thus, the left-hand side of $(1)$ also counts the $(n+1)$-sized subsets of $A$, classifying them according to their largest elements.



The relationship between the two arguments is straightforward: the sets that I counted in the first argument are the complements in $A$ of the sets that I counted in the second argument. There’s a bijection between the $r$-subsets of $A$ and their complementary $(n+1)$-subsets of $A$, so your identity and $(1)$ are essentially saying the same thing.
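A tiny exact spot-check of the identity for a few $(n,r)$ pairs (arbitrary sample values, using Python's integer binomial coefficients) is reassuring, although of course it proves nothing:

```
from math import comb

for n in (3, 5, 10):
    for r in (0, 1, 4, 7):
        lhs = sum(comb(n + k, k) for k in range(r + 1))
        rhs = comb(n + r + 1, r)
        assert lhs == rhs
print("identity holds for all sampled (n, r) pairs")
```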


Saturday 19 January 2013

algebra precalculus - What are the Laws of Rational Exponents?

On Math SE, I've seen several questions which relate to the following. By abusing the laws of exponents for rational exponents, one can come up with any number of apparent paradoxes, in which a number seems to be shown as equal to its opposite (negative). Possibly the most concise example:




$-1 = (-1)^1 = (-1)^\frac{2}{2} = (-1)^{2 \cdot \frac{1}{2}} = ((-1)^2)^\frac{1}{2} = (1)^\frac{1}{2} = \sqrt{1} = 1$



Of the seven equalities in this statement, I'm embarrassed to say that I'm not totally sure which one is incorrect. Restricting the discussion to real numbers and rational exponents, we can look at some college algebra/precalculus books and find definitions like the following (here, Ratti & McWaters, Precalculus: a right triangle approach, section P.6):



Ratti's definition of rational exponents
Ratti's properties of rational exponents



The thing that looks the most suspect in my example above is the 4th equality, $(-1)^{2 \cdot \frac{1}{2}} = ((-1)^2)^\frac{1}{2}$, which seems to violate the spirit of Ratti's definition of rational exponents ("no common factors")... but technically, that translation from rational exponent to radical expression was not used at this point. Rather, we're still only manipulating rational exponents, which seems fully compliant with Ratti's 2nd property: $(a^r)^s = a^{rs}$, where indeed "all of the expressions used are defined". The rational-exponent-to-radical-expression switch (via the rational exponent definition) doesn't actually happen until the 6th equality, $(1)^\frac{1}{2} = \sqrt{1}$, and that seems to undeniably be a true statement. So I'm a bit stumped at exactly where the falsehood lies.



We can find effectively identical definitions in other books. For example, in Sullivan's College Algebra, his definition is (sec. R.8): "If $a$ is a real number and $m$ and $n$ are integers containing no common factors, with $n \ge 2$, then: $a^\frac{m}{n} = \sqrt[n]{a^m} = (\sqrt[n]{a})^m$, provided that $\sqrt[n]{a}$ exists"; and he briefly states that "the Laws of Exponents hold for rational exponents", but all examples are restricted to positive variables only. OpenStax College Algebra does the same (sec. 1.3): "In these cases, the exponent must be a fraction in lowest terms... All of the properties of exponents that we learned for integer exponents also hold for rational exponents."




So what exactly are the restrictions on the Laws of Exponents in the real-number context, with rational exponents? As one example, is there a reason missing from the texts above why $(-1)^{2 \cdot \frac{1}{2}} = ((-1)^2)^\frac{1}{2}$ is a false statement, or is it one of the other equalities that fails?
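One way to see the fourth equality fail concretely is to compute with principal complex powers, which is the convention Wolfram-style tools use; a short sketch (Python, standard library only) follows. The point is purely illustrative: $((-1)^2)^{1/2}$ and $((-1)^{1/2})^2$ give different numbers, so Rule 4, $(a^r)^s = (a^s)^r$, cannot hold for a negative base.

```
x = complex(-1.0)

print((x ** 2) ** 0.5)       # ((-1)^2)^(1/2)   -> 1
print((x ** 0.5) ** 2)       # ((-1)^(1/2))^2   -> -1 (up to rounding), so Rule 4 already fails here
print(x ** (2.0 / 3.0))      # principal value of (-1)^(2/3) -> about -0.5 + 0.866i, as Wolfram Alpha reports
```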






Edit: Some literature that discusses this issue:




  • Goel, Sudhir K., and Michael S. Robillard. "The Equation: $-2 = (-8)^\frac{1}{3} = (-8)^\frac{2}{6} = [(-8)^2]^\frac{1}{6} = 2$." Educational Studies in Mathematics 33.3 (1997): 319-320.


  • Tirosh, Dina, and Ruhama Even. "To define or not to define: The case of $(-8)^\frac{1}{3}$." Educational Studies in Mathematics 33.3 (1997): 321-330.



  • Choi, Younggi, and Jonghoon Do. "Equality Involved in 0.999... and $(-8)^\frac{1}{3}$" For the Learning of Mathematics 25.3 (2005): 13-36.


  • Woo, Jeongho, and Jaehoon Yim. "Revisiting 0.999... and $(-8)^\frac{1}{3}$ in School Mathematics from the Perspective of the Algebraic Permanence Principle." For the Learning of Mathematics 28.2 (2008): 11-16.


  • Gómez, Bernardo, and Carmen Buhlea. "The ambiguity of the sign √." Proceedings of the Sixth Congress of the European Society for Research in Mathematics Education. 2009.


  • Gómez, Bernardo. "Historical conflicts and subtleties with the √ sign in textbooks." 6th European Summer University on the History and Epistemology in Mathematics Education. HPM: Vienna University of Technology, Vienna, Austria (2010).


real analysis - For $\alpha>0$, $\int_{0}^{+\infty}\frac{t-\sin{t}}{t^\alpha}\,dt$ converges iff $\alpha\in (2,4)$



I can't prove that for $\alpha>0$, $I_\alpha=\int_{0}^{+\infty}\frac{t-\sin{t}}{t^\alpha}\,dt$ converges iff $\alpha\in (2,4)$. Here's my attempt:



1) An easy but important remark: since $\sin{t}\le t$ for $t\ge 0$, we're dealing with a nonnegative integrand.



2) I split the integral into $\int_{0}^{1}\frac{t-\sin{t}}{t^\alpha}\,dt$ and $\int_{1}^{+\infty}\frac{t-\sin{t}}{t^\alpha}\,dt$ and used the inequality $\frac{t^3}{3!}-\frac{t^5}{5!}\le t-\sin{t}\le \frac{t^3}{3!}$, but it didn't work.




3) This is the only thing that led me to a part of the answer: I noticed that $\int_0^{+\infty}\frac{dt}{t^{\alpha-1}}$ is always divergent. Thus, if $I_\alpha$ converges, $\int_0^\infty\frac{\sin{t}}{t^\alpha}\,dt$ diverges. But we know that $\int_1^\infty\frac{\sin{t}}{t^\alpha}\,dt$ converges for any $\alpha>0$. Thus $\int_0^\infty\frac{\sin{t}}{t^\alpha}\,dt$ diverges iff $\int_0^1\frac{\sin{t}}{t^\alpha}\,dt$ diverges. Since $\sin{t}\sim t$ near $0^+$, both positive, $\int_0^1\frac{\sin{t}}{t^\alpha}\,dt$ diverges iff $\alpha>2$. Hence we showed that $(I_\alpha\,\text{converges})\Rightarrow\alpha>2$.



Could you please help me? Thank you in advance!


Answer



The idea to split the integral is fine.



First, note that the integral $I_1$ as given by



$$I_1=\int_0^1 \frac{t-\sin(t)}{t^\alpha}\,dt$$




converges for $\alpha<4$ (and diverges for $\alpha \ge 4)$ since the integrand is $O\left(t^{3-\alpha}\right)$ as $t \to 0$.



Next, note that the integral $I_2$ as given by



$$\int_1^\infty \frac{t-\sin(t)}{t^\alpha}\,dt$$



converges for $\alpha >2$ (and diverges for $\alpha \le 2$) since the integrand is $O\left(t^{1-\alpha}\right)$ as $t\to \infty$.



Putting it together, we have that the integral of interest $I=I_1+I_2$ converges for all $\alpha \in (2,4)$ and diverges elsewhere.


calculus - Proof that a function is bounded



The question :




Let $f:[1,\infty)\to \mathbb{R}$
be a continuous function such that $\underset{x\rightarrow\infty}{\lim}f(x)=L$



Prove that the function is bounded.



My try :



By definition, a continuous function $f:[1,\infty)\rightarrow\mathbb{R}$ satisfies:


  1. $f$ is continuous on $(1,\infty)$, i.e. $\forall x_{0}>1:\ \underset{x\rightarrow x_{0}}{\lim}f(x)=f(x_{0})$;


  2. $f$ is continuous at $1^{+}$, i.e. $\underset{x\rightarrow1^{+}}{\lim}f(x)=f(1)$.



As stated in the question's conditions: $\underset{x\rightarrow\infty}{\lim}f(x)=L\iff\forall\varepsilon>0\,\exists M\in\mathbb{R}: x>M\rightarrow\left|f(x)-L\right|<\varepsilon$




Set $\varepsilon=\left|f(1)-L\right|$



Then there exists $M$
such that $x>M\rightarrow\left|f(x)-L\right|\leq\left|f(1)-L\right|$
including when $x=1$



Set $\left|f(1)-L\right|=K$



This also implies that $M\geq1$.




Thus we get $x\geq1\rightarrow\left|f(x)-L\right|\leq K \iff x\geq1\rightarrow L-K\leq f(x)\leq L+K$



Thus $f$
is bounded.



However, I feel that this proof doesn't work, and I do not fully understand what I have actually done here (I just tried to replicate my professor's lecture notes).



I want to understand this question and the correct way to answer it.
Assistance will be greatly appreciated.


Answer




Since the comments seem to indicate that the OP is not familiar with the theorem that a continuous function $f$ is bounded on a compact set, here is a simple proof for the special case of a closed bounded interval $[a,b] \subset \mathbb R$.



Suppose for a contradiction that $f$ is not bounded on $[a,b]$. Therefore it must be either unbounded above or unbounded below. Without loss of generality, assume $f$ is unbounded above. (Otherwise replace $f$ with $-f$.)



Now divide the interval into two subintervals, $[a, (a+b)/2]$ and $[(a+b)/2, b]$. Now $f$ must be unbounded on one of these subintervals. Repeating this procedure, we identify a sequence of closed bounded intervals $[a,b] = I_0 \supset I_1 \supset I_2 \supset \cdots$ such that the length of $I_n$ is $(b-a)/2^n$, and $f$ is unbounded on each of these intervals.



This means that we can choose points $x_0 \in I_0$, $x_1 \in I_1$, $x_2 \in I_2$, etc. such that $f(x_n) > n$ for every $n$.



Now each $I_n$ contains every $x_k$ for $k \geq n$, and from this we can easily conclude that $x_k$ converges to some limit $x$. Since each $x_n$ is in $[a,b]$ and $[a,b]$ is closed, it contains all of its limit points, hence $x\in [a,b]$.




By continuity of $f$, we must have
$$\lim_{n \to \infty}f(x_n) = f(x)$$
but this is impossible since $f(x_n) > n$ for every $n$.



Our assumption that $f$ is unbounded on $[a,b]$ is untenable, so $f$ must be bounded after all.


Friday 18 January 2013

List Table(s) of Series Here

I've been interested in series expansions of all types of mathematical functions. I was wondering if anyone has ever created a large list of all types of series. For example, Wolfram MathWorld's Maclaurin Series page lists a bunch of those series. Herbert S. Wilf's Generatingfunctionology lists some power series at the end of section 2.5. Wikipedia lists some generalizations of series here and some Taylor series here.



I was wondering if there is a giant, somewhat comprehensive list of series somewhere. I'm especially interested in basic mathematical constants and elementary functions, but I'd like to have access to as many different series as possible. Could someone help me find some links or books with series?



Sorry if this seems hopelessly vague, but I would really like to have access to tons of series.

Thursday 17 January 2013

integration - Challenging Logarithmic Integral $\int_0^1\frac{\ln^3(1-x)\ln(1+x)}{x}dx$

Challenging Integral:




\begin{align}
I=\int_0^1\frac{\ln^3(1-x)\ln(1+x)}{x}dx&=6\operatorname{Li}_5\left(\frac12\right)+6\ln2\operatorname{Li}_4\left(\frac12\right)-\frac{81}{16}\zeta(5)-\frac{21}{8}\zeta(2)\zeta(3)\\&\quad+\frac{21}8\ln^22\zeta(3)-\ln^32\zeta(2)+\frac15\ln^52
\end{align}





The way I computed this integral is really long, as it's based on values of tough alternating Euler sums which are themselves long to calculate. I hope we can find other approaches that save us such tedious calculations. Anyway, here is my approach:



Using the identity from this solution: $\displaystyle\int_0^1 x^{n-1}\ln^3(1-x)\ dx=-\frac{H_n^3+3H_nH_n^{(2)}+2H_n^{(3)}}{n}$



Multiplying both sides by $\frac{(-1)^{n-1}}{n}$ then summing both sides from $n=1$ to $n=\infty$, gives:
\begin{align}
I&=\int_0^1\frac{\ln^3(1-x)}{x}\sum_{n=1}^\infty-\frac{(-x)^{n}}{n}dx=\int_0^1\frac{\ln^3(1-x)\ln(1+x)}{x}dx\\
&=\sum_{n=1}^\infty\frac{(-1)^nH_n^3}{n^2}+3\sum_{n=1}^\infty\frac{(-1)^nH_nH_n^{(2)}}{n^2}+2\sum_{n=1}^\infty\frac{(-1)^nH_n^{(3)}}{n^2}
\end{align}




We have:
\begin{align}
\sum_{n=1}^\infty\frac{(-1)^nH_n^3}{n^2}&=-6\operatorname{Li}_5\left(\frac12\right)-6\ln2\operatorname{Li}_4\left(\frac12\right)+\ln^32\zeta(2)-\frac{21}{8}\ln^22\zeta(3)\\&\quad+\frac{27}{16}\zeta(2)\zeta(3)+\frac94\zeta(5)-\frac15\ln^52
\end{align}



\begin{align}
\sum_{n=1}^\infty\frac{(-1)^nH_nH_n^{(2)}}{n^2}&=4\operatorname{Li}_5\left(\frac12\right)+4\ln2\operatorname{Li}_4\left(\frac12\right)-\frac23\ln^32\zeta(2)+\frac74\ln^22\zeta(3)\\&\quad-\frac{15}{16}\zeta(2)\zeta(3)-\frac{23}8\zeta(5)+\frac2{15}\ln^52
\end{align}



$$\sum_{n=1}^\infty\frac{(-1)^nH_n^{(3)}}{n^2}=\frac{21}{32}\zeta(5)-\frac34\zeta(2)\zeta(3)$$




The proof of the first and second sum can be found here and the third sum can be found here.



By substituting these three sums, we get the closed form of $I$.
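As a cross-check of the quoted closed form, here is a rough numerical comparison using only the standard library (the quadrature range, step counts, and series truncations are arbitrary accuracy choices; the substitution $x = 1-e^{-t}$ removes the logarithmic singularity at $x=1$). If the closed form above is correct, the two printed numbers should agree to several digits.

```
import math

def simpson(f, a, b, m=8000):
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, m // 2))
    return s * h / 3

def integrand_t(t):
    # x = 1 - exp(-t): then ln(1 - x) = -t and dx = exp(-t) dt
    if t == 0.0:
        return 0.0
    x = -math.expm1(-t)
    return (-t) ** 3 * math.log(1.0 + x) / x * math.exp(-t)

numeric = simpson(integrand_t, 0.0, 40.0)      # the integrand is negligible beyond t = 40

ln2 = math.log(2.0)
zeta2 = math.pi ** 2 / 6.0
zeta3 = sum(1.0 / k ** 3 for k in range(1, 200001))
zeta5 = sum(1.0 / k ** 5 for k in range(1, 2001))
li4_half = sum(0.5 ** k / k ** 4 for k in range(1, 200))   # Li_4(1/2) by its defining series
li5_half = sum(0.5 ** k / k ** 5 for k in range(1, 200))   # Li_5(1/2) likewise

closed = (6 * li5_half + 6 * ln2 * li4_half
          - 81.0 / 16.0 * zeta5 - 21.0 / 8.0 * zeta2 * zeta3
          + 21.0 / 8.0 * ln2 ** 2 * zeta3 - ln2 ** 3 * zeta2 + ln2 ** 5 / 5.0)

print(numeric, closed)    # both should come out near -4.236
```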






Another attempt is by using the rule (see here):
$$\int_0^1 \frac{\ln^a(1-x)\ln(1+x)}{x}dx=(-1)^a a! \sum_{n=1}^\infty\frac{H_n^{(a+1)}}{n2^n}$$




We get $\quad\displaystyle I=-6\sum_{n=1}^\infty\frac{H_n^{(4)}}{n2^n}$, but this sum is really hard to crack, and I think I made it more complicated this way. All approaches are appreciated.



By the way, the last sum was proposed by Cornel last year on his FB page here but he has not revealed his solution yet.



Thanks.

calculus - find this limit without l'hopital's rule



I can't figure out how to get the limit in this problem. I know that $\lim_{x\to 0}{1-\cos x \over x}=0$, but I'm not allowed to use L'Hopital's Rule. I also already know that the answer is $-{25 \over 36}$, but I don't know the steps in between. I've already tried multiplying by the conjugates of both the numerator and the denominator, but neither is getting me anywhere close. Here is the question:



$$\lim_{x\to 0}{1-\cos 5x \over \cos 6x-1}$$



Answer



Outline: Our expression is equal to
$$-\frac{1+\cos 6x}{1+\cos 5x}\cdot\frac{1-\cos^25x}{1-\cos^2 6x},\tag{1}$$
which is
$$-\frac{1+\cos 6x}{1+\cos 5x}\cdot\frac{\sin^2 5x}{\sin^2 6x}.\tag{2}$$
To find
$$\lim_{x\to 0} \frac{\sin 5x}{\sin 6x},$$ rewrite as
$$\frac{5}{6}\lim_{x\to 0} \frac{\frac{\sin 5x}{5x}}{\frac{\sin 6x}{6x}}.$$
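And a quick numerical check that the original ratio indeed approaches $-\frac{25}{36}\approx -0.6944$ (sample values of $x$ chosen arbitrarily):

```
import math

for x in (0.1, 0.01, 0.001, 1e-5):
    print(x, (1 - math.cos(5 * x)) / (math.cos(6 * x) - 1))   # approaches -25/36 = -0.69444...
```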


Wednesday 16 January 2013

How to find $ f(6)$ given the following functional equation .




Let $f:\Bbb R\to\Bbb R$ be a function such that $\lvert f(x)-f(y)\rvert\le 6\lvert x-y\rvert^2$ for all $x,y\in\Bbb R$. If $f(3)=6$ then $f(6)$ equals:





How do I calculate this? I'm having trouble with the inequality above.


Answer



Hint: $f$ is differentiable at all points, and its derivative can be calculated explicitly.


calculus - Integral $\int_0^1\frac{\ln\left(x+\sqrt2\right)}{\sqrt{2-x}\,\sqrt{1-x}\,\sqrt{\vphantom{1}x}}\mathrm dx$




Is there a closed form for the integral
$$\int_0^1\frac{\ln\left(x+\sqrt2\right)}{\sqrt{2-x}\,\sqrt{1-x}\,\sqrt{\vphantom{1}x}}\mathrm dx.$$
I do not have a strong reason to be sure it exists, but I would be very interested to see an approach to find one if it does exist.


Answer



For $a > 0$, let $b = \frac12 + \frac1a$ and $I(a)$ be the integral
$$I(a) = \int_0^1 \frac{\log(a+x)}{\sqrt{x(1-x)(2-x)}}dx$$
Substituting $x$ by $\frac{1}{p+\frac12}$, it is easy to check that we can rewrite $I(a)$ as



$$
I(a)
= -\sqrt{2}\int_\infty^{\frac12}\frac{\log\left[a (p + b)/(p + \frac12)\right]}{\sqrt{4p^3 - p}} dp
$$
Let $\wp(z), \zeta(z)$ and $\sigma(z)$ be the Weierstrass elliptic, zeta and sigma functions associated with the ODE:



$$\wp'(z)^2 = 4\wp(z)^3 - g_2 \wp(z) - g_3\quad\text{ for }\quad g_2 = 1 \;\text{ and }\; g_3 = 0.$$



In terms of $\wp(z)$, we can express $I(a)$ as



$$I(a)
= \sqrt{2}\int_0^\omega \log\left[a \left(\frac{\wp(z) + b}{\wp(z) + \frac12}\right)\right] dz
=
\frac{1}{\sqrt{2}}\int_{-\omega}^\omega \log\left[a \left(\frac{\wp(z) + b}{\wp(z) + \frac12}\right)\right] dz
$$



where $\;\displaystyle \omega = \int_\frac12^\infty \frac{dp}{\sqrt{4p^3 - p}} = \frac{\pi^{3/2}}{2\Gamma\left(\frac34\right)^2}\;$ is the half period for $\wp(z)$ lying on the real axis. Since $g_3 = 0$, the double poles of $\wp(z)$ lie on a square lattice
$\mathbb{L} = \{\; 2\omega ( m + i n ) : m, n \in \mathbb{Z} \;\}$ and we can pick
the other half period $\;\omega'$ as $\;i\omega$.



Notice $\wp(\pm i \omega) = -\frac12$. If we pick $u \in (0,\omega)$ such that $\wp(\pm i u) = -b$, the function inside the square brackets in the above integral is an elliptic function with zeros at $\pm i u + \mathbb{L}$ and poles at $\pm i \omega + \mathbb{L}$. We can express $I(a)$ in terms of $\sigma(z)$ as



$$I(a) = \frac{1}{\sqrt{2}}\int_{-\omega}^\omega \log\left[ C\frac{\sigma(z-iu)\sigma(z+iu)}{\sigma(z-i\omega)\sigma(z+i\omega)}\right] dz
\quad\text{ where }\quad
C = a\left(\frac{\sigma(-i\omega)\sigma(i\omega)}{\sigma(-iu)\sigma(iu)}\right).
$$



Let $\varphi_{\pm}(\tau)$ be the integral $\displaystyle \int_{-\omega}^\omega \log\sigma(z+\tau) dz$ for $\Im(\tau) > 0$ and $< 0$ respectively. Notice $\sigma(z)$
has a simple zero at $z = 0$. We will choose the branch cut of $\log \sigma(z)$ there
to be the ray along the negative real axis.




When we move $\tau$ around, as long as we don't cross the real axis, the line segment $[\tau-\omega,\tau+\omega]$ won't touch the branch cut and everything will be
well behaved. We have



$$\begin{align}
& \varphi_{\pm}(\tau)''' = -\wp(\tau+\omega) + \wp(\tau-\omega) = 0\\
\implies &
\varphi_{\pm}(\tau)'' = \zeta(\tau+\omega) - \zeta(\tau-\omega) \quad\text{ is a constant}\\
\implies &
\varphi_{\pm}(\tau)'' = 2 \zeta(\omega)\\
\implies &
\varphi_{\pm}(\tau) = \zeta(\omega) \tau^2 + A_{\pm} \tau + B_{\pm} \quad\text{ for some constants } A_{\pm}, B_{\pm}
\end{align}
$$



Let $\eta = \zeta(\omega)$ and $\eta' = \zeta(\omega')$. For elliptic functions with general $g_2, g_3$, there is always an identity
$$\eta \omega' - \omega \eta' = \frac{\pi i}{2}$$
as long as $\omega'$ is chosen to satisfy $\Im(\frac{\omega'}{\omega}) > 0$.
In our case, $\omega' = i\omega$ and the symmetry of $\mathbb{L}$ forces $\eta = \frac{\pi}{4\omega}$. This implies




$$\varphi_{\pm}(\tau) = \frac{\pi}{4\omega}\tau^2 + A_{\pm}\tau + B_{\pm}$$



Because of the branch cut, $A_{+} \ne A_{-}$ and $B_{+} \ne B_{-}$. In fact, we can evaluate
their differences as



$$\begin{align}
A_{+} - A_{-} &= \lim_{\epsilon\to 0}
\left( -\log\sigma(i\epsilon-\omega) + \log\sigma(-i\epsilon-\omega) \right) = - 2 \pi i\\
B_{+} - B_{-} &= \lim_{\epsilon\to 0}
\int_{-\omega}^0 \left( \log\sigma(i\epsilon+z) - \log\sigma(-i\epsilon+z) \right) dz = 2\pi i\omega
\end{align}
$$
Applying this to our expression for $I(a)$, we get



$$\begin{align}
I(a)
&= \frac{1}{\sqrt{2}}\left(2\omega\log C + \varphi_{-}(-iu)+\varphi_{+}(iu)-\varphi_{-}(-i\omega)-\varphi_{+}(i\omega)\right)\\
&= \frac{1}{\sqrt{2}}\left\{
2\omega\log\left[a\left(\frac{\sigma(-i\omega)\sigma(i\omega)}{\sigma(-iu)\sigma(iu)}\right)\right] + \frac{\pi}{2\omega}(\omega^2 - u^2) + 2\pi(u-\omega)
\right\}
\end{align}
$$



Back to our original problem where $a = \sqrt{2} \iff b = \frac{1+\sqrt{2}}{2}$. One can use the duplication formula for $\wp(z)$ to verify $u = \frac{\omega}{2}$. From this, we find:
$$I(\sqrt{2}) =
\sqrt{2}\omega\left\{
\log\left[\sqrt{2}\left(\frac{\sigma(-i\omega)\sigma(i\omega)}{\sigma(-i\frac{\omega}{2})\sigma(i\frac{\omega}{2})}\right)\right] - \frac{5\pi}{16}\right\}
$$



It is known that $| \sigma(\pm i\omega) | = e^{\pi/8}\sqrt[4]{2}$. Furthermore, we have the identity:



$$\wp'(z) = - \frac{\sigma(2z)}{\sigma(z)^4}
\quad\implies\quad
\left|\sigma\left( \pm i\frac{\omega}{2} \right)\right| = \left|\frac{\sigma(\pm i \omega)}{\wp'\left(\pm i\frac{\omega}{2}\right)}\right|^{1/4} = \left(\frac{\sigma(\omega)}{1+\sqrt{2}}\right)^{1/4}
$$
Combining all these, we get a result matching the other answer.



$$\begin{align}
I(\sqrt{2})
&= \sqrt{2}\omega\left\{\log\left[\sqrt{2}\sigma(\omega)^{3/2}\sqrt{1+\sqrt{2}}\right] - \frac{5\pi}{16}\right\}\\
&= \frac{\pi^{3/2}}{\sqrt{2}\Gamma\left(\frac34\right)^2}\left\{\frac78\log 2 + \frac12\log(\sqrt{2}+1) - \frac{\pi}{8} \right\}
\end{align}$$
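For completeness, a rough numerical comparison of the original integral with this closed form (the substitution $x=\sin^2\theta$, a Simpson quadrature with an arbitrary step count, and `math.gamma` for $\Gamma(3/4)$ are the only ingredients):

```
import math

def simpson(f, a, b, m=20000):
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, m // 2))
    return s * h / 3

# With x = sin^2(t):  dx / sqrt(x(1-x)) = 2 dt, so the integral becomes
# int_0^{pi/2} 2 log(sin^2 t + sqrt(2)) / sqrt(2 - sin^2 t) dt, which has no endpoint singularities.
g = lambda t: 2.0 * math.log(math.sin(t) ** 2 + math.sqrt(2.0)) / math.sqrt(2.0 - math.sin(t) ** 2)
numeric = simpson(g, 0.0, math.pi / 2)

closed = (math.pi ** 1.5 / (math.sqrt(2.0) * math.gamma(0.75) ** 2)
          * (7.0 / 8.0 * math.log(2.0) + 0.5 * math.log(math.sqrt(2.0) + 1.0) - math.pi / 8.0))
print(numeric, closed)    # both should be about 1.716
```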


order theory - Prove that, for all ordered sets P, Q and R,



$\langle P \rightarrow \langle Q \rightarrow R\rangle \rangle \cong \langle P \times Q \rightarrow R\rangle $




where $\langle Q \rightarrow R\rangle$ is the set of all order-preserving maps from $Q$ to $R$; $\cong$ is the order-isomorphism symbol; $\times$ is the Cartesian product symbol.



Besides, I am not good at building bijections to prove isomorphisms; I hope you can teach me some techniques for this.


Answer



Suppose that $\varphi\in\langle P\to\langle Q\to R\rangle\rangle$; then for each $p\in P$, $\varphi(p)$ is an order-preserving map from $Q$ to $R$. Define



$$\widehat\varphi:P\times Q\to R:\langle p,q\rangle\mapsto\big(\varphi(p)\big)(q)\;.$$



Show that the map $\varphi\mapsto\widehat\varphi$ is the desired isomorphism.


real analysis - Prove the following limit exists



I'm trying to understand why $\lim\limits_{\varepsilon \rightarrow 0} \left( \int_{|x| \geq \varepsilon} \frac{\varphi(x)}{x^2} dx - 2 \frac{\varphi(0)}{\varepsilon} \right)$ exists, where $\varphi \in C_c^{\infty}(\mathbb{R})$. The motivation for my question is a linear functional which is defined on the space of test functions by this limit, where $\varphi$ is the input of the linear functional. I would be able to show that the limit of the integral exists, but $\lim\limits_{\varepsilon \rightarrow 0} \frac{\varphi(0)}{\varepsilon}$ does not necessarily exist for every $\varphi \in C_c^{\infty}(\mathbb{R})$, so I need to compute the whole limit instead of computing the limit piece by piece, but I have no idea how to do this. I would like to understand why $\lim\limits_{\varepsilon \rightarrow 0} \left( \int_{|x| \geq \varepsilon} \frac{\varphi(x)}{x^2} dx - 2 \frac{\varphi(0)}{\varepsilon} \right)$ exists.


Answer



The Maclaurin series of $\varphi$ begins $a+bx+cx^2+\cdots$. If $a=b=0$,
of course there is no problem. We'd like to reduce to this case by subtracting off

$a+bx$, but that does not have compact support. But we can get around that by
taking a compactly supported and smooth $\psi$ which is equal to $1$ on the
interval $[-1,1]$. Then
$$\varphi=\varphi_1+\varphi_2$$
where $\varphi_1(x)=(a+bx)\psi(x)$
and $\varphi_2\sim cx^2$ as $x\to0$. So we can reduce to considering $\varphi_1(x)$.



For $0<\epsilon<1$,
\begin{align}
\int_{|x|\ge\epsilon}\frac{\varphi_1(x)}{x^2}\,dx
&=\int_{|x|\ge1}\frac{\varphi_1(x)}{x^2}\,dx+\int_{\epsilon}^1\frac{a+bx}{x^2}\,dx
+\int_{-1}^{-\epsilon}\frac{a+bx}{x^2}\,dx\\
&=C+\int_{\epsilon}^1\frac{2a}{x^2}\,dx=C'+\frac{2\varphi_1(0)}{\epsilon}
\end{align}

where $C$ and $C'$ are independent of $\epsilon$. In this case, the limit
is just $C'$.


definite integrals - Closed form for $\sum_{k=0}^{\infty}\frac{\Gamma(k+z)\psi(k+1)}{k!}x^{k}$



I remember seeing a closed form for the series :
$$\sum_{k=0}^{\infty}\frac{\Gamma(k+z)\psi(k+1)}{k!}x^{k}\;\;\;\; \left | x \right |<1$$
But I don't seem to recall what it was. I tried replacing the gamma function with its Euler integral representation, and then changing the order of summation and integration, using the fact that:

$$\sum_{k=0}^{\infty}\frac{\psi(k+1)}{k!}y^{k}=e^{y}\left(\log y +\Gamma(0,y)\right)$$
Thus :
$$\sum_{k=0}^{\infty}\frac{\Gamma(k+z)\psi(k+1)}{k!}x^{k}=\int_{0}^{\infty}y^{z-1}e^{(x-1)y}\left[(\log(xy)+\Gamma(0,xy)) \right ]dy$$
$$=x^{-z}\int_{0}^{\infty}\omega^{z-1}e^{\left(1-\frac{1}{x} \right )\omega}\left[\log \omega +\Gamma(0,\omega) \right ]d\omega$$
Where $\Gamma(\cdot,\cdot)$ is the incomplete gamma function.
But I don't know how to do the integral!


Answer




Lemma 1. By the extended binomial theorem, $$\sum_{k\geq
0}\frac{\Gamma(k+z)}{k!}\,x^k =\frac{\Gamma(z)}{(1-x)^z}.\tag{1}$$

$\phantom{}$
Lemma 2. The series definition of the digamma function ensures
$$\begin{eqnarray*} \psi(k+1) = -\gamma+\sum_{n\geq 1}\left(\frac{1}{n}-\frac{1}{n+k}\right) &=& -\gamma+\sum_{n\geq 1}\int_{0}^{+\infty}e^{-nt}(1-e^{-kt})\,dt\\&=&-\gamma+\int_{0}^{+\infty}\frac{1-e^{-kt}}{e^{t}-1}\,dt\\&=&-\gamma+\int_{0}^{1}\frac{u^k-1}{u-1}\,du.\end{eqnarray*}\tag{2} $$




By Lemma 1 and Lemma 2,
$$\sum_{k\geq 0}\frac{\Gamma(k+z)\,\psi(k+1)}{k!}\,x^k = -\frac{\gamma\,\Gamma(z)}{(1-x)^z}+\Gamma(z)\int_{0}^{1}\frac{1}{u-1}\left(\frac{1}{(1-ux)^z}-\frac{1}{(1-x)^z}\right)\,du$$
and by applying integration by parts
$$\sum_{k\geq 0}\frac{\Gamma(k+z)\,\psi(k+1)}{k!}\,x^k = -\frac{\gamma\,\Gamma(z)}{(1-x)^z}-\Gamma(z+1)\int_{0}^{1}\frac{x \log(1-u)}{(1-u x)^{z+1}}\,du.\tag{3}$$
Now the last integral can be seen as the derivative of a hypergeometric function (by substituting $u\mapsto(1-u)$ then exploiting $\log(u)=\left.\frac{d}{d\alpha}u^{\alpha}\right|_{\alpha=0^+}$) or expanded as a series by exploiting $\log(1-u)=-\sum_{m\geq 1}\frac{u^m}{m}$.



intuition - Intersection of Groups is a Group? Is a Union of Groups? - Fraleigh p. 66 Exercise 6.32h

This is a true-or-false question, so are the answers supposed to follow quickly? Because the empty set has no identity element, $\emptyset$ is not a group. Hence I'm asking about intersections and unions $\neq \emptyset$.
Because the intersection of subgroups is a subgroup, I guessed that the intersection of groups is truly a group? But Fraleigh's answer says false?



I asked before about why the union of subgroups is not a subgroup, but what about for groups? I don't know how to predict this, because that other question, for subgroups, still feels magical to me. On top of that, the difference between the intersection of subgroups and that of groups confounds me as to what to do. What's the intuition?

Do either of the linear transformation properties imply the other?

I'm new to linear algebra and curious about the two properties of a linear transformation:



1) $f(x_i) + f(x_j) = f(x_i+ x_j)$



2) $cf(x) = f(cx)$ where $c$ is some scalar constant



Does either of these two properties entail the other one? I was thinking about it abstractly, and it seems like the linear transformation means that any step you take in the domain is identical in the co-domain.




Or, I can stretch and shift in the domain and that movement will appear exactly the same in the co-domain.



Or, the mapping preserves addition (subtraction) and multiplication (division).



I had two ways to try and answer this question, but got stuck.



First I asked myself if the addition of any two arbitrary reals a, b could be reached by multiplication of any two other arbitrary reals, c, d. In some vague sense I thought this would help me see if there was a link between (1) and (2).



It occurred to me that you can set c = (a+b) and d = 1 to achieve c * d = a+b for all reals a, b. It occurred to me that even if you prohibit using the identity you can simply set c = 1/(a+b) and d = (a+b)^2, and this works so long as we avoid (a+b)=0. To me this felt like an additive "shift" (a+b) could yield the same location as a "stretch" (c*d). However, this did not feel rigorous.







For my second attempt I tried to find a counter-example, where one of the two linear transformation properties was preserved but the other was not, but came up empty handed.



Help appreciated, thank you!



EDIT: I assumed domains and co-domains in the reals. Not sure how this changes things.



EDIT 2: It has been claimed that these conditions are equivalent only over the reals. Does anyone have a proof of their equivalence over the reals?

modular arithmetic - Is modulus a periodic function?

Is taking a mod of something, like 12 mod 2 (which is 0), a periodic function? If not, what kind of function is it and can it be classified as such?
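If one reads "$x \bmod n$" as a function of $x$ with the modulus $n$ held fixed, here is a tiny illustration of the periodicity being asked about (with $n = 2$, an arbitrary choice):

```
n = 2
print([x % n for x in range(12)])                        # 0, 1, 0, 1, ... the pattern repeats every n steps
print(all((x + n) % n == x % n for x in range(1000)))    # True: shifting the input by n never changes the output
```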

Tuesday 15 January 2013

complex numbers - What is $(-1)^{\frac{2}{3}}$?



Following from this question, I came up with another interesting question:
What is $(-1)^{\frac{2}{3}}$?
Wolfram Alpha says it equals some weird complex number ($-0.5 +0.866\ldots i$), but when I try, I do this: $(-1)^{\frac{2}{3}}={((-1)^2)}^{\frac{1}{3}}=1^{\frac{1}{3}}=1$.
If it has multiple "answers", should we even call it a "number"? Because if we don't, it would be a bit different from what we were taught in elementary school. I actually thought if it doesn't have a variable in it, it should be a number.
I'm a bit confused. Which one is correct and why? I would appreciate any help.


Answer



Interesting question. This is a subtle point so I will say a lot.




  1. There is a function defined and continuous on the set of all real numbers, which is
    $$

    x \mapsto x^{1/3}
    $$
    where the symbol $x^{1/3}$ denotes the unique real number whose cube is $x$. Similar for other fractions with $1$ for a numerator and an odd denominator.


  2. There is a function defined and continuous on the set of all positive real numbers, which is
    $$
    x \mapsto x^{1/2}
    $$
    where the symbol $x^{1/2}$ denotes the unique positive real number whose square is $x$. Without that caveat "positive" the symbol would be ambiguous, in contrast to the case with cube roots. Similar with other fractions with $1$ for a numerator and an even denominator.


  3. There is a function defined and continuous on the set of all real numbers, which is
    $$

    x \mapsto x^2
    $$
    which needs no more justification.


  4. There is a rule for exponents which works when $x$ is real and positive: when you see
    $$
    x\mapsto x^{ab}
    $$
    you may write this as $(x^a)^b$ or as $(x^b)^a$. In fact this rule works for any fractions $a, b$.


  5. Rule 4 does not continue to work when $x$ is a negative real number and $a$ or $b$ are allowed to be fractions. For example,
    $$

    (-1)^1 = (-1)^{(1/2)*2} \neq ((-1)^2)^{1/2} = 1.
    $$
    Note that the failure here has nothing to do with imaginary numbers; indeed, in the above equality, I never took a square root of a negative number. It's just that Rule 4 does not work when $x$ is allowed to be negative.


  6. For this reason, it's somewhat dangerous to try to define a symbol like $(-1)^{2/3}$. For instance, is it the same as $(-1)^{4/6}$? Note that either answer you give will be problematic; on the one hand $2/3$ is the same number as $4/6$, and so whatever definition we pick we had better get the same value; on the other hand, we shouldn't be speaking of taking even roots of negative numbers if we insist on working with only real numbers.


  7. We can introduce complex numbers to get rid of the problem in (6). However, when we introduce complex numbers, it's no longer true that there is a unique number whose cube is a given real number. For instance, as you've discovered, by playing with Wolfram, there are cube roots of $1$ in the complex plane other than $1$ itself. Therefore, the function defined in (1) breaks down if we no longer insist that $x^{1/3}$ be real-valued.


  8. On the complex plane, it is best to think of $x^{1/3}$ as being a "multi-valued function," i.e. not a function at all, but something that takes in one value and returns multiple values. In fact, every complex number but $0$ has three distinct "cube roots."
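As a quick numerical illustration of points 7 and 8 (my own addition, not part of the original answer), here is a small Python sketch using the standard cmath module; it shows the principal value that Wolfram Alpha reports and the three complex cube roots of $1$:

```python
import cmath

# Principal value of (-1)^(2/3): exp((2/3) * Log(-1)), with Log the principal logarithm.
principal = cmath.exp((2 / 3) * cmath.log(-1))
print(principal)  # approximately -0.5 + 0.866i, as Wolfram Alpha reports

# The three complex cube roots of (-1)^2 = 1.
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
for r in roots:
    print(r, r ** 3)  # each cubes back to (approximately) 1
```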



elementary number theory - Question regarding divisibility test of 13



In order to develop a divisibility test for 13, we use $1000 \equiv -1 \pmod{1001}$.
I understand the idea; however, why do we use $1001$? Can we use any smaller number? For example, to test for divisibility by $11$, we only need to use $10 \equiv -1 \pmod{11}$.



Thanks,


Answer




There are many divisibility tests for $13$. But tests like that for $3$, $9$ and $11$ (among others) are particularly good because they not only test the number for divisibility, but they actually tell you the remainder when the number is not divisible by what you are testing.



What the "test in development" is trying to do is use something similar to the tests for $3$ and $9$ (adding the digits) or for $11$ (alternating sums and differerences of digits). In order to do something like that, with groups of digits, you want to find the smallest power of $10$ for which $10^k \equiv 1$ or $10^k\equiv -1 \pmod{13}$. The smallest such power happens to be $10^3 = 1000$ (as $10\equiv -3 \pmod{13}$, and $100 \equiv 9\equiv -4\pmod{13}$). So $10^3$ is the smallest one that can be used to develop a test that follows the pattern of those for $3$, $9$, and $11$.



Added. There are other tests, of course. For example, you can develop a test similar to the one for $7$: take the last digit, multiply it by $2$, and subtract it from the rest of the digits; the original number is divisible by $7$ if and only if the result is divisible by $7$ (but if the result is not divisible by $7$, the remainder need not be the same as that of the original number). A similar test for $13$ is: take the last digit, multiply it by $4$, and add it to the rest; the original number is divisible by $13$ if and only if the result is divisible by $13$.
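As an illustration (not from the original answer), here is a small Python sketch of the "multiply the last digit by $4$ and add it to the rest" test described above; the helper name is my own:

```python
def divisible_by_13(n: int) -> bool:
    """Repeatedly replace n by (n without its last digit) + 4 * (last digit);
    the result is divisible by 13 exactly when the original number is."""
    n = abs(n)
    while n >= 100:              # keep reducing until the number is small
        n = n // 10 + 4 * (n % 10)
    return n % 13 == 0

# Sanity check against the ordinary remainder.
assert all(divisible_by_13(n) == (n % 13 == 0) for n in range(1, 10000))
print("the test agrees with n % 13 for n = 1, ..., 9999")
```

The step preserves divisibility because $4\cdot(10a+b)\equiv a+4b \pmod{13}$ (since $40\equiv 1\pmod{13}$) and $4$ is invertible mod $13$.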


real analysis - Proving the chain rule for complex functions

I'm familiar with the Real Analysis proof of the chain rule (i.e. looking at the difference quotient for both $g(f(z))$ and for $f(z)$), and I'm familiar with another proof using the Weierstrass definition of differentiability (differentiable iff there is a continuous function such that ...).



But in Bak and Newman's Complex Analysis they give a hint for proving that the composition of differentiable functions is differentiable.





Begin by noting $$g(f(z+h))-g(f(z)) = [g'(f(z))+\epsilon][f(z+h)-f(z)]$$ where $\epsilon\rightarrow 0$ as $h\rightarrow 0$.




This seems to me to be practically assuming the thing we're trying to prove. What is the justification for this equation? It's not an equation I've encountered in earlier studies--am I supposed to be familiar with it?



This also isn't the first time that I've encountered an expression involving quantities going to 0 like this, which I didn't fully understand (like when reading about Machine Learning or Statistics). Is there a book I can consult to better understand the theory around this? As far as I recall it wasn't in baby Rudin or Ross's Analysis textbook, and yet it comes up kind of often and is thrown around casually.

Exponentiation with negative base and properties

I was working on some exponentiation, mostly with rational bases and exponents, and I got stuck with something that looks so simple:



$(-2)^{\frac{1}{2}}$



I know this must be $\sqrt{-2}$, and therefore an imaginary number. However, when I applied some properties I got something unexpected, and I don't know what I did wrong:



$(-2)^{\frac{1}{2}}=(-2)^{2\cdot\frac{1}{4}}=[(-2)^2]^\frac{1}{4}=4^{\frac{1}{4}}=\sqrt2$




I know the above is wrong, but I don't know exactly what is wrong. My initial suspicion was the step from the 2nd to the 3rd expression, so I checked the property ($x^{mn}=(x^m)^n$), and I realized that it is only guaranteed for integer exponents.



I dug a little more, and found the following passage on Wikipedia:



"Care needs to be taken when applying the power identities with negative nth roots. For instance, $−27=(−27)^{((2/3)⋅(3/2))}=((−27)^{2/3})^{3/2}=9^{3/2}=27$ is clearly wrong. The problem here occurs in taking the positive square root rather than the negative one at the last step, but in general the same sorts of problems occur as described for complex numbers in the section § Failure of power and logarithm identities."



But could anyone clarify the explanation here? If we simply follow the order of operations, don't we really get 27?
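For what it's worth (this is my own illustration, not part of the quoted passage), the same mismatch is easy to see numerically with Python's cmath, which always uses the principal branch:

```python
import cmath

x = -2
# Principal value of (-2)^(1/2): an imaginary number of modulus sqrt(2).
print(cmath.exp(0.5 * cmath.log(x)))   # approximately 1.414j

# Squaring first discards the sign, so the "same" expression becomes real:
print((x ** 2) ** 0.25)                # 4^(1/4) = sqrt(2), approximately 1.414
```

Both values have modulus $\sqrt2$, but one is purely imaginary and the other real, which is exactly the failure of $x^{mn}=(x^m)^n$ for a negative base.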

Sunday 13 January 2013

real analysis - My proof of uniqueness of limit (of sequence)

Should I try to write a direct (i.e. non-by-contradiction) proof instead of the proof below?
(I was told that mathematicians prefer direct proofs.)



We consider a convergent sequence
which we denote by
$(x_n)_{n \in \mathbb{N}}$.
By definition, there is a limit (of the sequence).



$\textbf{Theorem.}$
There are no two limits.




$\textit{Proof.}$
We prove by contradiction.
To that end,
we assume that there are two limits.
Now, our mission is to deduce a contradiction.
Let $x,x'$ be limits such that $x \ne x'$.
By definition ($\textit{limit}$), we have
\begin{equation*}
\begin{split}

&\forall \varepsilon \in \mathbb{R}, \varepsilon > 0 :
\exists N \in \mathbb{N} :
\forall n \in \mathbb{N}, n > N :
|x_n - x| < \varepsilon && \text{ and} \\
&\forall \varepsilon \in \mathbb{R}, \varepsilon > 0 :
\exists N \in \mathbb{N} :
\forall n \in \mathbb{N}, n > N :
|x_n - x'| < \varepsilon.
\end{split}
\end{equation*}




Since $x \ne x'$, we have $0 < \frac{1}{2} |x - x'|$.
We choose $\varepsilon := \frac{1}{2} |x - x'|$.



By assumption, there are $N,N' \in \mathbb{N}$ such that
\begin{equation*}
\begin{split}
&\forall n \in \mathbb{N}, n > N :
|x_n - x| < \varepsilon && \text{ and} \\
&\forall n \in \mathbb{N}, n > N' :

|x_n - x'| < \varepsilon.
\end{split}
\end{equation*}
We choose $n := \max\{N, N'\} + 1$.
Obviously, both $n > N$ and $n > N'$.
Therefore, we have both $|x_n - x| < \varepsilon$ and $|x_n - x'| < \varepsilon$.
Thus, by adding inequalities,
\begin{equation*}
|x_n - x| + |x_n - x'| < 2 \varepsilon .
\end{equation*}

Moreover,
\begin{equation*}
\begin{split}
2 \varepsilon & = |x - x'|
&& | \text{ by choice of } \varepsilon \\
& = |x + 0 - x'| \\
& = |x + ( - x_n + x_n) - x'| \\
& = |(x - x_n) + (x_n - x')| & \qquad & \\
& \le |x - x_n| + |x_n - x'|
&& | \text{ by subadditivity of abs. val.} \\

& = |x_n - x| + |x_n - x'|
&& | \text{ by evenness of abs. val.} \\
\end{split}
\end{equation*}
Hence, by transitivity, we have $2 \varepsilon < 2 \varepsilon$.
Obviously, we deduced a contradiction. QED

integration - distribution of constants over integrals

I'm looking at part of a proof of the reduction formula, and I see this:



[image omitted: the relevant step of the reduction-formula proof]




So am I correct in saying that if you multiply $\sin^{n-2}x$ by $(1 - \sin^2 x)$, you get $\sin^{n-2}x - \sin^n x$, and so the $(n-1) \int \sin^{n-2}x\cdot \cos^2 x\,dx$ term becomes the difference of two integrals as in the proof? Is that right? Generally, $\int (a - b) = \int a - \int b$, right? That integral rule is visually intuitive, right?
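For what it's worth, here is a quick numerical sketch of my own (assuming SciPy is available) of the rule $\int (a-b) = \int a - \int b$ for one sample exponent:

```python
import numpy as np
from scipy.integrate import quad

n = 6                      # sample exponent, chosen only for illustration
a, b = 0.0, np.pi / 2      # integrate over [0, pi/2]

lhs = quad(lambda x: np.sin(x) ** (n - 2) * (1 - np.sin(x) ** 2), a, b)[0]
rhs = quad(lambda x: np.sin(x) ** (n - 2), a, b)[0] - quad(lambda x: np.sin(x) ** n, a, b)[0]
print(lhs, rhs)            # the two values agree up to quadrature error
```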

Saturday 12 January 2013

probability - Coin Flipping Expectations



Thinking (as I often do to understand probability) about coin flipping, I'm looking for someone to explain how - and I've tried to make this as arbitrary as possible - for a coin with probability $p$ of flipping heads, we can investigate some of its probabilistic properties. We can restrict $p$ to $0 < p < 1$.

I've found the expected number of heads in $n$ flips to be $np$ and the variance for the number of heads to be $p(n-p)$ - if these are wrong, I'd appreciate some correction, though intuitively the former seems right at least.



Suppose then we have $Y$ heads in total. If we look at the flips individually, so say we define a function $X_i$, which takes value $1$ if the $i^\text{th}$ flip is heads, and $0$ if it's tails, how can we determine $\mathbb{E}[X_i|Y]$ (which I imagine we can re-write as $\mathbb{E}[X_1|Y]$), and how can we also determine $\mathbb{E}[Y|X_i]$?




Can we also find the expected number of flips before the first head?



I'm quite interested in seeing where these answers come from, so any help would be really useful. Thanks, MM.



EDIT 1



Variance for first case is $np(1-p)$ rather than $p(n-p)$.


Answer



You're dealing with the binomial distribution. Wikipedia has means, variances and more for all the widely used distributions. Your mean is correct, but the variance is a bit off; it's $p(1-p)n$.




By linearity of expectation, the expected values of all the $X_i$ given $Y=y$ must add up to $y$, so $\mathbb E[X_i|Y=y]=y/n$.



The expected value of $Y$ given $X_i=x$ is just $x$ plus the expected value of the remaining $X_j$, which is $(n-1)p$, so $\mathbb E[Y|X_1=x]=x+(n-1)p$.



[Edit in response to the comment:]



The probability for the first heads to occur in the $k$-th flip is given by the geometric distribution $(1-p)^{k-1}p$. It has the finite mean $1/p$. However, there's nothing strange in general about a value being finite with probability $1$ yet having infinite expected value. This is the case for instance for $(1-p)^{-k}$ (where $k$ is again the number of flips until the first occurrence of heads).
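Not part of the original answer, but a short Monte Carlo sketch in Python (sample values of $n$ and $p$ chosen only for illustration) matches the formulas above:

```python
import random

random.seed(0)
n, p, trials = 10, 0.3, 200_000

counts = {}                 # y -> [samples with Y = y, times X_1 = 1 among them]
y_total, y_count = 0, 0     # running totals for Y conditioned on X_1 = 1

for _ in range(trials):
    flips = [1 if random.random() < p else 0 for _ in range(n)]
    y = sum(flips)
    c = counts.setdefault(y, [0, 0])
    c[0] += 1
    c[1] += flips[0]
    if flips[0] == 1:
        y_total += y
        y_count += 1

y = 4
print(counts[y][1] / counts[y][0], y / n)        # E[X_1 | Y=4]:  both ~ 0.4
print(y_total / y_count, 1 + (n - 1) * p)        # E[Y | X_1=1]:  both ~ 3.7
```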


Limit evaluate $\lim_{x\to0}{\frac{\ln(\cos(4x))}{\ln(\cos(3x))}}$?



Now I am evaluating limits of functions, but I don't know how to start solving this limit. Is it possible without L'Hôpital's rule?




$\lim_{x\to0}{{\frac{\ln(\cos(4x))}{\ln(\cos(3x))}}}$?


Answer



One option (if you can use power series, which require at least as much calculus as L'Hopital's rule!):



In any sufficiently small neighborhood of $ x = 0 $, $\cos (ax) = \sqrt{1 - \sin^2(ax)}$, so $\ln(\cos(ax)) = \tfrac12\ln(1 - \sin^2(ax))$; the common factor $\tfrac12$ cancels in the quotient. Thus the original quotient equals



$$\frac{\ln(1 - \sin^2(4x))}{\ln(1 - \sin^2(3x))} = \frac{ -\sin^2(4x) + O(x^4)}{-\sin^2(3x) + O(x^4)} = \frac{ -\sin^2(4x)/x^2 + O(x^2)}{-\sin^2(3x)/x^2 + O(x^2)} \to \frac{4^2}{3^2}$$
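A quick numerical check (my own addition) is consistent with the value $16/9$:

```python
import math

for x in (0.1, 0.01, 0.001):
    print(x, math.log(math.cos(4 * x)) / math.log(math.cos(3 * x)))
# the ratio approaches 16/9 = 1.777...
```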


Friday 11 January 2013

integration - compute the limit $\lim_{n\rightarrow\infty} \int_{0}^{\frac{\pi}{2}} \frac{\sin^2(nx)}{1+x}\,dx$

I've tried using a Taylor expansion but that didn't really work out. I'm really stuck and don't know where to begin. I even tried putting it into Wolfram Alpha, but it couldn't solve it either.

trigonometry - How is it solved: $\sin(x) + \sin(3x) + \sin(5x) + \dotsb + \sin(2n - 1)x =$

The problem is to find the sum:
$$\sin(x) + \sin(3x) + \sin(5x) + \dotsb + \sin(2n - 1)x = $$



I've tried to rewrite all the sines in terms of complex numbers, but it was in vain. I suppose there is a more systematic method to do this. I think it may be solved somehow using complex numbers or progressions.



How is it solved using complex numbers, or even without them if possible?




Thanks a lot.

algebra precalculus - Showing that $\forall a\in \mathbb{Z}\ \exists b \in \mathbb{Z}$ such that $ab+1$ is a perfect square

first time asking a question here.



This proof seems simple, but the only part throwing me off is the first two remarks:
"Show that for every positive integer $a$, there exists a positive integer $b$ such that $ab+1$ is a perfect square."



What I have is:
Let $k = n^2$, where $n$ is an integer and $n^2$ is a perfect square.
Then $ab + 1 = k$.



This is where I get stuck.

Thursday 10 January 2013

elementary number theory - Find the remainder when dividing by 37




How can I find the remainder of $13^{16} - 2^{25}\cdot 15^{16}$ when divided by $37$?



I have tried to verify whether every factor is divisible by 37 (it isn't), but I can't figure out a way to find the solution.


Answer



It turns out that $2$ is a primitive root modulo $37$ - that is, all the first $36$ powers have a different remainder on division by $37$.



$$\small \begin{array}{c|c}
k &
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 \\\hline
2^k \bmod 37 &

2 & 4 & 8 & 16 & -5 & -10 & 17 & -3 & -6 & -12 & 13 & -11 & 15 & -7 & -14 & 9 & 18 & -1
\end{array} $$



with the second half being the negated versions of the same, with $2^{36}\equiv 1\bmod 37$.



Then $13^{16} - 2^{25}\cdot 15^{16} \equiv (2^{11})^{16} - 2^{25}(2^{13})^{16} \equiv 2^{176}-2^{233}\equiv 2^{32} - 2^{17}\bmod 37$
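Not part of the original answer, but the reduction can be checked directly in Python with modular exponentiation:

```python
m = 37
direct = (13 ** 16 - 2 ** 25 * 15 ** 16) % m         # brute force with exact integers
via_indices = (pow(2, 176, m) - pow(2, 233, m)) % m  # after rewriting 13 and 15 as powers of 2
reduced = (pow(2, 32, m) - pow(2, 17, m)) % m        # exponents reduced mod 36
print(direct, via_indices, reduced)                  # all three agree
```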


functional analysis - norm-preserving map between real normed spaces



Suppose $X, Y$ are two real normed spaces and $T:X\to Y$ is a bijective map such that $||Tx+Ty+Tz|| = ||x+y+z||$ for any $x,y,z\in X$. Is $T$ a linear map?


Answer



By assumption
$$
\|T(x) + T(y) + T(-x-y)\| = 0,
$$

so
$$

T(x) + T(y) = -T(-x-y).
$$

In particular, with $x=y=0$ it follows $T(0)=0$, and with $y=0$,
$$
T(x) = -T(-x).
$$

Hence
$$
T(x) + T(y) = -T(-x-y) =T(x+y),
$$


and $T$ is additive.
Let me show that $T$ is continuous. Let $x_n\to x$.
Then
$$
\|T(x) - T(x_n) \| = \|T(x) + T(-x_n)\|=\|x-x_n\|\to0.
$$

So $T$ is continuous and additive, hence it is linear, see Continuous and additive implies linear


Do the following series converge or diverge? Justify. $\sum_{k=0}^{\infty} \frac{3k^2 + 1}{k^3 + k^2 + 5}$



$$\sum_{k=0}^{\infty} \frac{3k^2 + 1}{k^3 + k^2 + 5}$$



Can I do this using the direct comparison test?



for $k \in [1, \infty), a_k = \frac{3k^2 + 1}{k^3 + k^2 + 5} \geq 0$




for $k \in [1,\infty), a_k = \frac{3k^2 + 1}{k^3 + k^2 + 5} \geq \frac{3k^2}{k^3 + k^3 + 5k^3} = \frac{3}{7k} = b_k$



Consider $\frac{3}{7}\sum_{k=1}^{\infty} \frac{1}{k}$. This is a p-series with $p = 1$. By the p-series test $\sum b_k$ diverges; therefore, by the comparison test, $\sum a_k$ diverges too.



My textbook does this using the limit comparison test; I am wondering if I can do it using the direct comparison test too. Is this right?


Answer



It's much quicker with equivalents:
$$\frac{3k^2+1}{k^3+k^2+5}\sim_\infty\frac{3k^2}{k^3}=\frac3k,\quad\text{which diverges}.$$
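As a numerical illustration (not part of the original answer), the partial sums do grow like $3\ln k$, in line with the equivalent $3/k$:

```python
import math

def a(k):
    return (3 * k ** 2 + 1) / (k ** 3 + k ** 2 + 5)

s = 0.0
for k in range(1, 100_001):
    s += a(k)
    if k in (10, 100, 1_000, 10_000, 100_000):
        print(k, s, 3 * math.log(k))
# the partial sums keep growing and stay within a bounded offset of 3*ln(k)
```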


Wednesday 9 January 2013

dynamical systems - Poincaré Recurrence Theorem (measure theory version)

I had a look at the proof of the following recurrence theorem of Poincaré:





Let $(\Omega,\Sigma,T,m)$ be a conservative dynamical system in measure theory for which the function $T^{-1}$ preserves null sets. If $f\colon\Omega\to\mathbb{R}$ is measurable, it follows that
$$
\liminf_{n\to\infty}\lvert f(x)-f(T^n(x))\rvert=0~\text{a.s.}
$$






Proof



Let $B\subset\mathbb{R}$ be a measurable set with $m(f^{-1}(B))>0$ and $\text{diam}(B)<\varepsilon$. With the Theorem of Halmos (cited below) it follows that
$$
S_{\infty}1_{f^{-1}(B)}=1_{f^{-1}(B)}+1_{f^{-1}(B)}\circ T+1_{f^{-1}(B)}\circ T^2+\ldots+1_{f^{-1}(B)}\circ T^n+\ldots=\infty
$$
almost surely on $f^{-1}(B)$.




So it follows that $f(T^n(z))\in B$ for almost all $n\geqslant 0$. So with Halmos it is
$$
\lvert f(z)-f(T^n(z))\rvert\leqslant\text{diam}(B)=\sup_{x,y\in B}\lvert x-y\rvert<\varepsilon.
$$
From this it follows that
$$
\liminf_{n\to\infty}\lvert f(z)-f(T^n(z))\rvert<\varepsilon~~~~~(*)
$$
almost surely on $f^{-1}(B)$.




Now if one covers $\Omega$ by sets $f^{-1}(B)$ with diam smaller than $\varepsilon$ one has (*) almost surely and the Theorem follows with $\varepsilon\to 0$.






Is that okay?



Why is it possible to cover $\Omega$ by sets of the form $f^{-1}(B)$ with $\text{diam}(B)<\varepsilon$? Is $\Omega$ $\sigma$-finite?







Last but not least



here is the Theorem of Halmos that is used:





Let $(\Omega,\Sigma,T,m)$ be a dynamical system in measure theory such that $T^{-1}$ preserves null sets. If $A\in\Sigma$ has positive measure then
$$
A\in\mathcal{C}(T)\Leftrightarrow S_{\infty}1_B=\infty \text{ a.s. on } B \text{ for each }B\subset A\cap\Sigma.
$$

(Here with $\mathcal{C}(T)$ the conservative part of the system is meant.)





With Greetings and best regards



math12

algebra precalculus - Find the sum of the series $\sum^{\infty}_{n=1} \frac{1}{(n+1)(n+2)(n+3) \cdots (n+k)}$

Find the sum of the series



$$\sum^{\infty}_{n=1} \frac{1}{(n+1)(n+2)(n+3) \cdots (n+k)}$$



Given series




$$\sum^{\infty}_{n=1} \frac{1}{(n+1)(n+2)(n+3) \cdots (n+k)}$$



$$ = \frac{1}{2\cdot3\cdot4 \cdots (k+1)}+\frac{1}{3\cdot4\cdot5 \cdots (k+2)}+\frac{1}{4\cdot5\cdot6\cdots (k+3)} +\cdots$$



Now how do I proceed further with this? Please suggest. Thanks.

Prove the following inequality from jensen's inequality



By using the concave function $f(x)=\ln(x)$ in Jensen's inequality, I get the result:
$$\sqrt[n]{t_1t_2\cdots t_n}\leq \frac{t_1+\cdots+t_n}{n}$$
Where $t_1,\ldots,t_n\in \mathbb{R}_{>0}$



From this result, I am trying to prove that
$x^4+y^4+z^4+16\geq 8xyz$



My attempt at proving this is as follows, let $n=4$, $t_1=x,t_2=y,t_3=z$ and $t_4=2$, hence:



$$\sqrt[4]{2xyz}\leq\frac{x+y+z+2}{4}$$




$$2xyz\leq\frac{(x+y+z+2)^4}{4^4}$$



$$8xyz\leq\frac{(x+y+z+2)^4}{4^3}$$



But now I have trouble turning this into the bound $x^4+y^4+z^4+16$.


Answer



You almost got it; the solution is to set
$t_1=x^4 , t_2=y^4, t_3 = z^4, t_4=16$.
This gives you
$$ 2xyz \leq \frac{x^4+y^4+z^4+16}{4}, $$
which is what you want: multiplying both sides by $4$ yields $8xyz\leq x^4+y^4+z^4+16$.
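As a quick sanity check of the final inequality (my own addition, not from the original answer), random sampling does not produce a counterexample:

```python
import random

random.seed(1)
worst = float("inf")
for _ in range(100_000):
    x, y, z = (random.uniform(-10, 10) for _ in range(3))
    worst = min(worst, x ** 4 + y ** 4 + z ** 4 + 16 - 8 * x * y * z)
print(worst)  # stays nonnegative; equality would occur at x = y = z = 2
```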



algebra precalculus - Does the size of the sum of squares say anything about the size of original sum?

Suppose I am told I have $N$ numbers and that the sum of the squares of those numbers is $S$. Can anything be said about the sum of those original $N$ numbers?

elementary number theory - Division of Factorials



I have a partition of a positive integer $p$. How can I prove that the factorial of $p$ is always divisible by the product of the factorials of the parts?



As a quick example $\frac{9!}{(2!3!4!)} = 1260$ (no remainder), where $9=2+3+4$.




I can nearly see it by looking at factors, but I can't see a way to guarantee it.


Answer



The key observation is that the product of $n$ consecutive integers is divisible by $n!$. This can be proved by induction.
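Not part of the original answer, but here is a quick Python check of the example from the question and a few other partitions:

```python
from math import factorial

def is_integer_quotient(parts):
    """Check that (sum of parts)! is divisible by the product of the parts' factorials."""
    p = sum(parts)
    denom = 1
    for part in parts:
        denom *= factorial(part)
    return factorial(p) % denom == 0

print(factorial(9) // (factorial(2) * factorial(3) * factorial(4)))   # 1260, as in the example
assert all(is_integer_quotient(parts)
           for parts in [(2, 3, 4), (1, 1, 5), (6,), (2, 2, 2, 2), (5, 7)])
print("all sample partitions give integer quotients")
```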


calculus - Integral $\int_0^\infty \frac{\log^2 x \cos ax}{x^n-1}dx$

Hi I am trying to calculate
$$
I:=\int\limits_0^\infty \frac{\log^2 x \cos (ax)}{x^n-1}\mathrm dx,\quad \Re(n)>1, \, a\in \mathbb{R}.
$$
Note if we set $a=0$ we get a similar integral given by
$$
\int\limits_0^\infty \frac{\log^2 x }{x^n-1}\mathrm dx=-\frac{2\pi^3\cot(\pi/n)\csc^2(\pi/n)}{n^3},\quad \Re(n)>1.

$$
I was trying to write $I$ as
$$
I=\Re \bigg[\int\limits_0^\infty \frac{ e^{i ax}\log^2 x}{x^n-1}\mathrm dx\bigg]=\Re\bigg[\int\limits_\infty^0\frac{e^{iax}\log^2 x}{1-x^n}\mathrm dx\bigg]=\Re\bigg[\int\limits_\infty^0e^{iax}\log^2 x\sum_{m=0}^\infty x^{nm} \mathrm dx\bigg].
$$
But I was unsure of where to go from here. How can we calculate $I$? It is clear that this method is not going to work.

group theory - How to prove basic properties that follow from the axioms of a field?

I have recently been learning about fields and frequently get stuck while trying to prove some properties that drop out of the axioms. I'll give an example to try and see how to go about doing it.




Let $m,n \in \mathbb{F}$ for some field $\mathbb{F}$. Then prove $(-m)n=-(mn)=-mn$. Now I know $(-m)$ is the unique element of the field such that $(-m)+m=0$, and $-mn$ is the unique element of the field such that $(-mn)+mn=0$.



I have so far done $(-m)n=((-1)\cdot m)n=(-1)\cdot(mn)$, but I don't know how to justify writing this as $-(mn)$.

Tuesday 8 January 2013

algebra precalculus - How do we get from $\ln A=\ln P+rn$ to $A=Pe^{rn}$ and similar logarithmic equations?



I've been self-studying from the amazing "Engineering Mathematics" by Stroud and Booth, and am currently learning about algebra, particularly logarithms.



There is a question where I don't understand how they've solved it. Namely, I'm supposed to express the following equations without logs:



$$\ln A = \ln P + rn$$



The solution they provide is:




$$A = Pe^{rn}$$



But I absolutely have no idea how they got to these solutions. (I managed to "decipher" some of the similar ones piece by piece by studying the rules of logarithms).


Answer



The basic idea behind all basic algebraic manipulations is that you are trying to isolate some variable or expression from the rest of the equation (that is, you are trying to "solve" for $A$ in this equation by putting it on one side of the equality by itself).



For this particular example (and indeed, most questions involving logarithms), you will have to know that the logarithm is "invertible"; just like multiplying and dividing by the same non-zero number changes nothing, taking a logarithm and then an exponential of a positive number changes nothing.



So, when we see $\ln(A)=\ln(P)+rn$, we can "undo" the logarithm by taking an exponential. However, what we do to one side must also be done to the other, so we are left with the following after recalling our basic rules of exponentiation:
$$

A=e^{\ln(A)}=e^{\ln(P)+rn}=e^{\ln(P)}\cdot e^{rn}=Pe^{rn}
$$
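A quick numerical check (my own addition, with sample values chosen only for illustration):

```python
import math

P, r, n = 1000.0, 0.05, 10                # sample values, purely for illustration
A = P * math.exp(r * n)

print(math.log(A), math.log(P) + r * n)   # both sides of ln A = ln P + r n agree
```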


summation - Why does he need to first prove the sum of the $n$ integers and then proceed to prove the concept of arithmetical progression?



I'm reading What is Mathematics; on page 12 (arithmetic progression), he gives an example of mathematical induction while trying to prove the concept of arithmetic progression. There's something weird here: he starts by proving that the sum of the first $n$ integers is equal to $\frac{n(n+1)}{2}$ by giving this equation:



$$1+2+3+\cdots+n=\frac{n(n+1)}{2}.$$




Then he adds $(r+1)$ to both sides:



$$1+2+3+\cdots+n+(r+1)=\frac{n(n+1)}{2}+(r+1).$$



Then he solves it:



$$\frac{(r+1)(r+2)}{2}$$



Now it seems he's going to prove the arithmetical progression: He says that this can be ordinarily shown by writing the sum $1+2+3+\cdots+n$ in two forms:




$$S_n=1+2+\cdots+(n-1)+n$$



And:



$$S_n=n+(n-1)+\cdots+2+1$$



And he states that on adding, we see that each pair of numbers in the same column yields the sum $n+1$ and, since there are $n$ columns in all, it follows that:



$$2S_n=n(n+1).$$




I can't understand why he needs to prove the sum of the first $n$ integers first. Can you help me?



Thanks in advance.



EDIT: I've found a copy of the book on Scribd, you can check it here. This link will take you to the page I'm on.



EDIT:
I kind of understand the proofs presented in the book now, but I can't see how they are connected to produce a proof for arithmetic progressions. I've read the Wikipedia article about arithmetic progressions, and $a_n = a_m + (n - m)d$ (or at least something similar) would be more plausible as such a proof - what do you think?


Answer



He is giving two different proofs, one by a formal induction, and the other a more intuitive one. Good idea, two proofs make the result twice as true! More seriously, he is probably taking this opportunity to illustrate proof by induction.




It is important to know the structure of a proof by induction. In order to show that a result holds for all positive integers $n$ one shows (i) that the result holds when $n=1$ and (ii) that for any $r$, if the result holds when $n=r$, then it holds when $n=r+1$.



(i) is called the base step and (ii) is called the induction step.



Almost always, the induction step is harder than the base step.



Here is how the logic works. By (i), the result holds when $n=1$. By (ii), because the result holds for $n=1$, it holds when $n=2$ (we have taken $r=1$). But because the result holds for $n=2$, it holds when $n=3$ (here we have taken $r=2$). But because the result holds when $n=3$, we can argue in the same way that the result holds when $n=4$. And so on.



In our example, suppose that we know that for a specific $r$, like $r=47$, we have

$$1+2+\cdots+r=\frac{r(r+1)}{2}.$$
We want to show that this forces the result to hold for the "next" number.
Add $(r+1)$ to both sides. We get
$$1+2+\cdots +r+(r+1)=\frac{r(r+1)}{2}+(r+1).$$
Now we do some algebraic manipulation:
$$\frac{r(r+1)}{2}+(r+1)=\frac{r(r+1)+2(r+1)}{2}=\frac{(r+1)(r+2)}{2},$$
which is what the formula we are trying to prove predicts when $n=r+1$. We have taken care of the induction step. The base step is easy. So we have proved that $1+2+\cdots+n=\frac{n(n+1)}{2}$ for every positive integer $n$.
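Not part of the original answer, but the closed form is easy to spot-check in Python:

```python
# Spot check of 1 + 2 + ... + n = n(n + 1)/2 for small n.
for n in range(1, 21):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("formula verified for n = 1, ..., 20")
```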



Remark: Here is another way to think about the induction, one that I prefer. Suppose that there are positive integers $n$ for which the result is not correct. Call such integers $n$ bad. If there are bad $n$, there is a smallest bad $n$. It is easy to see that $1$ is not bad.




Let $r+1$ be the smallest bad $n$.



Then $r$ is good, meaning that $1+2+\cdots+r=\frac{r(r+1)}{2}$. Now we argue as in the main post that $1+2+\cdots +r+(r+1)=\frac{(r+1)(r+2)}{2}$. That shows that $r+1$ is good, contradicting the assumption that it is bad.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How can I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...