Monday 31 October 2016

calculus - How do derivatives describe asymptotes?

While delivering pizzas at work today I started thinking of how I could develop a good method of fitting data points to a function/curve. I came up with a good method that would work for creating simple polynomials such as $$f(x) = x^3+x^2+x+1$$ but quickly realized this method would entirely fail if the data could fit a function like $\sqrt x$, since it has infinitely many nonzero derivatives.



This really got me thinking about how a function could even have infinitely many derivatives, i.e. how the slope of a function could be changing at a rate that’s changing at a rate that’s changing at a rate... and so on to infinity. Then it dawned on me that this is why asymptotes occur (and after thinking about this all day, I literally just realized that $\sqrt x$ doesn’t even have an asymptote... I think I need some sleep... but still, I’m confused)!



At first, since I was just working this all out in my head and was also focusing on the road, I mistakenly calculated that every derivative of $\sqrt x$ is negative, except for the first one which is positive. I thought this made perfect sense in terms of creating asymptotes, but then my friend pointed out to me that the derivatives actually alternate between being negative and positive, and claimed that THAT’S why it’s an asymptote. This also made perfect sense to me, and so I concluded that the curve of $\sqrt x$ has an asymptote because each derivative is causing the derivative right before it to approach 0, and thus the first derivative/the slope of the curve must also be approaching 0.



Despite that, the concept of a function having an asymptote when the first derivative is positive and the rest are negative, also made perfect sense to me, because I figured that each negative derivative would sort of “pull” the original function down, but won’t pull it past a horizontal orientation since the first derivative is positive and can only approach 0.



Clearly, I’m just totally lost right now. So finally, here are my main questions:





  1. After realizing now that $\sqrt x$ doesn’t even have an asymptote, where is the flaw in the logic that each derivative makes the one before it approach 0 and thus an asymptote occurs?


  2. If that logic isn’t what causes an asymptote, what is (in terms of a function’s derivatives)?


  3. In the case of my original flawed logic (where a theoretical function has a positive 1st derivative, and infinitely many negative derivatives after that), would this function just be a straight horizontal line? I mean, if every derivative after the first is negative, then wouldn’t the second derivative approach negative infinity after any infinitesimal increase of $x$? And thus wouldn’t the first derivative approach zero, causing a line that’s essentially completely horizontal, yet somehow slightly tilted?




Sorry in advance for the long and confusing post, but I’d really appreciate any information that could help me build a better overall understanding of functions, their graphs, their derivatives, and even just the concept of infinity.



Thanks :)

contest math - Polynomial $P(a)=b,P(b)=c,P(c)=a$




Let $a,b,c$ be $3$ distinct integers, and let $P$ be a polynomial with integer coefficients. Show that in this case the conditions $$P(a)=b,\quad P(b)=c,\quad P(c)=a$$ cannot be satisfied simultaneously.



Any hint would be appreciated.


Answer



Hint: If $P(a)=b$ and $P(b)=c$ then $a-b$ divides $b-c$.
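
For completeness, here is how the hint can be finished (my sketch, not part of the original answer). Since $P$ has integer coefficients, $u-v$ divides $P(u)-P(v)$, so applying the hint cyclically,
$$(a-b)\mid(b-c),\qquad (b-c)\mid(c-a),\qquad (c-a)\mid(a-b),$$
hence $|a-b|=|b-c|=|c-a|=d>0$. But then each of $a-b$, $b-c$, $c-a$ equals $\pm d$, and three terms of $\pm d$ cannot sum to $0$, a contradiction.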


calculus - Why is $\sum\limits_{n=N+1}^{\infty} \frac{1}{2^{n!}} < \sum\limits_{n=(N+1)!}^{\infty} \frac{1}{2^n} = \frac{1}{2^{(N+1)!-1}}$

I'm reading a proof of a theorem and some applications and in the process the author used the following without any explanation of why it's true. I'm trying to understand why exactly this is true, but I have no idea how one would justify it besides saying it seems true intuitively. Any help would be much appreciated.



$$
\sum_{n=N+1}^{\infty} \frac{1}{2^{n!}} < \sum_{n=(N+1)!}^{\infty} \frac{1}{2^n} = \frac{1}{2^{(N+1)!-1}}
$$
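
One way to see it (my sketch, not from the original source): $n\mapsto n!$ is strictly increasing, so the exponents $n!$ for $n\ge N+1$ are distinct integers, all at least $(N+1)!$. The left-hand sum therefore uses only some of the terms of the geometric tail on the right, giving strict inequality, and the tail itself is geometric:
$$\sum_{m=(N+1)!}^{\infty}\frac{1}{2^{m}}=\frac{2^{-(N+1)!}}{1-\frac12}=\frac{1}{2^{(N+1)!-1}}.$$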

arithmetic - Word Problem on Tax Returns

I'm stuck trying to solve a really-existing problem regarding a redistribution of funds. Any help would be greatly appreciated.



A married couple decided to file taxes this year separately instead of jointly.
Suppose Spouse A, by filing separately, receives a tax return of \$1,000. Spouse B, on the other hand, has to pay the government \$300. Their "joint" return is therefore \$700.
The couple, however, regretted filing separately, and now needs to know how to equitably redistribute the return from Spouse A to Spouse B.

Specifically, they want to know how much Spouse A has to pay Spouse B if the joint return were to be divided evenly between the two parties (as if they had filed jointly in the first place).
The answer cannot be \$350 since that implies Spouse A netted \$1000 - \$350 = \$650, while Spouse B receives \$50 (i.e., \$350 - \$300).
How do you determine the correct allocation?
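
For illustration, here is one way to set up the arithmetic (my addition, not part of the original post). If Spouse A transfers $T$ to Spouse B, then A nets $1000-T$ and B nets $T-300$; splitting the \$700 evenly means
$$1000-T=T-300=350\quad\Longrightarrow\quad T=650.$$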



Thank you!

number theory - $\sum_{n=1}^\infty \chi(n)\phi(n)n^{-s} = \frac{L(\chi,s-1)}{L(\chi,s)}$

Let $\chi$ be a Dirichlet character mod 4. Show $\sum_{n=1}^\infty \chi(n)\phi(n)n^{-s} = \frac{L(\chi,s-1)}{L(\chi,s)}$ and $\sum_{n=1}^\infty \chi(n)d(n)n^{-s}=L(\chi,s)^2$. ($\phi$ is the Euler totient function and $d(n)$ is the number of divisors of $n$.)




First, is this true just for characters mod 4 and not true in general?
I'm not sure what specific properties about characters mod 4 I should use besides that $\chi(n)=0$ for $n$ even.



I took the log of both sides and tried to use the following:
$$L(\chi,s)=\prod_{p \text{ prime}}\frac{1}{1-\frac{\chi(p)}{p^s}}$$



$$\log L(\chi,s)=\sum_{p \text{ prime}}\sum_{n=1}^\infty \frac{\chi(p)^n}{np^{ns}}$$



$\chi$ and $\phi$ are multiplicative, so we can express $$\sum_{n=1}^\infty \chi(n)\phi(n)n^{-s}$$ as the Euler product $$\prod_p(1+\frac{\chi(p)\phi(p)}{p^s}+\frac{\chi(p^2)\phi(p^2)}{p^{2s}}+\cdots).$$




Manipulating things is not quite working. Any help would be appreciated.

Sunday 30 October 2016

basic probability birthday question




I figure this is a trivial question since it's right at the beginning of the book, but I get a different answer from the one in the back of the book. I get $0.0847$ while the correct answer is $0.0828$.



Anyways here is the question:



If birthdays are equally likely to fall on any day, what is the probability that a person chosen at random has a birthday in January?



January has 31 days and there are 365 days in a year, so $31 \over 365$ would be $p$ for a non-leap year. On a leap year it's $31\over 366$. Since a leap year occurs once every four years, I thought I'd get my answer by doing:



$${31\over 365}*{3\over 4} + {31\over 366} * {1\over 4}$$




Any suggestions?


Answer



Since January has $31$ days, the most days a month can have, and $\frac1{12}= 0.0833\ldots $, there is no obvious way to get a figure as low as $0.0828$.



Either it is a trick question or you have spotted an error.
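
A quick numerical check of the figures involved (a hypothetical script I am adding; it only re-does the arithmetic already in the question and answer):

    p_nonleap = 31 / 365
    p_weighted = (31 / 365) * (3 / 4) + (31 / 366) * (1 / 4)  # the poster's expression
    print(round(p_nonleap, 4))   # 0.0849
    print(round(p_weighted, 4))  # 0.0849
    print(round(1 / 12, 4))      # 0.0833 -- already above the book's 0.0828, as the answer notes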


number theory - Reversing digits of power of 2 to yield power of 7

Are there positive integers $n$ and $m$ for which reversing the base-10 digits of $2^n$ yields $7^m$?






I've answered this question for powers of 2 and powers of 3 in the negative: permutations of the digits of a number divisible by 3 yield another number divisible by 3 in base-10. This arises from the fact that the sum of base-10 digits of a number divisible by 3 is itself divisible by 3.



Thus since $2^n$ is not divisible by 3, reversing the digits can't yield another number divisible by 3, and hence no natural number power of 2 when reversed will be a natural power of 3.







I'm currently trying to put limitations on n and m by considering modular arithmetic. Suggestions for techniques in the comments would also be appreciated.

probability theory - If $\tau_x^k$ is the time of the $k$-th entrance of a Markov chain into $x$, then $\operatorname P_x[\tau_y^k<\infty]=\varrho(x,y)\varrho(y,y)^{k-1}$

Let




  • $E$ be at most countable and equipped with the discrete topology and $\mathcal E$ be the Borel $\sigma$-algebra on $E$

  • $X=(X_n)_{n\in\mathbb N_0}$ be a discrete Markov chain with values in $(E,\mathcal E)$ and distributions $(\operatorname P_x)_{x\in E}$

  • $\tau_x^0:=0$ and $$\tau_x^k:=\inf\left\{n>\tau_x^{k-1}:X_n=x\right\}$$ for $x\in E$ and $k\in\mathbb N$



Let $$\varrho(x,y):=\operatorname P_x\left[\tau_y^1<\infty\right]\color{blue}{=\operatorname P_x\left[\exists n\in\mathbb N:X_n=y\right]}\;.$$

I want to prove, that $$\operatorname P_x\left[\tau_y^k<\infty\right]=\varrho(x,y)\varrho(y,y)^{k-1}\;\;\;\text{for all }k\in\mathbb N\tag 1$$ using the strong Markov property: $$\operatorname E_x\left[f\circ (X_{\tau+t})_{t\in \mathbb N_0}\mid\mathcal F_\tau\right]=\operatorname E_{X_\tau}\left[f\circ X\right]\tag 2$$ for all $x\in E$, $\sigma(X)$-stopping times $\tau$ and bounded, $\mathcal E^{\otimes\mathbb N_0}$-measurable $f:E^{\mathbb N_0}\to\mathbb R$.






I want to prove $(1)$ by induction over $k\in\mathbb N$. Since, $k=1$ is trivial, we only need to care about $k-1\to k$. Since $$\left\{\tau_y^{k-1}<\infty\right\}\cap\left\{\tau_y^k<\infty\right\}=\left\{\tau_y^k<\infty\right\}$$ and $$\left\{\tau_y^{k-1}<\infty\right\}\in\mathcal F_{\tau_y^{k-1}}\;,$$ we've got $$\operatorname P_x\left[\tau_y^k<\infty\right]=\operatorname E_x\left[1_{\left\{\tau_y^{k-1}<\infty\right\}}\color{red}{\operatorname P_x\left[\tau_y^k<\infty\mid\mathcal F_{\tau_y^{k-1}}\right]}\right]\;,\tag 4$$ by definition of the conditional expectation. Now, I think, that we somehow need to apply $(2)$ with $\tau=\tau_y^{k-1}$ to the $\color{red}{\text{red}}$ term in order to obtain $$\operatorname E_x\left[1_{\left\{\tau_y^{k-1}<\infty\right\}}\color{red}{\operatorname P_x\left[\tau_y^k<\infty\mid\mathcal F_{\tau_y^{k-1}}\right]}\right]=\operatorname E_x\left[1_{\left\{\tau_y^{k-1}<\infty\right\}}\varrho(y,y)\right]\;,$$ but I can't figure out how I need to choose $f$.

elementary number theory - Using induction to prove that $2 \mid (n^2 - n)$ for $n\geq 1$




Use induction to prove that, for all $n \in \mathbb{Z}^+$, $2\mid (n^2 − n)$.





That is, I am supposed to use induction to prove that $(n^2 − n)$ can be divided by $2$ when $n$ is a positive integer.



I've tried the following:
$$\begin{split} (n+1)^2 − (n+1) &= (n+1)(n+1) - (n+1)\\
&=(n+1)(n-1)+2 - (n+1)\\
&=n^2 +n -n -1 +2 -n -1\\
&=n^2 -n,
\end{split}$$
but I'm not particularly good at maths so I have no idea if this is even correct, and if it is, what it even means.



Answer



Comment: As others have mentioned a number of times, induction is not at all necessary in this particular problem, but I am sure you hear that loud and clear by now. You probably just want to see how an induction proof would look. I've sketched out a proof below, but before you read it, I would encourage you to take a look at this post on how to write a clear induction proof. I imagine it would help you a good bit in both understanding how to write up your induction proofs clearly but also constructing your proofs. Following the instructions in that link will force you to understand your problem. That being said, see if you can follow the proof below (feel free to leave a comment if a point does not make sense).






For any positive integer $n$, let $S(n)$ denote the statement
$$
S(n) : 2\mid (n^2-n).
$$




Base step: For $n=1, S(1)$ gives $1^2-1 = 0$, and $2$ divides zero. Thus, $S(1)$ holds.



Inductive step: Let $k\geq 1$ be fixed, and suppose that $S(k)$ holds; in particular, let $\ell$ be an integer with $2\ell = k^2-k$. Then
\begin{align}
[(k+1)^2-(k+1)]&= (k^2+2k+1)-(k+1)\tag{expand}\\[0.5em]
&= (k^2-k)+2k\tag{rearrange/simplify}\\[0.5em]
&= 2\ell+2k\tag{by ind. hyp.}\\[0.5em]
&= 2(\ell+k)\tag{factor out $2$}\\[0.5em]
&= 2\eta\tag{$\eta=\ell+k. \eta\in\mathbb{Z}$}
\end{align}

This proves $S(k+1)$ and concludes the inductive step $S(k)\to S(k+1)$.



By mathematical induction, for each $n\geq 1$, the statement $S(n)$ is true. $\blacksquare$


Saturday 29 October 2016

elementary number theory - Prove that $6 - \sqrt{2}$ is Irrational by contradiction




What is a Proof by Contradiction, and how to prove by contradiction that $6 - \sqrt{2}$ is an irrational number?


Answer



A proof by contradiction is assuming something then building on it and finding that it leads to contradiction, concluding that the assumed statement is false.



Assume $x=6-\sqrt{2}=\frac{p}{q}$



$x^2=38-12\sqrt{2}=\frac{p^2}{q^2}$



$38q^2=p^2+12\sqrt{2}q^2$




$38q^2-p^2=12\sqrt{2}q^2$



$38q^2-p^2$ is rational, and $q^2$ is rational, thus $12\sqrt{2}$ is rational.



However, we know that that is not true, and thus, $6-\sqrt{2}$ is irrational.


real analysis - Computing: $\lim\limits_{n\to\infty}\left(\prod\limits_{k=1}^{n} \binom{n}{k}\right)^\frac{1}{n}$



I try to compute the following limit:




$$\lim_{n\to\infty}\left(\prod_{k=1}^{n} \binom{n}{k}\right)^\frac{1}{n}$$



I'm interested in finding some reasonable ways of solving the limit. I don't find any easy approach. Any hint/suggestion is very welcome.


Answer



All the binomial coefficients except the last one are at least $n$, so the $n$th root is at least $n^{\frac{n-1}{n}}$, so the limit is infinity.


calculus - convergence of $\sum_{n=1}^{\infty} \frac{1}{\log(e^n+e^{-n})}$?



Test convergence of $\sum_{n=1}^{\infty} \dfrac {1}{\log(e^n+e^{-n})}$



Attempt: I have tried the integral test and the comparison test (for which I couldn't find a suitable comparator).



However, I haven't been able to proceed much. Could anyone please give me a direction on how to proceed with this problem?



Thank you very much for your help in this regard.



Answer



$e^{n}+e^{-n}< 2e^{n}$, hence
$$\frac{1}{\log(e^n+e^{-n})}>\frac{1}{n+\log 2}$$
and the series is divergent by direct comparison with the harmonic series.


Friday 28 October 2016

calculus - Unbounded operator between normed spaces

For every infinite sequence $x = (x_1, x_2, x_3, ...)$ of complex numbers define $S(x)$ by $S(x_1, x_2, x_3, ...) = (x_1, 2x_2, 3x_3, ...)$. Is $S$ in $\mathcal{L}(\ell^1, \ell^\infty)$?



I argue that $S$ is unbounded and hence not in $\mathcal{L}(\ell^1, \ell^\infty)$.



Proof: Firstly, using $|| x||_\infty \geq || x||_1$, we have that $$||S(x)||_\infty = \sup_n|S(x_n)| = \sup_n|n\cdot x_n|= n\cdot|| x||_\infty \geq n\cdot||x||_1.$$



This means that



$$\frac{||S(x)||_\infty}{|| x||_1} \geq n \rightarrow\infty$$




and hence, $S$ is unbounded with the $||\cdot||_\infty$ norm and not a member of $\mathcal{L}(\ell^1, \ell^\infty)$.
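
One concrete way to exhibit the unboundedness (a cleaner sketch I am adding; note that in fact $\|x\|_\infty\le\|x\|_1$, so the displayed estimate above is suspect) is to test $S$ on the standard basis vectors $e_n=(0,\dots,0,1,0,\dots)$:
$$\|e_n\|_1=1,\qquad \|S(e_n)\|_\infty=n\longrightarrow\infty,$$
so no constant $C$ can satisfy $\|S(x)\|_\infty\le C\|x\|_1$.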

derivatives - How to prove that a complex power series is differentiable




I am always using the following result but I do not know why it is true. So: How to prove the following statement:



Suppose the complex power series $\sum_{n = 0}^\infty a_n(z-z_0)^n$ has radius of convergence $R > 0$. Then the function $f: B_R(z_0) \to \mathbb C$ defined by
\begin{align*}
f(z) := \sum_{n = 0}^\infty a_n (z-z_0)^n
\end{align*}
is differentiable in $B_R(z_0)$ and for any $z \in B_R(z_0)$ the derivative is given by the formula
\begin{align*}
f'(z) = \sum_{n = 1}^\infty na_n(z-z_0)^{n-1}.
\end{align*}




Thanks in advance for explanations.


Answer



It is a result of Abel that the power series converges in the ball $B_R(z_0)$. Moreover, since there is uniform convergence in every concentric sub-ball $B_r(z_0)$ with $r<R$, the series can be differentiated term by term there (by the Weierstrass theorem on uniform limits of holomorphic functions), which yields the stated formula.

approximation - Perturbing a polynomial with repeated real roots to get distinct real roots



Consider a real polynomial $f$ of degree $d$ which has $d$ real roots not necessarily distinct. In general, can we accomplish the following?





  1. For every $\epsilon>0$, can we perturb each coefficient of $f$ by less than $\epsilon$ and guarantee real, distinct roots?





I just need one such perturbation to work. I know that a priori, not every perturbation will be nice (read: the number of real roots does not vary continuously like it does in the separable case). For example, if $f=x^{2}$, then $f$ has $0$ as a root of multiplicity $2$, but if we perturb the constant term in the positive direction, there won't be any real roots at all. But any $x^2+bx+c$ with $b^2>4c$ will be a perturbation that yields two real roots, so we've solved 1) for $f=x^2$.



I think this will not be difficult and can be done. Just consider $f=c\prod(x-r_i)^{k_i}$ and reduce to the case of a single $(x-r_i)^{k_i}$. Recenter the root to be $0$ so that we have $x^d$, and handle this case directly. This last part needs an argument, but I think it can be done.





  2. How many coefficients do I need to perturb to achieve objective 1?





For example, in most cases, it is not possible to do just by perturbing the
constant coefficient. The geometric intuition is that a degree $d$ real polynomial with distinct real roots will have $d-1$ local extrema, none of which occur at roots. However for those with only $k

Thank you for any comments, solutions or references to the literature that you can provide.


Answer



You can always disambiguate $n$ double roots by perturbing $n$ coefficients.



In general, a perturbation that changes a polynomial of degree $d$ having $d-k$ roots (some of which have multiplicity of $2$ or more) to one that has $d$ distinct real roots requires perturbing $k$ coefficients, and these can always be chosen to be the last $k$ (the constant term, then the linear term, and so forth).




However, there are cases where fewer coefficients will suffice. In particular, whenever $d < 3k-1$ you can "remedy" a polynomial with a deficiency of $k$ roots, by perturbing fewer than $k$ coefficients.



The proof is constructive. For the case of $k$ roots of multiplicity $2$, choose for each double root a sufficiently small quantity having the same sign as the second derivative at that root. Then form a polynomial of degree $d$ passing through the chosen values at the respective roots. Add that polynomial to the original and you get a perturbed polynomial with $d$ real roots. If the perturbation is larger than the allowed change, then divide the added polynomial by some sufficiently large positive number.



For example, consider
$$
P(x) = x^8-8x^7-8x^6+160x^5-86x^4-872x^3+768x^2+720x-675=0
$$
which has double roots at $x=-3, x=1, x=5$ and thus a deficiency of $3$ real roots.




Now add
$$
f(x) = -\frac1{80}(x^2-2x-7)
$$



to obtain
$$
\bar{P}(x) = x^8-8x^7-8x^6+160x^5-86x^4-872x^3+\frac{61439}{80}x^2+\frac{28801}{40}x-\frac{53993}{80}=0
$$
which has real roots at

$$
\{ -3.00285, -2.99715, -0.99998,+0.99012,+1.00988,+2.99998,+4.99715,+5.00285\}
$$
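
As a sanity check, one can verify these roots numerically; this is a hypothetical NumPy snippet I am adding, not part of the original answer:

    import numpy as np

    # Coefficients of the perturbed polynomial P(x) + f(x), highest degree first
    coeffs = [1, -8, -8, 160, -86, -872, 61439/80, 28801/40, -53993/80]
    roots = np.sort(np.roots(coeffs).real)
    print(roots)
    # approx. -3.00285, -2.99715, -0.99998, 0.99012, 1.00988, 2.99998, 4.99715, 5.00285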


sequences and series - Find the sum $1+\cos(x)+\cos(2x)+\cos(3x)+\dots+\cos((n-1)x)$





By considering the geometric series $1+z+z^{2}+...+z^{n-1}$ where $z=\cos(\theta)+i\sin(\theta)$, show that $1+\cos(\theta)+\cos(2\theta)+\cos(3\theta)+...+\cos(n-1)\theta$ = ${1-\cos(\theta)+\cos(n-1)\theta-\cos(n\theta)}\over {2-2\cos(\theta)}$



I've tried expressing $\cos(n\theta)$ as ${e^{in\theta}+e^{-in\theta}} \over {2}$ but I don't think that will lead anywhere. Does it help that $1+z+z^{2}+z^{3}+...+z^{n-1}$=$e^{0i\theta}+e^{i\theta}+e^{2i\theta}+e^{3i\theta}+...+e^{(n-1)i\theta}$?



So the sum $\sum_{r=0}^{n-1} e^{ir\theta}$=${e^{ni\theta}-1} \over {e^{i\theta}-1}$




Thank you in advance :)


Answer



Your sum can be rewritten: $\Re(\sum \exp{(i n \theta)})$ which is simply a geometric sum. Then make apparent the real and imaginary parts in your result.
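
Worked out (my addition, expanding the answer's hint): with the geometric sum above,
$$\sum_{r=0}^{n-1}e^{ir\theta}=\frac{e^{in\theta}-1}{e^{i\theta}-1}=\frac{(e^{in\theta}-1)(e^{-i\theta}-1)}{(e^{i\theta}-1)(e^{-i\theta}-1)}=\frac{(e^{in\theta}-1)(e^{-i\theta}-1)}{2-2\cos\theta},$$
and the real part of the numerator is
$$\operatorname{Re}\left[e^{i(n-1)\theta}-e^{in\theta}-e^{-i\theta}+1\right]=1-\cos\theta+\cos(n-1)\theta-\cos n\theta,$$
which is exactly the claimed identity.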


Thursday 27 October 2016

real analysis - Does there exist a differentiable function $f$ on $\mathbb R$ whose derivative $f'$ is discontinuous on $\mathbb Q$ and continuous elsewhere?



Recently I found a problem which asks:



Does there exist a differentiable function $f$ on $\mathbb R$ whose derivative $f'$ is discontinuous on $\mathbb Q$ and continuous elsewhere?




More generally, given any $F_\sigma$ set, does there exist a differentiable function on $\mathbb R$ whose derivative is discontinuous exactly on that set and continuous elsewhere?



I attempted to make a function whose derivative is $f'(x)=t(x)$, where $t(x)$ is the extended Thomae function (the Thomae function extended to $\mathbb R$ instead of $[0,1]$).
But my question is: does the function $t(x)$ have an antiderivative on $\mathbb R$?
I have not yet studied Riemann integrability, so I cannot conclude anything about it.


Answer



Yes:



Start with the standard $$h(t)=\begin{cases}t^2\sin(1/t),&(t\ne0),\\0,&(t=0).\end{cases}$$
So $h$ is differentiable and $h'$ is continuous except at $0$. Since $h'$ is locally bounded there exists a differentiable function $g$ with $g(t)=h(t)$ for $|t|\le1$ and such that $g$ and $g'$ are bounded.



Say $\Bbb Q=\{r_1,r_2,\dots\}$. Let $$f(t)=\sum 2^{-n}g(t-r_n).$$It follows that $f$ is differentiable and $$f'(t)=\sum 2^{-n}g'(t-r_n),$$since the last sum is uniformly convergent (cf. baby Rudin Thm 7.17.). It's clear that $f'$ is continuous at $t$ if $t$ is irrational, again by uniform convergence.



And $f'$ is discontinuous at $t$ if $t$ is rational. Details for that: Say $t=r_n$. Write $$f=f_1+f_2,$$where $$f_1(t)=2^{-n}g(t-r_n).$$Then as above, uniform convergence shows that $f_2'$ is continuous at $r_n$; since $f_1'$ is discontinuous there, so is $f'$.



Note



No, the Thomae function $f$ does not have an antiderivative. But there's a major gap in the explanation for this in various comments: It's clear that if $g(y)-g(x)=\int_x^yf$ then $g$ is constant, hence $g'\ne f$. But it's not clear why $g'=f$ would imply $g(x)-g(y)=\int_y^x f$, since after all $f$ is not continuous. Possibly one could justify this using some fancy version of FTC.




Edit. In fact it's easy to show that if $g$ is differentiable and $g'$ is Riemann integrable then $g(x)-g(y)=\int_y^x g'$; I was forgetting this. So the argument in those comments is fine, although probably someone might have mentioned the bit about Riemann integrability.



Anyway, there's a simple argument without FTC:



The point being that although a derivative need not be continuous, it can't be "too discontinuous". For example it's well known that a derivative cannot have a jump discontinuity. That's not quite enough here, but:





Lemma. If $g:\Bbb R\to\Bbb R$ is differentiable then $\limsup_{t\to0}g'(t)\ge g'(0)$.






Proof: It's an easy exercise from the definitions to show there exists a sequence $t_n$ decreasing to zero such that $$\frac{g(t_n)-g(t_{n+1})}{t_n-t_{n+1}}\to g'(0).$$So the MVT shows there exists a sequence $s_n\to0$ (with $s_n>0$) such that $$g'(s_n)\to g'(0).$$



Otoh if $f$ is the Thomae function then $$\limsup_{t\to0}f(t)<f(0),$$ so the lemma shows that $f$ is not a derivative.


calculus - Did I choose the correct method for solving this integral? (Integral Techniques)



I'm currently studying for my calc exam, and I've stumbled across a rather odd problem (at least for me). I've been doing integrals nonstop for 2 weeks now and I haven't seen anything like this, so I would like to know if my approach is correct. I feel like I'm not doing it correctly since my answer conflicts with my professor's answer key. It is as follows:



$$\int\sqrt{x^4+x^7}\;dx\quad\text{or}\quad\int(x^4+x^7)^\frac{1}{2}\;dx$$



Since I've been doing (mostly) complex trig sub and integration by parts, I didn't immediately know what to do. I decided to convert the integral to $\int(x^4+x^7)^{1/2}\,dx$ and multiply the exponents:




$$\int\sqrt{x^4+x^7}\;dx = \int(x^2+x^\frac{7}{2})\;dx$$



Then I use basic integration to yield:



$$\frac13x^3+\frac29x^\frac{9}{2}+\;C$$



Taking the derivative to check:



$$\frac {d}{dx}(\frac 13x^3 + \frac29x^\frac{9}{2}) = x^2+x^\frac{7}{2}$$




Seems to give me what I started with, but my answer key has this as the answer: $$\frac29(1+x^3)^\frac32+C$$



I can see some similarities to my answer, but it makes me feel like I made a mistake in my technique. Symbolab isn't capable of computing this integral for some reason, and WolframAlpha gives an answer far, far different than either of the integrals I (or my professor) have. Any input would be greatly appreciated, as I just want to be as prepared for my exam as possible. Thanks!


Answer



HINT



Take the common factor $x^4$ out of the square root and then put $x^3=t$.
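
Carried out (my sketch of the hinted substitution, taking $x\ge0$ so that $\sqrt{x^4}=x^2$): $\sqrt{x^4+x^7}=x^2\sqrt{1+x^3}$, and with $t=x^3$, $dt=3x^2\,dx$,
$$\int x^2\sqrt{1+x^3}\,dx=\frac13\int\sqrt{1+t}\,dt=\frac29(1+t)^{3/2}+C=\frac29(1+x^3)^{3/2}+C,$$
matching the professor's answer key.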


geometry - Equal perimeters of squares and right angled isosceles triangles



Consider a square $ABCD$ with length and breadth $l$. Now start folding the sides $AB$ and $AC$ so that the figure becomes something like this (figure not to scale):



All the vertical and horizontal folds/stairs are equal in length.




The perimeter of the figure is equal to the perimeter of the square.



As we increase the number of divisions, the length of each fold/stair decreases.
Let the number of stairs/folds be n. As $n\rightarrow\infty$ the figure becomes a right angled isosceles triangle BCD. The perimeter of triangle BCD should be equal to the perimeter of square ABCD since as we increase the number of folds/stairs the perimeter remains the same.



Can anyone please correct me where I have gone wrong?


Answer



The question Is Pi equal to 4? does the same thing, using a circle inside of a square and slowly removing rectangles from the sides of the square, and then using $\text{circumference}=2\pi r$ to say that $\pi$ is equal to $4$. The answers there will probably answer this quite well.


About powers of irrational numbers



Square of an irrational number can be a rational number e.g. $\sqrt{2}$ is irrational but its square is 2 which is rational.



But is there an irrational number whose square root is a rational number?




Is it safe to assume, in general, that the $n^{th}$ root of an irrational number will always be irrational?


Answer



Obviously, if $p$ is rational, then $p^2$ must also be rational (trivial to prove).



$$ p \in \mathbb Q \Rightarrow p^2 \in \mathbb Q. $$



Taking the contrapositive, we see that if $x$ is irrational, then $\sqrt x$ must also be irrational.



$$ p^2 \notin \mathbb Q \Rightarrow p \notin \mathbb Q. $$







By negative power I assume you mean the $(1/n)$-th power (it is obvious that $(\sqrt2)^{-2} = \frac12\in\mathbb Q$). It is true by the statement above: just replace $2$ by $n$.


What is the probability that a fair six-sided die lands on an even number three out of five times it is rolled?




What is the probability that a fair six-sided die lands on an even number three out of five times it is rolled?




For one roll, the outcomes are $1, 2, 3, 4, 5, 6$ of which $2, 4, 6$ are even, so the probability is $3/6=1/2$. But how to deal with "three out of five times" part?


Answer



Because the number of even sides is equal to the number of odd sides, you can simplify the problem to "What is the probability that a coin lands on heads three times out of five?"




For which the probability is $\binom{5}{3}$ out of $32$, or $$\frac{1}{32} \cdot \frac{5 \cdot 4 \cdot 3}{3\cdot 2 \cdot 1}=\frac{10}{32}= \frac{5}{16}.$$
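
For instance, one can confirm this by brute force (a hypothetical snippet I am adding):

    from fractions import Fraction
    from itertools import product

    outcomes = list(product("HT", repeat=5))               # all 2^5 = 32 sequences
    favorable = sum(o.count("H") == 3 for o in outcomes)
    print(favorable, Fraction(favorable, len(outcomes)))   # 10 5/16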


Wednesday 26 October 2016

trigonometry - How were these values of sin x found?

I'm working on Trigonometry problems that have to do with inverse functions.



So I have an example problem that goes like this:



$$\ 10\sin^2x = \sin x$$




and apparently the solution set is:



$\sin x = 0 $ if $ x = 0.0 , 3.1$



and $\sin x = 1/10$ if $ x = 0.1, 3.0$



I know how to do these problems in terms of radians, but I don't know what these numbers are referring to on the unit circle, because when I assume that $1 = 360$ degrees, the equations still don't make sense.



Can someone please help me with how these values were computed? I can't find any examples in my textbook.




Thank you!
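
A note I am adding (the post has no answer here): the numbers are not degrees; the four values look like radian measures rounded to one decimal place:
$$\sin x=0:\ x=0,\ \pi\approx3.1;\qquad \sin x=\tfrac1{10}:\ x=\arcsin\tfrac1{10}\approx0.1,\ \pi-\arcsin\tfrac1{10}\approx3.0.$$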

Limit $\lim_{u\to0} \frac{3u}{\tan 2u}$

I’m currently stuck trying to evaluate this limit,
$$
\lim_{u\to0} \frac{3u}{\tan(2u)},
$$


without using L’Hôpital’s rule. I’ve tried both substituting $\tan(2u)=\dfrac{2\tan u}{1-\tan^2 u}$ and $\tan 2u=\dfrac{\sin 2u}{\cos 2u}$, without success. Am I on the right path to think trig sub?
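
A standard route, for what it's worth (my note, since the post is otherwise unanswered here): combine the limit $\lim_{v\to0}\frac{v}{\tan v}=1$ with a rescaling:
$$\frac{3u}{\tan 2u}=\frac32\cdot\frac{2u}{\tan 2u}\longrightarrow\frac32\cdot1=\frac32.$$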

sequences and series - How to know if it diverges or converges and finding the convergent value

I am given the following succession/series/sequence:



$$ a_n = \frac{4n^5 +4n^3+n}{5n^4-2n^5+n^2} $$



How do I find out if it converges or diverges, and how do I find the value it converges to?




I am quite lost on the subject.



I've read that a geometric succession/series/sequence is convergent if the ratio is less than $1$ in absolute value, but I'm not sure if this is a geometric series.



Help is really appreciated, thanks in advance.






P.S.: My native language is not English, so I'm not sure what the appropriate term would be: is it succession, series, or sequence?
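
One standard move, added here as a sketch (not part of the original post): divide numerator and denominator by the highest power $n^5$:
$$a_n=\frac{4+\frac{4}{n^2}+\frac{1}{n^4}}{\frac{5}{n}-2+\frac{1}{n^3}}\longrightarrow\frac{4}{-2}=-2,$$
so the sequence converges to $-2$ (it is not geometric).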

real analysis - Showing that $\frac{\sqrt[n]{n!}}{n} \rightarrow \frac{1}{e}$




Show:$$\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}= \frac{1}{e}$$



So I can expand the numerator by the geometric mean, letting $C_{n}=\left(\ln(a_{1})+...+\ln(a_{n})\right)/n$. Let the numerator be called $a_{n}$ and the denominator $b_{n}$. Is there a way to use this statement so that I could force the original sequence into the form of $1/\left(1+\frac{1}{n}\right)^n$?


Answer



I would like to use the following lemma:




If $\lim_{n\to\infty}a_n=a$ and $a_n>0$ for all $n$, then we have
$$
\lim_{n\to\infty}\sqrt[n]{a_1a_2\cdots a_n}=a \tag{1}
$$




Let $a_n=(1+\frac{1}{n})^n$, then $a_n>0$ for all $n$ and $\lim_{n\to\infty}a_n=e$. Applying $(1)$ we have
$$
\begin{align}
e&=\lim_{n\to\infty}\sqrt[n]{a_1a_2\cdots a_n}\\
&=\lim_{n\to\infty}\sqrt[n]{\left(\frac{2}{1}\right)^1\left(\frac{3}{2}\right)^2\cdots\left(\frac{n+1}{n}\right)^n}\\
&=\lim_{n\to\infty}\sqrt[n]{\frac{(n+1)^n}{n!}}\\
&=\lim_{n\to\infty}\frac{n+1}{\sqrt[n]{n!}}=\lim_{n\to\infty}\frac{n}{\sqrt[n]{n!}}+\lim_{n\to\infty}\frac{1}{\sqrt[n]{n!}}=\lim_{n\to\infty}\frac{n}{\sqrt[n]{n!}}
\end{align}\tag{2}
$$
where we use (1) in the last equality to show that
$
\lim_{n\to\infty}\frac{1}{\sqrt[n]{n!}}=0.
$



It follows from (2) that
$$
\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}=\frac{1}{e}.
$$


summation - Closed-form for Floor Sum 1

Does a closed form exist for the following sum?
$$\sum_{k=0}^n \lfloor \sqrt{k} + \sqrt{k + n} \rfloor$$



If not, why is this sum so radically different than the sums below?



Closed forms do exist for the following sums*:
$$\sum_{k=0}^n \lfloor \sqrt{k + n} \rfloor$$

$$\sum_{k=0}^n \lfloor \sqrt{k} \rfloor$$



There is this floor functional identity:
$$\lfloor \sqrt{k} + \sqrt{k + 1} \rfloor = \lfloor\sqrt{4k+2}\rfloor$$
Don't know if this will help.



Thanks



*Existing closed forms




$$\sum_{k=0}^n \lfloor \sqrt{k} \rfloor=2\left(\sum_{k=0}^{\lfloor \sqrt{n} \rfloor-1}k^2\right)+\left(\sum_{k=0}^{\lfloor \sqrt{n} \rfloor-1}k\right)+\lfloor\sqrt{n}\rfloor\left(n-\lfloor\sqrt{n}\rfloor^2+1\right)$$
$$\left(\sum_{k=0}^n k^2\right)=\frac{2n^3+3n^2+n}{6}$$
$$\left(\sum_{k=0}^n k\right)=\frac{n^2+n}{2}$$
$$\sum_{k=1}^n \lfloor \sqrt{k+C} \rfloor=\sum_{k=C+1}^{C+n} \lfloor \sqrt{k} \rfloor=\sum_{k=0}^{C+n} \lfloor \sqrt{k} \rfloor-\sum_{k=0}^{C} \lfloor \sqrt{k} \rfloor$$

elementary set theory - Question regarding the union set of the family and intersection set of the family



I'm really struggling with this question.



"Let $\{A_n\}_{n\in\mathbb N}$ be a family of subsets of a set $S$. Let
$$X:=\bigcup_{n\in\mathbb N}\left(\bigcap_{k\geq n}A_k\right),\qquad Y:=\bigcap_{n\in\mathbb N}\left(\bigcup_{k\geq n}A_k\right).$$
Does any of the relations X ⊂ Y, X = Y, Y ⊂ X hold?"




Currently, by using the definitions of unions and intersections, I have already proved that X ⊂ Y. However, I'm stuck trying to prove whether the reverse inclusion, Y ⊂ X, holds. Intuitively, something tells me that Y ⊂ X holds, but I can't prove it correctly. I would appreciate some help. Thanks.


Answer



Suppose $A_{2k}=\{1\}$, and $A_{2k+1}=\{2\}$, for all $k\ge 0$; i.e. the even-indexed sets are all the same, and the odd-indexed sets are also all the same. You will find that $X=\emptyset$ while $Y=\{1,2\}$.


Possible mistake in Rudin's definition of limits in the extended real number system



From Baby Rudin page 98




This seems to be a mistake since we have seemingly absurd results like
$$ \operatorname{graph}(f) = \{(0,0)\} \Rightarrow \lim_{x\to \infty} f(x)= 0 $$
We define the limit (for $x$ real) only for limit points of $E$, so my initial thinking is to enforce that every neighborhood of $x$ must contain infinitely many points of $E$. This would imply that limits at infinity could only happen for unbounded $E$, so the previous example would not be true. Is there a more standard way of defining such limits?



This has been discussed before at Definition of the Limit of a Function for the Extended Reals but I'm more interested in the infinite case and how to fix the definition.



Answer



Yes, there is a more standard way. If you know topology, what you are doing is attaching two points to $\mathbb{R}$, which we will call $\infty$ and $-\infty$, giving the obvious order and imposing the order topology. What follows is that limits are now well-defined as in any topological space, and your proposed definition is equivalent. Just as in any topological space, limits are defined on limit points only.



This has the advantage of putting away the "special" feeling and treatment about $\infty$ and $-\infty$, putting them in the same ground as any real number.






I made this blog post some time ago about some considerations on the extended real line from a topological viewpoint. You may find it useful.


Tuesday 25 October 2016

integration - Prove the integral $\int_0^1 \frac{H_t}{t}\,dt=\sum_{k=1}^{\infty} \frac{\ln (1+\frac{1}{k})}{k}$



By numerical results it follows that:



$$\int_0^1 \frac{H_t}{t}dt=\sum_{k=1}^{\infty} \frac{\ln (1+\frac{1}{k})}{k}=1.25774688694436963$$



Here $H_t$ is the harmonic number, which generalizes the harmonic sum and has an integral representation:




$$H_t=\int_0^1 \frac{1-y^t}{1-y}~dy$$






If anyone has doubts about convergence, we have:



$$\lim_{t \to 0} \frac{H_t}{t} = \frac{\pi^2}{6}$$



Which would be another nice thing to prove, although I'm sure this proof is not hard to find.







It is also interesting that the related integral gives Euler-Mascheroni constant:



$$\int_0^1 H_t dt=\gamma$$


Answer



We apply
$$
\frac{1-y^t}{1-y} = (1-y^t)(1+y+y^2+ \ldots)=(1-y^t)+y(1-y^t) +y^2(1-y^t)+\ldots
$$
Each term gives after $y$-integration,
$$
\frac t{t+1} + \frac{t}{2(t+2)}+ \frac t{3(t+3)} + \ldots
$$
Then we divide these by $t$,
$$
\frac 1{t+1} + \frac{1}{2(t+2)}+ \frac 1{3(t+3)} + \ldots
$$
Integrating over $t$ from $0$ to $1$, we have the result.
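
Explicitly, the last integration is (a step I am filling in):
$$\int_0^1\frac{dt}{k(t+k)}=\frac1k\ln\frac{k+1}{k}=\frac{\ln\left(1+\frac1k\right)}{k}.$$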




Any interchange of integral and summation can be justified by the Monotone Convergence Theorem.


number theory - Constructing an irreducible polynomial of degree $2$ over $mathbb{F}_p$

I want to construct an irreducible polynomial of degree $2$ over $\mathbb{F}_p$ where $p$ is a prime that can be written as $4k+1$. My attempt is as follows: we can assume that this polynomial is of the form ${x^2} + ax + b$ for some $a,b \in {\mathbb{F}_p}$. So for all $\lambda \in {\mathbb{F}_p}$, $p$ doesn't divide ${\lambda ^2} + a\lambda + b$. It follows that ${\lambda ^2}$ is not equal to $a\lambda + b \bmod p$. If we can find some $a,b \in {\mathbb{F}_p}$ such that $a\lambda + b$ is a nonresidue for all $\lambda \in {\mathbb{F}_p}$, we are done. But I cannot. I await your response.

discrete mathematics - Prove that for all positive integers $n$, $9\mid(11^n - 2^n)$

Prove that for all positive integers $n$, $9\mid(11^n - 2^n)$.



So the base case would be



9 * k = 11^1 - 2^1
9 * k = 9

k = 1 so yes


The inductive hypothesis would be the fact that $(11^n-2^n)$ is divisible by $9,$



So I thought then I would have to show that $(11^{(n+1)}-2^{(n+1)})$ is divisible by $9$.



11^(n+1) - 2^(n+1)
11^(n) * 11^1 - 2^n * 2^1
(11-2) * (11^n-2^n)

9*(11^n-2^n)


Is this algebraically correct?
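
The claim itself is easy to spot-check numerically (a hypothetical snippet I am adding; note it verifies the statement, not the algebra above, since $(11-2)(11^n-2^n)$ expands to $11^{n+1}-11\cdot2^n-2\cdot11^n+2^{n+1}$, which is not $11^{n+1}-2^{n+1}$):

    for n in range(1, 20):
        assert (11**n - 2**n) % 9 == 0
    print("9 divides 11^n - 2^n for n = 1, ..., 19")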

abstract algebra - Remainders of functions



I was given this question:



Consider $\mathbb{R}[x]$ and lets $\mathbb{R}_1[x]=\{a+bx:a,b\in\mathbb{R}\}$. Define the map $\phi:\mathbb{R}[x] \rightarrow \mathbb{R}_1[x]$ by letting $\phi(f(x))$ be the remainder $r(x)$, when $f(x)$ is divided by $x^2+1$. This is well-defined, and in $\mathbb{R}_1[x]$, by the division algorithm in $\mathbb{R}[x]$.



I would like to calculate $\phi(3x^2+4x+7)$ but I am confused as to what is being asked here.




The wording is throwing me off, so I am assuming that I want to take $3x^2+4x+7$ and divide it by $x^2+1$, and the outputted remainder, $r(x) \in \mathbb{R}_1[x]$ is what I am looking for?



How do I approach this type of problem?


Answer



The problem is saying that your map gives you the remainder of a polynomial (with real coefficients) divided by $x^2+1$. So if you take a polynomial $p(x)\in\mathbb{R}[x]$ and divide this polynomial by $(x^2+1)$ you get a new polynomial $q(x)$ such that $$p(x) = (x^2+1)q(x)+ R(x)$$ where $R(x)$ is the remainder of the division. Your function is then $$\phi:\mathbb{R}[x]\rightarrow \mathbb{R}_1[x]\\p(x)\mapsto R(x)$$ As a trivial example, let's take the polynomial $p(x) = x^2+1$; obviously $$\phi(p(x))=\phi(x^2+1)=0,$$ so the kernel of this map consists of all the multiples $q(x)(x^2+1)$ with $q(x)\in\mathbb{R}[x]$.



To find the value of the map for the polynomial $3x^2+4x+7$, instead of doing long division, you just have to extract a $(x^2+1)$ factor as many times as you can, namely: $$3x^2+4x+7 = 3(x^2+1)+4x+4$$ so that $$\frac{3x^2+4x+7}{x^2+1}=\frac{3(x^2+1)+4x+4}{x^2+1}= \frac{3(x^2+1)}{x^2+1} + \frac{4x+4}{x^2+1} = 3+4\,\frac{x+1}{x^2+1};$$ here $3$ is the quotient and the remainder is $4x+4$. So in the end $$\phi(3x^2+4x+7) = 4x+4$$
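
One can confirm the quotient and remainder with SymPy's polynomial division (a hypothetical check I am adding):

    from sympy import div, symbols

    x = symbols('x')
    quotient, remainder = div(3*x**2 + 4*x + 7, x**2 + 1, x)
    print(quotient, remainder)  # 3 4*x + 4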



Edit




The fact that your remainders $R(x)$ have to be in $\mathbb{R}_1[x]$ means that you have to carry out the division until you get something of the form $a+bx$ and not a polynomial of higher degree. For example, if you take some polynomial of fifth degree, call it $$a(x) = a+bx+cx^2+dx^3+ex^4+fx^5,$$ then, dividing by $x^2+1$, you get $$a(x) = q(x)(x^2+1)+R(x)$$ where $q(x)$ is some cubic polynomial and $R(x)$ has degree at most one; the division algorithm stops exactly when the remainder has degree less than $\deg(x^2+1)=2$.


probability - Explain why $E(X) = \int_0^\infty (1-F_X (t)) \, dt$ for every nonnegative random variable $X$




Let $X$ be a non-negative random variable and $F_{X}$ the corresponding CDF. Show,
$$E(X) = \int_0^\infty (1-F_X (t)) \, dt$$
when $X$ has : a) a discrete distribution, b) a continuous distribution.





I assumed that for the case of a continuous distribution, since $F_X (t) = \mathbb{P}(X\leq t)$, then $1-F_X (t) = 1- \mathbb{P}(X\leq t) = \mathbb{P}(X> t)$. Although how useful integrating that is, I really have no idea.


Answer



For every nonnegative random variable $X$, whether discrete or continuous or a mix of these,
$$
X=\int_0^X\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,\mathrm dt,
$$
hence




$$
\mathrm E(X)=\int_0^{+\infty}\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt.
$$







Likewise, for every $p>0$, $$
X^p=\int_0^Xp\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,p\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,p\,t^{p-1}\,\mathrm dt,
$$
hence





$$
\mathrm E(X^p)=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\geqslant t)\,\mathrm dt.
$$
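
As a quick sanity check (my addition): for $X\sim\text{Exp}(\lambda)$ one has $1-F_X(t)=e^{-\lambda t}$, and indeed
$$\int_0^{\infty}e^{-\lambda t}\,\mathrm dt=\frac1\lambda=\mathrm E(X).$$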



integration - Evaluate $\int \frac{1+\cos(x)}{\sin^2(x)}\,\operatorname d\!x$

I'm trying to solve this integral; I did the following steps to solve it but don't know how to continue.
$$\int \dfrac{1+\cos(x)}{\sin^2(x)}\,\operatorname d\!x$$

$$\begin{align}\int \dfrac{\operatorname d\!x}{\sin^2(x)}+\int \frac{\cos(x)}{\sin^2(x)}\,\operatorname d\!x &= \int \dfrac{\operatorname d\!x}{\sin^2(x)}+\int \frac{\cos(x)}{1-\cos^2(x)}\,\operatorname d\!x \\
&=\int \sin^{-2}(x)\,\operatorname d\!x + \int \cos(x)\,\operatorname d\!x - \int \frac{\operatorname d\!x}{\cos(x)}\end{align}$$
Any suggestions how to continue?

Thanks!
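
For what it's worth (a sketch I am adding, since the post has no answer here): the two pieces can be integrated directly, without passing through $\cos$ in the denominator:
$$\int\frac{\operatorname d\!x}{\sin^2(x)}=-\cot x+C,\qquad \int\frac{\cos(x)}{\sin^2(x)}\,\operatorname d\!x=-\frac{1}{\sin x}+C$$
(the second via $u=\sin x$), so the integral is $-\cot x-\csc x+C$.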

Monday 24 October 2016

calculus - Solve a limit without L'Hopital: $\lim_{x\to0} \frac{\ln(\cos5x)}{\ln(\cos7x)}$

I need to solve this limit without using L'Hopital's rule. I have attempted to solve this countless times, and yet, I seem to always end up with an equation that's far more complicated than the one I've started with.



$$ \lim_{x\to0} \frac{\ln(\cos5x)}{\ln(\cos7x)}$$



Could anyone explain to me how to solve this without using L'Hopital's rule? Thanks!
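
One standard route (my sketch, since the post has no answer here): for small $x$, $\cos u=1-\frac{u^2}{2}+o(u^2)$ and $\ln(1+v)=v+o(v)$, so $\ln(\cos ax)\sim-\frac{a^2x^2}{2}$ and
$$\lim_{x\to0}\frac{\ln(\cos 5x)}{\ln(\cos 7x)}=\frac{-25/2}{-49/2}=\frac{25}{49}.$$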

proof theory - How to prove the mathematical induction is true?



I have no idea about the underlying theory from which the mathematical induction was derived.



How to prove the mathematical induction is true?


Answer



A "proof" in mathematics always means a proof in some system/theory. You have to specify the system/theory that you want a proof for the induction axiom. (You should also formally specify what you mean by the induction axiom since there are various axioms that are called induction axiom.)




The induction axiom in an arithmetical theory (like Peano arithmetic) is an axiom, i.e. it is one of the axioms of the theory, and therefore the proof is just a single line stating the axiom.



In a set theory like $ZFC$ we can prove the induction axiom for the set of natural numbers using the fact that the set of natural numbers is defined as the smallest inductive set that contains zero, and the proof is almost trivial. (An inductive set means a set that contains the successor of $x$ whenever it contains $x$).



In high school or undergraduate courses, when one is asked to prove induction axiom, they are usually asked to derive the induction axiom from some other axioms like the least number principle for natural numbers.



Another possible question is what are the justifications for believing that the induction axiom is true (or for accepting it as an axiom), which is a question in philosophy of mathematics and might be more suitable for MathOverflow.


Let $p,q$ be irrational numbers such that $p^2$ and $q^2$ are relatively prime. Show that $\sqrt{pq}$ is also irrational.



Progress:





Since $p,q$ are irrational and $p^2$ and $q^2$ are relatively prime, $p^2\cdot{q^2}$ cannot be a perfect square, so $pq$ is also irrational. Suppose $pq=k$; then: $$\sqrt{k}\cdot\sqrt{k}=\sqrt{pq}\cdot\sqrt{pq}=k$$
This implies that $\sqrt{pq}$ is irrational, since $k$ is irrational and $p^2\cdot{q^2}$ cannot be a perfect square, $p^2$ and $q^2$ being relatively prime; and that concludes the proof.




The above lines are my attempt to prove the assertion. Is the proof correct? If not, how can I improve or fix it?



Regards


Answer



The first part is correct, but you could clarify it a little bit:





Since $p$ and $q$ are irrational, they are not integers



Since $p$ and $q$ are not integers, $p^2$ and $q^2$ are not perfect squares



Since in addition to that $p^2$ and $q^2$ are relatively prime, $p^2q^2$ is not a perfect square



Since $p^2q^2$ is not a perfect square, $\sqrt{p^2q^2}=pq$ is irrational








The second part is somewhat obscure, but you could simply use the following argument instead:




Since $pq$ is irrational, it is not a ratio of two integers



Since $pq$ is not a ratio of two integers, it is not a perfect square



Since $pq$ is not a perfect square, $\sqrt{pq}$ is irrational




Range of two equal functions




I know two functions $f$ and $g$ are equal if



(1) their domains are equal



(2) their co-domains are equal



(3) $f(x) = g(x)$ for every $x$ in the domain



I want to ask: if two functions are equal, is it necessary that they have equal ranges too?



Answer



Take $f : A \rightarrow B$ and $g : A \rightarrow B$. If these functions are equal, then $\forall x \in A.\ f(x) = g(x).$ Consider their ranges $f(A)$ and $g(A).$ If their ranges are not equal, $f(A) \neq g(A),$ which implies that there exists an element in one of their ranges that is not in the other. Without loss of generality, say there exists some $y \in f(A)$ such that $y \notin g(A).$ This means that
$$\exists x \in A. f(x) = y \wedge \forall x \in A. g(x) \neq y.$$
Fix this $x \in A$ such that $f(x) = y.$ Because $f$ and $g$ are equal, then
$$f(x) = g(x) = y \implies g(x) = y,$$
which is a contradiction of the statement that $\forall x\in A. g(x) \neq y.$ Thus, the ranges of $f$ and $g$ must be equal.



Your suggested functions are not equal, since



$$f\circ f(3) = 1 \neq 2 = f(3).$$



Sunday 23 October 2016

When working with complex numbers, how can you solve for $x$ when it's inside $Re()$?



I'm trying to figure out the impedance of a capacitor. My textbook tells me the answer is $\frac{-i}{\omega C}$ and plugging that into the equation does work but I wanted to come up with that answer myself. So I wrote out the equation with what I know:




$$-V_0\omega C\sin\omega t = Re\left( \frac{V_0(\cos\omega t + i\sin\omega t)}{x} \right)$$



This is where I get stuck. I don't know how to isolate $x$ given that it is inside the $Re()$ function. Trying to get somewhere, I tried this:



$$x = \frac{V_0(\cos\omega t + i\sin\omega t)}{-V_0\omega C\sin\omega t} = \frac{\cos\omega t}{-\omega C\sin\omega t} - \frac{i}{\omega C}$$



Seeing $-\frac{i}{\omega C}$ makes me feel like I'm on the right track. Now I just need to figure out how to get rid of the first part of that answer. And I'm guessing that if I knew how to isolate $x$ from the first equation, that would do the trick. So how can I isolate $x$ when it is included in the $Re()$ function?


Answer



Re() is a projection map; Re(a+bi) = a. Thus Re(z) = z-iIm(z). So given RHS = Re(z), we have that RHS+bi = z for some real b. Note that Re(z) = a does not yield a single value of z as a solution, but instead gives a vertical line in the complex plane. Each point on that line will give a different value for x.




$$-V_0\omega C\sin\omega t + bi = \frac{V_0(\cos\omega t + i\sin\omega t)}{x} $$



In terms of b, x will be:



$$\frac{V_0(\cos\omega t + i\sin\omega t)}{-V_0\omega C\sin\omega t + bi} $$



Assuming that $\omega$, $V_0$, and $C$ are real numbers, they can be "absorbed" into b; b is an arbitrary real number, so dividing by a real number just gives another arbitrary real number. So the above can be rewritten as



$$\frac{(\cos\omega t + i\sin\omega t)}{-\omega C(\sin\omega t + bi)} $$




Factoring an i out of the numerator, we get



$$\frac{i(\sin\omega t-i\cos\omega t )}{-\omega C(\sin\omega t + ib)} $$



Again, this describes a solution set, not a particular x. But if you take $b = -\cos\omega t$, then you recover the given expression. Any motivation for that choice will have to come from further facts about the capacitance rather than mathematical properties.


abstract algebra - What is wrong with this proof of Wedderburn's little theorem?



Wedderburn's little theorem $\quad$ every finite domain $A$ is a field.




Proof $\quad$ Let $x$ be a nonzero element of $A$. Because $A$ is finite, there
exist positive integers $n$, $k$ such that $x^n = x^{n + k}$. It is easy to
see by induction that the set $E = \left \{x^i : i \in \mathbf{N}^*\right\}$
does not contain $0$; it follows therefore from $x^n\left(1 - x^k\right) = 0$
that $x^k = 1$. Thus, $x^{k - 1}$ is the inverse of $x$ (when $k = 1$, $x$ has
inverse $1$).



All the proofs I have seen of this result are much more sophisticated than mine. Hence, I am doubting its correctness and could use a second opinion.


Answer




Wedderburn's little theorem states as you wrote above, that a finite domain is a field. A field is a commutative domain $A$, such that every nonzero $x \in A$ has a multiplicative inverse. Your proof only shows that any finite domain is a skew field. You must also prove that $A$ is commutative, which needs more sophisticated arguments.


calculus - Which part of the train stays for the longest time in the station?


A train with length $L$ is moving towards a train station of length $S$ with speed $v$.



The train starts to decelerate with acceleration $-a$ as soon as its head reaches the station, and continues until it completely stops. Right after the train completely stops, it starts to accelerate with acceleration $a$ until its tail leaves the station with speed $v$.



Which part of the train stays for the longest time in the station?





The correct answer is the mid-part of the train: if the head stays in the station for time $t$, then the mid-part of the train stays for $\sqrt{2}\,t$. Could anyone give me some clue on how to get this?

calculus - Is this epsilon-delta proof valid?



Prove using $\epsilon- \delta$ that $ \displaystyle \lim_{x \to -4} \frac { x^2 + 6x + 8 }{x + 4} = - 2 $.

Here's a proposed proof:






For $\delta \leq 1$, i.e. $ | x + 4 | < 1 $ which guarantees $x < -1 $, one can argue:



$ \left| \dfrac { x^2 + 6x + 8 }{x + 4} + 2 \right| = \left| \dfrac { x^2 + 8x + 16}{x + 4}\right| < \left| \dfrac { x^2 + 8x + 16}{x}\right| < |x^2 + 8x + 16| = |(x+4)^2| = (x+4)^2 \ . $



Let's require $(x+4)^2 < \epsilon$, which implies $ | x + 4 | < \sqrt \epsilon $. Therefore we have $\delta = \min \{1, \sqrt \epsilon \}$.







Is it a valid proof or are there any loopholes I'm unaware of? Side-note: I realize there are different -- and perhaps simpler -- ways to prove this, I just want to see if this very approach is valid.


Answer



You have $x^2 + 6x + 8 = (x+2)(x + 4).$ Try factoring and canceling.
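
Spelled out (my addition, completing the hint): for $x\neq-4$,
$$\frac{x^2+6x+8}{x+4}=\frac{(x+2)(x+4)}{x+4}=x+2,$$
so $\left|\frac{x^2+6x+8}{x+4}+2\right|=|x+4|$, and $\delta=\epsilon$ works with no case analysis.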


algebra precalculus - How to prove the identity $3\sin^4x-2\sin^6x=1-3\cos^4x+2\cos^6x$?



I'm trying to prove a trigonometric identity but I can't. I've been trying a lot but I can't prove it. The identity says like this:
$$3\sin^4x-2\sin^6x=1-3\cos^4x+2\cos^6x$$




The identity would be easy if $1-\cos^4x=\sin^4x$ and $1-\cos^6x=\sin^6x$, but we know that $\sin^4x+\cos^4x$ isn't equal to $1$ and $\sin^6x+\cos^6x$ isn't equal to $1$.



Can anybody help me?!



Thank you!


Answer



$$3\sin^4 x - 2\sin^6 x = 3 (\sin^2 x)^2 - 2(\sin^2x)^3 = (\sin^2 x)^2(3 - 2\sin^2 x)\cdots$$



Now we can use the Pythagorean Identity: $$\sin^2x + \cos^2x = 1 \iff \sin^2 x = 1 -\cos^2 x$$







$$\begin{align} 3\sin^4 x - 2\sin^6 x & = 3 (\sin^2 x)^2 - 2(\sin^2 x)^3 \\ \\
& = (\sin^2x)^2 (3 - 2\sin^2 x) \\ \\
& = (1 - \cos^2 x)^2\Big(3 - 2(1 - \cos ^2 x)\Big)\\ \\
& = (1- 2\cos^2 x + \cos^4 x)(1 + 2 \cos^2 x) \\ \\
& = 1 - 3 \cos^4 x + 2 \cos^6 x\end{align}$$


Find the number of terms of an arithmetic progression.



I had an exam today, within the exam, this question was the hardest.




If we have an arithmetic progression whose number of terms is even, the total of its even-numbered terms is $30$, and the total of its odd-numbered terms is $24$.



The difference between the last term and the first one is $10.5$.



(If anything is unclear, sorry for it; I translated the question into English.)


Answer



Let $a,d,2m$ be the first term, the common difference, the number of terms respectively where $m\in\mathbb N$.



This answer supposes that "total of it's even terms $=30$" means that

$$(a+d)+(a+3d)+\cdots +(a+(2m-1)d)=\sum_{i=1}^{m}(a+(2i-1)d)=30,$$
i.e.
$$am+2d\cdot\frac{m(m+1)}{2}-dm=30\tag1$$



Also, this answer supposes that "total of it's odd terms $=24$" means that
$$a+(a+2d)+\cdots +(a+(2m-2)d)=\sum_{i=1}^{m}(a+(2i-2)d)=24,$$
i.e.
$$am+2d\cdot\frac{m(m+1)}{2}-2dm=24\tag2$$



And we have

$$|a+(2m-1)d-a|=10.5\tag3$$



Now solve $(1)(2)(3)$ to get $a,d,2m$.




From $(1)-(2)$, we have $d=\frac 6m$. From $(3)$, we have $(2m-1)|\frac 6m|=10.5\Rightarrow m=4$. Finally, from $(1)$, we have $d=a=\frac 32$.
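
These values are easy to verify numerically (a hypothetical snippet I am adding):

    a, d, m = 1.5, 1.5, 4
    terms = [a + i * d for i in range(2 * m)]   # 1.5, 3.0, ..., 12.0
    print(sum(terms[1::2]))      # even-numbered terms: 30.0
    print(sum(terms[0::2]))      # odd-numbered terms: 24.0
    print(terms[-1] - terms[0])  # 10.5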



abstract algebra - Group isomorphism from subgroup of $U(n)\times \mathbb{Z}_n$ to $D_n$, the dihedral group of order $2n$.




I have a group $G_n = U(n)\times \mathbb{Z}_n$ with the operation $(a,x)(b,y) = (ab,ay+x)$ and I have a subgroup $H_n = \{(a,b) \in G_n | a = \pm 1\}$ which I want to show is isomorphic to $D_n$ the dihedral group of order 2n.



I know for an isomorphism $\phi:H_n \to D_n$ I need $\phi(ab) = \phi(a)\phi(b)$ for $a,b \in H_n$. I know they have the same number of elements so that's good I suppose but I'm having trouble seeing how to preserve group operations with $\phi$.



I've tried to take $\phi: H_n \to \zeta_n$ the set of complex nth roots of unity under multiplication and conjugation (which is isomorphic to $D_n$ right?) because I thought it might be easier and I could then rely on the composition of isomorphisms being an isomorphism but I've not managed to find a working map $\phi$ and I'm very much starting to doubt that it's easier to go this route.



Can anybody point me in the right direction?


Answer



Indeed, $H_n$ and $D_n$ are isomorphic groups. First you may consider $D_n=\langle\tau,\sigma\mid\;\tau^2=1=\sigma^n,\ \sigma\tau=\tau\sigma^{n-1}\rangle$, whose elements are the products $\tau^j\sigma^i$, and define $\phi:H_n\to{D_n}$ such that $\phi(-1,0)=\tau$ and $\phi(1,1)=\sigma$. For instance you may take $\phi(a,b)=\tau^{\frac{1-a}{2}}\sigma^b$. The rest is straightforward.


Saturday 22 October 2016

random - How to handle dice probability? i.e., how much more likely would 3 six-sided dice give a higher sum than 3 four-sided dice?

I am playing at making my own table-top gaming system/rules and I wanted to have a better handle on how likely different dice combinations are to give a higher result than one another. I know that a six-sided die roll averages 3.5, and an eight-sided die roll averages 4.5, but I still don't quite have a grasp on just how likely it is that an 8-sided die comes up with a higher result than a 6-sided one.



I would also like to know how adding integers to die results affects their comparative advantage as well, like how often would the sum of 3 six-sided dice with a 1 added to the final result give a higher outcome than just 3 six-sided dice?




Thanks in advance for any advice. I'm just not really sure where to start with this; I focused mostly on algebra/calc/trig in school and never really did any probability/stat.
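
Since the question is where to start: exact answers for small dice pools can be had by enumerating every outcome. Here is a minimal sketch (hypothetical code and names, added by me, not from the original post):

    from fractions import Fraction
    from itertools import product

    def win_probability(dice_a, dice_b, bonus_a=0):
        # P(sum(dice_a) + bonus_a > sum(dice_b)); dice given as lists of side counts
        rolls_a = list(product(*[range(1, s + 1) for s in dice_a]))
        rolls_b = list(product(*[range(1, s + 1) for s in dice_b]))
        wins = sum(1 for ra in rolls_a for rb in rolls_b
                   if sum(ra) + bonus_a > sum(rb))
        return Fraction(wins, len(rolls_a) * len(rolls_b))

    print(win_probability([8], [6]))                 # one d8 beats one d6
    print(win_probability([6, 6, 6], [4, 4, 4]))     # 3d6 beats 3d4
    print(win_probability([6, 6, 6], [6, 6, 6], 1))  # 3d6+1 beats plain 3d6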

arithmetic - Why is $\left(1-\frac{1}{k}\right)^t < e^{-t/k}$?

I came across this statement, but can't see why it holds: $\left(1-\frac{1}{k}\right)^t < e^{-t/k}$



I'm sure it's something simple, but I don't have a great deal of mathematical experience. I have tried using $e^\lambda = \sum_{n=0}^\infty \frac{\lambda^n}{n!}$, but without success.



Help would be appreciated. Thanks!
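
For what it's worth (a sketch I am adding, since the post has no answer here): the standard tool is the strict inequality $1+x<e^x$ for $x\neq0$. With $x=-\frac1k$ this gives $0<1-\frac1k<e^{-1/k}$ (for $k>1$), and raising both positive sides to the power $t>0$ preserves the inequality:
$$\left(1-\frac1k\right)^t<\left(e^{-1/k}\right)^t=e^{-t/k}.$$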

Friday 21 October 2016

terminology - When was the term "mathematics" first used?



By the second century, in the Almagest, Ptolemy provides a modern conception of "mathematics" as a "science":




'Mathematics' ... is an attribute of all existing things, without
exception, both mortal and immortal: for those things which are
perpetually changing ... it changes with them, while for eternal

things ... it keeps their unchanging form unchanged.




When was the term "mathematics" first used in this way?


Answer



It seems the term "mathematics" has been used to express different things at different times, historically. The name itself - "mathematics" - is Greek in origin, and at that point in time, mathematics encompassed much more than it does today in terms of its breadth.



See the entry on Wikipedia: Etymology of term "mathematics".



See also the entry, History of Mathematics, which includes the following assertion:





The study of mathematics as a subject in its own right begins in the 6th century BC with the Pythagoreans, who coined the term "mathematics" from the ancient Greek μάθημα (mathema), meaning "subject of instruction".
- (Reference given: Heath. A Manual of Greek Mathematics. p. 5.)




This may provide a start, and provides further resources, if you scroll down. (The link is to a subsection of the entry "Mathematics".)



Also of interest is Earliest Known Uses of Mathematical Words, a site maintained by Jeff Miller, where you can find the origins of the use of many mathematical terms. Click on "M", then scroll down to "mathematics". (This addresses the question in your post's title.) I'll quote the start of that entry:





Words of the form math- derive ultimately from the Greek mathematike tekhne meaning "mathematical science," itself derived from manthanein, the ordinary word meaning “to learn.” How the association with a special form of learning came about is considered by T. L. Heath (A History of Greek Mathematics, vol. 1 pp. 10-11). Heath describes how the school of Pythagoras distinguished between those who had learned the theory of knowledge in its most complete form, the mathematicians, and those who knew only the practical rules of conduct. He infers that, “seeing that the Pythagorean philosophy was mainly mathematics, the term might easily become identified with the mathematical subjects as distinct from others.”



radicals - Proving that for each prime number $p$, the number $\sqrt{p}$ is irrational








I'm a total beginner and any help with this proof would be much appreciated. Not even sure where to begin.




Prove that for each prime number $p$, the square root of p is irrational.


calculus - Proving $\sum_{k=1}^n{k^2}=\frac{n(n+1)(2n+1)}{6}$ without induction




I was looking at: $$\sum_{k=1}^n{k^2}=\frac{n(n+1)(2n+1)}{6}$$



It's pretty easy proving the above using induction, but I was wondering what is the actual way of getting this equation?



Answer



$$(n+1)^{3}-n^{3}=3n^{2}+3n+1$$
$$n^{3}-(n-1)^{3}=3(n-1)^{2}+3(n-1)+1$$
$$\vdots$$
$$2^{3}-1^{3}=3(1)^{2}+3(1)+1$$



Now use telescopic cancellation.
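
Spelling out that cancellation (an added step, for completeness): summing all the rows, the left-hand sides telescope to $(n+1)^3-1$, so
$$(n+1)^{3}-1=3\sum_{k=1}^{n}k^{2}+3\sum_{k=1}^{n}k+n,$$
and substituting $\sum_{k=1}^{n}k=\frac{n(n+1)}{2}$ and solving for the sum of squares gives
$$\sum_{k=1}^{n}k^{2}=\frac{1}{3}\left((n+1)^{3}-1-n-\frac{3n(n+1)}{2}\right)=\frac{n(n+1)(2n+1)}{6}.$$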



Here are some "proof without words"(I find them more elegant):




Sum of squares



Sum of Squares(2)



Finally a more generalized form:$$1^{k}+2^{k}+\cdots+n^{k}=\sum\limits_{i=1}^{k}S(k,i)\binom{n+1}{i+1}i!$$
where $S(k,i)$ denotes the Stirling number of the second kind.


Thursday 20 October 2016

abstract algebra - Proof of Artin's Theorem (linearly independent functions)




Recently I have come across one of Artin's theorems and I have not been able to crack it quite yet. The theorem is stated as follows:




Let $G$ be a group. and let $f_1,\dots, f_n\colon G\to K^*$ be distinct homomorphisms of $G$ into the multiplicative group of a field. Prove that these functions are linearly independent over $K$.




Would anyone know a (preferably simple) proof of this theorem? It came up in a chapter on eigenvectors and eigenvalues, so I presume it has something to do with those.


Answer



Suppose there are nontrivial linear relations between the maps $f_1,\dots,f_n$ seen as elements of the vector space $K^G$; among them choose one with the minimum number of nonzero coefficients. Upon a reordering, we can assume it is
$$

\alpha_1f_1+\dots+\alpha_kf_k=0
$$
with all $\alpha_i\ne0$. This means that, for every $x\in G$,
$$
\alpha_1f_1(x)+\dots+\alpha_kf_k(x)=0
$$
Note that $k>1$ or we have a contradiction.



Fix $y\in G$; then also
$$

\alpha_1f_1(yx)+\dots+\alpha_kf_k(yx)=0
$$
and, since the maps are homomorphisms,
$$
\alpha_1f_1(y)f_1(x)+\dots+\alpha_kf_k(y)f_k(x)=0\tag{1}
$$
for every $x\in G$; on the other hand, multiplying the original relation by $f_1(y)$ gives
$$
\alpha_1f_1(y)f_1(x)+\dots+\alpha_kf_1(y)f_k(x)=0\tag{2}
$$

By subtracting $(2)$ from $(1)$ we get
$$
\alpha_2(f_2(y)-f_1(y))f_2(x)+\dots+\alpha_k(f_k(y)-f_1(y))f_k(x)=0
$$
for all $x$, hence
$$
\alpha_2(f_2(y)-f_1(y))f_2+\dots+\alpha_k(f_k(y)-f_1(y))f_k=0
$$
which would be a shorter linear relation, so we conclude that
$$

f_2(y)=f_1(y),\quad
\dots,\quad
f_k(y)=f_1(y)
$$
Now, choose $y$ such that $f_1(y)\ne f_2(y)$ and you have your contradiction.


real analysis - smoothing the CDF of discrete random variable

Let $X$ be a (discrete) random variable with mass points $x_i$ and probabilities $p_i$, i.e., $Pr(X=x_i)=p_i$. Let $F_X(x)=Pr(X \leq x)$ denote the CDF of $X$. Suppose $F_X(0)=0$ and $F_X(1)=1$, that is: $X\in[0,1]$.




I want to define a smoothed version of $X$ where the CDF $F_{\tilde{X}}$ of $\tilde{X}$ is equal to that of $X$ at the points $x_i$, as well as at $0$ and $1$, but the function $F_{\tilde{X}}$ is piecewise linear; that is, the value of $F_{\tilde{X}}$ at any point other than the $x_i$ is obtained by linear interpolation between the given points.



The question is: do you see a nice way to describe $F_{\tilde{X}}$ in terms of $F_X$?
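
A minimal numerical sketch of the construction (the mass points and weights below are invented for illustration; `np.interp` performs exactly this piecewise-linear interpolation):

```python
import numpy as np

# invented example: mass points of X and their probabilities (so F(0)=0, F(1)=1)
xs = np.array([0.0, 0.2, 0.5, 0.9, 1.0])
ps = np.array([0.0, 0.3, 0.3, 0.2, 0.2])   # Pr(X = x_i)

F = np.cumsum(ps)                          # values of F_X at the points xs

def F_tilde(x):
    """Piecewise-linear CDF agreeing with F_X at the x_i (and at 0 and 1)."""
    return np.interp(x, xs, F)

print(F_tilde(0.35))                       # 0.45, halfway between F(0.2)=0.3 and F(0.5)=0.6
```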

real analysis - Find the flaw in the given proof: about the limit of a sequence

I'm studying Real Analysis, and one problem gives me a trouble. The problem is as below:




Let $\{x_n\}$ be a sequence defined on $\mathbb{R}$ with $\displaystyle \lim_{n \to \infty} x_n = x$ for some $x \in \mathbb{R}$. Define a sequence $\{\sigma_n \}$ on $\mathbb{R}$ by
$$\sigma_n= \frac{1}{n} ( x_1 + x_2 + x_3 + \cdots + x_n)$$
Find the flaw of the proof below, which tries to show the claim.



Claim : The sequence $\{\sigma_n\}$ converges. In addition, $\displaystyle \lim_{n \to \infty} \sigma_n = x$.




Proof. Since $\displaystyle \lim_{n \to \infty} x_n = x$, for any $\epsilon >0$, there exists a natural number $N$ such that
$$n >N \quad \Rightarrow \quad \lvert x_n-x \rvert < \epsilon$$
Now fix $\epsilon >0$, and let $N_\epsilon$ be the natural number that satisfies the property above. Note that
$$\lvert \sigma_n -x \rvert = \lvert\frac{1}{n} (x_1 + x_2 + \cdots + x_n) - x\rvert \leq \frac{1}{n} (\lvert{x_1-x}\rvert + \cdots + \lvert{x_n-x}\rvert)$$
Now, for sufficiently large $n>N_\epsilon$, we can divide the term above as
$$\lvert \sigma_n -x \rvert = \frac{1}{n}(\lvert{x_1-x}\rvert + \lvert{x_2 - x}\rvert + \cdots + \lvert{x_{N_\epsilon}-x}\rvert)+\frac{1}{n}(\lvert{x_{N_\epsilon+1}-x}\rvert + \cdots + \lvert{x_n-x}\rvert)$$
Since the first term above has only finite constant terms,
$$\frac{1}{n}(\lvert{x_1-x}\rvert + \lvert{x_2 - x}\rvert + \cdots + \lvert{x_{N_\epsilon}-x}\rvert) \to 0 \quad \text{as} \quad n \to \infty$$
Now,
$$\lvert \sigma_n -x \rvert = \frac{1}{n}(\lvert{x_{N_\epsilon+1}-x}\rvert + \cdots + \lvert{x_n-x}\rvert) < \frac{1}{n} \times \epsilon (n-N_\epsilon) \to \epsilon$$

as $n \to \infty$. Therefore $\displaystyle \lim_{n \to \infty} \sigma_n = x$.




I understand that there is some problem in the proof, but I cannot clearly explain the answer! I think the problem comes from taking the limits of the pieces separately rather than all at once. Could somebody explain this to me plainly?

Square Matrices Problem



Let $A, B, C, D, E$ be five real square matrices of the same order such that $ABCDE=I$, where $I$ is the unit matrix. Then,



(a)$B^{-1}A^{-1}=EDC$



(b)$BA$ is a nonsingular matrix



(c)$ABC$ commutes with $DE $




(d) $ABCD=\frac{1}{\det(E)}\operatorname{adj}E$



More than one option may be correct .



Also, taking the special case $A=B=C=D=E=I$ makes all of these options true, but the answer key states that (a) is incorrect. How?


Answer



First, a special case can only show you that an answer is wrong, not that it is true in general. Regarding the options:



(a) We have
$$ I = ABCDE \iff A^{-1} = BCDE \iff B^{-1}A^{-1} = CDE $$

so choosing $C$, $D$, $E$ such that $CDE \ne EDC$ will give you an example for (a) being wrong.



(b) If $BA$ were singular, then
$$ 1 = \det(ABCDE) = \det(AB)\det(CDE) = \det(BA)\det(CDE) = 0. $$



(c) We have
$$ ABCDE = I \iff (ABC)^{-1} = DE $$
and every matrix commutes with its inverse.



(d) We have $E^{-1} = ABCD$ and $E^{-1} = \frac 1{\det E} \mathrm{adj}\, E$.
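
A quick numerical sanity check of (a)-(d) (a sketch assuming numpy; the dimension $3$ and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))
E = np.linalg.inv(A @ B @ C @ D)          # forces ABCDE = I

assert np.allclose(A @ B @ C @ D @ E, np.eye(3))

# (a) fails in general: B^{-1}A^{-1} equals CDE, not EDC
print(np.allclose(np.linalg.inv(B) @ np.linalg.inv(A), E @ D @ C))  # False

# (b) holds: BA is nonsingular
print(abs(np.linalg.det(B @ A)) > 1e-9)                             # True

# (c) holds: ABC commutes with DE because (ABC)^{-1} = DE
print(np.allclose((A @ B @ C) @ (D @ E), (D @ E) @ (A @ B @ C)))    # True

# (d) holds: ABCD = adj(E)/det(E) = E^{-1}
adjE = np.linalg.det(E) * np.linalg.inv(E)
print(np.allclose(A @ B @ C @ D, adjE / np.linalg.det(E)))          # True
```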



determinant - Polynomial interpolation over the integers such that all coefficients are from the integers

If one is given $t$ different points of a polynomial (all values are integers), is it then always possible to construct a polynomial of degree $t$ that interpolates all the points AND has all coefficients in the integers?




Second: What if some of the points correspond to derivatives? So can the Birkhoff interpolation problem with $t$ points given be used to interpolate a polynomial of degree $t$ such that all coefficients are integers?



Note that we are deliberately given only $t$ points to interpolate a polynomial of degree $t$; this leaves one degree of freedom. Otherwise it is easy to find a counterexample.



First question: Let $x_1,x_2, \ldots, x_t \in \mathbb{N}_0$ such that all $x_i$ are distinct and ordered, i.e., $0\leq x_1 < x_2 < x_3 < \ldots < x_t$, and let $y_1,y_2, \ldots, y_t \in \mathbb{Z}$. Does there exist a polynomial $f(x) = a_0 + a_1x + a_2 x^2 + \ldots + a_t x^t$ such that $f(x_i)=y_i$ for all $i$ and all $a_j \in \mathbb{Z}$?



Second question: Now assume that $c_1^{i_1}, c_2^{i_2}, \ldots, c_t^{i_t} \in \mathbb{Z}$, where $i_j \in \mathbb{N}_0$ is just an index (not an exponent). For these indices it holds that $0 \leq i_0 \leq i_1 \leq \ldots \leq i_t < t$ and at least one $i_j > 0$.



Does there exist a polynomial $f(x) = a_0 + a_1x + a_2 x^2 + \ldots + a_t x^t$ such that $f^{(i_j)}(x_j)=c_j^{i_j}$ for all $j$ and all $a_j \in \mathbb{Z}$, where $f^{(i_j)}(x)$ denotes the $i_j$-th derivative of $f(x)$?







I tried to solve the second question with Birkhoff interpolation. Birkhoff interpolation can be used to reconstruct the function and also single coefficients: the interpolation of one coefficient is based on a matrix $A$ which is determined by all the $x_j$ and $c_j^{i_j}$. Then a coefficient $a_{k-1}$ is computed as $\det(A_k)/\det(A)$, where $A_k$ is obtained from $A$ by replacing the $k$-th column of $A$ with the $c_j^{i_j}$ in lexicographic order. However, I am not able to prove that $\det(A_k)/\det(A) \in \mathbb{Z}$. Note that if we want to interpolate a polynomial of degree $t$ with only $t$ points/derivatives given, then we have to see the Birkhoff interpolation problem as a problem where we are given $t+1$ points/derivatives but are allowed to modify one point $(x_z,c_z^{i_z})$ arbitrarily.



The problem is also closely related to determinants, but I have very little knowledge in this area.



So far, I have neither been able to construct a counterexample nor to prove it.







A proof, counterexample or any hints where to get additional information would be great! Or maybe someone knows something about the eigenvalues of the matrix of the Birkhoff interpolation?

real analysis - Continuous Function and Open Subsets in $mathbb R$



Let $E$ be a subset of $\mathbb R$ and $f$ a real-valued function on $E$.
Prove that $f$ is continuous on $E\iff$ for every open subset $V$ of $\mathbb R$, $f^{-1}(V)$ is open relative to $E$.



My question is about the ($\Rightarrow$) direction only.
Let $f$ be a continuous function on $E$ and $V$ an open subset of $\mathbb R$.
If $f^{-1}(V)=\{\}$, then it is open. Suppose that $f^{-1}(V)\neq\{\}$. Let $p\in f^{-1}(V)$.
Then $f(p)\in V$. Select $\epsilon$ such that $N_\epsilon(f(p))\subset V$.



My question is this. At this point, we do not know if $p$ is an element of $E$.
If $p\in E$, since $f$ is continuous on $E$, $\exists\delta$ such that $f(x)\in N_\epsilon(f(p))$ for all $x\in N_\delta(p)\cap E$.
Thus $N_\delta(p)\cap E\subset f^{-1}(V)$.



But, suppose that $p\notin E$. How do I know that the above statement is still true?
I tried the following:
Let $q\in E$ be a point such that $f(q)\in N_\epsilon(f(p))$.
Select $\alpha$ such that $N_\alpha(f(q))\subset N_\epsilon(f(p))$.
Then $\exists\delta$ such that $f(x)\in N_\alpha(f(q))$ for all $x\in N_\delta(q)\cap E$.
But this only shows that $N_\delta(q)\cap E\subset f^{-1}(V)$, not $N_\delta(p) ....$
I also thought about showing that if $p\notin E$, then $N_\delta(p)\cap E=\{\}$,
but I have no idea about how to do it.


Answer




You must change only one step in your proof:



When you say:



"If $f^{-1}(V)=\{\}$ then it is open"



replace it with



"If $f^{-1}(V)\cap E=\{\}$, then $f^{-1}(V)$ is open relative to $E$."




Then the following line must be



"suppose now $f^{-1}(V)\cap E\neq\{\}$ then exists $p\in f^{-1}(V)\cap E$"



and your problem is solved, since $p\in E$.


probability - For all non-negative random variables, why is $X=\int_0^{+\infty}\mathbf 1_{X\geq t}\,\mathrm dt$ true?



I am wondering why this equation is necessarily true for all non-negative random variables:



$$
X=\int_0^{+\infty}\mathbf 1_{X\geq t}\,\mathrm dt
$$



What is confusing me is that the indicator function only spits out the values $0$ and $1$, so I am not seeing how the integral of the indicator function produces $X$. Thanks!



Answer



I'm not seeing why you would be confused unless you are trying to integrate with respect to $X$ instead of $t$.



$$\int\limits_{0}^\infty\mathbf 1_{X>t}\operatorname d t~=~\int\limits_{0}^\infty\mathbf 1_{t<X}\operatorname d t~=~\int\limits_{0}^{X}\operatorname d t$$

which is of course $X$ when $X>0$.
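
A quick numerical illustration of the identity (a side check; the value $X=2.7$ and the grid are arbitrary choices, not from the original):

```python
import numpy as np

X = 2.7                              # one fixed realization of the random variable
t, dt = np.linspace(0, 10, 100001, retstep=True)
riemann_sum = np.sum(t < X) * dt     # integral of the indicator 1_{t < X} over [0, 10]
print(riemann_sum)                   # approximately 2.7
```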


real analysis - Construct an explicit bijection $f:[0,1] \to (0,1]$, where $[0,1]$ is the closed interval in $\mathbb R$ and $(0,1]$ is half open.

The problem:




Construct an explicit bijection $f:[0,1] \to (0,1]$, where $[0,1]$ is the closed interval in $\mathbb R$ and $(0,1]$ is half open.



My Thoughts:



I imagine that I am to use the fact that there is an injection $\mathbb N \to [0,1]$ whose image contains $\{0\}$ and consider the fact that a set $X$ is infinite iff it contains a proper subset $S \subset X$ with $\lvert S \rvert = \lvert X \rvert$ (because we did something similar in class). I also have a part of a proof that we did in class that I believe is supposed to help with this problem; it states the following: start with an injection $g: \mathbb N \to X$ and then define a set $S=F(X)$, where $F$ is an injective (but NOT surjective) function $X \to X$ with $F(x) = x$ if $x \notin \text{image}(g)$ and $F(g(k)) = g(2k)$ if $x=g(k) \in \text{image}(g)$. Honestly, I'm having a lot of trouble even following this proof, so I could be wrong. Anyway, any help here would be appreciated. I feel really lost on this one. Thanks!
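
For what it is worth, here is one standard construction (a sketch; it may differ from the one intended in your class):
$$
f(x)=\begin{cases}
\tfrac12 & x=0,\\[2pt]
\tfrac{1}{2^{\,n+1}} & x=\tfrac{1}{2^{\,n}},\ n\ge 1,\\[2pt]
x & \text{otherwise.}
\end{cases}
$$
This is exactly the idea from your class notes: the countable sequence $0,\tfrac12,\tfrac14,\tfrac18,\ldots$ is shifted one step along itself, which pushes the unwanted point $0$ out of the image while every point of $(0,1]$ is still hit exactly once.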

combinatorics - Total possible combinations of primes

I have been working on a problem as follows:
Do there exist 100 consecutive natural numbers none of which is prime?
I know that the answer is 'yes', by considering $101!$ and noting the sequence $101! + 2,\ 101! + 3,\ \ldots,\ 101! + 101$.




This approach generalises nicely by considering $(n+1)!$.



However, whilst tackling this problem, I tried many different techniques.



The approach I was most interested in was the following intuition:
We know that the primes are much more spread out than occurring every n integers from knowledge beyond the problem. Since every number can be factorised uniquely into primes despite primes being rare, if we have a prime at least every 100 numbers, say, then we should surely be able to show that the number of possible combinations of primes exceeds the number of numbers that exist.



My problem is that I have found it hard to count the total number of combinations.
For example,

Say a prime occurs at least once every 100 numbers. Then there are at least $N$ primes less than $100N$. How many prime combinations can you make that are less than $100N$? I'm hoping to get a count that exceeds $100N$, thereby showing that the primes cannot populate the natural numbers this densely.



Sorry for the long question! Just thought I'd give some background to my question.

calculus - Functions whose Limit is the Factorial Function

I want to know examples of functions $f(n)$ whose limit is $n!$. Now, when I say "limit", I don't mean $$\lim_{n \to \infty}\frac{f(n)}{n!}=1$$ (I already know functions like that). I'm referring to functions with $$\lim_{n \to \infty}f(n)-n!=0$$ I am looking for functions that do not involve factorials themselves (that bit should be obvious, but I'm just putting it on record). I am also not looking for summation or product expressions.

Wednesday 19 October 2016

Summation operation for precalculus



Studying Spivak's Calculus I came across a relation I find hard to grasp. In particular, I want to understand it without using proofs by induction. So please prove or explain the following relationship without using induction.



$$ \sum_{j=0}^{n}\binom{n}{j}a^{n-j}b^{j+1}=\sum_{j=1}^{n+1}\binom{n}{j-1}a^{n+1-j}b^{j} $$



Thanks in advance.



Answer



The identity you've given appears to be an index shift. Instead of beginning to sum at $i=0$, we wish to begin at $1$. In order to advance the summation index ahead by $1$, we have to take away $1$ from every instance of the index variable inside the summand.



$$\sum_{i=0}^{n}\binom{n}{i}a^{n-i}b^{i+1}$$



The index shift becomes clear if you let $j = i + 1$ and substitute.



$$= \sum_{j=0+1}^{n+1}\binom{n}{j-1}a^{n-(j-1)}b^{(j-1)+1}$$
$$= \sum_{j=1}^{n+1}\binom{n}{j-1}a^{n-j+1}b^{j}$$
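
A brute-force numerical verification of the identity, in case it helps build confidence before chasing indices (the values of $a$, $b$, $n$ below are arbitrary):

```python
from math import comb

# check the index-shift identity for some sample values
a, b, n = 2, 3, 5
lhs = sum(comb(n, j) * a**(n - j) * b**(j + 1) for j in range(0, n + 1))
rhs = sum(comb(n, j - 1) * a**(n + 1 - j) * b**j for j in range(1, n + 2))
print(lhs == rhs)  # True
```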


arithmetic - Number in tens place



What is the digit in the tens place of $4^{2015} \cdot 9^{2016}$?



Obviously without using a calculator, though I doubt one could handle numbers this large anyway.



By tens place I mean: for example, in the number $2451$, the digit in the tens place is $5$.



I know the answer, but I don't know how to get it, so if you got any ideas, please share. :)


Answer




$$
4^{2015}\cdot9^{2016} = 9\cdot36^{2015}
$$



So look at $9\cdot36^n$ for $n=1,2,3,\ldots$:
\begin{align}
9\cdot36^1 & = 324 \\
& = \cdots24 \\[6pt]
9\cdot36^2 & = \cdots64 \\
9\cdot36^3 & = \cdots04 \\

9\cdot36^4 & = \cdots44 \\
9\cdot36^5 & = \cdots84 \\
9\cdot36^6 & = \cdots24 \longleftarrow\text{Now we're back to where we were when }n=1, \\
& \phantom{=\cdots24 \longleftarrow\text{a}}\text{so it starts over again.}
\end{align}
After five steps we return to where we started; thus at $n=1,6,11,16,21,26,\ldots$ we get $\cdots24$.



Of course, in order for this to make sense, you have to realize that when you multiply two numbers, the last two digits of the answer are determined by the last two digits of each of the numbers you're multiplying, and are not affected by any earlier digits. That is clear if you think about the algorithm for multiplication that you learned in elementary school, and you can also show it via some simple algebra.



So we get $\cdots24$ whenever the exponent is congruent to $1$ mod $5$, i.e. its last digit is $1$ or $6$. Thus $9\cdot36^{2016}$ ends in $24$, and $9\cdot36^{2015}$, one step before it in the cycle, ends in $84$. The tens digit is therefore $8$.
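
The cycle is easy to confirm with modular arithmetic (a side check, not part of the argument above):

```python
# last two digits of 9 * 36^n, computed mod 100
for n in range(1, 11):
    print(n, (9 * pow(36, n, 100)) % 100)   # 24, 64, 4, 44, 84, 24, ... (period 5)

print((9 * pow(36, 2015, 100)) % 100)        # 84, so the tens digit is 8
```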



continuity - Continuous map $mathbb{R}^nrightarrowmathbb{R}^n$




When we say some map $\phi=(\phi_1,\ldots,\phi_n)$ is a continuous map $\mathbb{R}^n\rightarrow\mathbb{R}^n$, do we really mean that each component $\phi_i$ is continuous as a function $\mathbb{R}^n\rightarrow\mathbb{R}$, or do we mean something else? $\mathbb{R}^n$ is a vector space, so we can also consider the distance $\|v-u\|$ between any two vectors and use the definition of continuity on $\mathbb{R}^n$ as a whole. Which approach is the "proper" one? I'm kind of confused.



Edit



I know now that there is a theorem saying that component-wise continuity is equivalent to continuity. I will try to prove this assertion.



Let $\phi=(\phi_1,\ldots,\phi_n):\mathbb{R}^n\rightarrow\mathbb{R}^n.$



1) Let each $\phi_i$ be a continuous function from $\mathbb{R}^n$ to $\mathbb{R}$. Let $\|\cdot \|$ denote the Euclidean norm in $\mathbb{R}^n$. From continuity we have

$$\forall i \quad \forall x \quad\forall\epsilon>0\quad \exists \delta >0\quad\forall y :\|x-y\|<\delta\implies|\phi_i(x)-\phi_i(y)|<\epsilon/\sqrt{n}$$
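
For completeness, here is one way the estimate begun above can be finished (a sketch): given $x$ and $\epsilon$, take $\delta=\min_i \delta_i$. Then for every $y$ with $\|x-y\|<\delta$,
$$
\|\phi(x)-\phi(y)\|=\Big(\sum_{i=1}^{n}\lvert\phi_i(x)-\phi_i(y)\rvert^{2}\Big)^{1/2}<\Big(n\cdot\frac{\epsilon^{2}}{n}\Big)^{1/2}=\epsilon.
$$
Conversely, each component satisfies $\lvert\phi_i(x)-\phi_i(y)\rvert\le\|\phi(x)-\phi(y)\|$, so continuity of $\phi$ in the norm sense implies continuity of every $\phi_i$.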


Answer



This is one of those cases where category theory clears up the mystery about products of sets with structure. Once the product in $\text Set$ of a collection of sets $\left \{ X_{\alpha } \right \}_{\alpha \in I}$ is obtained, using the universal mapping property (UMP), then if the individual $X_{\alpha }$ have structure---for example, if they have topologies or if they are groups-- then one can say exactly what the product $\mathit must$ be, in order for the UMP to be preserved.



Now, the fact, for example, that $\mathbb R^{\omega }$ is metrizable in the product topology but not in the box topology, tells us in some sense that the product topology is the "right one" to use and the box topology is not, if we want nice results; i.e. the UMP is what determines how we should view products. Of course, this does not mean that we can always get what we'd like: for example, if $I$ is uncountable, $\mathbb R^{I}$ is not metrizable in the product topology.



As for norms, they are all equivalent on $R^{n}$ for $n\in \mathbb N$, but not on infinite-dimensional spaces. For example, the $l_{p}$ spaces are all different with $l_{p}\subset l_{q}$ whenever $q>p$, the obvious example being the sequence $\left \{ \frac{1}{n} \right \}_{n\in N}$ which is in $l_{2}$ but not in $l_{1}$.


Tuesday 18 October 2016

Restricted Cauchy equation on a non-dense domain



It is really well known that if $f: \mathbf{R}\to \mathbf{R}$ is continuous and
$$

\forall x,y \in \mathbf{R},\,\,\,\,f(x+y)=f(x)+f(y)
$$

then $f$ is linear, i.e., there exists $a \in \mathbf{R}$ such that $f(x)=ax$ for all $x \in \mathbf{R}$.



More generally, it is known that if $f: K \to \mathbf{R}$ is continuous, where $K\subseteq \mathbf{R}$ is a non-empty open connected set with $K+K\subseteq K$, then the functional equation
$$
\forall (x,y) \in K^2,\,\,\,\,f(x+y)=f(x)+f(y)\,\,\,\,\,\,\,\,\,\,\,(\star)
$$

implies that $f$ is linear.




Here a related question:




Question. Does there exist a set $S\subseteq \mathbf{R}^2$ which is not dense in $\mathbf{R}^2$, not containing an open connected set, and such that if $f: \mathbf{R}\to \mathbf{R}$ is a continuous function satisfying ($\star$) for all $(x,y) \in S$, then $f$ is linear?




Easy observation: $S$ cannot be bounded (see also the comment of TheoBandit below). Indeed, otherwise we could set $f(x)=0$ for all $x$ in a ball of sufficiently large radius; then any continuous extension will work.


Answer



Yes, there is such a domain. Consider,
$$S = \bigcup_{n \in \Bbb{Z}} \{(x, nx) : x \in \Bbb{R}\}.$$

Note that $S$ is not dense and has empty interior. I claim that, if $f : \Bbb{R} \to \Bbb{R}$ is continuous and satisfies
$$f(x + y) = f(x) + f(y)$$
for all $(x, y) \in S$, then $f$ is scalar homogeneous. First, note that $(0, 0) \in S$, hence
$$f(0 + 0) = f(0) + f(0) \implies f(0) = 0.$$
Next, suppose $n \ge 1$ is an integer. I wish to show that $f(nx) = nf(x)$ for all $x$. We proceed by induction.



The base case is clear. Suppose $f(nx) = nf(x)$ for some $n$. Then, because $(x, nx) \in S$, we have
$$f((n + 1)x) = f(x + nx) = f(x) + f(nx) = f(x) + nf(x) = (n + 1)f(x).$$
The claim holds by induction.




Similarly, this holds for $n < 0$. If we assume $n \le 0$ is such that $f(nx) = nf(x)$, then
$$f(x) + f((n - 1)x) = f(x + (n - 1)x) = f(nx) = nf(x),$$
which implies $f((n - 1)x) = (n - 1)f(x)$ as required. By induction,
$$f(nx) = nf(x) \quad \forall n \in \Bbb{Z}, x \in \Bbb{R}.$$



Next, as you might guess, we establish rational scalar homogeneity. However, this follows directly from the integer scalar homogeneity. We have
$$qf\left(\frac{p}{q}x\right) = f\left(q \frac{p}{q}x\right) = f(px) = pf(x) \implies f\left(\frac{p}{q}x\right) = \frac{p}{q}f(x)$$
where $p, q \in \Bbb{Z}$ and $q \neq 0$.



Since the rationals are dense in $\Bbb{R}$, we must have $f(\lambda x) = \lambda f(x)$ for any real $\lambda$, hence $f$ is linear.



paradoxes - What exactly is the paradox in Zeno's paradox?

I have known about Zeno's paradox for some time now, but I have never really understood what exactly the paradox is. People always seem to have different explanations.



From wikipedia:



In the paradox of Achilles and the Tortoise, Achilles is in a footrace with the tortoise. Achilles allows the tortoise a head start of 100 metres, for example. If we suppose that each racer starts running at some constant speed (one very fast and one very slow), then after some finite time, Achilles will have run 100 metres, bringing him to the tortoise's starting point. During this time, the tortoise has run a much shorter distance, say, 10 metres. It will then take Achilles some further time to run that distance, by which time the tortoise will have advanced farther; and then more time still to reach this third point, while the tortoise moves ahead. Thus, whenever Achilles reaches somewhere the tortoise has been, he still has farther to go. Therefore, because there are an infinite number of points Achilles must reach where the tortoise has already been, he can never overtake the tortoise.




And we then say that this is a paradox, since he should be able to reach the tortoise in finite time? To me it seems that in the paradox we are slowing down time proportionally. Aren't we then already using the fact that the sum of those "time intervals" makes up a finite time? I feel like there is some kind of circular logic involved here.



What exactly is the paradox?

Differentiability implies continuous derivative?

We know differentiability implies continuity, and in the case of two independent variables, if both partial derivatives $f_x$ and $f_y$ are continuous, then the function $f(x,y)$ is differentiable.



However, in the case of one independent variable, is it possible for a function $f(x)$ to be differentiable throughout an interval while its derivative $f'(x)$ is not continuous?
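
Yes, as it turns out. The classical counterexample (a standard fact, sketched here for reference) is
$$
f(x)=\begin{cases}x^{2}\sin(1/x) & x\neq 0,\\ 0 & x=0,\end{cases}
$$
which is differentiable everywhere, with $f'(0)=\lim_{h\to0}h\sin(1/h)=0$; yet for $x\neq0$ we get $f'(x)=2x\sin(1/x)-\cos(1/x)$, which has no limit as $x\to0$. So $f'$ exists on all of $\mathbb R$ but is not continuous at $0$.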

Monday 17 October 2016

sequences and series - Error with the proof that all solutions to the Cauchy Functional Equation are linear



If $f(x)$ is continuous, it is known that $f(x+y)=f(x)+f(y)$ implies that $f(x)$ is linear; non-continuous solutions are discussed in these links (1, 2, 3, 4).



However, what is wrong with this proof that all solutions to the Cauchy Functional Equation are of the form $f(x)=cx$?



If $x$ is rational, it is known that $f(x)=cx$ for some fixed constant $c$, as seen here.



If $x$ is irrational, let us write $x=n+\alpha$, where $n\in\mathbb{Z}$ and $0 \le \alpha <1$.




$f(x)=f(n+\alpha)=f(n)+f(\alpha)$.



Because of the upper result, $f(n)=cn$.



Let the decimal expansion of $\alpha$ be $\sum _{ i=1 }^{ \infty }{ \frac { { a }_{ i } }{ { 10 }^{ i } } } $



Note that $\frac { { a }_{ i } }{ { 10 }^{ i } } $ is rational.



Then, $$f(\alpha)=f(\sum _{ i=1 }^{ \infty }{ \frac { { a }_{ i } }{ { 10 }^{ i } } })=\sum _{ i=1 }^{ \infty }{ f(\frac { { a }_{ i } }{ { 10 }^{ i } } ) } =c\sum _{ i=1 }^{ \infty }{ \frac { { a }_{ i } }{ { 10 }^{ i } } }=c\alpha $$




Therefore $f(x)=cn+c\alpha=cx$. What did I do wrong?


Answer



The answer is in the comments:



How do you prove that $f(\sum _{ i=1 }^{ \infty }{ \frac { { a }_{ i } }{ { 10 }^{ i } } })$ equals $\sum _{ i=1 }^{ \infty }{ f(\frac { { a }_{ i } }{ { 10 }^{ i } } ) }$ without assuming $f$ continuous?



Exactly. Let $b_n=\sum_{j=1}^n a_j10^{-j}$. Then $f(\sum_{j=1}^{\infty}a_j10^{-j})=\sum _{j=1}^{\infty}f(a_j10^{-j})$ is equivalent to $f(\lim_{n\to \infty}b_n)=\lim_{n\to \infty}f(b_n)$. This assumes $f$ is continuous at $\alpha$.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...