Tuesday 30 September 2014

pde - Taking a Fourier transform after taking Laplace transform





Consider the following PDE,
$$\frac{\partial u}{\partial t} - \kappa
\frac{\partial^2 u}{\partial x^2}
= S_0\,\delta(x)\,\delta(t),$$

subject to the initial condition,
$$u(x, 0) = \delta(x),$$
with $\kappa > 0$ and $S_0 > 0$. Find the solution of this PDE by taking both
a Fourier and a Laplace transform. You may use the fact that the
Laplace transform of the Dirac delta function is one, i.e.
$$\mathcal{L}\{\delta(t)\} = 1.$$





Can someone explain to me what happens when taking a Fourier transform after a Laplace transform?



e.g. How do we find $\mathcal{F}(\mathcal{L}(u))$? Can you show in detail what this gives, using the definitions of the Fourier and Laplace transforms? The question above is just the context of why I need to use this, but I'm not sure how to find it.
Thanks.


Answer



What the question means, I think, is to take the Fourier Transform in the spatial variable and the Laplace Transform in the time variable.



First, taking the Laplace Transform $\mathcal{L}[u(x,t)](s) = w(x, s)$ and letting $u_0(x) := u(x, 0)$ be a known initial condition, we obtain:




$$sw(x,s) - u_0(x) -\kappa\frac{\partial^2 w(x,s)}{\partial x^2} = S_0 \delta(x)$$



which is equivalent to



$$\left(s - \kappa\frac{\partial^2}{\partial x^2}\right)w(x,s) = S_0\delta(x) + u_0(x)$$



Taking the Fourier Transform of this equation, letting $\mathcal{F}[w(x, s)](k) = v(k, s)$, we obtain



$$\left(s+\kappa k^2\right)v(k, s) = S_0 + \int_{-\infty}^{\infty}u_0(x)e^{-ikx}dx$$




or, equivalently



$$v(k,s) = \frac{1}{s + \kappa k^2}\left(S_0 + \int_{-\infty}^{\infty}u_0(x)e^{-ikx}dx\right)$$



(depending on your Fourier Transform convention, various factors of ${2\pi}^{-1}$ may appear).



Now, we only have to invert the two transforms, and then we are done.



The spatial stuff looks pretty messy, but the Laplace Transform can be done easily, as we only have one $s$ in there.




In fact, we have a function that looks something like this:



$$v(k, s) = \frac{F(k)}{s + a(k)}$$



which has an Inverse Laplace Transform of $p(k, t) = \mathcal{L}^{-1}[v(k,s)](t) = F(k)\exp(-a(k)t)$, so we are left with



$$p(k, t) = \exp\left(-\kappa k^2 t\right)\left(S_0 + \int_{-\infty}^{\infty}u_0(x)e^{-ikx}dx\right)$$



The Inverse Fourier Transform of the first term reduces to a Gaussian integral, giving the typical Gaussian profile one would expect from the Heat Equation as the homogeneous solution. This calculation is outlined below:




\begin{align}
u_H(x, t) &= \mathcal{F}^{-1}[S_0 \exp\left(-\kappa k^2 t\right)](x)\\
&= \frac{S_0}{2\pi}\int_{-\infty}^\infty \exp(-\kappa t k^2 + i x k)dk\\
&= \frac{S_0}{2\pi}\int_{-\infty}^\infty \exp\left(-\kappa t\left(k^2 - \frac{ixk}{\kappa t}\right)\right)dk\\
&= \frac{S_0}{2\pi}\int_{-\infty}^\infty\exp\left(-\kappa t\left(k-\frac{ix}{2\kappa t}\right)^2 - \frac{x^2}{4\kappa t}\right)dk\\
&= \frac{S_0}{2\pi}\exp\left(-\frac{x^2}{4\kappa t}\right)\int_{-\infty}^\infty
\exp\left(-\kappa t\left(k-\frac{ix}{2\kappa t}\right)^2\right)dk
\end{align}




Letting $z := k - \frac{ix}{2\kappa t}$, we get $dk = dz$. The contour of integration shifts to a line parallel to the real axis, but it is not too difficult to show that if one considers a rectangular closed contour that runs along both this line and the real axis, the contribution of the short edges of the rectangle goes to zero as the length of the long edges approaches infinity. Thus, we can shift the integral back onto the real axis, as the integrals along the real axis and along the line parallel to it must be equal in value.



\begin{align}
u_H(x, t) &= \frac{S_0}{2\pi}\exp\left(-\frac{x^2}{4\kappa t}\right)\int_{-\infty}^\infty
\exp\left(-\kappa tz^2\right)dz\\
&= \frac{S_0}{2\pi}\exp\left(-\frac{x^2}{4\kappa t}\right)\sqrt{\frac{\pi}{\kappa t}}\\
&= \frac{S_0}{\sqrt{4\pi\kappa t}}\exp\left(-\frac{x^2}{4\kappa t}\right)
\end{align}



This is the well-known homogeneous (fundamental) solution of the Heat Equation.




The inhomogeneous solution can only be written down in integral form, as we do not know anything about the initial distribution $u_0(x)$. It is given by:



\begin{align}
u_I(x, t) &= \frac{1}{2\pi}\int_{-\infty}^\infty\int_{-\infty}^{\infty}\exp\left(-\kappa k^2 t\right)e^{ikx} e^{-iky} u_0(y) dy dk\\
&= \frac{1}{2\pi}\int_{-\infty}^\infty u_0(y)\int_{-\infty}^\infty \exp\left(-\kappa t k^2 + ik(x-y)\right)dkdy\\
&= \frac{1}{\sqrt{4\pi \kappa t}}\int_{-\infty}^\infty u_0(y)\exp\left(-\frac{(x-y)^2}{4\kappa t}\right) dy
\end{align}



which is precisely the convolution of the initial condition $u_0(x)$ with the fundamental homogeneous solution $u_H(x, t)$.




The final solution is then given as the sum of the homogeneous and the inhomogeneous solution.



EDIT: The above was done for a general $u_0(x)$, because I missed that $u_0(x)$ had been specified in the question. Given that $u_0(x) = \delta(x)$, we can calculate the final integral easily:



\begin{align}
u_I(x,t) &= \frac{1}{\sqrt{4\pi \kappa t}}\int_{-\infty}^\infty \delta(y)\exp\left(-\frac{(x-y)^2}{4\kappa t}\right) dy\\
&= \frac{1}{\sqrt{4\pi \kappa t}}\exp\left(-\frac{x^2}{4\kappa t}\right)
\end{align}




This gives us the full solution




$$u(x,t) = \frac{S_0 + 1}{\sqrt{4\pi \kappa t}}\exp\left(-\frac{x^2}{4\kappa t}\right)$$
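As a quick sanity check (an editorial addition, not part of the original answer), one can verify with SymPy that this closed form solves the source-free heat equation for $t > 0$; the symbol names below are illustrative.

import sympy as sp

x, t, kappa, S0 = sp.symbols('x t kappa S0', positive=True)
u = (S0 + 1) / sp.sqrt(4 * sp.pi * kappa * t) * sp.exp(-x**2 / (4 * kappa * t))

# For t > 0 the delta sources vanish, so u_t - kappa * u_xx should be identically 0.
print(sp.simplify(sp.diff(u, t) - kappa * sp.diff(u, x, 2)))  # expected: 0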



number theory - On elementary proofs of Fermat's Last Theorem




I came across this one of many claimed elementary proofs of FLT. It looked credible, and I was surprised to see that it wasn't drawing much attention from anyone. I investigated, and I ended up finding this argument ruling out any chance for this kind of "proof" to be correct. Now, my question is organized in three steps:




  • I have a very basic understanding of the "trick". I get its underlying logic, but unfortunately my knowledge of rings and fields is very limited, and in particular I know almost nothing about $p$-adic numbers. Can you confirm that, reasoning in analogy with the familiar number sets, I can safely assume that having a solution in $\mathbb{Q}_p$ would imply a counterpart in $\mathbb{Z}_p$? Is there a way to make it comprehensible what a solution made of $p$-adic integers would look like?


  • This is what I'm interested in most. Is it possible to understand, at least at an intuitive level and in lay-person's terms, what characteristic of the ring of familiar integers makes it different from other factorial rings? In other words, what is (or are) the features, typical only of our beloved usual numbers, that make FLT hold for them? In still other words, what properties were involved by the advanced mathematical tools used to prove FLT?


  • Finally, if we can spot such a characteristic, and it necessarily requires the use of instruments that Fermat didn't have, isn't this definitive evidence that Fermat could not have a proof? Why is this still sometimes questioned? Did he have any chance to perform something that did not fall under the disproof of the "trick", something that wouldn't apply to other rings like that of $p$-adic integers?



Answer



I think the claim of that thread is blatantly overstated. For one thing, there are lots of properties that $\mathbb{Z}$ has but $\mathbb{Z}_p$ does not have. First and foremost, the well-ordering of the positive elements, which is heavily used as a key to solving many diophantine equations.



Now consider Fermat's elementary proof that
$$x^4 + y^4 = z^4$$
has no solutions for $(x,y,z) \in \mathbb{Z}$ with $xyz \ne 0$.


I'm not sure whether or not there is a solution in $p$-adic integers, for some prime $p$, but if such solutions exist, it's an example showing that the existence of qualifying $p$-adic solutions doesn't imply the existence of qualifying integer solutions.


As I indicated in the comments, I suspect that most flawed attempts, at least the ones where the solver knows some Number Theory and is not obviously crazy, would include steps for which there is no analogue in $\mathbb{Z}_p$, so the $\mathbb{Z}_p$ criterion would be useless for invalidating those attempts.


For a given proposed proof, a more common way of quickly demonstrating that there must be a mistake, without actually pinpointing the error, is to observe that the argument would still work for the equation $x^2 + y^2 = z^2$, or, alternatively, to apply the argument line by line to the equation $x^3+y^3=z^3$ and see whether the proof at least works for that case.


linear algebra - Elementary Row Matrices



Let $A$ =




$$
\begin{align}
\begin{bmatrix}
-4 & 3\\
1 & 0
\end{bmatrix}
\end{align}
$$




Find $2 \times 2$ elementary matrices $E_1$,$E_2$,$E_3$ such that $A$ = $E_1 E_2 E_3$



I figured out the operations which need to be performed, which are:



$E_1$: $R_1 \leftrightarrow R_2$



$E_2$: $R_2 \to R_2 + 4R_1$



$E_3$: $R_2 \to \frac{1}{3}R_2$




My question is how would I go about writing the elementary matrices? The solution says that they are;



$E_1$ =
$
\begin{align}
\begin{bmatrix}
1 & -4\\
0 & 1
\end{bmatrix}
\end{align}

$
$E_2$ =
$
\begin{align}
\begin{bmatrix}
3 & 0\\
0 & 1
\end{bmatrix}
\end{align}
$

$E_3$ =
$
\begin{align}
\begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}
\end{align}
$


Answer




Hint: what do elementary matrices correspond to? Can you somehow form a correspondence between the row operations you used to reduce the matrix and the elementary matrices? In other words, the elementary matrices are related to how $R_{1}$ and $R_{2}$ are manipulated in each row reduction step.


analysis - Complex equation simplification

Let $k$ be a positive integer and $c_0$ a positive constant. Consider the following expression:
\begin{equation} \left(2 i c_0-i+\sqrt{3}\right)^2 \left(-2 c_0+i \sqrt{3}+1\right)^k+\left(-2 i c_0+i+\sqrt{3}\right)^2 \left(-2 c_0-i \sqrt{3}+1\right)^k
\end{equation}
I would like to find a simple expression for the above in which only real numbers appear. It is clear that the above expression is always a real number since
\begin{equation}
\overline{\left(2 i c_0-i+\sqrt{3}\right)^2 \left(-2 c_0+i \sqrt{3}+1\right)^k}= \left(-2 i c_0+i+\sqrt{3}\right)^2 \left(-2 c_0-i \sqrt{3}+1\right)^k.
\end{equation}
But I am not able to simplify it. I am pretty sure I once saw how to do this in a complex analysis course but I cannot recall the necessary tools. Help is much appreciated.

elementary number theory - Prove $\gcd(ab, c)\le \gcd(a,c)\gcd(b,c)$



How do we prove that
$$\gcd(ab, c) \le \gcd(a,c) \gcd(b,c)$$
for all positive integers $a,b,c$? I've tried thinking about it with Bezout's and the Euclidean algorithm but I'm not quite getting there. Very new to proof-based math so any help is very appreciated.


Answer




When in doubt with $\gcd$ problems, use prime factorization. While it may not be the most abstract or "elegant" way to prove things, it is very powerful and will get the job done.
So, we can write
\begin{align*}
a &= 2^{x_1} 3^{x_2} 5^{x_3} \cdots \\
b &= 2^{y_1} 3^{y_2} 5^{y_3} \cdots \\
c &= 2^{z_1} 3^{z_2} 5^{z_3} \cdots \\
ab &= 2^{x_1 + y_1} 3^{x_2 + y_2} 5^{x_3 + y_3} \cdots
\end{align*}
Now, when we take the gcd, we take the minimum of the two prime exponents, for each prime. So
$$
\gcd(ab,c) = 2^{\min(x_1 + y_1,z_1)} 3^{\min(x_2 + y_2, z_2)} \cdots,
$$
and in general the power of a prime $p$ here is $\min(x_i + y_i, z_i)$.
On the other hand,
\begin{align*}
\gcd(a,c) &= 2^{\min(x_1,z_1)} 3^{\min(x_2,z_2)} \cdots \\
\gcd(b,c) &= 2^{\min(y_1,z_1)} 3^{\min(y_2,z_2)} \cdots \\
\gcd(a,c) \gcd(b,c) &=
\left( 2^{\min(x_1,z_1)} 3^{\min(x_2,z_2)} \cdots \right)
\left( 2^{\min(y_1,z_1)} 3^{\min(y_2,z_2)} \cdots \right) \\
&= 2^{\min(x_1,z_1) + \min(y_1,z_1)} 3^{\min(x_2,z_2) + \min(y_2,z_2)} \cdots,
\end{align*}
and in general the power of a prime $p$ here is $\min(x_i, z_i) + \min(y_i,z_i)$.



So how can we prove the desired result?
We can do it by proving that the prime powers of $\gcd(ab,c)$ are less than the prime powers of $\gcd(a,c) \gcd(b,c)$.
That is, we just have to show
$$
\min(x_i + y_i, z_i) \le \min(x_i,z_i) + \min(y_i,z_i).
$$

Now this can be shown true by considering whether $x_i + y_i$ or $z_i$ is bigger.
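A brute-force sanity check (an editorial addition, not part of the original answer): both the exponent inequality and the gcd inequality itself can be verified over small ranges in a few lines of Python.

from math import gcd

# The key exponent inequality: min(x+y, z) <= min(x, z) + min(y, z).
assert all(min(x + y, z) <= min(x, z) + min(y, z)
           for x in range(12) for y in range(12) for z in range(12))

# The gcd inequality itself, over a small range.
assert all(gcd(a * b, c) <= gcd(a, c) * gcd(b, c)
           for a in range(1, 40) for b in range(1, 40) for c in range(1, 40))
print("no counterexamples found")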


calculus - How to evaluate $\int \sin^{-1}(x)\cos^{-1}(x) \, dx$?



I need to evaluate $$\int \sin^{-1}(x)\cos^{-1}(x) \, dx.$$ Can anyone please give me an idea or a hint ? Thanks.


Answer



We will use the fact that $\sin^{-1} x + \cos ^{-1} x = \pi/2$ and the change of variable $\sin^{-1} x = t$, $x = \sin t$, $dx = \cos t \, dt$. With these we get $\begin{align}\int \sin^{-1}(x)\cos^{-1}(x) \, dx &=
\int t(\pi/2 - t)\cos t \, dt \\
&=\int (\pi t/2 - t^2) \, d \sin t \\
&= (\pi t/2 - t^2)\sin t - \int (\pi/2 - 2t) \sin t\, dt \\
&= (\pi t/2 - t^2)\sin t + (\pi/2 - 2t) \cos t + 2\int \cos t\, dt\\
&= (\pi t/2 - t^2)\sin t + (\pi/2 - 2t) \cos t + 2 \sin t +C\\\end{align}$
Substituting back $t = \sin^{-1} x$ (so $\sin t = x$ and $\cos t = \sqrt{1-x^2}$) gives the antiderivative in terms of $x$.
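To guard against sign slips in the integration by parts (an editorial addition), the antiderivative can be checked by differentiation with SymPy:

import sympy as sp

t = sp.symbols('t')
F = (sp.pi*t/2 - t**2)*sp.sin(t) + (sp.pi/2 - 2*t)*sp.cos(t) + 2*sp.sin(t)
# Differentiating F should recover the integrand t*(pi/2 - t)*cos(t).
print(sp.simplify(sp.diff(F, t) - t*(sp.pi/2 - t)*sp.cos(t)))  # expected: 0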


Monday 29 September 2014

Proof Involving Modular Arithmetic and Fermat's Theorem



I'm trying, but struggling, to prove this statement about congruences. The title is perhaps uninformative, which I apologize for, so feel free to edit or comment if you have a better description.



Theorem: Let $p$ and $r$ be distinct primes greater than $2$. Then, $p^{r-1} + r^{p-1} \equiv 1 \ \text{(mod $pr$)}$.



Here's my best attempt at a proof. It doesn't quite prove the statement, but rather gives a string of facts I was able to piece together which, unless I missed something important, may be sufficient to write the proof; I'm struggling to piece them together correctly.




Proof Attempt:



Since $p$ and $r$ are prime, it's clear that they are relatively prime to one another, so $\gcd(p,r) = 1$. Thus, we are free to leverage Fermat's little theorem in considering $p^{r-1}$ and $r^{p-1}$.



From $p^{r-1}$, that $p$ and $r$ are relatively prime tells us that $p^{r-1} \equiv 1 \ \text{(mod $r$)}$. Similarly, we have $r^{p-1} \equiv 1 \ \text{(mod $p$)}$.



Now, by definition, we have, taking $p$ and $r$ as fixed:
\begin{align*}
p^{r-1} \equiv 1 \ \text{(mod $r$)} \iff \exists a \in \mathbb{Z}, p^{r-1} -1 = ar

\end{align*}
and
\begin{align*}
r^{p-1} \equiv 1 \ \text{(mod $p$)} \iff \exists b \in \mathbb{Z}, r^{p-1} - 1 = bp
\end{align*}
So, now I can't quite figure out how to string these together. We could add the two equations so that we have $p^{r-1} + r^{p-1}$ on the same side of an equation, but then we have a $-2$ term that we can't quite eliminate, nor can we factor out $pr$ from the right-hand side of the equation, though we can factor out a $p$ from one term and an $r$ from another. The only other thing that occurred to me was that $p$ and $r$ being relatively prime implies invertibility. Similarly, that they have a $\gcd$ of $1$ means that we can write $1$ as a linear combination of $p$ and $r$ with integer coefficients, but that still didn't quite help with factoring out a $pr$ from the right-hand side, even if it seems to isolate $p^{r-1} + r^{p-1} - 1$ on one side of the equation.



I'd greatly appreciate any helpful comments or hints. I'd really like to figure as much of this out myself, as I'd like to think I'm reasonably close, so any guidance in the right direction is especially appreciated. I'll edit this question later, hopefully with an updated, correct proof of this and credit whoever provided any thoughts.



Thanks.



Answer



Are you familiar with the property



$a \equiv b \pmod{m_1}$



$a \equiv b \pmod{m_2}$



$\vdots$



$a \equiv b \pmod{m_n}$




$\implies a \equiv b \pmod{\text{lcm}(m_1, m_2, \ldots, m_n)}$?



If not, try to prove it! This gives a direct solution to the problem you're facing. And I believe $p$ and $r$ must be distinct in order for this solution to work. Otherwise, I'm pretty sure $p^{r-1} + r^{p-1} \equiv 2 p^{p-1} \equiv 0 \not\equiv 1 \pmod{p^2}$.
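A quick numerical confirmation of the statement (an editorial addition, not part of the original answer):

from itertools import combinations

# Check p**(r-1) + r**(p-1) ≡ 1 (mod p*r) for a few pairs of distinct odd primes.
for p, r in combinations([3, 5, 7, 11, 13, 17], 2):
    assert (pow(p, r - 1, p * r) + pow(r, p - 1, p * r)) % (p * r) == 1
print("congruence holds for all tested pairs")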


real analysis - What can we conclude from $f(x+y)+f(x-y)=f(xy)$?


Let $f : \mathbb{R}\rightarrow\mathbb{R}$ be a function such that $f(x + y) + f(x − y) = f(xy)$ for all $x, y \in\mathbb{R}$. Then $f$ is:



A. Strictly increasing.



B. Strictly decreasing.



C. Identically zero.




D. Constant but not necessarily zero.




I have no idea how to do this. Thanks for any hint.

integration - Integral of $int^1_0 frac{dx}{1+e^{2x}}$



I am trying to solve this integral and I need your suggestions.
I thought about setting $t = 1+e^{2x}$, but I don't know how to continue from there.



$$\int^1_0 \frac{dx}{1+e^{2x}}$$
Thanks!


Answer




With the change of variable $u=e^{x}$, you get
$$
\int_{[0,1]}\frac{dx}{1+e^{2x}} = \int_{[1,e]}\frac{1}{u(1+u^2)}du
$$
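To finish from here (an editorial addition, one standard route): decompose by partial fractions and integrate term by term,
$$\frac{1}{u(1+u^2)}=\frac{1}{u}-\frac{u}{1+u^2},\qquad
\int_{[1,e]}\frac{du}{u(1+u^2)}=\left[\ln u-\tfrac{1}{2}\ln(1+u^2)\right]_1^e=1-\tfrac{1}{2}\ln\frac{1+e^2}{2}.$$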


elementary number theory - Is $0 ^ 0 = 1$ a valid result?




Most people say $0 ^ 0$ is indeterminate, but that's in the context of limits. I mean $0 ^ 0$ when the zeros are absolute.




I've seen that one way to define exponentiation of natural numbers is saying that $a ^ 0 = 1$ for any natural $a$ (including zero), and $a^{1+n} = a \cdot a ^ n$



Under this definition there is no problem, but my concern arises because in the construction of the natural numbers, as in von Neumann's construction using sets, the naturals are not the same objects as the non-negative integers. Integers are constructed as pairs of natural numbers.



So what would be the value of the expression $0 ^ 0 + (-4)$? You can not just replace $0 ^ 0$ by $1$ using the definition above and say the result is $-3$, because you can not add a natural number to an integer: they belong to different structures. For this expression to be properly defined, there needs to be a definition of exponentiation on the integers in which $0 ^ 0 = 1$.



Or perhaps it is not correct to define exponentiation on the integers, since it would not be closed. But it would be closed in the complex numbers. What is the definition of exponentiation in the complex numbers, using set theory?



Is it possible that $0 ^ 0$ is equal to $1$ only with natural zeros but not with the zeros of other numerical sets?


Answer




It depends on the definition of exponentiation one uses. There are different definitions of exponentiation based on multiplication, set theory, logarithms, even solutions of algebraic equations (e.g. defining $a^{1/2}$ as the nonnegative solution to $x^2=a$). Some definitions extend naturally to give $0^0=1$, others do not.



Surely this matter has been discussed on MSE before, so I recommend a search.


Sunday 28 September 2014

linear algebra - $f(x + y) = f(x) + f(y)$. Show that $f$ is continuous.

Let $f:\mathbb{R} \rightarrow \mathbb{R}$ and $ f(x + y) = f(x) + f(y)$.



How can I show that $f$ is continuous, when $f$ is continuous at $f(0)$?

summation - Showing that $1-1/2+\cdots+1/(2n-1)-1/2n$ is equal to $1/(n+1)+1/(n+2)+\cdots+1/(2n)$




$1-1/2+1/3-1/4+ \cdots +1/(2n-1)-1/2n=1/(n+1)+1/(n+2)+ \cdots +1/2n$




I was asked to prove the validity of the above equation by mathematical induction. It isn't hard to prove that it holds for any arbitrary natural number. But how did mathematicians (or anyone) discover that the left side of the equation equals the right side? It doesn't seem obvious. I've tried to manipulate both sides by various means, such as multiplying by the denominators, but I can't observe any pattern. Was this equation discovered by chance? Thanks in advance.



Answer



I don't know how someone first discovered this identity, but here's a clearer way of seeing it:
\begin{align*}
1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots - \frac{1}{2n} &= 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots + \frac{1}{2n}- 2\left(\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \cdots + \frac{1}{2n} \right)\\
&= 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots + \frac{1}{2n}-\left(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \right)
\end{align*}



This cancels out the first $n$ terms of the sequence, leaving the $(n+1)^\text{st}$ to $2n^\text{th}$ terms, which is the righthand side.
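For the skeptical reader (an editorial addition, not part of the original answer), an exact check of the identity with rational arithmetic:

from fractions import Fraction

for n in range(1, 50):
    lhs = sum(Fraction((-1)**(k + 1), k) for k in range(1, 2*n + 1))
    rhs = sum(Fraction(1, k) for k in range(n + 1, 2*n + 1))
    assert lhs == rhs
print("identity verified for n = 1..49")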


Saturday 27 September 2014

lie algebras - Defining the Lie Bracket on $\mathfrak{g} \otimes_{\Bbb{R}} \Bbb{C}$



I already know how to do the complexification of a real Lie algebra $\mathfrak{g}$ by the usual process of taking $\mathfrak{g}_\Bbb{C}$ to be $\mathfrak{g} \oplus i\mathfrak{g}$. Now suppose I take the approach of trying to complexify things using tensor products. I look at $\mathfrak{g} \otimes_\Bbb{R} \Bbb{C}$ with the $\Bbb{R}$ - linear map



$$\begin{eqnarray*} f : &\mathfrak{g}& \longrightarrow \mathfrak{g} \otimes_\Bbb{R} \Bbb{C} \\

&v&\longmapsto v \otimes 1. \end{eqnarray*}$$



Now suppose I have an $\Bbb{R}$-linear map $h : \mathfrak{g} \to \mathfrak{h}$ where $\mathfrak{h}$ is any other complex Lie algebra. Then I can define a $\Bbb{C}$-linear map $g$ from the complexification $\mathfrak{g} \otimes_\Bbb{R} \Bbb{C}$ to $\mathfrak{h}$ simply by defining the action on elementary tensors as



$$g(v \otimes i) = ih(v).$$



I have checked that $g$ is a $\Bbb{C}$-linear map. Now my problem comes in that my $f,g,h$ have to somehow be compatible with the bracket $[\cdot,\cdot]_\mathfrak{g}$ of $\mathfrak{g}$ and the bracket $[\cdot,\cdot]_\mathfrak{h}$ of $\mathfrak{h}$. This is because I don't want them to be just linear maps but also real/complex Lie algebra homomorphisms.






My question is: How do we define the bracket on the complexification? A reasonable guess would be $[v \otimes i,w \otimes i] = \left([v,w] \otimes [i,i]\right)$ but this is zero.





Edit: Perhaps I should add, in the usual way of defining the complexification, the bracket on $\mathfrak{g}$ extends uniquely to one on the complexification $\mathfrak{g} \oplus i\mathfrak{g}$. Should it not be the case now that my bracket on $\mathfrak{g}$ extends uniquely to one on the tensor product then?



Edit: How do we know that the Lie Bracket defined by MTurgeon is well-defined? Does it follow from the fact that we are tensoring vector spaces, and so there is one and only one way to represent a vector in here?


Answer



First of all, it seems the right extension is the following:
$$[v\otimes\lambda,w\otimes\mu]:=[v,w]\otimes\lambda\mu.$$




This satisfies bilinearity, and Jacobi's identity. However, how can we show that this is the unique extension of the Lie bracket? We have the following result (taken, for example, from Bump's Lie groups):



Proposition: If $V$ and $U$ are real vector spaces, any $\mathbb R$-bilinear map $V\times V\to U$ extends uniquely to a $\mathbb C$-bilinear map $V_{\mathbb C}\times V_{\mathbb C}\to U_{\mathbb C}$.



Proof: This basically follows from the properties of tensor products. Any $\mathbb R$-bilinear map $V\times V\to U$ corresponds to a unique $\mathbb R$-linear map $V\otimes_{\mathbb R} V\to U$. But any $\mathbb R$-linear map extends uniquely to a $\mathbb C$-linear map of the complexified vector spaces (this is easy to prove). Hence, we have a $\mathbb C$-linear map $(V\otimes_{\mathbb R} V)_{\mathbb C}\to U_{\mathbb C}$. But we have the following isomorphism:
$$(V\otimes_{\mathbb R} V)_{\mathbb C}\cong V_{\mathbb C}\otimes_{\mathbb C} V_{\mathbb C};$$
on the left-hand side, the tensor product is over $\mathbb R$, and on the right-hand side, it is over $\mathbb C$. Finally, our $\mathbb C$-linear map $V_{\mathbb C}\otimes_{\mathbb C} V_{\mathbb C}\to U_{\mathbb C}$ corresponds to a unique $\mathbb C$-bilinear map $V_{\mathbb C}\times V_{\mathbb C}\to U_{\mathbb C}$.


sequences and series - Mathematical Induction: Sum of first n odd perfect cubes

The series is $$P_k: 1^3 + 3^3 + 5^3 + ... + (2k-1)^3 = k^2(2k^2-1)$$ and I have to replace
$P_k$ with $P_{k+1}$ to prove the formula by induction.




I have to show that $$k^2(2k^2-1) + (2k+1)^3 = (k+1)^2[2(k+1)^2-1],$$
since the next odd cube added is $(2(k+1)-1)^3 = (2k+1)^3$.
I'm sorry that I'm asking, but there are just so many terms that the algebra goes over my head. Any help is appreciated.
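Since the sticking point is the algebra, here is the expansion spelled out (an editorial addition):
$$k^2(2k^2-1)+(2k+1)^3 = 2k^4+8k^3+11k^2+6k+1 = (k+1)^2(2k^2+4k+1) = (k+1)^2\left[2(k+1)^2-1\right].$$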

soft question - Elevator pitch for a (sub)field of maths?

When I first saw the title of this question, I forgot for a moment I was on meta, and thought it was asking about quick, catchy, attractive, informative one-or-two-liner summaries of various fields of mathematics. It turned out not to be… so here’s the question I thought it was!



There are lots of times when one wants a quick way to explain to someone (family, students, teachers, colleagues) what some field is about, and (hopefully) catch their interest as well — perhaps to come and learn about it themselves, perhaps just to understand why you find it interesting, perhaps to convince them why they should give you funding to work on it…




The question linked nicely describes how a good such pitch should work:




This isn't as easy as it sounds. Imagine the user who will never read your FAQ and you have two seconds to grab their attention. It should be catchy but descriptive. It should be thoroughly clear but painfully concise. Make every... word... count.




A couple of extra thoughts: the level of technicality of the pitch should probably depend on how specialised the field is. You don't need to describe “algebra” or “calculus” to a professional mathematician; dually, you probably aren't trying to explain “homotopy theory of CDGA’s” to your friend who never took any maths courses (at least, not in one sentence). This isn’t about any level of maths in particular — whether in recreational maths or cutting-edge research, everyone can benefit from a good slogan sometimes. I'm posting a few examples myself, to get the ball rolling.



I’m not sure to what extent guidelines/conventions for big-list community-wiki questions have solidified on this site, but a couple which work well on MO and SE are:





  • just one example (tagline) per posted answer, so that they can be voted up/down individually;


  • don't be shy of making near-duplicates if you think an idea could have been executed a bit better, nor of suggesting improvements to answers in their comments.




Related questions: Best intuitive metaphors for math concepts; Cocktail party math (at MO).

continuity - Checking whether a function is continuous or not



Set

$$ f(x) = \begin{cases} x^2 \cdot \sin(1/x), &\text{when $x\neq 0$;}\\
0, &\text{when $x=0$}.
\end{cases}$$



Now we have to check whether $f''(x)$ is continuous at $x= 0$ and whether $f''(0)$ exists or not.
All I've done is calculate $f''(x)$, as I don't know how to proceed; it would help if you could show me how to think about this. I know that we can check RHS = value at point = LHS, but I cannot apply it here because of the sine function. I don't get it. Thank you for helping.



P.s. what is reputation? Why do I need it to upload picture? :-o


Answer



We have

$$
f'(x)=\begin{cases}
2\,x\sin\dfrac1x-\cos\dfrac1x&\text{if }x\ne0,\\
0&\text{if } x=0.
\end{cases}
$$
$f'$ is discontinuous at $x=0$, since $\lim_{x\to0}f'(x)$ does not exist: the term $2x\sin\frac1x$ tends to $0$, while $\cos\frac1x$ oscillates between $-1$ and $1$. This implies that $f''(0)$ does not exist.


limits - Calculate $\lim\limits_{x \to 0+} e^{\frac{1}{x^2}}\sin x$

I want to investigate the value of $\lim_\limits{x \to 0+}{e^{\frac{1}{x^2}}\sin x}$. Since the exponential tends to infinity really fast, while the sine tends to 0 comparatively slowly, I believe the limit to be infinity. But I cannot find a way to prove it. I tried rewriting with the standard limit $\frac{\sin x}{x}$, as $\frac{\sin x}{x}\cdot xe^{\frac{1}{x^2}}$, but I still get an indeterminate form "$1 \cdot 0 \cdot \infty$".

Friday 26 September 2014

calculus - is $\sum\limits_{n=1}^{\infty}\ln\left(\frac{(n+1)^{2}}{n(n+2)}\right)$ convergent?



I don't know why but I'm having a hard time determining whether this series
$$
\sum\limits _{n=1}^{\infty}\ln\left(\frac{\left(n+1\right)^{2}}{n\left(n+2\right)}\right)
$$
converges to a real limit.




I did try to break it down according to $\ln$ identities.
$$
\ln\left(\frac{\left(n+1\right)^{2}}{n\left(n+2\right)}\right)=\ln\left(\left(n+1\right)^{2}\right)-\ln\left(n\left(n+2\right)\right)=2\ln\left(n+1\right)-\ln n-\ln\left(n+2\right)
$$
and then tried to increase it to get to a series that converges to a real number:
$$
2\ln\left(n+1\right)-\ln n-\ln\left(n+2\right)\leq 2\ln\left(n+1\right)-\ln n-\ln\left(n+1\right)=\ln\left(n+1\right)-\ln n
$$
so the partial sum $S_n$ of this upper bound telescopes:
$$

S_n=\ln2-\ln1+\ln3-\ln2+\dots+\ln(n+1)-\ln n \\=\ln (n+1)-\ln1=\ln (n+1)
$$
and so $\lim_{n\to\infty}S_n=\infty$



I know this series does converge to a real number (well, according to Wolfram Alpha).



Any help would be appreciated.


Answer



Hint:





Theorem: Let $\lim_{n\to\infty}n^pu_n=A$. Then $\sum u_n$ converges if $p>1$ and $A$ is finite.




Now take $p=2$.
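To connect the hint with this series (an editorial addition): since $\frac{(n+1)^2}{n(n+2)} = 1 + \frac{1}{n(n+2)}$, we have
$$\lim_{n\to\infty} n^{2}\ln\left(1+\frac{1}{n(n+2)}\right) = 1,$$
so the theorem applies with $p=2$ and $A=1$, and the series converges. In fact the partial sums telescope to $S_N=\ln\frac{2(N+1)}{N+2}$, so the sum equals $\ln 2$.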


sequences and series - $\sum_{n\geq 1}\frac{(-1)^n \ln n}{n}$

How can we compute the series $\displaystyle \sum_{n\geq 1}\frac{(-1)^n \ln n}{n}$?



I know it is $\eta'(1)$, where $\eta$ is the Dirichlet eta function, and I know its value. But I don't know how to compute it.



An approach I tried is to expand the series, then gather together the odd and the even terms, and use $\zeta$ (Riemann's function). After that, no ideas.



Any ideas are welcome.

real analysis - Proving a set is compact!

This is the last one I need help with, and the help is much appreciated, as I got stuck and pretty much turned in a blank worksheet to my professor. He says these types of problems will be on our final, and I have no clue where to start.




Suppose I claim that the set $\{(x,y) \in \Bbb R^2 : e^x + e^y \le 100 \text{ and } x+y \ge 0\}$ is compact. Prove this while also stating the theorems needed in obtaining the proof. I also think it would be helpful if anyone could explain why the hypotheses of those theorems are satisfied, if it isn't any bother...

real analysis - Let $g:\mathbb{R}\to\mathbb{R}$ be a measurable function such that $g(x+y) = g(x)+g(y)$. Then $g(x) = g(1)x$.


Let $g:\mathbb{R}\to\mathbb{R}$ be a measurable function such that
$$g(x+y) =g(x)+g(y).$$

How to prove that $g(x) = cx$ for some $c\in \mathbb{R}?$







The main thing to do here relies upon the fact that such a function must be continuous; once that is established, the conclusion follows by a natural argument.



Using this
Additivity + Measurability $\implies$ Continuity




Therefore I found out that there is nothing missing in this question.

elementary set theory - Functions and unions on sets

I seek an elementary set theory proof. (All new to me.)



Let $ f: X \rightarrow Y $ be a function. $A_i$ are subsets of $X$, $B_i$ are subsets of $Y$.



Prove that $f\left(\bigcup_iA_i\right) = \bigcup_if(A_i)$



I sense that the approach is to show they're subsets of each other, but I can't work out how to formulate this.




(Whilst we're at it, is the same true with the intersection?)

elementary number theory - How do I change to a different modulo?

I'm trying to solve the following problem:




What remainder does integer $n$ have when divided by $142$ if we know that $20n + 4$ and $72n - 12$ have the same remainder when divided by $142$?





I 'translated' the problem to maths:



$$
20n + 4 \equiv 72n - 12 \pmod {142} \\
n \equiv x \pmod {142}
$$



And we are essentially looking for $x$. Solving the linear congruence on top:




$$
20n + 4 \equiv 72n - 12 \pmod {142} \\
52n \equiv 16 \pmod {142} \\
13n \equiv 4 \pmod {71} \\
13n \equiv 4 + 8 \times 71 \pmod {71} \\
13n \equiv 572 \pmod {71} \\
n \equiv 44 \pmod {71} \\
$$




So $n = 71k + 44$ for any $k \in \mathbb{Z}$.



The problem is that I can't answer the original question as there, the modulo is $142$ and now it is $71$. How could I somehow 'change' the modulo and get the right answer? Or what would be a way to think about this problem?



PS: I realise that I'll get more solutions. For example, if I had $2x \equiv 4 \pmod 8$, I would divide by $2$ and get $x \equiv 2 \pmod 4$. Then $x = 4k + 2$. For these numbers are for example $6, 9, 11, \ldots$. I can see that if we were dividing these by $8$, not $4$, the remainders would be $2$s and $6$s.
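A brute-force check of which residues mod $142$ actually satisfy the condition (an editorial addition):

sols = [n for n in range(142) if (20*n + 4) % 142 == (72*n - 12) % 142]
print(sols)  # expected: [44, 115], i.e. exactly n ≡ 44 (mod 71)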

sequences and series - Progressions modulo $n$




I don't understand how to do these 2 tasks:



1) Prove that any arithmetic progression modulo $n$ has a period that divides $n$.



2) Prove that any geometric progression modulo a prime number $p$ has a period that divides $p-1$.



A progression modulo some number $n$ is when you have a progression and then you replace every $a_i$ by $a_i\mod n$.



A period is the number of elements in the smallest repeated sub-sequence, for example $...1,2,3,1,2,3...$ has period $3$.




In the first task, if we have a progression with difference $d$, and $d$ and $n$ are relatively prime, then the period will be $n$: because the gcd is $1$, all residues $0,\dots,n-1$ will appear in the repeated sub-sequence. But I don't understand how to prove the general case. Maybe when the gcd is some other number $k$, every $k$-th residue is present in the repeated sub-sequence, and $n/\text{period}=k$?


Answer



1. Let $a_k$ be an arithmetic progression. Then
$$ a_{k+m}=a_k+mr$$



Thus
$$a_{k+m} \equiv a_k \pmod{n} \Leftrightarrow \\
mr \equiv 0 \pmod{n} \Leftrightarrow \\
n|mr \\
$$




Now prove that the smallest $m$ which satisfies this relation is
$$m=\frac{n}{\gcd(n,r)}$$



which is a divisor of $n$.



2. Is similar:



Let $b_k$ be a geometric progression with ratio $r$. Then
$$ b_{k+m}=b_k\cdot r^m$$




If some $b_k \equiv 0 \pmod{p}$ the problem is easy; otherwise,
$$b_{k+m} \equiv b_k \pmod{p} \Leftrightarrow \\
r^m \equiv 1 \pmod{p} \\
$$



Now, by Fermat's Little Theorem you have $r^{p-1} \equiv 1 \pmod{p}$. If $m$ is your period, show that
$$r^{\gcd(m,p-1)}\equiv 1 \pmod{p}$$
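A small empirical check of part 1's period formula (an editorial addition; period_arith is an illustrative helper):

from math import gcd

def period_arith(r, n):
    # Smallest m >= 1 with m*r ≡ 0 (mod n), i.e. the period of an arithmetic
    # progression with difference r taken modulo n.
    m = 1
    while (m * r) % n != 0:
        m += 1
    return m

assert all(period_arith(r, n) == n // gcd(n, r)
           for n in range(1, 30) for r in range(1, 30))
print("period = n / gcd(n, r) confirmed for small cases")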


algebra precalculus - Finding out the dimensions

A square room having a total floor area of $1000000\ m^2$ is to be partitioned into two rooms by a single interior wall. The difference between the perimeters of the resulting two rooms is to be 400 feet. What are the dimensions (length and breadth), in feet, of the two rooms? I am having a little difficulty understanding this problem.



I can not write down the reasoning because I didn't understand the question. Can you help me? I'm confused: what do they mean by a single interior wall? How will I write an equation for the length and breadth?

Thursday 25 September 2014

Modulus of complex number



$$
|2e^{it}-1|^2$$




I don't understand how to work this out. I know that if I had, for example, $|2ti-1|^2$, then I would square the real and imaginary parts and add them to get the modulus squared; but here I have $|2e^{it}-1|^2$ and I don't understand what to do.



Any help would be much appreciated.


Answer



Considering that $t$ is real, we use Euler's formula $e^{i\theta}=\cos\theta+i\sin\theta$



$$|2e^{it}-1|^2=|2\cos t+2i\sin t-1|^2=(2\cos t-1)^2+(2\sin t)^2$$



Expanding, $(2\cos t-1)^2+(2\sin t)^2 = 4\cos^2 t - 4\cos t + 1 + 4\sin^2 t = 5-4\cos t$. As you can see, the value depends on $t$, which is the argument of the complex number $e^{it}$ in polar form.


linear algebra - Primitive elements of GF(8)



I'm trying to find the primitive elements of $GF(8),$ the minimal polynomials of all elements of $GF(8)$ and their roots, and calculate the powers of $\alpha^i$ for $x^3 + x + 1.$




If I did my math correct, I found the minimal polynomials to be $x, x + 1, x^3 + x + 1,$ and $x^3 + x^2 + 1,$ and the primitive elements to be $\alpha, \dots, \alpha^6 $



Would the powers of $\alpha^i$ as a polynomial (of degree at most two) be: $\alpha, \alpha^2, \alpha+ 1, \alpha^2 + \alpha, \alpha^2 + \alpha + 1,$ and $\alpha^2 + 1$?



Am I on the right track?


Answer



Those are all correct. Here's everything presented in a table:



$$\begin{array}{lll}
\textbf{element} & \textbf{reduced} & \textbf{min poly} \\

0 & 0 & x \\
\alpha^0 & 1 & x+1 \\
\alpha^1 & \alpha & x^3+x+1 \\
\alpha^2 & \alpha^2 & x^3+x+1 \\
\alpha^3 & \alpha+1 & x^3+x^2+1 \\
\alpha^4 & \alpha^2+\alpha & x^3+x+1 \\
\alpha^5 & \alpha^2+\alpha+1 & x^3 + x^2 + 1 \\
\alpha^6 & \alpha^2+1 & x^3 + x^2 + 1 \\
\end{array}$$
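The table can be regenerated mechanically (an editorial addition): represent an element $c_2\alpha^2+c_1\alpha+c_0$ as the bitmask $c_2c_1c_0$ and reduce with $\alpha^3=\alpha+1$.

elem = 0b010  # alpha
for i in range(1, 8):
    print(f"alpha^{i} = {elem:03b}")
    elem <<= 1                      # multiply by alpha
    if elem & 0b1000:               # degree-3 overflow: substitute x^3 = x + 1
        elem = (elem ^ 0b1000) ^ 0b011
# prints 010, 100, 011, 110, 111, 101, 001 -- matching the "reduced" column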


probability theory - $P(\limsup A_n)=1$ if $\forall A \in \mathfrak{F}$ s.t. $\sum_{n=1}^{\infty} P(A \cap A_n) = \infty$



Let $(A_n)_{n \in \mathbb{N}} \subseteq \mathfrak{F}$ be a sequence of events. Prove that $P(\limsup A_n)=1$ if for every $A \in \mathfrak{F}$ with $P(A) > 0$ we have $\sum_{n=1}^{\infty} P(A \cap A_n) = \infty$. (Side question 1: Is the second Borel-Cantelli lemma out because we don't know if the $A_n$'s are independent?)



Suppose $\forall A \in \mathfrak{F}$ s.t. $P(A) > 0, \sum_{n=1}^{\infty} P(A \cap A_n) = \infty$, but $P(\limsup A_n)<1$. My prof gave us this convenient hint: Show $\exists M > 0$ s.t. $P(\bigcap_{j=M}^{\infty} A_j^{c}) > 0$.




$1 > P(\limsup A_n)$



$1 - P(\limsup A_n) > 0$



$P(\liminf A_n^{c}) > 0$



By definition of $\liminf A_n^{c}$, $\exists M > 0 $ s.t. $P(\bigcap_{j=M}^{\infty} A_j^{c}) > 0$. (Side questions 2 and 3 Isn't this =1? ...Wait, is that what we're trying to prove? Haha)



$\to 1 - P(\bigcup_{j=M}^{\infty} A_j) > 0$




$\to 1 > P(\bigcup_{j=M}^{\infty} A_j)$



Main question: Now I guess we have to come up with some A s.t.



$\to 1 > P(\bigcup_{j=M}^{\infty} A_j) \geq \sum_{n=1}^{\infty} P(A \cap A_n)$, or is there something else to do?


Answer



Show that the series $\sum\limits_n P(A \cap A_n)$ converges for $A=\bigcap\limits_{n=M}^{\infty} A_n^{c}$.
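Spelling the hint out (an editorial addition): for $A=\bigcap_{j=M}^{\infty} A_j^{c}$ we have $A\cap A_n=\emptyset$ for every $n\ge M$, so $\sum_n P(A \cap A_n)\le\sum_{n<M}P(A_n)<\infty$. By the hypothesis this forces $P(A)=0$ for every $M$, hence $P(\liminf A_n^c)=\lim_{M\to\infty}P(\bigcap_{j=M}^{\infty} A_j^{c})=0$, i.e. $P(\limsup A_n)=1$.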


Complex numbers and geometric series



a) Use the formula for the sum of a geometric series to show that



$$\sum _{k=1}^n\:\left(z+z^2+\cdots+z^k\right)=\frac{nz}{1-z}-\frac{z^2}{\left(1-z\right)^2}\left(1-z^n\right),\:z\ne 1$$




I thought the formula for geometric series is $$\frac{a\left(1-r^n\right)}{1-r}=\frac{z\left(1-z^n\right)}{1-z}$$



How do I approach this?



b) Let $$z=\cos\left(\theta \right)+i\sin\left(\theta \right),\text{ where }0<\theta <2\pi.$$



By considering the imaginary part of the left-hand side of the equation of $a$, deduce that



$$\sum _{k=1}^n (\sin(\theta)+\sin(2\theta)+\cdots+\sin(k\theta ))=\frac{(n+1)\sin(\theta) -\sin((n+1)\theta)}{4\sin^2\left(\frac{\theta }{2}\right)}$$




assuming



$$\frac{z}{1-z}=\frac{i}{2\sin\left(\frac{\theta }{2}\right)} \left(\cos\left( \frac{\theta }{2} \right) +i\sin\left(\frac{\theta }{2}\right)\right)$$


Answer



This arabesque just might prove useful: sum the inner geometric series first,
$$\sum_{k=1}^n \sum_{j=1}^k x^j = \sum_{k=1}^n \frac{x - x^{k+1}}{1 - x}
= \frac{nx}{1-x} - \frac{x}{1-x}\sum_{k=1}^n x^{k}$$
Now resolve the remaining sum
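Resolving it (an editorial addition): $\sum_{k=1}^n x^k = \frac{x(1-x^n)}{1-x}$, so the double sum equals
$$\frac{nx}{1-x}-\frac{x^2(1-x^n)}{(1-x)^2},$$
which is exactly the right-hand side of part a) with $x=z$.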


Wednesday 24 September 2014

cardinals - A simple bijection between $\mathbb{R}$ and $\mathbb{R}^4$ or $\mathbb{R}^n$?




How to form a bijection from $(0,1]$ to $\mathbb{R}$:



$$f(x) = \left\{\begin{array}{ll}
2-\frac{1}{x}&\text{if }x\in(0, .5]\\
\frac{2x-1}{1-x}&\text{if }x\in(.5, 1].
\end{array}\right.$$



So, to go from $\mathbb{R}$ to $\mathbb{R}^4$ shouldn't be so hard... First we convert all of $\mathbb{R}$ into decimal representations. The numbers then have the form:
$$a_1a_2a_3a_4\ldots$$ 

Where the $a_i$s are the digits $0$, $1$, $2,\ldots,9$. 



At some point there is a decimal point; suppose it precedes the digit $a_j$ (it could be the first).



Eliminate all duplicate representations:  $3.41=3.4099999\ldots$ and $0002 = 2$, by choosing the one with the fewest digits. 



Now map the remaining representations to
$$( a_1a_5a_9\ldots, a_2a_6a_{10}\ldots, a_3a_7a_{11}\ldots, a_4a_8a_{12}\ldots)$$



Put a decimal point in each one preceding the $a_j$, $a_{j+1}$, $a_{j+2}$ and $a_{j+3}$ digits.




This is not bijective, though! I know such a mapping exists, but I don't want an existence proof; I want an explicit mapping I can use.



Is there some modification I can make to make it bijective?


Answer



Let $f$ be any bijection from $\mathbb{R}$ to $P(\mathbb{N})$ (the simplest one I could think of uses continued fractions, e.g. see here). To construct a bijection from $\mathbb{R}$ to $\mathbb{R}^n$ take $$g_i(A) = \left\{ \left.\frac{x-i}{n} \right|\ x\in A, x =i\ (\mathrm{mod}\ n) \right\}$$ and set $$h_i = f^{-1} \circ g_i \circ f$$ and then $$h(x) = \langle h_0(x), h_1(x), \ldots, h_{n-1}(x) \rangle$$ will be your bijection.


Tuesday 23 September 2014

reference request - Risch algorithm analogue for differential equations




I know that we can determine whether an integral has a closed form, that is, whether it is a composition of elementary functions. That problem is (more or less) solved by the Risch algorithm. For solutions of differential equations we consider a slightly weaker condition: a solution may also contain integrals of elementary functions.



Is there an algorithm which decides if solution can be expressed like that?


Answer



Yes, see Picard–Vessiot theory: https://www.wikiwand.com/en/Picard%E2%80%93Vessiot_theory


elementary number theory - $7\mid x \text{ and } 7\mid y \Longleftrightarrow 7\mid x^2+y^2$


Show that
$$ 7\mid x \text{ and } 7\mid y \Longleftrightarrow 7\mid x^2+y^2 $$




Indeed,



First let's show



$7\mid x \text{ and } 7\mid y \Longrightarrow 7\mid x^2+y^2 $




we've $7\mid x \implies 7\mid x^2$ the same for $7\mid y \implies 7\mid y^2$ then
$ 7\mid x^2+y^2 $




  • Am I right, and can we write $a\mid x \implies a\mid x^p,\ \forall p\in \mathbb{N}^*$?



Now let's show




$7\mid x^2+y^2 \Longrightarrow 7\mid x \text{ and } 7\mid y$



$7\mid x^2+y^2 \Longleftrightarrow x^2+y^2\equiv 0 \pmod 7 $



for



\begin{array}{|c|c|c|c|c|c|c|c|} \hline
x \pmod 7 & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline
x^2 \pmod 7 & 0 & 1 & 4 & 2 & 2 & 4 & 1 \\ \hline
y \pmod 7 & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline
y^2 \pmod 7 & 0 & 1 & 4 & 2 & 2 & 4 & 1 \\ \hline
\end{array}



which means the only possibility is $x\equiv y\equiv 0 \pmod 7$: the squares modulo $7$ are $\{0,1,2,4\}$, and the only pair of them summing to $0 \pmod 7$ is $0+0$.




  • Am I right and are there other ways?
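A one-line brute-force confirmation over all residues (an editorial addition):

assert all(x % 7 == 0 and y % 7 == 0
           for x in range(7) for y in range(7)
           if (x * x + y * y) % 7 == 0)
print("x^2 + y^2 ≡ 0 (mod 7) only when x ≡ y ≡ 0 (mod 7)")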

Monday 22 September 2014

number theory - Proof for divisibility by $7$



One very classic story about divisibility is something like this.





A number is divisible by $2^n$ if the last $n$-digit of the number is divisible by $2^n$.
A number is divisible by 3 (resp., by 9) if the sum of its digit is divisible by 3 (resp., by 9).
A number $\overline{a_1a_2\ldots a_n}$ is divisible by 7 if $\overline{a_1a_2\ldots a_{n-1}} - 2\times a_n$ is divisible by 7 too.




The first two statements are very well known and quite easy to prove. However I could not find the way on proving the third statement.



PS: $\overline{a_1a_2\ldots a_n}$ means the digits of the number itself, not to be confused with multiplication of number.


Answer




$$5(\overline{a_1a_2\ldots a_n})=50(\overline{a_1a_2\ldots a_{n-1}})+5a_n\equiv\overline{a_1a_2\ldots a_{n-1}}-2a_n\pmod{7},$$
since $50\equiv 1$ and $5\equiv -2\pmod{7}$. Because $\gcd(5,7)=1$, $7$ divides the original number if and only if it divides $5$ times it, which proves the rule.
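A quick empirical check of the rule (an editorial addition; rule is an illustrative helper that applies the reduction until the number is small):

def rule(n):
    # Repeatedly replace n by (n without its last digit) - 2 * (last digit).
    while n >= 100:
        n = n // 10 - 2 * (n % 10)
    return n % 7 == 0  # Python's % keeps this correct even if n went negative

assert all(rule(n) == (n % 7 == 0) for n in range(1, 100000))
print("the divisibility test agrees with n % 7 == 0 below 100000")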


number theory - How many 0's are at the end of 20!



I'm not exactly sure how to answer this question; any help would be appreciated. After reading this I'm still not sure.




Cheers


Answer



There is a general formula that can be used. But it is good to get one's hands dirty and compute.



If $20!$ seems dauntingly large, calculate $10!$. You will note it ends with two zeros. Multiplying $10!$ by all the numbers from $11$ to $20$ except $15$ and $20$ will not add to the zeros. Multiplying by $15$ and $20$ will add one zero each.



Remark: Suppose that we want to find the number of terminal zeros in something seriously large, like $2048!$. It is not hard to see that this number is $N$, where $5^N$ is the largest power of $5$ that divides $2048!$. This is because we need a $5$ and a $2$ for every terminal $0$, and the $5$s are the scarcer resource.



To find $N$, it is helpful to think in terms of money. Every number $n$ between $1$ and $2048$ has to pay a $1$ dollar tax for every $5$ "in it." So $45$ has to pay $1$ dollar, but $75$ has to pay $2$ dollars, because $75=5^2\cdot 3$. And a $5$-rich person like $1250$ has to pay $4$ dollars.




Let us gather the tax in stages. First, everybody divisible by $5$ pays a dollar. These are $5$, $10$, $15$ and so on up to $2045$, that is, $5\cdot 1, 5\cdot 2,\dots, 5\cdot 409$. So there are $409$ of them. It is useful to bring in the "floor" or "greatest integer $\le x$ " function, and call the number of dollars gathered in the first stage $\lfloor 2048/5\rfloor$.



But many numbers still owe some tax, namely $25,50,75,\dots,2025$. Get them to pay $1$ dollar each. These are the multiples of $25$, and there are $\lfloor 2048/25\rfloor$ of them.



But $125$, $250$, and so on still owe money. Get them to pay $1$ dollar each. We will gather $\lfloor 2048/125\rfloor$ dollars.



But $625$, $1250$, and $1875$ still owe money. Gather $1$ dollar from each, and we will get $\lfloor 2048/625\rfloor$ dollars.



Now everybody has paid up, and we have gathered a total of

$$\lfloor 2048/5\rfloor + \lfloor 2048/25\rfloor +\lfloor 2048/125\rfloor +\lfloor 2048/625\rfloor$$
dollars. That's the number of terminal zeros in $2048!$.
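In code (an editorial addition), the staged tax collection above is just Legendre's formula; zeros below is an illustrative helper.

from math import factorial

def zeros(n):
    # Sum of floor(n / 5), floor(n / 25), ... -- the staged "tax collection".
    count, p = 0, 5
    while p <= n:
        count += n // p
        p *= 5
    return count

s = str(factorial(20))
assert len(s) - len(s.rstrip('0')) == zeros(20) == 4
print(zeros(2048))  # 409 + 81 + 16 + 3 = 509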


limits - Sequence defined recursively: $\sum_{k=1}^n x_k = \frac{1}{\sqrt{x_{n+1}}}$




Let $(x_n)_{n \ge 1}$ defined as follows:
$$x_1 \gt 0, x_1+x_2+\dots+x_n=\frac {1}{\sqrt {x_{n+1}}}.$$ Compute the limit $\lim _ {n \to \infty} n^2x_n^3.$



MY TRY: I thought about using Stolz-Cesaro lemma, but I couldn't get to an appropriate form that leads to an easier limit.


Answer



Let $y_n=x_1+\ldots+x_n$ for $n\ge1$. Then the recursion reads $y^2_n\,(y_{n+1}-y_n)=1.$ We see that $y_{n+1}-y_n>0$ for $n\ge1.$ Since $y\mapsto y^2$ is increasing on the positive reals, it follows that
$$n=\sum^n_{k=1}y^2_k\,(y_{k+1}-y_k)\le\int^{y_{n+1}}_{y_1}y^2\,dy=\frac13(y^3_{n+1}-y^3_1),$$
i.e. $$y_{n+1}\ge3^{1/3}n^{1/3}.\tag1$$ Consequently $$\frac1{y^2_n}\le3^{-2/3}\frac1{(n-1)^{2/3}},$$ and $$y_{n+1}=y_2+\sum^n_{k=2}\frac1{y^2_k}\le y_2+3^{-2/3}\sum^n_{k=2}\frac1{(k-1)^{2/3}}=y_2+3^{-2/3}\sum^{n-1}_{k=1}\frac1{k^{2/3}}.$$ Now we can estimate the RHS with a telescope: $$k^{1/3}-(k-1)^{1/3}=\frac1{k^{2/3}+k^{1/3}(k-1)^{1/3}+(k-1)^{2/3}}\ge\frac1{3k^{2/3}},$$ that means $$y_{n+1}\le y_2+3^{1/3}(n-1)^{1/3}.\tag2$$
(1) and (2) together give $$\lim_{n\rightarrow\infty}y_n\,n^{-1/3}=3^{1/3}.$$

Since $x_{n+1}=y_{n+1}-y_n=1/y^2_n,$ we have $$\lim_{n\rightarrow\infty}x_n\,n^{2/3}=3^{-2/3},$$ and thus $$\lim_{n\rightarrow\infty}x^3_n\,n^2=3^{-2}=\frac19.$$
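A numerical sanity check of the limit $1/9\approx 0.1111$ (an editorial addition; the starting value is arbitrary positive):

S = x = 2.0          # x_1 > 0 arbitrary; S tracks x_1 + ... + x_n
N = 10**6
for n in range(2, N + 1):
    x = 1.0 / S**2   # x_n = 1 / (x_1 + ... + x_{n-1})^2
    S += x
print(N**2 * x**3)   # expected: close to 1/9 = 0.111...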


Sunday 21 September 2014

combinatorics - Number of ways to choose $k$ subsets such that $B_1 \cap B_2 \cap \cdots \cap B_k = \emptyset$.



Let $ \space n,k \in \mathbb Z \space $ such that $1 \le k \le n \space$. Let $\space A=\{1,2,...,n\}$. Find the number of ways to choose $k$ subsets $\space B_1,B_2,...,B_k\space $ of $A$ such that $ B_1 \cap B_2 \cap \cdot \cdot \cdot \cap B_k = \emptyset$.



Well, I did it using inclusion exclusion, but I'm wondering if this is possible using a recursive formula.




By inclusion-exclusion: $\sum_{i=0}^n \binom{n}{i} (-1)^i (2^{n-i})^k$, which by the binomial theorem equals $(2^k-1)^n$.


Answer



Let $F(k,n)$ denote the answer you seek.



Note first that $F(k,1)=2^k-1$. To see this, we remark that the only two subsets are $\emptyset, \{1\}$. Thus in your list of $k$ we can choose freely between these two, but we can't choose $\{1\}$ every time.



Now consider $F(k,n)$. If you delete the element $n$ from every set (if it is there) you get a selection counted by $F(k,n-1)$. But starting from a selection of that form we can add $n$, or not, to each set, but we can't add it to every set. Thus $$F(k,n)=F(k,n-1)\times (2^k-1)$$ and we are done.


general topology - complement of zero set of holomorphic function is connected



I'm stuck with the following part of Exercise 1.1.8 in Huybrechts' book Complex Geometry:



Prove that, if $U \subset \mathbb C^n$ is open and connected, then $U \setminus Z(f)$, the complement of the zero set of a non-trivial holomorphic function $f:U\to \mathbb C$, is connected.




I know I could use the Riemann extension theorem, but I'm messing things up at this point: suppose $U \setminus Z = A \cup B$ with $A$ and $B$ open, non-empty and disjoint; how do I see that there's a point $x \in \overline A \cap \overline B \cap Z$?


Answer



Suppose $\overline{A}\cap\overline{B}\cap Z = \varnothing$. Since $A$ and $B$ are open (in $U$, or equivalently in $\mathbb{C}^n$), we have



$$\overline{A}\cap B = \varnothing = A\cap\overline{B},$$



and thus $\overline{A}\cap \overline{B} \subset Z$. The supposition thus implies that $\overline{A}$ and $\overline{B}$ are disjoint, and thus



$$\varnothing = \overset{\Large\circ}{Z} = U\setminus \overline{U\setminus Z} = U\setminus \overline{A\cup B} = U\setminus (\overline{A}\cup \overline{B}),$$




which means that $U$ is the disjoint union of the nonempty closed sets $\overline{A}$ and $\overline{B}$, and therefore $U$ is not connected. This contradicts the premise that $U$ is connected, hence the supposition $\overline{A}\cap\overline{B}\cap Z = \varnothing$ must have been wrong.



So the conclusion that $\overline{A}\cap\overline{B}\cap Z \neq \varnothing$ follows if $Z$ is any nowhere dense closed subset of $U$. Since there are nowhere dense closed sets $F\subset U$ such that $U\setminus F$ is not connected, you need special properties of the zero sets of holomorphic functions to conclude that $U\setminus Z$ must be connected. Off the top of my head, I can't think of another way than the Riemann extension theorem.


algebra precalculus - Connection between quadratic equations and complex numbers where $Delta > 0$ (of the quadratic formula)

I recently studied how the quadratic equation $x^2-2x+5=0$, which has complex roots, was solved:



[image: worked solution of $x^2-2x+5=0$ by substituting $x=a+bi$ and solving for $a$ and $b$]



Then I typed it in for the star of this question:



$$x^2-2x-5$$



And it used the quadratic formula, which I found boring. So I wondered: what will happen if I use the same method as the image above and just pretend it will have a complex solution. Will the imaginary numbers cancel out? What will happen?




So I did that, and my question is: how did this work so well? What is actually happening here? Is this a complex number thing, or just the fact that I'm splitting $x$ into two components?



Or did I just luck into the solution and actually have a subtle mistake?



Here are the steps I took.



$$x^2-2x-5=0$$



$$(a+bi)^2 - 2(a+bi) - 5 = 0$$




$$(a^2 + 2abi +b^2i^2) - 2a - 2bi - 5 = 0$$



Reordering it to the 'template' of the image. (note: $b^2i^2 = -b^2$)



$$(a^2 - b^2 - 2a - 5) + i(-2b + 2ab) = 0$$



Alright, I'm on the right track, as it is intuitive to me that $-5$ should be the only changed value.



\begin{cases}
a^2 - b^2 - 2a - 5 = 0 \\
-2b + 2ab = 0
\end{cases}



Here is where I venture off a little bit, as my linear algebra is almost non-existent.



$$2ab - 2b = 0$$



$$2ab = 2b$$



$$2a = 2$$




$$a = 1$$



Since $b$ isn't immediately determined by that equation, I went to the other equation, as I figured it might have more information about $b$. We already have a believable value for $a$, so I substitute it in there.



$$1^2 - 2\cdot 1 - b^2 - 5 = 0$$



$$-b^2 - 6 = 0$$



$$-6 = b^2$$




$$b = \sqrt{-6}$$



$$b = \sqrt{6}i \text{ or } b = -\sqrt{6}i$$



And substitute back for $x = a + bi$ we get:



$$x = 1 + (\pm\sqrt{6}i)i$$



$$x = 1 + (\pm\sqrt{6}i^2)$$




$$x = 1 + (\pm\sqrt{6}\cdot(-1))$$



$$x = 1 + (\pm\sqrt{6})$$



$$x = 1 \pm\sqrt{6}$$



Wait, that is correct! (Use the quadratic formula or something like SymPy to check; I use the website above, https://www.symbolab.com/solver/.)



How is this possible? How is it possible that if I model this equation with complex numbers, I also get a valid solution? When isn't this possible?
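For what it's worth (an editorial addition): the asker's SymPy suggestion takes one line, and nothing here was luck. Writing $x=a+bi$ and allowing $a,b$ to be any complex numbers is just a re-parametrization of $x$, so any consistent solution of the resulting system yields a genuine root; here $b$ comes out imaginary, which makes $bi$ real.

import sympy as sp

x = sp.symbols('x')
print(sp.solve(x**2 - 2*x - 5, x))  # expected: [1 - sqrt(6), 1 + sqrt(6)]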

Saturday 20 September 2014

real analysis - Using definition of a limit to prove that $\lim_{n \rightarrow \infty} \frac{2}{n^4} = 0$




Using the definition of a limit, prove that $$\lim_{n \rightarrow \infty} \frac{2}{n^4} = 0$$




My take: I want to prove that given $\epsilon > 0$, there $\exists N \in \mathbb{N}$ such that $\forall n \ge N$




$$\left |\frac{2}{n^4} - 0 \right | < \epsilon$$



Notice that $\frac{2}{n^4} < \frac{2}{n} < \epsilon$, so I think I can say $n > 2\epsilon$ (not sure if I can do this).



So, given all that rough work,



Proof: Given $\epsilon >0$, choose $N > 2\epsilon$ and suppose $n \ge N$ such that



$$\left | \frac{2}{n^4} \right | < \frac{2}{n} \le \frac{2}{N} \le \frac{1}{(2\epsilon)^2} = \epsilon?$$




I'm sure I screwed it up somewhere. Can someone help me please?


Answer



I continue on your proof.



Your take: I want to prove that given $\epsilon > 0$, there $\exists N \in \mathbb{N}$ such that $\forall n \ge N$



$$\left |\frac{2}{n^4} - 0 \right | < \epsilon$$



Let $\epsilon>0$, we want to find an $N$ such that $\forall n\in\mathbb{N},~ n\ge N\Longrightarrow\left |\frac{2}{n^4}\right | < \epsilon$.




Observe on this inequality: $\left |\frac{2}{n^4}\right | < \epsilon$. Since the "$n$" we discuss is only under natural numbers, so we roughly assume that $n\in\mathbb{N}$. Then we can go on our observations:



$$\begin{alignat}{2}
&\left |\frac{2}{n^4}\right | < \epsilon\\
\underset{~n\in\mathbb{N}}{\Longleftrightarrow}&\frac{2}{n^4}<\epsilon\\
\Longleftarrow &~\frac{2}{n^4}<\underbrace{\frac{2}{n}<\epsilon}_{\text{hope it can be true}}
\end{alignat}
$$



Hence, from the "$\Longleftarrow$", when $\frac{2}{n}<\epsilon$, then $\left |\frac{2}{n^4}\right | < \epsilon$ can be true. But when will $\frac{2}{n}<\epsilon$?

$$\begin{alignat}{2}
&\frac{2}{n}<\epsilon\\
\Longleftrightarrow&~2<n\epsilon\\
\Longleftrightarrow&~n>\frac{2}{\epsilon}
\end{alignat}
$$






So until now, we have gain that $\displaystyle\forall n\in\mathbb{N},~n>\frac{2}{\epsilon}\Longrightarrow\left|\frac{2}{n^4} - 0 \right| < \epsilon$.

Then we choose $N$ to be $\lfloor\frac{2}{\epsilon}\rfloor+1$, so that $N>\frac{2}{\epsilon}$; then $\forall n\geq N,~\left|\frac{2}{n^4} - 0 \right| < \epsilon$. $\square$


sequences and series - Proving that every trap is a lure



I am using the book "Learn Limits Through Problems". It states that an interval is a "trap" for an infinite sequence if only finitely many terms lie outside the interval, while an interval is called a "lure" if infinitely many terms lie within the interval. If I use the harmonic series as an example, I would think that the whole interval would be considered a trap, since a finite number of terms (0) lie outside the interval. It also makes sense that this whole interval would be considered a lure, since infinitely many points of the sequence lie within it. However, I am not sure how to formally prove this. Any hints will be appreciated. Thanks.


Answer




Suppose I have infinitely many beads (terms in the sequence) and a basket (interval). I put some beads in the basket and leave some outside. If there are only finitely many beads left outside (basket is trap) then how many must have been put inside?


convergence divergence - Prove that the given sequence converges

Prove that the given sequence $\{a_n\}$ converges:



$a_1 > 0, a_2 > 0$




$a_{n+1} = \frac{2}{a_n + a_{n-1}}$ for $n \geq 2$



As I observed, this sequence does not seem to be monotonic, but it could be bounded, since the values of $a_1$ and $a_2$ are arbitrary positive numbers.



If the limit of the sequence existed, it would equal $1$: letting $x$ be the limit of $a_n$ as $n$ goes to infinity and solving the equation $x = \frac{2}{x + x}$ gives $x = 1$ or $-1$, from which we choose $x = 1$ since $x$ must be positive.



The only idea that came to my mind is bounding the sequence using two other sequences that could be shown to converge to 1 (Let these sequences be $b_n$ and $c_n$):



$b_n \le a_n \le c_n$




If we could find such sequences, and prove that they converge to $1$, the problem would be solved. So I tried to bound the sequence from both sides and show that the limits equal $1$, but failed to find such sequences. I find it a little difficult to analyze sequences of the form presented in the problem, since the sequence fluctuates a lot.



I am not sure how to start off, any ideas or tricks for such problems would be appreciated.

real analysis - $\lim_{p\rightarrow\infty}\|x\|_p = \|x\|_\infty$ given $\|x\|_\infty = \max(|x_1|,|x_2|)$

I have seen the proof done different ways, but none using the norm definitions provided.



Given:
$\|x\|_p = (|x_1|^p+|x_2|^p)^{1/p}$ and $\|x\|_\infty = \max(|x_1|,|x_2|)$




Prove:
$\lim_{p\rightarrow\infty}\|x\|_p = \|x\|_\infty$



I have looked at the similar questions:
The $ l^{\infty} $-norm is equal to the limit of the $ l^{p} $-norms. and Limit of $\|x\|_p$ as $p\rightarrow\infty$ but they both seem to use quite different approaches (we have not covered homogeneity so that is out of the question, and the other uses a different definition for the infinity norm).

general topology - Constructing a circle from a square





I have seen a picture like this several times:



[image: the classic "troll proof" picture claiming $\pi = 4$ by repeatedly folding a square's corners onto the inscribed circle]



featuring a "troll proof" that $\pi=4$. Obviously the construction does not yield a circle, starting from a square, but how to rigorously and demonstratively prove it?




For reference, we start with a circle inscribed in a square with side length 1. A step consists of reflecting each corner of figure $F_i$ so that it lies precisely on the circle, yielding figure $F_{i+1}$. $F_0$ is the square with side length 1. After infinitely many steps we have a figure $F_\infty$. Prove that it isn't a circle.



Possible ways of thinking:




  1. Since the perimeter of figure $F_i$ indeed does not change during a step, it is invariant. Since it does not equal the perimeter of the circle, $\pi\neq4$, it cannot be a circle.



While it seems to work, I do not find this proof demonstrative enough - it does not show why $F_\infty$ which looks very much like a circle to us, is not one.





  2. Consider one corner of the square $F_0$. Let $t$ be a coordinate along the edge of this corner, $0 \leq t \leq 1$, with $t=0$ and $t=1$ being the points of tangency for this corner of $F_0$ and the circle.
    By construction, all points $t \in A=\{ \frac{n}{2^m} \mid n,m\in \mathbb{N},\ n<2^m\}$ of $F_\infty$ lie on the circle. I think it can be shown that the remaining points, $\bar{A}=[0,1] \setminus A$, lie in an $\varepsilon$-neighbourhood $U$ of the circle. I also think that in the limit $\varepsilon \to 0$, the points $t\in\bar{A}$ also lie on the circle. Am I wrong in thinking this? Can we get a contradiction from this line of thought?



Any other elucidating proofs and thoughts are also welcomed, of course.


Answer



You have rigorously defined $F_i$, but how do you define $F_\infty$? You cannot say: "after infinitely many steps...".




In this case you could define $F_\infty = \bigcap_i F_i$ (i.e. the intersection of all $F_i$); since $(F_i)$ is a decreasing sequence, this is a good notion of limit. Notice however that $F_\infty$ is a circle! But this does not mean that the perimeters of the $F_i$ should converge to the perimeter of $F_\infty$.



You could also choose a metric on subsets of the plane to define some sort of convergence $F_i \to F_\infty$ as $i\to \infty$. In any case, if you choose any good metric you find that either $F_\infty$ is the circle or that the sequence does not converge.



The point here is that the perimeter is not continuous with respect to the convergence of sets... so even if $F_i\to F_\infty$ (in any decent notion of convergence) you cannot say that $P(F_i)\to P(F_\infty)$ (where $P$ is the perimeter).
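To see the same phenomenon in a simpler setting, here is a small numerical sketch (an illustration only, not the $F_i$ construction itself): staircase curves converge uniformly to the diagonal of the unit square, yet their lengths do not converge to the diagonal's length.

    import math

    def staircase(n):
        # vertices of an n-step staircase from (0,0) to (1,1)
        pts = [(0.0, 0.0)]
        for k in range(n):
            pts.append(((k + 1) / n, k / n))        # step right
            pts.append(((k + 1) / n, (k + 1) / n))  # step up
        return pts

    def length(pts):
        return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

    for n in [1, 10, 100, 1000]:
        pts = staircase(n)
        dist_to_diagonal = max(abs(x - y) for x, y in pts) / math.sqrt(2)
        print(n, length(pts), dist_to_diagonal)
    # the distance to the limit curve goes to 0, but the length stays 2.0,
    # never approaching sqrt(2): arc length is not continuous here either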


trigonometry - How is $\frac{\sin(x)}{x} = 1$ at $x = 0$




I have a function:
$$\text{sinc}(x) = \frac{\sin(x)}{x}$$
and the example says that $\text{sinc}(0) = 1$. How is this true?




I know that $\lim\limits_{x \to 0} \frac{\sin(x)}{x} = 1$, but the graph of the function $\text{sinc}(x)$ shows that it's continuous at $x = 0$, and that doesn't make sense to me.


Answer



In an elementary book, they should define $\mathrm{sinc}$ like this
$$
\mathrm{sinc}\; x = \begin{cases}
\frac{\sin x}{x} & x \ne 0
\\
1 & x=0
\end{cases}
$$
and then immediately prove that it is continuous at $0$.



In a slightly more advanced book, they will just say
$$
\mathrm{sinc}\;x = \frac{\sin x}{x}
$$
and the reader will understand that removable singularities should be removed.
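In code one removes the singularity by hand; a minimal sketch (note that numpy.sinc is the normalized variant $\sin(\pi x)/(\pi x)$, not this one):

    import math

    def sinc(x):
        # sin(x)/x with the removable singularity at 0 removed
        return 1.0 if x == 0 else math.sin(x) / x

    print(sinc(0.0))   # 1.0 by definition
    print(sinc(1e-8))  # ~1.0, matching the limit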


Induction Proofs - Mathematics




How do I show by mathematical induction that $2$ divides $n^2 - n$ for all $n$ belonging to the set of natural numbers?


Answer



To prove this with induction (although there is a simpler way) you can proceed as follows.



For $n=1$ this is true since $2$ divides $0$. Assume it is true for $n=k$, i.e. that $$2 \underbrace{\mid}_{\text{ divides }} (k^2-k)=k(k-1).$$ Then for $n=k+1$ you have $$(k+1)^2-(k+1)=k^2+2k+\not1-k-\not1=k^2-k+2k=k(k-1)+2k$$ Now observe that $2$ divides $k(k-1)$ by the induction hypothesis, and obviously $2$ also divides $2k$. Thus $2$ divides $k(k-1)+2k$, and this completes the proof by induction.


arithmetic - Exercise: Construct two numbers using figures 4,5 and 6 whose product is as big as possible



Precisely what it says in the title.



Construct two numbers using figures 4,5 and 6 whose product is as big as possible.




I don't know whether I can use the same digit more than once (e.g. 4456) or each of 4, 5 and 6 only once (I'm guessing it's the latter).



I could go about it with trial and error, but I would like to know the reasoning behind it.



PS: I thought about $654 \times 645$, because the bigger the digit in the hundreds place, the bigger the result.


Answer



There are only $6$ choices (splitting the three digits into a one-digit and a two-digit number), namely



$$6\times 54 = 324$$
$$6\times 45 = 270$$




$$5\times 64 = 320$$
$$5\times 46 = 230$$



$$4\times 56 = 224$$



$$4\times 65= 260$$



Thus the maximum is $$6\times 54 = 324$$
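If you want to be certain no case was missed, a brute-force check (assuming each digit is used exactly once across the two factors) confirms the table:

    from itertools import permutations

    best = max((a * (10 * b + c), (a, 10 * b + c))
               for a, b, c in permutations([4, 5, 6]))
    print(best)  # (324, (6, 54)), matching the table above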


Friday 19 September 2014

elementary set theory - Sur- in- bijections and cardinality.




I think about surjection, injection and bijections from $A$ to $B$ as $\ge$, $\le$, and $=$ respectively in terms of cardinality. Is this correct? And extrapolating from that, are these theorems correct?



If there exist two surjections $f:A\rightarrow B$ and $g:B\rightarrow A$, then $|A|=|B|$ ($|A|\ge|B|$ and $|B|\ge|A|$).



If there exists a surjection $f:A\rightarrow B$ and an injection $g:A\rightarrow B$, then $|A|=|B|$ ($|A|\ge|B|$ and $|A|\le|B|$).



Are these theorems correct?



I know the case for two injections is true.


Answer




If you assume the axiom of choice, then the existence of a surjection $f:A\to B$
implies an injection $e:B\to A$: for $b\in B$ choose $e(b)\in f^{-1}(b)$.
Together with what you know about injections, this gives you everything you want.


Calculating limit of function



To find the limit $\lim_{x\to 0}\frac {\cos(\sin x) - \cos x}{x^4}$,
I differentiated using L'Hospital's rule and got
$$\frac{-\sin(\sin x)\cos x + \sin x}{4x^3}\text{.}$$ Then I divided and multiplied by $\sin x$.

Since $\lim_{x\to 0}\frac{\sin x}{x} = 1$, I was left with
$\frac{1-\cos x}{4x^2}$. On applying standard limits, I get the answer $\frac18$, but the correct answer is $\frac16$. Please help.


Answer



Using Prosthaphaeresis Formulas,



$$\cos(\sin x)-\cos x=2\sin\frac{x-\sin x}2\sin\frac{x+\sin x}2$$



So, $$\frac{\cos(\sin x)-\cos x}{x^4}=2\frac{\sin\frac{x-\sin x}2}{\frac{x-\sin x}2}\frac{\sin\frac{x+\sin x}2}{\frac{x+\sin x}2}\cdot\frac{x-\sin x}{x^3}\cdot\frac{x+\sin x}x\cdot\frac14$$



We know that $\lim_{h\to0}\frac{\sin h}h=1$.




$$\text{Applying L'Hospital's Rule, }\lim_{x\to0}\frac{x-\sin x}{x^3}=\lim_{x\to0}\frac{1-\cos x}{3x^2}=\frac16,$$



$$\text{and }\lim_{x\to0}\frac{x+\sin x}x=1+\lim_{x\to0}\frac{\sin x}x=2,$$

so the limit equals $2\cdot1\cdot1\cdot\frac16\cdot2\cdot\frac14=\frac16$.
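As a sanity check, a computer algebra system agrees (assuming SymPy is available):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.limit((sp.cos(sp.sin(x)) - sp.cos(x)) / x**4, x, 0))  # 1/6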


combinatorics - Suppose we have two dice, one fair and one that shows $6$ with quintuple probability. Find the probability that a randomly chosen die, when thrown, shows $6$

Suppose we have two dice, one fair and one that shows $6$ with five times the probability of each other number. We pick a die at random and throw it. What is the probability of getting $6$? In the same problem, if we know that we got $6$, what is the probability that we threw the second die?



Any ideas for these parts, especially the second one?
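In the meantime, I wrote a small Monte Carlo simulation to sanity-check whatever answer I derive by hand (assuming "quintuple probability" means the biased die shows $6$ with probability $\frac{5}{10}$ and each other face with probability $\frac{1}{10}$):

    import random

    trials, sixes, sixes_from_biased = 10**6, 0, 0
    for _ in range(trials):
        biased = random.random() < 0.5       # pick one of the two dice
        p_six = 5 / 10 if biased else 1 / 6
        if random.random() < p_six:
            sixes += 1
            sixes_from_biased += biased
    print("P(6) ~", sixes / trials)
    print("P(second die | 6) ~", sixes_from_biased / sixes)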

functional analysis - Show that the $\infty$-norm and the $C^1$-norm are not equivalent.

Show that the $\infty$-norm and the $C^1$-norm are not equivalent.



For the $C^1([a,b],\mathbb{R})$ space, show that



$\displaystyle \|g\|_\infty=\sup_{a\leq t\leq b}|g(t)|$



and




$\displaystyle \|g\|_{C^1}=\sup_{a\leq t\leq b}|g(t)|+\sup_{a\leq t\leq b}|g'(t)|$



are not equivalent.



My attempt:



$\displaystyle |g'(t)|>0 \implies \sup_{a\leq t\leq b}|g'(t)|>0$



So then




$\displaystyle \|g\|_\infty=\sup_{a\leq t\leq b}|g(t)| \leq \sup_{a\leq t\leq b}|g(t)|+\sup_{a\leq t\leq b}|g'(t)| \leq p \|g\|_{C^1}$ for constant $p \geq 1$.



If the two norms are not equivalent then I'm assuming that



$\|g\|_\infty \ngeq q \|g\|_{C^1}$ for any constant $q > 0$.

Is this a good approach or does anyone have a better idea?
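For intuition, I also looked numerically at the functions $g_n(t)=\sin(nt)$ on $[0,2\pi]$, whose sup-norm stays at $1$ while the $C^1$-norm grows like $n$ (a numerical sketch of a candidate counterexample, not a proof):

    import numpy as np

    t = np.linspace(0, 2 * np.pi, 100001)
    for n in [1, 10, 100]:
        g = np.sin(n * t)
        dg = n * np.cos(n * t)
        sup_norm = np.max(np.abs(g))
        c1_norm = sup_norm + np.max(np.abs(dg))
        print(n, sup_norm, c1_norm)  # the ratio c1/sup grows like n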

Thursday 18 September 2014

real analysis - Use the mean value to prove a certain result

I need to prove the following:



Use the Mean-Value Theorem to prove that:




$$\sqrt{1+h}<1+\frac{1}{2}h$$ for $h>0$



My attempt:



we first note that, given $h>0$,



$$1+\frac{1}{2}h >1$$



and




$$1+h>1 \Rightarrow \sqrt{1+h}>\sqrt{1}=1$$



then, squaring both sides, the desired inequality is equivalent to:



$$1+h<(1+\frac{1}{2}h)^{2}=1+h+\frac{1}{4}h^{2} \Rightarrow 0<\frac{1}{4}h^{2}$$



which is true, so we are done.



My question is: how can I use the MVT to prove this? Thank you for your help.
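One possible route (a sketch): apply the MVT to $f(t)=\sqrt{t}$ on $[1,1+h]$. It gives some $c\in(1,1+h)$ with

$$\sqrt{1+h}-1=f'(c)\,h=\frac{h}{2\sqrt{c}},$$

and since $c>1$ implies $\frac{1}{2\sqrt{c}}<\frac{1}{2}$, we get $\sqrt{1+h}<1+\frac{1}{2}h$.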

popular math - How can you calculate the points and then to result an exact percentage?



I really want to know. Let's say your math teacher gives you a test, and she marks each question with a different number of points.




OK, and if I got, let's say, half of the test correct, then how can I calculate the exact percentage?



I mean, 50% of the answers are correct, or 80% of the math homework is correct.



How do you do this? It's really interesting.



Sorry for adding a wrong tag; I didn't know which area of math this belongs to (it's math, not physics or chemistry). You know what I mean?


Answer



If the tests are of different sizes, one common way to create an acceptable overall score is to calculate the geometric mean.




An example:



A student scores 2/5 on test 1, 3/7 on test 2 and 9/10 on test 3 (40%, about 43%, 90%).



A "fair" representation of how well they did could be: $(\frac{2}{5} \times \frac{3}{7} \times \frac{9}{10})^{(1/3)} \approx 0.536\ =53.6\%$



This has the effect of trying to treat each exam equally regardless of total marks of each test.



Opinion: this is the best way to do it.







Alternatively, you could add up all the scored marks and divide by the total possible marks:



$\dfrac{2+3+9}{5+7+10} \approx 0.636 = 63.6\%$



This has the tendency to bias the results towards whichever test has the most possible marks.







There is no perfect answer sadly.
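Both rules are tiny computations; a sketch with the scores above:

    # scores as (earned, possible) pairs
    scores = [(2, 5), (3, 7), (9, 10)]

    geometric = 1.0
    for earned, possible in scores:
        geometric *= earned / possible
    geometric **= 1 / len(scores)

    pooled = sum(e for e, _ in scores) / sum(p for _, p in scores)

    print(f"geometric mean: {geometric:.1%}")  # ~53.6%
    print(f"pooled marks:   {pooled:.1%}")     # ~63.6%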


algebra precalculus - how far has bird flown when two trains cross each other



Here's the question:



A train leaves point A and heads to point B (which is 100 km away) at 20 km/hr (the track is a straight line between the two points).



Another train leaves point B and heads towards point A at 30 km/hr.



A bird sitting on the first train takes off as soon as the train starts and flies back and forth between the two trains until the trains pass each other.




If the bird flies 40 km/hr, how far has the bird flown at the time the trains pass each other?



Here's what I've tried:



Distance = 100
Rate of train A = 20
Time it takes train A to travel the track = (100/20) = 5 hours
Time it takes train B to travel the track = (100/30) = 3.33 hours



So is the time at which the two trains meet $5 - 3.33 = 1.67$ hours?



And then I would multiply $40 \times 1.67 = 66.8$ km to get the distance the bird traveled.



Is my logic correct, and did I arrive at the correct answer? If not, please let me know where I have erred.



Thanks!


Answer



Well, your last idea, finding the bird's distance from the meeting time, is nice. However, the first part isn't quite right. What you need to do is find $x$ such that train $A$ travels $x$ km in the same amount of time that train $B$ needs to cover $100-x$ km. This leads to solving:

$$ \frac{x}{20} = \frac{100 - x}{30} $$
This is a linear equation and it should not be too hard for you to solve by yourself.



To illustrate the next part, let me give away that the answer is $x=40$. This means that $A$ travels $40$ km in the same time as $B$ travels $60$ km. The time it took the two trains to meet is thus $\frac{40}{20} \left(=\frac{100-40}{30}\right) = 2$ hours. This then implies that the bird flew $80\ (= 2\cdot 40)$ km in total.


algebra precalculus - Substitution to linear + nth power form



Given an arbitrary polynomial:



$$a_0 + a_1x + a_2x^2 + \dots + a_nx^n$$




Does there exist a series of substitutions (or a single substitution, if you choose to combine them) that leaves this polynomial in the form:



$$p_1w + p_2w^r$$



I am aware there are substitutions (referred to as polynomial depression) that leave the polynomial in the form:



$$p_1 + p_2w + rw^n$$



For example in this article:




http://en.wikipedia.org/wiki/Bring_radical#Bring.E2.80.93Jerrard_normal_form


Answer



Yes, given the general equation,



$$a_nx^n+a_{n-1}x^{n-1}+\dots+a_0 =0\tag{1}$$



you can use a degree-$(n-1)$ Tschirnhausen transformation,



$$y =b_nx^{n-1}+b_{n-1}x^{n-2}+\dots + b_1$$




to reduce (1) to binomial form,



$$y^n+c_0 = 0$$



Unfortunately, in general finding the unknowns $b_i$ entails solving an equation of degree at least $(n-1)!$, hence the reduction is not in radicals for $n\geq5$. (For $n=4$, the system results in a solvable sextic.)



But you can eliminate, in radicals, the three terms $x^{n-1},x^{n-2},x^{n-3}$ simultaneously. A clear step-by-step description is given here.
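For the simplest (linear) case, here is a symbolic sketch (assuming SymPy) that depresses a cubic, i.e. kills its $x^{n-1}$ term; the higher-degree substitutions above work the same way in principle, but require eliminating $x$ with resultants:

    import sympy as sp

    x, y, a2, a1, a0 = sp.symbols('x y a2 a1 a0')
    p = x**3 + a2 * x**2 + a1 * x + a0
    depressed = sp.expand(p.subs(x, y - a2 / 3))  # linear Tschirnhausen map
    print(sp.collect(depressed, y))               # no y**2 term remains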


Wednesday 17 September 2014

elementary number theory - A couple of problems involving divisibility and congruence



I'm trying to solve a few problems and can't seem to figure them out. Since they are somewhat related, maybe solving one of them will give me the missing link to solve the others.



$(1)\ \ $ Prove that there's no $a$ so that $ a^3 \equiv -3 \pmod{13}$



So I need to show that there is no $a$ with $a^3 \equiv 10 \pmod{13}$. From this I get that $$a \equiv (13k+10)^{1/3} \pmod{13}.$$ If I can prove that there is no $k$ such that $(13k+10)^{1/3}$ is an integer, then the problem is solved, but I can't seem to find a way of doing this.




$(2)\ \ $ Prove that $a^7 \equiv a \pmod{7} $



If $a= 7q + r$, then $a^7 \equiv r^7 \pmod{7}$. I think the next step should be $r^7 \equiv r \pmod{7}$, but I can't figure out why that would hold.



$(3)\ \ $ Prove that $ 7 | a^2 + b^2 \longleftrightarrow 7| a \quad \textbf{and} \quad 7 | b$



Left to right is easy, but I have no idea how to do right to left, since I know nothing about what $7$ divides except what is stated. Any help here would be much appreciated.



There are a lot of problems I can't seem to solve because I don't know how to prove that a number is or isn't an integer, as in problem 1, and also quite a few that are similar to problem 3 where I can't seem to find a solution. Any help would be much appreciated.


Answer




HINT $\rm\ (2)\quad\ mod\ 7\!:\ \{\pm 1,\:\pm 2,\:\pm3\}^3\equiv\: \pm1\:,\:$ so squaring yields $\rm\ a^6\equiv 1\ \ if\ \ a\not\equiv 0\:.$



$\rm(3)\quad \ mod\ 7\!:\ \ if\ \ a^2\equiv -b^2\:,\:$ then, by above, cubing yields $\rm\: 1\equiv -1\ $ for $\rm\ a,b\not\equiv 0\:.$



$\rm(1)\quad \ mod\ 13\!:\ \{\pm1,\:\pm3,\:\pm4\}^3 \equiv \pm 1,\ \ \{\pm2,\pm5,\pm6\}^3\equiv \pm 5\:,\: $ and neither is $\rm\:\equiv -3\:.$



If you know Fermat's little theorem or a little group theory then you may employ such to provide more elegant general proofs - using the above special cases as hints.
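Since the moduli are tiny, all three claims can also be settled by exhaustive computation (a useful check, though less instructive than the hints above):

    print(all(pow(a, 3, 13) != 10 for a in range(13)))   # (1): no cube is -3 = 10 mod 13
    print(all(pow(a, 7, 7) == a % 7 for a in range(7)))  # (2): a^7 = a mod 7
    print(all((a**2 + b**2) % 7 != 0 or (a == 0 and b == 0)
              for a in range(7) for b in range(7)))      # (3)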


trigonometry - Prove this formula for $\cos n\theta-\cos n\alpha$


If $n$ be any positive integer, prove that $$\cos{n\theta}-\cos{n\alpha}=2^{n-1}[\cos \theta - \cos \alpha]\left[\cos \theta -\cos \left(\alpha + \frac {2\pi}n\right)\right]\cdots\left[\cos \theta -\cos \left(\alpha + (n-1)\frac {2\pi}n\right)\right]$$




I am struggling to establish this result. I have tried induction, but it doesn't help. Could I have a hint?
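I did verify the identity numerically for several $n$ and random angles, so it is at least not a typo (a spot-check, not progress toward a proof):

    import math, random

    for n in [2, 3, 5, 8]:
        theta = random.uniform(0, 2 * math.pi)
        alpha = random.uniform(0, 2 * math.pi)
        lhs = math.cos(n * theta) - math.cos(n * alpha)
        rhs = 2 ** (n - 1)
        for k in range(n):
            rhs *= math.cos(theta) - math.cos(alpha + 2 * math.pi * k / n)
        print(n, abs(lhs - rhs))  # agreement up to floating-point rounding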

discrete mathematics - Cantor's Diagonalization & Cantor Pairing Function

I don't fully understand the concept behind...



(1) The Cantor Pairing Function and




(2) Cantor's Diagonalization Method.






I understand that (1) and (2) involve proving whether a set is countable or not.



For example, let's consider the set of all real numbers, $\mathbb{R}$, which is known to be uncountable. We can prove this set is uncountable using Cantor's Diagonalization method.



To keep things simple, let's look at the set of real numbers between 0 and 1. We can prove that this subset is uncountable by contradiction. Thus, we initially assume the subset is countable and list its elements, like so:





$y_1$ = $0 \: . \: x_{11} \: x_{12} \: x_{13} \: . . .$



$y_2$ = $0 \: . \: x_{21} \: x_{22} \: x_{23} \: . . .$



$y_3$ = $0 \: . \: x_{31} \: x_{32} \: x_{33} \: . . .$



$\vdots$



$y_k$ = $0 \: . \: x_{k1} \: x_{k2} \: x_{k3} \: . . .$




where each $x_{ij}$ is a decimal digit.



Now we can consider $y' = 0. x_{11} \: x_{22} \: x_{33} . . .$, for $y' \in [0,1]$. This particular $y'$ is the diagonal... We can now immediately see that $y'$ is not in the subset, which proves that the set of real numbers between 0 and 1 is not countable.




So now I have 2 questions...



(1) How does Cantor's diagonalization method prove that this subset is uncountable? I understand that $y'$ is not in the subset, but how does this prove we can't count everything? Is this because we can create permutations of $y'$ that are also not in the subset? For example, let $y''$ and $y'''$ be permutations of $y'$:



$y'' = 0. \: x_{22} \: x_{33} \: x_{44} . . . \: x_{kk} . . . \: x_{11}$



$y''' = 0. \: x_{33} \: x_{44} \: x_{55} . . . \: x_{kk} . . . \: x_{11} \: x_{22}$



etc..




(2) Can I use Cantor's pairing function to prove whether the set is countable or not? If I can, how would I use it? If I can't, why not?






As a side note, I saw this question earlier about Cantor's pairing function: Prove that the union of countably many countable sets is countable.



It only confused me further. I don't understand how this method creates a one-to-one mapping to $\mathbb{N}$.
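For reference, here is the pairing function written out explicitly, together with its inverse (a standard formulation; the final check confirms it is a bijection on a finite grid):

    def pair(k1, k2):
        # Cantor pairing: walk the diagonals of the N x N grid
        s = k1 + k2
        return s * (s + 1) // 2 + k2

    def unpair(z):
        # invert by first recovering the diagonal index s
        s = int(((8 * z + 1) ** 0.5 - 1) // 2)
        k2 = z - s * (s + 1) // 2
        return s - k2, k2

    assert all(unpair(pair(a, b)) == (a, b)
               for a in range(100) for b in range(100))
    print(pair(0, 0), pair(1, 0), pair(0, 1), pair(1, 1))  # 0 1 2 4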

Tuesday 16 September 2014

Inductive proof on a sequence




I had a quiz today with an inductive proof that gave me some trouble.
Given the sequence $a_n=\begin{cases}1, & n=1\\3, & n=2\\a_{n-2}+2a_{n-1}, & n\ge3
\end{cases}$
prove that all of its values are odd.



So I was confused about whether I should use ordinary mathematical induction or strong induction.
I started with M.I. and made it this far, but I'm stuck and do not see where to go next.



PF,




(Base Case) when $n=3$



$a_3=a_1+2a_2=1+2(3)=7$, so our base case holds.



(Inductive Hypothesis)



Assume True for $n=k$, $a_k=a_{k-2}+2a_{k-1}$, for $k \ge 3$



(Inductive Step)




Show for $n=k+1$, $a_{k+1}=a_{(k+1)-2} +2a_{(k+1)-1}$



$a_{k+1}=a_{k-1} +2a_{k}$



$a_{k+1}=a_{k-1} +2(a_{k-2}+2a_{k-1})$ By our inductive hypothesis



This is where I get confused. I'm not sure what step to take next, or if I am using the wrong form of induction altogether. I tried to multiply it out and combine the terms, but I am still lost. Any help or insight would be greatly appreciated. Thank you!


Answer



You can use strong induction. First, note that the first two terms $a_1$ and $a_2$ are odd. Then, for $n\geq 3$, assume you know that $a_1, \ldots, a_{n-1}$ are all odd (this is the strong part of the induction).





  • By definition, $a_{n} = a_{n-2} + 2a_{n-1}$.

  • By the inductive hypothesis, $a_{n-1}$ and $a_{n-2}$ are both odd.

  • If we know that $2\times \mathsf{odd} = \mathsf{even}$, then we can additionally know that $2a_{n-1}$ is even.

  • If we know that $\mathsf{odd} + \mathsf{even} = \mathsf{odd}$, then we can additionally know that $a_{n-2}+2a_{n-1}$ is odd.

  • Hence $a_{n} = a_{n-2} + 2a_{n-1}$ is odd, which completes the induction.






P.S. Here's a thinking tool: The reason I thought strong induction would be the right approach is that the formula for $a_n$ not only refers to the element right before ($a_{n-1}$), but two elements before $(a_{n-2})$, and so it seemed easier if I could assume that all earlier elements had the inductive property.

algebra precalculus - Prove that the next integer greater than $(3+\sqrt{5})^n$ is divisible by $2^n$ where $n$ is a natural number.

Prove that the next integer greater than $(3+\sqrt{5})^n$ is divisible by $2^n$, where $n$ is a natural number. The problem is given in a chapter on induction.
It is actually a part of this whole question:




If $S_n = (3+\sqrt{5})^n + (3-\sqrt{5})^n$, show that $S_n$ is an integer and that $S_{n+1} = 6S_n - 4S_{n-1}$.
Deduce that the next integer greater than $(3+\sqrt{5})^n$ is divisible by $2^n$.



I could do the first two parts: the first by induction and the second simply by using the given formula. I cannot proceed at all in the third part. I think it may need induction. Please give any solutions regarding the third part, and consider whether the second one can be proved using induction.
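I did check the claim numerically with the recurrence (taking $S_0=2$ and $S_1=6$), which at least confirms the statement for small $n$:

    # S_n is the next integer greater than (3+sqrt(5))^n, since 0 < 3-sqrt(5) < 1
    s_prev, s_curr = 2, 6  # S_0, S_1
    for n in range(1, 30):
        assert s_curr % 2**n == 0, n
        s_prev, s_curr = s_curr, 6 * s_curr - 4 * s_prev
    print("S_n is divisible by 2^n for n = 1..29")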

real analysis - Prove that if $g(x)$ is uniformly continuous in $(a,b]$ and $[b,c)$ then is uniformly continuous in $(a,c)$



I open this question to check my own proof and to ask a related question.



My proof: if $g(x)$ is uniformly continuous in $(a,b]$ and $[b,c)$ then





$$\forall\varepsilon_1>0,\exists\delta_1>0,\forall x\in (a,b]:|x-b|<\delta_1\implies|g(x)-g(b)|<\varepsilon_1$$
$$\forall\varepsilon_2>0,\exists\delta_2>0,\forall y\in [b,c):|y-b|<\delta_2\implies|g(y)-g(b)|<\varepsilon_2$$




If we call $\varepsilon_0=\varepsilon_1+\varepsilon_2$ and $\delta_0=\delta_1+\delta_2$, then due to the triangle inequality




$$|x-y|\le|x-b|+|y-b|\\|g(x)-g(y)|\le|g(x)-g(b)|+|g(y)-g(b)|$$





Then, for the mixed case, we have that




$$\forall\varepsilon_0>0,\exists\delta_0>0,\forall x\in (a,b]\land\forall y\in [b,c):|x-y|<\delta_0\implies|g(x)-g(y)|<\varepsilon_0\tag{1}$$




And by the assumed uniform continuity of $g(x)$ on each piece we also have that




$$\forall\varepsilon_0>0,\exists\delta_a>0,\forall x,y\in (a,b]:|x-y|<\delta_a\implies|g(x)-g(y)|<\varepsilon_0\tag{2}$$

$$\forall\varepsilon_0>0,\exists\delta_b>0,\forall x,y\in [b,c):|x-y|<\delta_b\implies|g(x)-g(y)|<\varepsilon_0\tag{3}$$




Because of $(1)$, $(2)$ and $(3)$, if we take $\delta_{\omega}=\min\{\delta_0,\delta_a,\delta_b\}$ then we can finally write




$$\forall\varepsilon_0>0,\exists\delta_{\omega}>0,\forall x,y\in (a,c):|x-y|<\delta_{\omega}\implies|g(x)-g(y)|<\varepsilon_0$$




Two questions:





  1. Is my proof right? I think it is, but I'm not completely sure.

  2. Can you show me a different $\delta$-$\varepsilon$ proof for the same problem?



Thank you in advance.


Answer



Let $\varepsilon>0$. By uniform continuity on $(a,b]$ and $[b,c)$, there is $\delta_1>0$ s.t. $$\forall x,y\in (a,b],\ |x-y|<\delta_1\implies |g(x)-g(y)|<\frac{\varepsilon}{2}$$
and there is $\delta_2>0$ s.t. $$\forall x,y\in [b,c),\ |x-y|<\delta_2\implies |g(x)-g(y)|<\frac{\varepsilon}{2}.$$

Let $\delta=\min\{\delta_1,\delta_2\}$ and $x\leq b\leq y$ s.t. $|x-y|\leq \delta$. Then,
$$|g(x)-g(y)|\leq |g(x)-g(b)|+|g(y)-g(b)|\leq \frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.$$



As you see, $\varepsilon$ is fixed at the beginning. Since it is arbitrary, we have the result for all $\varepsilon>0$, and this proves the claim.


substitution - Showing an inequality using Cauchy-Schwarz

I managed to solve the following inequality using AM-GM:

$$
\frac{a}{(a+1)(b+1)}+\frac{b}{(b+1)(c+1)}+\frac{c}{(c+1)(a+1)} \geq \frac{3}{4}
$$

provided that $a,b,c >0$ and $abc=1$.



However, it was hinted to me that this could also be solved with the Cauchy-Schwarz inequality, but I have not been able to find a solution using it and I'm really out of ideas.
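A random search (purely numerical evidence, not a proof) suggests the bound $\frac34$ is correct and is attained at $a=b=c=1$:

    import random

    worst = float("inf")
    for _ in range(100000):
        a = random.lognormvariate(0, 1)
        b = random.lognormvariate(0, 1)
        c = 1 / (a * b)  # enforce the constraint abc = 1
        s = (a / ((a + 1) * (b + 1)) + b / ((b + 1) * (c + 1))
             + c / ((c + 1) * (a + 1)))
        worst = min(worst, s)
    print(worst)  # stays >= 0.75, with near-equality near a = b = c = 1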

contest math - Polynomial $P(a)=b,P(b)=c,P(c)=a$



Let $a,b,c$ be $3$ distinct integers, and let $P$ be a polynomial with integer coefficients. Show that in this case the conditions $$P(a)=b,P(b)=c,P(c)=a$$ cannot be satisfied simultaneously.



Any hint would be appreciated.



Answer



Hint: If $P(a)=b$ and $P(b)=c$ then $a-b$ divides $b-c$.


calculus - $\int_0^{\pi/4}\!\frac{\mathrm dx}{2+\sin x}$ , $\int_0^{2\pi}\!\frac{\mathrm dx}{2+\sin x}$



Please help me integrate



$$\int_0^{\pi/4}\!\frac{\mathrm dx}{2+\sin x}$$



and



$$\int_0^{2\pi}\!\frac{\mathrm dx}{2+\sin x}$$




I've tried the standard $u = \tan \frac{x}{2}$ substitution but it looks horrible.



Thanks in advance!


Answer



Let's give another try to your failed technique...



$$\displaystyle\int \frac{dx}{2+\sin x}$$



Let $u = \tan \frac{x}{2}$, so that $dx = \frac{2\,du}{1+u^2}$ and $\sin x = \frac{2u}{1+u^2}$. Then




$$\int \frac{dx}{2+\sin x} = \int \frac{du}{u^2+u+1} = \int \frac{du}{\left(u+\frac{1}{2}\right)^2 + \left(\frac{\sqrt{3}}{2}\right)^2}$$



Let $s=u+\frac{1}{2}$



$$\begin{align}\int \frac{ds}{s^2 + \left(\frac{\sqrt{3}}{2}\right)^2} &= \frac{2}{\sqrt{3}}\arctan\left(\frac{2s}{\sqrt{3}}\right)\\ &=\frac{2}{\sqrt{3}}\arctan\left(\frac{2u+1}{\sqrt{3}}\right)\\&=\frac{2}{\sqrt{3}}\arctan\left(\frac{2\tan\frac{x}{2}+1}{\sqrt{3}}\right)\end{align}$$



Evaluating the antiderivative from $0$ to $\dfrac{\pi}{4}$ yields approximately $0.33355$. For $0$ to $2\pi$ one must be careful, since $u=\tan\frac{x}{2}$ blows up at $x=\pi$; splitting the integral there and taking the appropriate limits gives $\dfrac{2\pi}{\sqrt{3}}$.
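A numeric cross-check of both values (assuming SciPy is available):

    import math
    from scipy.integrate import quad

    f = lambda x: 1.0 / (2.0 + math.sin(x))
    print(quad(f, 0, math.pi / 4)[0])    # ~0.33355
    print(quad(f, 0, 2 * math.pi)[0],    # both ~3.6276
          2 * math.pi / math.sqrt(3))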


summation - Can we generalise this double sum identity?

In one of my old (Dutch) algebra/analysis problem books, I came across the following "cute" double sum identity:



$$
\sum_{k=1}^n\sum_{h=1}^k \frac{h^2-3h+1}{h!} = - n - 1 + \frac{1}{n!},
$$



which the reader was asked to prove for all $n\in\mathbb{N}$. The proof is relatively straightforward by induction on $n$. (Although I'd love to see a more imaginative proof!)



My question is: can we somehow generalise this identity? Because this is a rather non-obvious result, I have a hunch that the textbook writer derived it from some more general theorem, but I have no idea what to look for.



Any insights will be warmly appreciated!
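For what it's worth, I verified the identity in exact rational arithmetic for the first several $n$:

    from fractions import Fraction
    from math import factorial

    for n in range(1, 15):
        lhs = sum(Fraction(h * h - 3 * h + 1, factorial(h))
                  for k in range(1, n + 1) for h in range(1, k + 1))
        rhs = -n - 1 + Fraction(1, factorial(n))
        assert lhs == rhs, n
    print("identity holds for n = 1..14")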

linear algebra - Eigenvalues and eigenvectors of similar matrices.

Suppose there is a transformation $T$, and let $A$ be a matrix representation of $T$ with respect to a chosen basis. If I find the eigenvalues of the matrix $A$, will these eigenvalues be the eigenvalues of the transformation $T$?




Then what about the eigenvectors of $T$? As far as I know, similar matrices have the same eigenvalues, so any matrix representation of $T$ with respect to a different basis has the same eigenvalues, but the eigenvectors corresponding to those eigenvalues depend on the matrix representation.



Then, what can I say about the eigenvectors of $T$ by just looking at the eigenvectors of the matrices?
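Here is a small numerical illustration of what I have in mind (a sketch; it suggests that if $B=P^{-1}AP$, then $v$ is an eigenvector of $A$ exactly when $P^{-1}v$ is an eigenvector of $B$ for the same eigenvalue):

    import numpy as np

    A = np.array([[2.0, 1.0], [0.0, 3.0]])
    P = np.array([[1.0, 1.0], [1.0, 2.0]])  # change-of-basis matrix
    B = np.linalg.inv(P) @ A @ P

    w_A, V_A = np.linalg.eig(A)
    w_B, V_B = np.linalg.eig(B)
    print(np.sort(w_A), np.sort(w_B))       # same eigenvalues

    v = V_A[:, 0]                           # an eigenvector of A
    u = np.linalg.inv(P) @ v                # its coordinates in the new basis
    print(np.allclose(B @ u, w_A[0] * u))   # True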

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How can I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hospital's rule? I know that when I use L'Hospital's rule I easily get $$ \lim_{h\rightarrow 0}...