Wednesday 31 August 2016

Solve complex equation $5|z|^3+2+3\bar{z}^6=0$



I'm stuck trying to solve this complex equation




$$ 5|z|^3+2+3 (\bar z)^6=0$$



where $\bar z$ is the complex conjugate.
Here's my reasoning: using $z= \rho e^{i \theta}$ I would write



$$ 5\rho^3+ 2 + 3 \rho^6 e^{-i6\theta} = 0 \\ 5\rho^3+ 2 + 3 \rho^6 (\cos(6 \theta) - i \cdot \sin(6 \theta)) = 0 \\$$



from where I would write the system



$$\begin{cases} 5\rho^3+ 2 + 3 \rho^6 \cos(6 \theta) = 0 \\ 3 \rho^6 \sin(6 \theta) = 0\end{cases}$$




But here I get an error, since, from the second equation, I would claim $ \theta = \frac{k \pi}{6}$ for $ k=0…5$, but $\theta = 0$ means the solution is real and the above equation doesn't have real solutions…where am I mistaken?


Answer



Let $w = \overline{z}^3$. Then we have



$$
5|w|+2+3w^2 = 0
$$



As you point out, this constrains $w = k$ or $w = ki$ for real $k$.




Case 1. $w = k$



$$
3k^2+5|k|+2 = 0
$$



which yields no solutions, since the left-hand side is always positive.



Case 2. $w = ki$




$$
-3k^2+5|k|+2 = 0
$$



which yields $k = \pm 2$, so $w = \pm 2i$.



The rest is left as an exercise.
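For completeness, here is a sketch of how that exercise can be finished (my addition, not part of the original answer): $w=\overline{z}^3=\pm 2i$ gives $z^3=\mp 2i$, hence $z^6=-4$ and $|z|=2^{1/3}$, so

$$z = 2^{1/3}\, e^{i\pi(2k+1)/6}, \qquad k=0,1,\dots,5,$$

and indeed $5|z|^3+2+3\overline{z}^6 = 5\cdot 2 + 2 + 3\cdot(-4) = 0$.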


Is there an explicit irrational number which is not known to be either algebraic or transcendental?



There are many numbers which are not able to be classified as being rational, algebraic irrational, or transcendental. Is there an explicit number which is known to be irrational but not known to be either algebraic or transcendental?



Answer



Maybe the best-known example is Apéry's constant,
$$\zeta(3) = \sum_{n = 1}^{\infty} \frac{1}{n^3} = 1.20205\!\ldots ,$$
which Apéry proved to be irrational in 1978; this result is known as Apéry's theorem.



By contrast, $\zeta(2) = \sum_{n = 1}^{\infty} \frac{1}{n^2}$ has value $\frac{\pi^2}{6}$, which is transcendental because $\pi$ is.




Apéry, Roger (1979), Irrationalité de $\zeta(2)$ et $\zeta(3)$, Astérisque (61), 11–13.




geometry - How to prove $\cos\left(\frac{\pi}{7}\right)-\cos\left(\frac{2\pi}{7}\right)+\cos\left(\frac{3\pi}{7}\right)=\cos\left(\frac{\pi}{3}\right)$



Is there an easy way to prove the identity?





$$\cos \left ( \frac{\pi}{7} \right ) - \cos \left ( \frac{2\pi}{7} \right ) + \cos \left ( \frac{3\pi}{7} \right ) = \cos \left (\frac{\pi}{3} \right )$$




While solving another question I got stuck on this identity; it looks obvious, but I cannot find a feasible way to approach it.



A few observations, though I am not sure whether they help:
$$
\begin{align}

\dfrac{\dfrac{\pi}{7}+\dfrac{3\pi}{7}}{2} &= \dfrac{2\pi}{7}\\\\
\dfrac{\pi}{7} + \dfrac{3\pi}{7} + \dfrac{2\pi}{7} &= \pi - \dfrac{\pi}{7}
\end{align}
$$


Answer



Yes. This is a problem from the 1963 IMO: http://www.artofproblemsolving.com/Forum/viewtopic.php?p=346908&sid=8ad587e18dd5fa9dd5456496a8daadfd#p346908
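For readers who do not want to chase the link, here is a sketch of the standard telescoping argument (my addition, not part of the original answer). Multiply by $2\sin\frac{\pi}{7}$ and use $2\sin A\cos B=\sin(A+B)-\sin(B-A)$:

$$2\sin\tfrac{\pi}{7}\left(\cos\tfrac{\pi}{7}-\cos\tfrac{2\pi}{7}+\cos\tfrac{3\pi}{7}\right)=\sin\tfrac{2\pi}{7}-\left(\sin\tfrac{3\pi}{7}-\sin\tfrac{\pi}{7}\right)+\left(\sin\tfrac{4\pi}{7}-\sin\tfrac{2\pi}{7}\right)=\sin\tfrac{\pi}{7},$$

since $\sin\frac{4\pi}{7}=\sin\frac{3\pi}{7}$. Dividing by $2\sin\frac{\pi}{7}$ gives the value $\frac12=\cos\frac{\pi}{3}$.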


algebra precalculus - Why does the discriminant tell us how many zeroes a quadratic equation has?



The quadratic formula states that:



$$x = \frac {-b \pm \sqrt{b^2 - 4ac}}{2a}$$



The part we're interested in is $b^2 - 4ac$; this is called the discriminant.



I know from school that we can use the discriminant to figure out how many zeroes a quadratic equation has (or rather, if it has complex, real, or repeating zeroes).




If $b^2-4ac > 0$ then the equation has 2 real zeroes.
If $b^2-4ac < 0$ then the equation has 2 complex zeroes.
If $b^2-4ac = 0$ then the equation has repeating zeroes.



But I don't understand why this works.


Answer



Disclaimer: Throughout this answer, when I say "root" I mean "real root". The fact that when there are no real roots there are two complex roots is a special case of the Fundamental Theorem of Algebra. The fact that if there are any real roots then there are no complex roots follows from some algebraic tricks (you can write the quadratic as a product of linear factors, for example, and complex roots must come in conjugate pairs, etc.). I now focus on the role of the discriminant.






Answer: The graph of the equation $y=ax^{2}+bx+c$ (with $a\neq0$) is a parabola. A parabola has a single turning point, called its vertex.




Assume $a>0,$ so the vertex of the parabola is a global minimum. By sketching the graph of the equation, you can clearly see that there are three possible cases:




  1. if the vertex lies above the $x$-axis, then there are no roots;

  2. if the vertex lies on the $x$-axis, then there is exactly one root;

  3. if the vertex lies below the $x$-axis, then there are two roots.







It turns out that the $x$-coordinate of the vertex is $-\frac{b}{2a},$ and the $y$-coordinate is therefore
$$a\left(-\frac{b}{2a}\right)^{2}+b\left(-\frac{b}{2a}\right)+c = \frac{4ac-b^{2}}{4a}.$$
Hence, the $y$-coordinate of the vertex is zero (and hence there is only one root) precisely when $4ac-b^{2}=0.$ The $y$-coordinate is positive precisely when $4ac-b^{2}>0$ (remember: we assumed $a>0$ for now), that is, $b^{2}-4ac<0,$ and then we have no roots. The $y$-coordinate is negative precisely when $4ac-b^{2}<0,$ that is, $b^{2}-4ac>0,$ and then we have two roots.
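To see the three cases in action, here is a small illustrative script (my own addition; the helper name classify_roots is made up):

    def classify_roots(a, b, c):
        """Classify the zeroes of a*x^2 + b*x + c (a != 0) via the discriminant,
        and report the vertex y-coordinate (4ac - b^2)/(4a) that drives the sign test."""
        disc = b * b - 4 * a * c
        vertex_y = (4 * a * c - b * b) / (4 * a)
        if disc > 0:
            kind = "two real zeroes"
        elif disc < 0:
            kind = "two complex zeroes"
        else:
            kind = "one repeated zero"
        return disc, vertex_y, kind

    print(classify_roots(1, -3, 2))  # disc = 1 > 0: vertex below the x-axis, two real zeroes
    print(classify_roots(1, 2, 1))   # disc = 0: vertex on the x-axis, one repeated zero
    print(classify_roots(1, 0, 1))   # disc = -4 < 0: vertex above the x-axis, no real zeroes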



If $a<0$ then the conclusions of 1 and 3 are swapped, and similar arguments go through.






Addendum: A similar geometric case analysis can be used to provide discriminants of higher degree polynomials (provided you know how to calculate the turning points). On the other hand, you then have more turning points, and I'd bet the number of cases increases rapidly, so that even for quartics the analysis is probably fairly involved.


summation - Evaluate the sum of this series





Please help me find the sum of this series:



$$1 + \frac{2}{3}\cdot\frac{1}{2} + \frac{2}{3}\cdot\frac{5}{6}\cdot\frac{1}{2^2} + \frac{2}{3}\cdot\frac{5}{6}\cdot\frac{8}{9}\cdot\frac{1}{2^3} + \cdots$$



All I could figure out was to find the $n^{\text{th}}$ term as:



$$a_n = \frac{2 \cdot (2+3) \cdots(2+3(n-2))}{3 \cdot 6 \cdot 9 \cdots 3(n-1)} \cdot\frac{1}{2^{n-1}}, \qquad n \ge 2$$



But I don't know what to do from here. Please help.


Answer




Let $S$ denote the sum. We write each term (with indices starting at $0$) as



$$ \left( \prod_{k=1}^{n} \frac{3k-1}{3k} \right) \frac{1}{2^n}
= \frac{\prod_{k=0}^{n-1} (-\frac{2}{3}-k)}{n!} \left(-\frac{1}{2}\right)^n
= \binom{-2/3}{n} \left(-\frac{1}{2}\right)^n. $$



Then we easily recognize $S$ as a binomial series, and hence



$$ S = \sum_{n=0}^{\infty}\binom{-2/3}{n} \left(-\frac{1}{2}\right)^n = \left(1 - \frac{1}{2}\right)^{-2/3} = 2^{2/3}.$$
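A quick numerical sanity check of this value (my own snippet, not part of the answer):

    # Sum 1 + (2/3)(1/2) + (2/3)(5/6)(1/2^2) + ... and compare with 2**(2/3).
    term, total = 1.0, 0.0
    for n in range(200):
        total += term
        term *= (3 * (n + 1) - 1) / (3 * (n + 1)) * 0.5  # next factor (3k-1)/(3k) * 1/2
    print(total, 2 ** (2 / 3))  # both approximately 1.5874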


linear algebra - Finding rank of a matrix using elementary column operations



I am learning about finding the rank of a matrix and in my textbook, I have only seen elementary row operations being used. For example, given any matrix, to find its rank, we need to simply just use elementary row operations to reduce the matrix into row echelon form and the rank is then just the number of non-zero rows. But, can we also use elementary column operations to find the rank? For example, say I have a $3 \times 4$ matrix:




$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{bmatrix}$$
and after using elementary column operations, say I reduce it into:



$$\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$



Can I conclude that the rank is $3$ because there are three non-zero rows?


Answer



Every elementary row operation corresponds to left multiplication by an invertible matrix while every elementary column operation corresponds to right multiplication by an invertible matrix. Thus, if you started with a matrix $A$ and after $k$ elementary row operations and $\ell$ elementary column operations, you obtain a matrix $B$, then




$$B = E_1 \cdots E_k A F_1 \cdots F_\ell$$



where each $E_j$ is an elementary row operation and each $F_j$ is an elementary column operation. Multiplying by an invertible matrix doesn't change the rank, so $B$ has the same rank as $A$.



In your particular example, this does indeed imply that the rank is $3$.
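Here is a small numpy illustration of this invariance (my own sketch; numpy computes the rank numerically via the SVD):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 4))
    print(np.linalg.matrix_rank(A))      # rank of A (generically 3)

    F = np.eye(4)
    F[0, 1] = 2.0                        # elementary matrix: add 2*(column 0) to column 1
    print(np.linalg.matrix_rank(A @ F))  # right-multiplication: rank unchanged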



See this article for more detail.


Tuesday 30 August 2016

complex analysis - Laurent Series for singularities and poles

Hi guys, I was wondering how to tell whether the sine and the cosine factors give essential singularities. For instance, if I want to classify the singularity at $0$, can I write the Laurent series of the sine alone (centred at $0$) and see how it behaves, or MUST I write the Laurent series of the whole function (centred at zero)?

Same for the cosine. I want to understand this topic very well. Thanks.




$$\int_{+\partial D}\dfrac{\sin\left(\dfrac{1}{z}\right)\cos\left(\dfrac{1}{z-2}\right)}{z-5}\,\mathrm{dz}$$


calculus - Limit of exponential functions

Why





I know that for calculating this limit a Taylor series expansion around $x=0$ is done for the function, and after taking the limit of its Taylor series expansion the result of the limit is obtained.




My question is why in this case a Taylor series expansion must be done before taking the limit?

algebra precalculus - Find $\cos(2\alpha)$ given $\cos(\theta -\alpha)$ and $\sin(\theta +\alpha)$

My question is:




If $\cos(\theta -\alpha) = \frac{3}{5}$ and $\sin(\theta +\alpha) =\frac{12}{13}$, find $\cos(2\alpha)$.




Attempt I:
\begin{align*}
&\cos^2(\theta -\alpha)+\sin^2(\theta +\alpha)
= \frac{9}{25} + \frac{144}{169}\\

\Rightarrow &\cos^2(\theta -\alpha)- \cos^2(\theta +\alpha)
= \frac{9}{25} + \frac{144}{169} - 1 = \frac{896}{4225}.
\end{align*}
But I thought it wouldn't work.



Then I tried this:



Attempt II:
\begin{align*}
&\begin{cases}

\cos\theta \cos\alpha + \sin\theta \sin\alpha = \cos(\theta -\alpha) = \frac{3}{5},\\
\sin\theta \cos\alpha + \cos\theta \sin\alpha = \sin(\theta +\alpha) =\frac{12}{13},
\end{cases}\\
&\cos\theta (\sin\alpha + \cos\alpha) + \sin\theta (\sin\alpha + \cos\alpha) = \frac{99}{65},\\
&\sqrt{2}\left[\sin\left(\alpha+\frac{\pi}{4}\right) \cos\left(\alpha -\frac{\pi}{4}\right)\right] = \frac{99}{65}.
\end{align*}
But I thought here its better to convert both parts to cosines, so I did:
\begin{align*}
&\sqrt{2}\left[\cos\left(\alpha-\frac{\pi}{4}\right) \cos\left(\alpha -\frac{\pi}{4}\right)\right] = \frac{99}{65},\\
&\sqrt{2} \cos^2\left(\alpha-\frac{\pi}{4}\right) = \frac{99}{65}.

\end{align*}
But I think it also didn't work ....



Please guide. Thanks.

Show that $G(s)=1-\alpha(1-s)^{\beta}$ is the probability generating function of a nonnegative integer valued random variable




I'm working on the following exercise:




Show that $G(s)=1-\alpha(1-s)^{\beta}$ is the probability generating function of a nonnegative integer valued random variable when $\alpha, \beta\in(0,1)$.




I tried the following:



The probability generating function of a discrete random variable $X$ is defined by $G_X(s)=\mathbb{E}(s^X)=\sum_{k=0}^{\infty}s^k\cdot\mathbb{P}(X=k)$, and thus $G_X^{(k)}(0)=k!\cdot\mathbb{P}(X=k)$, where $G_X^{(k)}(s)$ denotes the $k$'th derivative with respect to $s$. Thus $\mathbb{P}(X=k)=\frac{G_X^{(k)}(0)}{k!}$. Working this out I find:




$$\begin{align}\mathbb{P}(X=0)&=\frac{G_X^{(0)}(0)}{0!}=G_X^{(0)}(0)=G_X(0)=1-\alpha\\
\mathbb{P}(X=1)&=\frac{G_X^{(1)}(0)}{1!}=\alpha\beta\\
\mathbb{P}(X=2)&=\frac{G_X^{(2)}(0)}{2!}=\frac{\alpha\beta(1-\beta)}{2!}\\
& \ \ \vdots\\
\mathbb{P}(X=n)&=\frac{G_X^{(n)}(0)}{n!}=\frac{\alpha\beta(1-\beta)(2-\beta)\cdots(n-1-\beta)}{n!}\\
\end{align}$$



If $\alpha, \beta\in(0,1)$ it follows that all probabilities $\mathbb{P}(X=k)$, for $k\in\mathbb{N}\cup\{0\}$ are in $(0,1)$. For this nonnegative integer valued random variable I will assume that $\mathbb{P}(X=x)=0$ for $x\not\in\mathbb{N}\cup\{0\}$.




To show that this is indeed the probability generating function of some nonnegative integer valued random variable I have to show that $$\sum_{k=0}^{\infty}\mathbb{P}(X=k)=1-\alpha+\sum_{k=1}^{\infty}\frac{\alpha\beta(1-\beta)(2-\beta)\cdots(k-1-\beta)}{k!}\\=1-\alpha+\alpha\sum_{k=1}^{\infty}\frac{\beta(1-\beta)(2-\beta)\cdots(k-1-\beta)}{k!}$$ equals $1$. I did not manage to show this, but I have the feeling I have to use the Binomial Theorem somehow. Any ideas? Thanks in advance!


Answer



For $\alpha, \beta \in (0,1)$, your $G$ satisfies




  1. $0< G(s)<1$ for all $s\in[0,1)$.

  2. $G$ is infinitely differentiable on $[0,1)$ with $G^{(n)}\ge 0$.

  3. $\lim_{s\to 1^-}G(s)=\lim_{s\to 1^-}1-\alpha(1-s)^{\beta}=1$.




So $G$ is a probability generating function.






Edit: You are trying to use that $G(s)=\sum_{n=0}^{\infty}\frac{G^{(n)}(0)}{n!}s^n$, but you do not have to calculate the power series on the RHS and plug in $s=1$. You already have its closed form (the LHS) and you can do this directly.
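For completeness, the binomial-theorem route the asker was attempting also works (my addition): since $\frac{\beta(1-\beta)(2-\beta)\cdots(k-1-\beta)}{k!}=-(-1)^k\binom{\beta}{k}$, we get

$$\sum_{k=1}^{\infty}\frac{\alpha\beta(1-\beta)(2-\beta)\cdots(k-1-\beta)}{k!}=-\alpha\sum_{k=1}^{\infty}\binom{\beta}{k}(-1)^{k}=\alpha\left[1-(1-1)^{\beta}\right]=\alpha,$$

so the probabilities sum to $1-\alpha+\alpha=1$ (here $(1-1)^{\beta}=0$ because $\beta>0$).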


elementary set theory - Cardinality of $mathbb{R}$ and $mathbb{R}^2$

I am working on this exercise for an introductory Real Analysis course:




Show that |$\mathbb{R}$| = |$\mathbb{R}^2$|.




I know that $\mathbb{R}$ is uncountable. I also know that two sets $A$ and $B$ have the same cardinality if there is a bijection from $A$ onto $B$. So if I show that there exists a bijection from $\mathbb{R}$ onto $\mathbb{R}^2$ then I believe that shows that |$\mathbb{R}$| = |$\mathbb{R}^2$|.



Let $x_i \in \mathbb{R}$, where each $x_i$ is expressed as an infinite decimal, written as $x_i = x_{i0}.x_{i1}x_{i2}x_{i3}...,$. Each $x_{i0}$ is an integer, and $x_{ik} \in \left \{ 0,1,2, 3, 4, 5, 6, 7, 8, 9 \right \}$. Then, let




$$f(x_i)=(x_{i0}.x_{i1}x_{i3}x_{i5}... ,x_{i0}.x_{i2}x_{i4}x_{i6}...)$$



What should I do to show that $f: \mathbb{R} \to \mathbb{R}^2$ is an injective function? Any suggestions or help with the question would be appreciated.

Monday 29 August 2016

calculus - Problems with $\int_{1}^{\infty}\frac{\sin x}{x}\,dx$ convergence



I'd love your help with deciding whether the following integral converges or not, and under what conditions: $\int_{1}^{\infty}\frac{\sin x}{x}\,dx$.



1. First, I wanted to use the Dirichlet criterion: let $f,g: [a,w) \to \mathbb{R}$ be integrable functions, where $f$ is monotonic, $g$ is continuous, and $f \in C^1[a,w)$. If, in addition to these conditions, $G(x)=\int_{a}^{x}g(t)\,dt$ is bounded and $\lim_{x \to w}f(x)=0$, then $\int_{a}^{w}fg$ converges. I can choose $f=\frac{1}{x}$ and $g(x)=\sin x$; they satisfy all the conditions (don't they?), so why can't I use Dirichlet for this integral?




2. I used Wolfram|Alpha and it says that $\int_{1}^{\infty}\frac{\sin x}{x}dx$ does converge (and that $\int_{0}^{\infty}\frac{\sin x}{x}dx=\frac\pi2$). Is it only a conditional convergence? (And if so, does that count as non-convergence?)



3. I was told that this integral does not converge absolutely, meaning $\int_{1}^{\infty}\frac{|\sin x|}{x}dx$ does not converge. How can I prove it?



Thanks a lot.


Answer



Jonas Meyer has already pointed out a link where the properties of this integral are discussed. However, I would like to show you an alternative proof that this integral does not converge absolutely (I saw this in Analysis I/II by Zorich and it doesn't seem as well-known as it should be imho):



The idea is simple. We first see that the only issue with convergence is at $\infty$, since $\frac{\sin(x)}{x}$ is continuous at $0$.




Now $0\le |\sin(x)| \le 1$ implies



$$\frac{|\sin(x)|}{x} \ge \frac{\sin^2(x)}{x}$$



And we note that $\int_1^\infty \frac{\sin^2(x)}{x}dx$ should essentially have the same convergence properties as $\int_1^\infty \frac{\cos^2(x)}{x}dx$ (with some hand-waving at this point). But if this is true, then $\int_1^\infty \frac{\sin^2(x)}{x}dx$ converges if and only if $$\int_1^\infty \left(\frac{\sin^2(x)}{x}+ \frac{\cos^2(x)}{x}\right) dx = \int_1^\infty \frac{dx}{x}$$
converges. The latter is clearly not true, so $\int_1^\infty \frac{\sin^2(x)}{x}dx$ doesn't converge, either.



More formally, we have




$$
\begin{align}
\int_0^\infty \frac{|\sin(x)|}{x} dx &\ge \int_{\pi/2}^\infty \frac{\sin^2(x)}{x} dx \\
&= \int_{0}^\infty\frac{\cos^2(x)}{x+\pi/2} dx \\
\end{align}
$$



Hence



$$

\begin{align}
\int_0^\infty \frac{|\sin(x)|}{x} dx &\ge \frac12 \int_0^\infty \left(\frac{\sin^2(x)}{x} + \frac{\cos^2(x)}{x+\pi/2} \right)dx \\
&= \frac12\int_{0}^\infty\left(\frac{1}{x+\pi/2} + \frac{\frac\pi2 \sin^2(x)}{x(x+\pi/2)} \right) dx \\
&\ge \frac12 \int_{0}^\infty\frac{1}{x+\pi/2} dx \\
&= \infty
\end{align}
$$


algebra precalculus - How to determine if $2+x+y$ is a factor of $4-(x+y)^2$?



I know it is a factor, but how could I have determined that it was? Feel free to link whatever concept is needed rather than solve it. I'm studying for the CLEP and it's one of the practice problems. When I expand it I get nonsense.


Answer



Use difference of squares: $$4 - (x+y)^2 = 2^2 - (x+y)^2 = (2 - (x+y))(2+(x+y)).$$


Sunday 28 August 2016

limits - Is the sequence $\{S_n\}$ convergent?

Let $$S_n=e^{-n}\sum_{k=0}^n\frac{n^k}{k!}$$



Is the sequence $\{S_n\}$ convergent?



The following is my answer, but it is not correct. Please give some hints.



For all $x\in\mathbb{R}$, $$\lim_{n\rightarrow\infty}\sum_{k=0}^n\frac{x^k}{k!}=e^x.$$
then




$$\lim_{n\rightarrow\infty}e^{-n}\sum_{k=0}^n\frac{n^k}{k!}=1.$$

calculus - Function $s(x)=1+\sum_{k=1}^{\infty} \frac{x^k}{k^k}$ - is there any other way to define it?

This series converges for all $x \in (-\infty, \infty)$, thus the function is analytic on the real line and defined by its Taylor series.




However, unlike the exponential function, this one is very elusive - I can't find any other representation for it - no integral, no differential equation, nothing.



Its derivatives exist (an infinite number of them), but it seems they can't be connected to the function itself in any other way.



I know that $s(1)-1$ is called 'Sophomore's Dream', because it has integral representation:



$$\sum_{k=1}^{\infty} \frac{1}{k^k}=\int_0^1 \frac{1}{t^t}dt$$



I don't know if the same method can be used to find a general integral representation for $s(x)$, but I have some hope.







The most interesting (in my opinion) property of this function - $s(x)$ and its derivatives all have exactly one zero (for $x_0<0$) and one minimum ($x_m<0$ and $s(x_m)<0$). They are connected in a sense, since obviously if $s(x)$ has a minimum, then $s'(x)$ has a zero.



Here is a plot of $s(x)$ for $x<0$:



[figure: plot of $s(x)$ for $x<0$]



I added $1$ to the sum in the definition of $s(x)$ for consistency - it can be seen from the form of its derivatives:




$$s(x)=1+x+\frac{x^2}{2^2}+\frac{x^3}{3^3}+\dots=1+\sum_{k=1}^{\infty} \frac{x^k}{k^k}$$



$$s'(x)=1+\frac{x}{2}+\frac{x^2}{9}+\frac{x^3}{64}+\dots=\sum_{k=1}^{\infty} \frac{k~x^{k-1}}{k^k}$$



$$s''(x)=\frac{1}{2}+\frac{2x}{9}+\frac{3x^2}{64}+\frac{4x^3}{625}+\dots=\sum_{k=1}^{\infty} \frac{k(k-1)~x^{k-2}}{k^k}$$



$$s'''(x)=\frac{2}{9}+\frac{3x}{32}+\frac{12x^2}{625}+\frac{5x^3}{1944}+\dots=\sum_{k=1}^{\infty} \frac{k(k-1)(k-2)~x^{k-3}}{k^k}$$



It is apparent that $\lim_{n \to \infty} s^{(n)}(0) = 0$; however, the zero on the negative line actually moves to the left with each differentiation (see the position of the minimum). So I'm not sure how the 'infinite derivative' of $s(x)$ would look.





I'd like to know if someone studied this function, and get a reference for it. But the main question is - what other definitions are possible for $s(x)$, except for the series?







Thanks to this great answer, I'm able to write the integral definition:



$$s(x)=1+\int_0^1 x ~u^{-u~x} du$$




So I consider my question answered.
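A crude numerical comparison of the series and integral definitions (my own sketch; the midpoint rule is only for illustration):

    def s_series(x, terms=60):
        return 1 + sum(x**k / k**k for k in range(1, terms))

    def s_integral(x, steps=100000):
        # midpoint rule for 1 + int_0^1 x * u**(-u*x) du
        h = 1.0 / steps
        total = 0.0
        for i in range(steps):
            u = (i + 0.5) * h
            total += x * u ** (-u * x)
        return 1 + total * h

    print(s_series(-2.0), s_integral(-2.0))  # the two values should agree closely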



By the way, if we define the function:



$$p(x)=\frac{s(x)-1}{x}=\int_0^1 u^{-u~x} du=\sum_{k=1}^{\infty} \frac{x^{k-1}}{k^k}$$



We get a very monotone (exponential-like) behavior, without any zeroes or minima. The function and several of its derivatives are plotted below.



[figure: plot of $p(x)$ and several of its derivatives]

real analysis - Prove that $\sum a_n$ converges iff the sequence of partial sums is bounded, where $a_n\geq 0$

Let (${a_n}$) be a sequence of nonnegative real numbers. Prove that $\sum {{a_n}} $ converges iff the sequence of partial sums is bounded.




Uh I don't know how to do this proof. Please help!

algebra precalculus - Find two numbers, given their difference and quotient





The difference between two numbers is 3. If four times the smaller is divided by the larger, the quotient is 5. Find the numbers.




I am strengthening my math, practicing on my own time. Can you please help me understand how the formula for this problem is created?






Solved: I answered the question using substitution. I chose an equation and isolated one variable; after isolating the variable I substituted it into the other equation.


Answer




Let your two numbers be $x$ and $y$, with $x>y$. Then $x-y=3$. We also know that $\frac{4y}{x}=5$.



You can then isolate for either $x$ or $y$ in one of the equations. Then substitute this in the other equation and solve for the remaining variable.
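Spelled out (my addition): from the first equation $x=y+3$, so

$$\frac{4y}{y+3}=5 \;\Rightarrow\; 4y=5y+15 \;\Rightarrow\; y=-15,\qquad x=y+3=-12,$$

and indeed $-12-(-15)=3$ and $\frac{4(-15)}{-12}=5$.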


Saturday 27 August 2016

probability - Calculate the expected value



To get the expected values $E(X), E(Y)$ and $E(XY)$, given:

$$
f_{X,Y}(x,y) = 3x
$$
where $0\le x \le y \le 1.$



My solution is, first get the margin distribution:
\begin{aligned}
f_x(x) &= 3x(1-x) \\
f_y(y) &= \frac{3}{2} y^2
\end{aligned}




Then calculate the expected value:
\begin{aligned}
E(X) &= \int_0^y 3x^2(1-x) \; dx = y^3-\frac{3}{4} y^4 \\
E(Y) &= \int_x^1 \frac{3}{2} y^3 \; dy = \frac{3}{8} (1 - x^4)
\end{aligned}



and
\begin{aligned}
E(XY)=\int_0^y \int_x^1 xy \cdot 3x \;dy\; dx = \frac{1}{2} y^3 - \frac{3}{10} y^5

\end{aligned}



However, my calculated expected values contain the variables $x$ and $y$; did I make some mistakes?


Answer



[after you added the joint distribution of $X,Y$]:
to find $\mathbf{E}XY$ use
$$
\int_{0}^{1} \int_{0}^{y}xy f(x,y)dx dy
$$
Can you handle the rest?
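For reference, carrying this out (my computation, using the same density and the analogous fixed limits for the other two expectations) gives constants, as expected:

$$E(X)=\int_0^1\!\!\int_0^y 3x^2\,dx\,dy=\int_0^1 y^3\,dy=\frac14,\qquad E(Y)=\int_0^1\!\!\int_0^y 3xy\,dx\,dy=\int_0^1 \frac{3}{2}y^3\,dy=\frac38,$$

$$E(XY)=\int_0^1\!\!\int_0^y 3x^2y\,dx\,dy=\int_0^1 y^4\,dy=\frac15.$$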



calculus - Power series representation




I'm trying to find the series representation of $ f(x)=\int_{0}^{x} \frac{e^{t}}{1+t}dt $. I have found it using the Maclaurin series, differentiating multiple times and finding a pattern. But I think there must be an easier way, using the power series of elementary functions. I know that $e^{x}=\sum_{0}^{\infty}\frac{x^{n}}{n!}$ and $\frac{1}{1+x}=\sum_{}^{\infty}(-1)^{n}x^{n}$ but I don't know how to use it here. Thanks



(Don't hesitate to correct my English)


Answer



$$e^x=\sum_{n\ge 0}\frac{x^n}{n!}$$



$$\frac{1}{x+1}=\sum_{n\ge 0}(-1)^nx^n$$



$$\frac{e^x}{x+1}=\sum_{b\ge 0}\sum_{a\ge 0} \frac{(-1)^b}{a!}x^{a+b}$$




$$\frac{e^x}{x+1}=\sum_{n\ge 0}\sum_{\substack{a+b=n\\ a,b\ge 0}} \frac{(-1)^b}{a!}x^{a+b}$$



$$\frac{e^x}{x+1}=\sum_{n=0}^\infty x^n(-1)^n(\sum_{k=0}^n (-1)^{k}\frac{1}{k!})$$






Now the interesting thing about these coefficients above that are given in the form of partial sums is that it turns out that we have:



$$\sum_{k=0}^n\frac{(-1)^k}{k!}=\frac{1}{e}+\frac{1}{n!}(\frac{(-1)^n+1}{2}-\{ \frac{n!}{e}\})$$




Where {.} is the fractional part function.



So that we get:



$$\frac{e^x}{x+1}=\sum_{n=0}^\infty x^n(-1)^n(\frac{1}{e}+\frac{1}{n!}(\frac{(-1)^n+1}{2}-\{ \frac{n!}{e}\}))$$



$$=\frac{1}{e}\sum_{n=0}^\infty x^n(-1)^n+\sum_{n=0}^\infty
x^n(-1)^n\frac{1}{n!}(\frac{(-1)^n+1}{2}-\{ \frac{n!}{e}\})$$




$$=\frac{1}{e(x+1)}+\frac{1}{2}\sum_{n=0}^\infty\frac{x^n}{n!}(1+(-1)^n)-\sum_{n=0}^\infty\frac{x^n(-1)^n}{n!}\{ \frac{n!}{e}\}$$
$$=\frac{1}{e(x+1)}+\frac{e^x+e^{-x}}{2}-\sum_{n=0}^\infty\frac{x^n(-1)^n}{n!}\{ \frac{n!}{e}\}$$
$$=\frac{1}{e(x+1)}+\frac{e^x+e^{-x}}{2}+\sum_{n=0}^\infty\frac{x^n}{n!}\{ \frac{n!}{e}\}(-1)^{n+1}$$



Now integrating term by term gives,



$$\int_{0}^x \frac{e^t}{1+t} dt=\frac{\ln(x+1)}{e}+\frac{e^x-e^{-x}}{2}+\sum_{n=0}^\infty\frac{x^{n+1}}{(n+1)!}\{ \frac{n!}{e}\}(-1)^{n+1}$$







Thus for all $|x|<1$ the following expansion holds:



$$f(x)=\int_{0}^x \frac{e^t}{1+t} dt=\frac{\ln(x+1)}{e}+\frac{e^x-e^{-x}}{2}+\sum_{n=1}^\infty\frac{(-x)^n}{n!}\{ \frac{(n-1)!}{e}\}$$



Also, pretty off topic, but the partial sums that appeared as coefficients in our first list of series manipulations above equal $D_n/n!$, where $D_n$ counts derangements, i.e. solutions to the hat-check problem when $n$ hats are being considered.
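The series identity at the start of this answer is easy to check numerically (a small script of mine; the names are arbitrary):

    import math

    # Check e^x/(1+x) = sum_n x^n * (-1)^n * s_n with s_n = sum_{k<=n} (-1)^k / k!
    def partial_series(x, N=40):
        total = 0.0
        for n in range(N):
            s_n = sum((-1) ** k / math.factorial(k) for k in range(n + 1))
            total += x**n * (-1) ** n * s_n
        return total

    x = 0.3
    print(partial_series(x), math.exp(x) / (1 + x))  # agree closely for |x| < 1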


linear algebra - Algorithm for generating positive semidefinite matrices



I'm looking for an efficient algorithm to generate large positive semidefinite matrices.



One possible way I know of is:




  • generate a random square matrix

  • multiply it with its transpose.

  • calculate all eigenvalues of the result matrix and check if all of them are non-negative.




However, this approach is infeasible given a large matrix, say $1000 \times 1000$ or more.



Could anyone please suggest an efficient way to generate a positive semidefinite matrix?
Is there any MATLAB function for this job?



Thanks,


Answer



Any matrix multiplied by its transpose is going to be PSD; you don't have to check it. On my computer raw Octave, without SSE, takes 2 seconds to multiply a 1000x1000 matrix with itself. So not all that infeasible.




If you don't like that, you can always just generate a random diagonal matrix. That's sort of the trivial way, though :) What do you need the matrix for? If it's as test input to another algorithm, I'd just spend some time generating random PSD matrices using the above matrix-matrix multiplication and save the results off to disk.
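In code, the construction from the first paragraph is essentially a one-liner (a numpy sketch of my own; MATLAB/Octave's randn(n) and A*A' behave the same way):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    A = rng.standard_normal((n, n))
    M = A @ A.T  # Gram matrix: symmetric positive semidefinite by construction

    # Optional sanity check; cheap with a symmetric eigensolver, but not needed:
    print(np.linalg.eigvalsh(M).min() >= -1e-8)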


calculus - Does $\lim_{x\rightarrow0}\left(\frac{f(x) - f(0)}{x}\right) = \lim_{\frac{x}{3}\rightarrow0}\left(\frac{f(x) - f(0)}{x}\right)$?




Given that $ f'(0) = 3$, I need to solve $$\lim_{x\rightarrow0}\left(\frac{f(3x) - f(0)}{x}\right)$$



Because I know that $\lim_{x\rightarrow0}\left(\frac{f(x) - f(0)}{x}\right) = 3$, I made the substitution $3x = h$.



Thus, $$\lim_{x\rightarrow0}\left(\frac{f(3x) - f(0)}{x}\right) = 3 \lim_{\frac{h}{3}\rightarrow0}\left(\frac{f(h) - f(0)}{h}\right)$$



If, in fact the following is true, then I can conclude that $\lim_{x\rightarrow0}\left(\frac{f(3x) - f(0)}{x}\right) = 9$, and my work is done.



$$\lim_{x\rightarrow0}\left(\frac{f(x) - f(0)}{x}\right) = \lim_{\frac{x}{3}\rightarrow0}\left(\frac{f(x) - f(0)}{x}\right)\,?$$


Answer




You are supposed to use here the substitution of limits:




Substitution in limits: Let $f$ be defined in a deleted neighborhood of $a$ and let $\lim_{x\to a} f(x) =L$. Further let $g$ be defined in a certain deleted neighborhood of $b$ such that $g(x) \neq a$ for all values of $x$ in this deleted neighborhood of $b$ and $\lim_{x\to b} g(x) =a$. Then $\lim_{x\to b} f(g(x)) =L$.




For the current question $a=b=0$ and $$F(x) =\frac{f(x) - f(0)}{x},G(x)=3x$$ Then we are given $\lim_{x\to 0}F(x)=3$ and hence $\lim_{x\to 0}F(G(x))=3$ ie $$\lim_{x\to 0}\frac{f(3x)-f(0)}{3x}=3$$ Multiplying the above by $3$ we can see that $$\lim_{x\to 0}\frac{f(3x)-f(0)}{x}=9$$ Also note that although the intended meaning of the notation $\lim_{x/3\to 0}$ is clear this is not a standard notation.



The rule of substitution is normally used without too much symbolism as follows
\begin{align*}

L&=\lim_{x\to 0}\frac{f(3x)-f(0)}{x}\\
&=\lim_{t\to 0}3\cdot\frac{f(t)-f(0)}{t}\text{ (putting } t=3x)\\
&=3\cdot 3=9
\end{align*}


sequences and series - What is meant by the absolute convergence of the infinite product $\prod\limits_{n=1}^{\infty} \frac {1} {1+a_n}$?

Let $\{a_n \}$ be a sequence of real numbers. Consider the infinite product






$$\prod\limits_{n=1}^{\infty} \frac {1} {1+a_n}.$$





When do we say that the above infinite product is absolutely convergent? In the notion of an infinite series I know that a series $\sum\limits_{n=1}^{\infty} a_n$ is absolutely convergent if $\sum\limits_{n=1}^{\infty} |a_n|$ is convergent. For the case of infinite series I also know that absolute convergence implies convergence. Do all these results hold for infinite products too? Please help me in this regard.



Thank you so much for your valuable time.

Friday 26 August 2016

limits - Evaluate $\lim_{x \to -\infty} \ln(-x^3+x)$




Evaluate $$\lim_{x \to -\infty} \ln(-x^3+x).$$




I was wondering if I can solve this limit in this way:
$$\lim_{x \to -\infty} \ln(-x^3+x)=\lim_{x \to -\infty} \ln\left[x^3\left(1+\frac{1}{x^2}\right)\right].$$




At this point, I just considered $\ln(x^3)$ because $1$ doesn't make any difference and $1/x^2$ tends towards $0.$
So, the result will be $0.$ And I found it because I know the graph of the logarithm of $x$ to the power of an odd number. So, my second question is: is it possible to understand, algebraically, what the result of $\lim_{x\to - \infty} \ln(x^3)$ is, without thinking of the graph?



Any suggestion and help will be appreciated.
Thank you in advance and have a good day :)


Answer



The correct way is



$$\lim_{x \to -\infty} \ln(-x^3+x)=\lim_{x \to -\infty} \ln\left[-x^3\left(1-\frac{1}{x^2}\right)\right]=\lim_{x \to -\infty} \left[\ln(-x^3)+\ln\left(1-\frac{1}{x^2}\right)\right]=\\=\lim_{x \to -\infty} \ln(-x^3)+\lim_{x \to -\infty} \ln\left(1-\frac{1}{x^2}\right)=+\infty+0=+\infty$$




indeed



$$\lim_{x \to -\infty} \ln(-x^3)=\lim_{x \to -\infty} 3\ln(-x)=\lim_{y \to +\infty} 3\ln y=+\infty$$


analysis - Can I say $f$ is differentiable at $c$ if $D_u(c) = \nabla f(c) \cdot u$ for all unit vectors $u$?



Can I say $f$ is differentiable at $c$ if $D_u(c) = \nabla f(c) \cdot u$ for all unit vectors $u$?



I think I can because it guarantees a tangent plane.



But I don't know how to prove this precisely.




Anyone can give an advice or a proof?



Thanks.


Answer



No, you cannot say this. Consider $f \colon \mathbb R^2 \to \mathbb R$ given by
$$
f(x) = \begin{cases} 1 & x_1^2 = x_2, x_2 > 0\\
0 & \text{otherwise} \end{cases}
$$
at $c=0$. Then for any ray $[0,\infty) \cdot u$, $f$ is constant on some interval $[0,\epsilon) \cdot u$, that is $D_u f(0) = 0$. But $f$ is not continuous at $0$, hence not differentiable.



algebra precalculus - Solution of equation $\frac{x\cdot 2014^{\frac{1}{x}}+\frac{1}{x}\cdot 2014^x}{2} = 2014$




Solution of equation $\displaystyle \frac{x\cdot 2014^{\frac{1}{x}}+\frac{1}{x}\cdot 2014^x}{2} = 2014$




$\bf{My\; Try::}$ Clearly here $x>0$. Now using $\bf{A.M\geq G.M}$:




So Here $\displaystyle x\cdot 2014^{\frac{1}{x}}>0$ and $\displaystyle \frac{1}{x}\cdot 2014^x>0$ .



So $\displaystyle \left(\frac{x\cdot 2014^{\frac{1}{x}}+\frac{1}{x}\cdot 2014^x}{2}\right)\geq \left(x\cdot 2014^{\frac{1}{x}}\cdot \frac{1}{x}\cdot 2014^x\right)^{\frac{1}{2}} = \sqrt{2014^{\left(x+\frac{1}{x}\right)}} \geq 2014$, since $x+\frac{1}{x}\geq 2$.



And equality holds throughout when $\displaystyle x = \frac{1}{x}\Rightarrow x= 1$.



Can we solve it without Using $\bf{A.M\geq G.M}$



If Yes, Then please explain me.




Thanks


Answer



Applying AM-GM probably gives the most elegant solution (and indeed the presentation is quite suggestive of AM-GM), but this can also be solved by using an idea behind AM-GM. Noting first that $x>0$, we have:



$$\begin{align}
&\frac{x\cdot 2014^{\frac{1}{x}}+\frac{1}{x}\cdot 2014^x}{2} = 2014
\\\implies&x2014^{\frac{1}{x}-1}+\frac{1}{x}2014^{x-1}=2
\\\implies&\left(\sqrt{x2014^{\frac{1}{x}-1}}-\sqrt{\frac{1}{x}2014^{x-1}}\right)^2+2\sqrt{2014^{\frac{1}{x}+x-2}}=2
\\\implies&\left(\sqrt{x2014^{\frac{1}{x}-1}}-\sqrt{\frac{1}{x}2014^{x-1}}\right)^2+2\sqrt{2014^{\left(\sqrt{\frac{1}{x}}-\sqrt{x}\right)^2}}=2

\\\implies&2\sqrt{2014^{\left(\sqrt{\frac{1}{x}}-\sqrt{x}\right)^2}}\le2
\\\implies&2014^{\left(\sqrt{\frac{1}{x}}-\sqrt{x}\right)^2}\le1
\\\implies&\left(\sqrt{\frac{1}{x}}-\sqrt{x}\right)^2\le0
\\\implies&\left(\sqrt{\frac{1}{x}}-\sqrt{x}\right)^2=0
\\\implies&\sqrt{\frac{1}{x}}-\sqrt{x}=0
\\\implies&\frac{1}{x}=x
\\\implies&x=1
\end{align}$$



where we have used the fact that $x>0$ several times.



Thursday 25 August 2016

representation theory - Computing Brauer characters of a finite group

I am studying character theory from the book "Character Theory of Finite Groups" by I. Martin Isaacs. (I am not too familiar with valuations and algebraic number theory.)
In the last chapter on modular representation theory, Brauer characters, blocks, and defect groups are introduced.



My question is this: How do we find the irreducible Brauer characters and blocks, given a group and a prime?



For instance, let's say we have $p=3$ and the group $G = S_5$, the symmetric group.
An example of the precise calculation or method used to determine the characters would be very helpful. Thanks.

calculus - Solving functional equation $f(x)f(y) = f(x+y)$





I'm having some trouble solving the following equation for $f: A \rightarrow B$ where $A \subseteq \mathbb{R}$ and $B \subseteq
\mathbb{C}$ such as:



$$f(x)f(y) = f(x+y) \quad \forall x,y \in A$$



The solution is supposed to be the form $f(x) = e^{kx}$ where k can be complex but I don't see how to show it.



Note: I'm trying to solve this equation to find the irreducible representations of U(1), but it doesn't really matter I think.


Answer



The general solution of the given functional equation depends on the conditions imposed on $f$.

We have:



The general continuous non vanishing complex solution of $f(x+y)=f(x)f(y)$ is $\exp(ax+b\bar x)$, where $\bar x$ is the complex conjugate of $x$. Note that this function is not holomorphic.



If we request that $f$ be differentiable then the general solution is $f(x)=\exp (ax)$, as proved in the @doetoe answer.






Added:




here I give a similar but a bit simpler proof:



Note that $f(x+y)=f(x)f(y) \Rightarrow f(x)=f(x+0)=f(x)f(0) \Rightarrow f(0)=1$ ( if $f$ is not the null function). We have also: $f'(x+y)=f'(x)f(y)$ (where $f'$ is the derivative with respect to $x$).
Now, setting $x=0$ and dividing by $f(y)$ yields:
$$
f'(y)=f'(0)f(y) \Rightarrow f(y)=k\exp\left(f'(0)y\right)
$$
and, given $f(0)=1$ we must have $k=1$.







The continuity request can be weakened but, if $f$ is not measurable, then with the aid of a Hamel basis we can find more general ''wild'' solutions.



For a proof see: J. Aczél, Lectures on Functional Equations and Their Applications, p. 216.


How does Mathematical Induction work?


How does mathematical induction actually work?




After surfing on the internet for a while, I found the following analogy.
Consider rectangular tiles (dominoes) stacked one beside the other. When we force the first tile to fall, the others begin to fall.
To actually know whether all the tiles have fallen, we need to know the following:





  • If the first tile has fallen or not. (If not, none of them have fallen.)

  • Whether each fallen tile knocks over the one after it: if any tile falls, then the next tile must also fall.

  • From this, we can conclude that all the tiles have fallen.



But I don't understand how this idea of induction works with numbers.

elementary number theory - Prove that $gcd(a^n - 1, a^m - 1) = a^{gcd(n, m)} - 1$

For all $a, m, n \in \mathbb{Z}^+$,



$$\gcd(a^n - 1, a^m - 1) = a^{\gcd(n, m)} - 1$$

algebra precalculus - How to calculate $\sqrt{\frac{-3}{4} - i}$











I know that the answer to $\sqrt{\dfrac{-3}{4} - i}$ is $\dfrac12 - i$. But how do I calculate it mathematically if I don't have access to a calculator?


Answer



One of the standard strategies (the other strategy is to do what JM suggested in the comment to the qn) is to complete the square and use the fact that $i^2 = -1$.



$$\sqrt{\frac{-3}{4}-i}$$




Add and subtract 1 to get:



$$\sqrt{\frac{-3}{4}+1-i-1}$$



Use $i^2 = -1$ to get:



$$\sqrt{\frac{-3}{4}+1-i+i^2}$$



Simplify $\frac{-3}{4}+1$ to get:




$$\sqrt{\frac{1}{4}-i+i^2}$$



Rewrite $-i$ as $-2 \cdot \frac{1}{2} \cdot i$ to get:



$$\sqrt{\frac{1}{2^2}-2 \cdot \frac{1}{2} \cdot i+i^2}$$



Complete the square to get:



$$\sqrt{(\frac{1}{2}-i)^2}$$




Get rid of the square root to get:



$$\frac{1}{2}-i$$
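If you do have a computer at hand later, Python's cmath confirms the principal root (my addition):

    import cmath

    print(cmath.sqrt(-0.75 - 1j))  # (0.5-1j)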


calculus - What is the derivative of $\frac{x^2+2^x}{x^2\cdot 2^x}$ at $1$?



Let there be $$f(x)=\frac{x^2+2^x}{x^2\cdot 2^x}$$
I'm asked to calculate $f'(1)$. I know that I should apply the division rule, then find the derivative $f'(x)$ while applying power, chain and product rules when necessary, and finally evaluate $f'(1)$.



The thing is, all these steps seemed way too lengthy to me. I figured that maybe there was somehow a shorter way to find $f'(x)$ without going through all these lengthy calculations.



Any help would be appreciated and thanks in advance.


Answer



Note that $$f(x)=\frac{x^2+2^x}{x^2\cdot 2^x}=2^{-x}+x^{-2}.$$
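From here the derivative needs only the power and exponential rules (added for completeness):

$$f'(x)=-\ln 2\cdot 2^{-x}-\frac{2}{x^{3}},\qquad f'(1)=-\frac{\ln 2}{2}-2.$$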



number theory - Prove that among any 12 consecutive positive integers there is at least one which is smaller than the sum of its proper divisors



Prove that among any 12 consecutive positive integers

there is at least one which is smaller than the sum of
its proper divisors. (The proper divisors of a positive
integer n are all positive integers other than 1 and n
which divide n. For example, the proper divisors of 14 are 2
and 7)


Answer



Hint: Among any $12$ consecutive positive integers, there is one that is a multiple of $12$.



Can you show that $12n$ is smaller than the sum of its divisors for any positive integer $n$?
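A sketch of that step (my addition): $2n$, $3n$, $4n$ and $6n$ are distinct proper divisors of $12n$, and

$$2n+3n+4n+6n=15n>12n.$$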


calculus - how to strictly prove $\sin x<x$ for $0<x<\frac{\pi}{2}$



$$\sin x<x,\qquad \left(0<x<\frac{\pi}{2}\right)$$
In most textbooks, the proof of this inequality is based on a geometric illustration (draw a circle, compare arc length and chord), but I think that a strict proof should be based on analytic reasoning, without the geometric illustration. Who can prove it? Thank you very much.






ps:





  1. Differentiation, monotonicity and the Taylor formula are all circular here, because $(\sin x)'=\cos x$ must use $\lim_{x \to 0}\frac{\sin x}{x}=1$, and this formula must use $\sin x< x$. This is a vicious circle.


  2. If we use the Taylor series of $\sin x$ to define $\sin x$, strictly proving $\sin x<x$ for $0<x<\frac{\pi}{2}$ is very easy.



Answer



We can define $\sin x$ as a power series. Applying the theory of power series, we obtain the derivative of $\sin x$, and then we can easily prove the inequality. Concerning the geometry of $\sin x$, please refer to this.
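To flesh this out slightly (my sketch, assuming the power-series definition of $\sin$): for $0<x<\frac{\pi}{2}$, group the alternating series in pairs,

$$\sin x = x-\left(\frac{x^{3}}{3!}-\frac{x^{5}}{5!}\right)-\left(\frac{x^{7}}{7!}-\frac{x^{9}}{9!}\right)-\cdots < x,$$

since each bracket equals $\frac{x^{4k+3}}{(4k+3)!}\left(1-\frac{x^{2}}{(4k+4)(4k+5)}\right)>0$ when $0<x<\frac{\pi}{2}$.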


Wednesday 24 August 2016

abstract algebra - What is the field $\mathbb{Q}(\pi)$?



I'm having a hard time understanding section 29,30,31 of Fraleigh.



In 29.16 example, what is the field $\mathbb{Q}(\pi)$? and why is it isomorphic to the field $\mathbb{Q}(x)$ of rational functions over $\mathbb{Q}$?



(According to the definition, the field $\mathbb{Q}(\pi)$ is the smallest subfield of $E$ (extension field of $\mathbb{Q}$) containing $\mathbb{Q}$ and $\pi$.)




Thank you!


Answer



Define a ring homomorphism $f\colon \mathbb{Q}[x]\to\mathbb{R}$ by
$$f(p)=p(\pi)$$
so that (for example) $$f(\tfrac{1}{3})=\tfrac{1}{3},\quad f(x)=\pi,\quad f(2x^2+5)=2\pi^2+5,$$
and so on. The image of a ring homomorphism is a subring of the codomain; in the case of this particular ring homomorphism $f$, the image is given the name $\mathbb{Q}[\pi]$. It is the "smallest" subring of $\mathbb{R}$ that contains $\mathbb{Q}$ and $\pi$.



What is the kernel of this homomorphism $f$? That is, what polynomials $p\in\mathbb{Q}[x]$ have $\pi$ as a root? The answer is none (other than the obvious $p=0$). This is what it means for $\pi$ to be transcendental (in 1882, Lindemann proved that $\pi$ is transcendental). The first isomorphism theorem for rings now tells us that
$$\mathbb{Q}[x]/(\ker f)\cong \mathbb{Q}[\pi]$$

but since the kernel of $f$ is trivial, this statement just says that $\mathbb{Q}[x]\cong \mathbb{Q}[\pi]$. In other words, we see that $f$ is a ring isomorphism from $\mathbb{Q}[x]$ to $\mathbb{Q}[\pi]$.



Now see if you can prove the following general fact: if two integral domains $D_1$ and $D_2$ are isomorphic, then their respective fields of fractions $\mathrm{Frac}(D_1)$ and $\mathrm{Frac}(D_2)$ are also isomorphic.


functional equations - Is the product rule for logarithms an if-and-only-if statement?

If a function $f(x)$ is proportional to $\ln x$, then we know
$$ f(xy) = f(x) + f(y). $$



My question is, is the converse true? If we know that, for an unknown function f,
$$ f(xy) = f(x) + f(y), $$
can we conclude that the function must be proportional to $\ln x$? Why?

solving absolute value equation 2




My question is : Solve simultaneously-



$$\left\{\begin{align*}&|x-1|+|y-2|=1\\&y = 3-|x-1|\end{align*}\right.$$



I tried to solve this question by the method told by Marvis, as I had understood that method (it's here: Solve an absolute value equation simultaneously).



But the solution set I got for the above question is not correct.
My solution was: $y \geq 2$, and $x=3$ or $x=y-2$.
I would like to know the final correct solution.



Answer



We shall proceed on similar lines as the answer here.



You have that $\lvert x - 1 \rvert + \lvert y - 2 \rvert = 1$. This gives us that $$\lvert x - 1 \rvert = 1 - \lvert y-2 \rvert.$$Plugging this into the second equation gives us $$y = 3 - \left( 1 - \lvert y-2 \rvert \right) = 2 + \lvert y - 2 \rvert$$
This gives us that $$y - \lvert y - 2 \rvert = 2.$$
If $y > 2$, then we get that $$y - y+ 2 =2,$$ which is true for all $y >2$.



If $y \leq 2$, then we get that $$y + y-2 =2 \implies y=2$$ Hence we get that $$y \geq 2$$ From the second equation, we get that $$\lvert x - 1 \rvert = 3 - y$$
Since $\lvert x - 1 \rvert \geq 0$, we need $$3-y \geq 0$$ This means that $y \leq 3$. Hence, we have that $2 \leq y \leq 3$.




If $x \geq 1$, then $x-1 = 3-y \implies x = 4-y$. Note that since $y \in [2,3]$, $x = 4-y \geq 1$.



If $x < 1$, then $x-1 = y-3 \implies x = y-2$. Note that since $y \in [2,3]$, $x = y-2 < 1$.



Hence, the solution set is given as follows. $$2 \leq y \leq 3 \\ \text{ and }\\ x = 4-y \text{ or } y-2$$


calculus - How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$?




How can one prove the statement
$$\lim_{x\to 0}\frac{\sin x}x=1$$
without using the Taylor series of $\sin$, $\cos$ and $\tan$? Best would be a geometrical solution.



This is homework. In my math class, we are about to prove that $\sin$ is continuous. We found out, that proving the above statement is enough for proving the continuity of $\sin$, but I can't find out how. Any help is appreciated.


Answer



[figure: unit circle with triangle $ABC$, the circular wedge of angle $x$, and triangle $ABD$]



The area of $\triangle ABC$ is $\frac{1}{2}\sin(x)$. The area of the colored wedge is $\frac{1}{2}x$, and the area of $\triangle ABD$ is $\frac{1}{2}\tan(x)$. By inclusion, we get
$$

\frac{1}{2}\tan(x)\ge\frac{1}{2}x\ge\frac{1}{2}\sin(x)\tag{1}
$$
Dividing $(1)$ by $\frac{1}{2}\sin(x)$ and taking reciprocals, we get
$$
\cos(x)\le\frac{\sin(x)}{x}\le1\tag{2}
$$
Since $\frac{\sin(x)}{x}$ and $\cos(x)$ are even functions, $(2)$ is valid for any non-zero $x$ between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$. Furthermore, since $\cos(x)$ is continuous near $0$ and $\cos(0) = 1$, we get that
$$
\lim_{x\to0}\frac{\sin(x)}{x}=1\tag{3}
$$

Also, dividing $(2)$ by $\cos(x)$, we get that
$$
1\le\frac{\tan(x)}{x}\le\sec(x)\tag{4}
$$
Since $\sec(x)$ is continuous near $0$ and $\sec(0) = 1$, we get that
$$
\lim_{x\to0}\frac{\tan(x)}{x}=1\tag{5}
$$


calculate the limit of this sequence $\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}}$











I am trying to calculate the limit of $a_n:=\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}}$ with $a_0:=1$ and $a_{n+1}:=\sqrt{1+a_n}$. I am badly stuck, not knowing how to find the limit of this sequence or where to start the proof. I did some calculations but still cannot figure out the formal way of finding the limit of this sequence. What I tried is:
$$(1+(1+(1+\cdots)^\frac{1}{2})^\frac{1}{2})^\frac{1}{2}$$ but I am totally stuck here.


Answer



We (inductively) show following properties for sequence given by $a_{n+1} = \sqrt{1 + a_n}, a_0 =1$





  1. $a_n \ge 0$ for all $n\in \Bbb N$

  2. $(a_n)$ is monotonically increasing

  3. $(a_n)$ is bounded above by $2$



Then by Monotone Convergence Theorem, the sequence converges hence the limit of sequence exists. Let $\lim a_{n} = a$ then $\lim a_{n+1} = a$ as well. Using Algebraic Limit Theorem, we get



$$
\lim a_{n+1} = \sqrt{1 + \lim a_n} \implies a = \sqrt {1 + a}

$$



Solving the above equation ($a^2=a+1$ with $a\ge 0$) gives the limit $a=\frac{1+\sqrt{5}}{2}$. Also we note that from the Order Limit Theorem, we get $a_n \ge 0 \implies \lim a_n \ge 0$.


algebra precalculus - A completes 36% work in 12 days.



A completes 36% work in 12 days.



B is twice as fast as A.




C is twice as fast as B.



How many days (approximately) are required to finish the remaining 64% of the work if all of them work together?



a)2 b)4 c)6 d)8



The answer is (a), but I am getting option (b)...



I have shown my solution below.



Answer



work done by A in 1 day is 36/1200



work done by B in 1 day is 36/600



work done by C in 1 day is 36/300



work done by A,B,C in 1 day is 36/1200 + 36/600 + 36/300



Thus A,B,C need 1/(36/1200 + 36/600 + 36/300) days to finish 100% work




i.e. A,B,C need 4.76 days to finish 100% work



Thus A,B,C need 64x4.76/100 days to finish 64% work



i.e. A,B,C need 3.04 days to finish 64% work



Thus the answer should be (b), as 3.04 is closer to 4 than to 2.



But the answer was given as (a)... Which one do you feel is correct?




Please help...


integration - Closed form for $\int_0^\infty \frac{x^n}{1 + x^m}\,dx$



I've been looking at



$$\int\limits_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$$




It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example:



$$\displaystyle \int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^3}}}dx = } \frac{\pi }{3}\frac{1}{{\sin \frac{\pi }{3}}} = \frac{{2\pi }}{{3\sqrt 3 }}$$



$$\int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^4}}}dx = } \frac{\pi }{4}$$



$$\int\limits_0^\infty {\frac{{{x^2}}}{{1 + {x^5}}}dx = } \frac{\pi }{5}\frac{1}{{\sin \frac{{2\pi }}{5}}}$$



So I guess there must be a closed form - the use of $\Gamma(x)\Gamma(1-x)$ first comess to my mind because of the $\dfrac{{\pi x}}{{\sin \pi x}}$ appearing. Note that the arguments are always the ratio of the exponents, like $\dfrac{1}{4}$, $\dfrac{1}{3}$ and $\dfrac{2}{5}$. Is there any way of finding it? I'll work on it and update with any ideas.







UPDATE:



The integral reduces to finding



$$\int\limits_{ - \infty }^\infty {\frac{{{e^{a t}}}}{{{e^t} + 1}}dt} $$



With $a =\dfrac{n+1}{m}$ which converges only if




$$0 < a < 1$$



Using series I find the solution is




$$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} $$




Can this be put in terms of the digamma function or something of the sort?


Answer




I would like to make a supplementary calculation on BR's answer.



Let us first assume that $0 < \mu < \nu$ so that the integral
$$ \int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx $$
converges absolutely. By the substitution $x = \tan^{2/\nu} \theta$, we have
$$ \frac{dx}{1+x^{\nu}} = \frac{2}{\nu} \tan^{(2/\nu)-1} \theta \; d\theta. $$
Thus
$$ \begin{align*}
\int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx
& = \int_{0}^{\frac{\pi}{2}} \frac{2}{\nu} \tan^{\frac{2\mu}{\nu}-1} \theta \; d\theta \\

& = \frac{1}{\nu} \beta \left( \frac{\mu}{\nu}, 1 - \frac{\mu}{\nu} \right) \\
& = \frac{1}{\nu} \Gamma \left( \frac{\mu}{\nu} \right) \Gamma \left( 1 - \frac{\mu}{\nu} \right) \\
& = \frac{\pi}{\nu} \csc \left( \frac{\pi \mu}{\nu} \right),
\end{align*} $$
where the last equality follows from Euler's reflection formula.
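A quick numerical confirmation of the closed form (my own sketch, using scipy's quadrature):

    import numpy as np
    from scipy.integrate import quad

    def check(n, m):
        value, _ = quad(lambda x: x**n / (1 + x**m), 0, np.inf)
        closed_form = (np.pi / m) / np.sin((n + 1) * np.pi / m)
        return value, closed_form

    print(check(1, 3))  # both ~1.2092 = 2*pi/(3*sqrt(3))
    print(check(2, 5))  # both ~0.6607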


arithmetic - Geometric series word problem help?

Philip received 75 points on a project for school. He can make changes and receive two-tenths of the missing points back. He can make corrections as many times as he wants. Create the formula for the sum of this geometric series and explain your steps in solving for the maximum grade Philip can receive. Identify this as converging or diverging.





  • The formula would be $$S_n= \frac{a_1-a_1r^n}{ 1-r}$$



    but am I missing something and how do I find $n$? Is it diverging?


Smallest positive root of polynomial with bounded coefficients



Given is a positive integer $n$. A polynomial has all coefficients being integers whose absolute value does not exceed $n$. What is the smallest possible positive root, if there is any?



If the root is rational, then by the rational root theorem, it cannot be smaller than $1/n$. The equation $nx-1=0$ has $x=1/n$ as the only solution. But if a root is irrational, can it be smaller than $1/n$?


Answer



Let the polynomial be $f(x)=a_kx^k+a_{k+1}x^{k+1}+\dots+ a_mx^m$, where $a_k\neq 0$. Note that if $0<x\leq\frac{1}{n+1}$, then $$|a_{k+1}x^{k+1}+\dots+a_mx^m| \leq n\left(x^{k+1}+x^{k+2}+\dots+x^m\right) < \frac{nx}{1-x}\,x^{k} \leq x^{k} \leq |a_k|x^{k},$$ so $f(x)\neq 0$. Hence any positive root must satisfy $$x>\frac{1}{n+1}.$$




(Note that this bound does not really require the coefficients of $f(x)$ to be integers. It just requires them to all have absolute value $\leq n$, and that the first nonzero coefficient has absolute value $\geq 1$.)



Conversely, we can find such polynomials $f(x)$ with positive roots arbitrarily close to $\frac{1}{n+1}$. Namely, consider $$f_m(x)=1-\sum_{j=1}^m nx^j.$$ Note that $f_m\left(\frac{1}{n+1}\right)$ converges to $0$ from above as $m\to\infty$. Moreover, for any $\epsilon>0$, $f_m\left(\frac{1}{n+1}+\epsilon\right)<0$ for all sufficiently large $m$ (since you can choose $m$ such that $f_m\left(\frac{1}{n+1}\right)<\epsilon$, and $f_m\left(\frac{1}{n+1}\right)-f_m\left(\frac{1}{n+1}+\epsilon\right)\geq n\epsilon$ by looking at the linear term alone). By the intermediate value theorem, $f_m$ must then have a root between $\frac{1}{n+1}$ and $\frac{1}{n+1}+\epsilon$.
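The convergence of these roots to $\frac{1}{n+1}$ is easy to watch numerically (a small sketch of mine):

    import numpy as np

    n, m = 3, 25
    coeffs = [-n] * m + [1]  # f_m(x) = 1 - n*(x + ... + x^m), highest degree first
    roots = np.roots(coeffs)
    positive_real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    print(min(positive_real), 1 / (n + 1))  # smallest positive root, just above 0.25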


integration - Using Complex Analysis to Compute $\int_0 ^\infty \frac{dx}{x^{1/2}(x^2+1)}$




I am aware that there is a theorem which states that for $0<a<1$ one has $\int_0^\infty \frac{x^{a-1}}{1+x}\,dx=\frac{\pi}{\sin(\pi a)}$, but here I would like to evaluate the integral directly by a contour argument.

My attempt is the following: Let's take $f(z)=\frac{1}{z^{1/2}(z^2+1)}$ as the complexification of our integrand. Define $\gamma_R=\gamma _R^1+\gamma_R^2+\gamma_R^3$, where $\gamma_R^1(t)=t$ for $t$ from $1/R$ to $R$; $\gamma_R^2(t)=\frac{1}{R}e^{it}$, where $t$ goes from $\pi /4$ to $0$; and $\gamma_R^3(t)=e^{\pi i/4}t$, where $t$ goes from $R$ to $1/R$.



The poles of the integrand are at $0,\pm i$, but those are not contained in the contour, so by the residue theorem $\int_{\gamma_R}f(z)dz=0$. On the other hand, $\int_{\gamma_R}=\int_{\gamma_R^1}+\int_{\gamma_R^2}+\int_{\gamma_R^3}$. As $R\to \infty$, $\int_{\gamma_R^1}f(z)dz\to \int_0 ^\infty \frac{1}{x^{1/2}(x^2+1)}dx$. Also, $\vert \int_{\gamma_R^2}f(z)dz\vert \le \frac{\pi }{4R}\cdot \frac{1}{R^2-1}$, and the latter expression tends to $0$ as $R\to \infty$. However, $\int_{\gamma_R^3}f(z)=i\int_R ^{1/R}tdt=\frac{i/R^2-iR^2}{2}$, which is unbounded in absolute value for large $R$.



Is there a better contour to choose? If so, what is the intuition for finding a good contour in this case?


Answer



For this, you want the keyhole contour $\gamma=\gamma_1 \cup \gamma_2 \cup \gamma_3 \cup \gamma_4$, which passes along the positive real axis ($\gamma_1$), circles the origin at a large radius $R$ ($\gamma_2$), and then passes back along the positive real axis $(\gamma_3)$, then encircles the origin again in the opposite direction along a small radius-$\epsilon$ circle ($\gamma_4$). Picture (borrowed from this answer):




[figure: keyhole contour]



$\gamma_1$ is green, $\gamma_2$ black, $\gamma_3$ red, and $\gamma_4$ blue.



It is easy to see that the integrals over the large and small circles tend to $0$ as $R \to \infty$, $\epsilon \to 0$, since the integrand times the length is $O(R^{-3/2})$ and $O(\epsilon^{1/2})$ respectively. The remaining integral tends to
$$ \int_{\gamma_1 \cup \gamma_3} = \int_0^{\infty} \frac{x^{-1/2}}{1+x^2} \, dx + \int_{\infty}^0 \frac{(xe^{2\pi i})^{-1/2}}{1+(xe^{2\pi i})^2} \, dx, $$
because we have walked around the origin once, and chosen the branch of the square root based on this. This simplifies to
$$ (1-e^{-\pi i})\int_0^{\infty} \frac{x^{-1/2}}{1+x^2} \, dx = 2I. $$
Now you need to compute the residues of the function at the two poles, using the same branch of the square root. The residues of $1/(1+z^2) = \frac{1}{(z+i)(z-i)}$ are at $z=e^{\pi i/2},e^{3\pi i/2}$, so you find
$$ 2I = 2\pi i \left( \frac{(e^{\pi i/2})^{-1/2}}{2i} +\frac{(e^{3\pi i/2})^{-1/2}}{-2i} \right) = 2\pi \sin{\frac{1}{4}\pi} = \frac{2\pi}{\sqrt{2}} $$







However, I do recommend that you don't attempt to use contour integration for all such problems: imagine trying to do
$$ \int_0^{\infty} \frac{x^{s-1}}{(a+bx^n)^m} \, dx, $$
for general $a,b,s,m,n$ such that it converges, using that method! No, the useful thing to know is that
$$ \frac{1}{A^{n}} = \frac{1}{\Gamma(n)}\int_0^{\infty} \alpha^{n-1}e^{-\alpha A} \, d\alpha, $$
which enables you to do more general integrals of this type. Contour integration's often a quick and cheap way of doing simple integrals, but becomes impractical in some general cases.


special functions - Inverse of elliptic integral of second kind

The Wikipedia articles on elliptic integral and elliptic functions state that “elliptic functions were discovered as inverse functions of elliptic integrals.” Some elliptic functions have names and are thus well-known special functions, and the same holds for some elliptic integrals. But what is the relation between the named elliptic functions and the named elliptic integrals?



It seems that the Jacobi amplitude $\varphi=\operatorname{am}(u,k)$ is the inverse of the elliptic integral of the first kind, $u=F(\varphi,k)$. Or related to this, $x=\operatorname{sn}(u,k)$ is the inverse of $u=F(x;k)$. It looks to me as if all of Jacobi's elliptic functions relate to the elliptic integral of the first kind. For other named elliptic functions listed by Wikipedia, like Jacobi's $\vartheta$ function or Weierstrass's $\wp$ function, it is even harder to see a relation to Legendre's integrals.



Is there a way to express the inverse of $E$, the elliptic integral of the second kind, in terms of some named elliptic functions? I.e. given $E(\varphi,k)=u$, can you write a closed form expression for $\varphi$ in terms of $k$ and $u$ using well-known special functions and elementary arithmetic operations?



In this post the author uses the Mathematica function FindRoot to do this kind of inversion, but while reading that post, I couldn't help wondering whether there is an easier formulation. Even though the computation behind the scenes might in fact boil down to root-finding in any case, it feels like this task should be common enough that someone has come up with a name for the core of this computation.

real analysis - Evaluation of $\lim\limits_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$



One of the previous posts made me think of the following question: Is it possible to evaluate this limit without L'Hopital and Taylor?



$$\lim_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$$


Answer




The statement $\dfrac{\tan(x)-x}{x^3} \to c$ as $x \to 0$ is equivalent to
$\tan(x) = x + c x^3 + o(x^3)$ as $x \to 0$, so this is a statement about a
Taylor polynomial of $\tan(x)$, and I'm not sure what would count as doing
that "without Taylor". However, one thing you could do is start from $$\sin(x) = x + o(x)$$ integrate to get $$\cos(x) = 1 - x^2/2 + o(x^2)$$ then $$\sec(x) = \frac{1}{1-x^2/2 + o(x^2)} = 1 + x^2/2 + o(x^2)$$ $$\sec^2(x) = \left(1 + x^2/2 + o(x^2)\right)^2 = 1 + x^2 + o(x^2)$$ and integrate again to get
$$\tan(x) = x + x^3/3 + o(x^3)$$


Tuesday 23 August 2016

Simplifying exponential fraction



I have this exponential fraction
$$\frac{2^{n+1}}{5^{n-1}}$$



I was wondering how we simplify something like this.




I know that if the top and bottom had the same base, like $\frac{2^{n+1}}{2^{n+1}}$, you would just subtract the exponents.



But in my situation, I'm not too sure how to tackle it.


Answer



$\frac{2^{n+1}}{5^{n-1}}=2\times 5\times \frac{2^n}{5^n}=10\times \left(\frac{2}{5}\right)^n=10\times 0.4^n$ if you wish. But if you are dealing with simplifying fractions, I do think your answer is fine.


calculus - Find the limit of $(2sin x-sin 2x)/(x-sin x)$ as $xto 0$ without L'Hôpital's rule

I wonder how to do this in different way from L'Hôpital's rule:




$$\lim_{x\to 0}\frac{2\sin x-\sin 2x}{x-\sin x}.$$



Please help me solve this without using L'Hopital's rule.

linear algebra - Row reduction over any field?

EDIT: as stated in the first answer, my initial question was confused. Let me restate the question (I have to admit that it is now quite a different one):



Let's say we have a matrix $A$ with entries from $\mathbb{C}$. Such matrices form a vector space over $\mathbb{R}$ and also form another vector space over $\mathbb{C}$. If we want to row reduce $A$, we often find that we cannot row reduce it over $\mathbb{R}$, that is, using elementary operations involving only real scalars; however, we can row reduce it using operations that involve complex scalars.



In conclusion, it seems to me that the row reduction of a matrix with elements from a field (all those matrices form a vector space) may or may not be possible, depending on the underlying field of that vector space. So this helpful tool (row reduction) is not always available for us to use once we move a little farther from, let's say, the most "elementary" vector spaces.



Is my observation correct?



Question as initially stated (please ignore):
Consider the vector space of complex matrices over the field of (i) complex numbers and (ii) real numbers. It is straightforward to find examples where a complex matrix can be row-reduced (say, to the identity matrix) in case (i) but cannot be row-reduced in case (ii).




What gives? So, we can assume that a matrix may be invertible over $\mathbb{C}$ but not over $\mathbb{R}$?



Thanks in advance.

calculus - Elegant form of $\left(\sum_{n=0}^\infty a_n x^n\right)^d$






Let $(a_n)_n$ be a bounded sequence of real numbers and $x \in (0,1)$, we can then consider the series:
$$
\sum_{n=0}^\infty a_n x^n.
$$
Let now $d\in \{2,3,\dots\}$ we can then consider $\left(\sum_{n=0}^\infty a_n x^n\right)^d$. I would like to write this in the following form:
$$
\left(\sum_{n=0}^\infty a_n x^n\right)^d = \sum_{n=0}^\infty b_n x^n, \qquad (1)
$$
for certain $b_n$.






From the Multinomial Theorem we find that it can be written as:
$$
\sum_{\sum_n k_n = d} \binom{d}{k_1,\dots} \prod_{t=0}^\infty (a_t x^{t})^{k_t}
=
\sum_{\sum_n k_n = d} \binom{d}{k_1,\dots} \sum_{v=0}^{\infty}\sum_{\sum_t t k_t = v} \left(\prod_{t=0}^\infty (a_t)^{k_t} \right) x^{v}
$$
we can now try to move all these sums to the left, this way we can write:
$$

\left(\sum_{n=0}^\infty a_n x^n\right)^d = \sum_{v=0}^\infty \left( \sum_{\sum_n k_n = d} \binom{d}{k_1,\dots} \sum_{\sum_t t k_t =v} \left( \prod_{t=0}^\infty a_t^{k_t} \right) \right) x^v
$$
we have thus found the representation $(1)$ with:
$$
b_v = \sum_{\sum_n k_n = d} \binom{d}{k_1,\dots} \sum_{\sum_t t k_t=v} \left( \prod_{t=0}^\infty a_t^{k_t}\right).
$$
I was wondering if there isn't a more elegant representation of these $b_v$?


Answer



I don't know about what you consider "elegant", but are you aware that Faà di Bruno's formula can be expressed in terms of (partial) Bell polynomials? (See also Charalambides's book.)




Applied to your problem, we have



$$\left(\sum_{n\ge 0}a_n x^n\right)^d=\sum_{n\ge0}\left(\sum_{j=0}^{d}\frac{d!}{(d-j)!}a_0^{d-j}B_{n,j}(a_1,2a_2,\dots,n!a_n)\right)\frac{x^n}{n!}$$
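If you want to check a closed form such as this against the definition, a brute-force computation of the $b_n$ is easy with sympy; a sketch with arbitrarily chosen sample coefficients:

```python
# Compute b_0..b_{N-1} of (sum a_n x^n)^d directly; higher a_n cannot
# affect these low-order coefficients, so truncating at N terms is safe.
import sympy as sp

x = sp.symbols('x')
N, d = 6, 3
a = [1, 2, 0, -1, 3, 1]          # sample a_0..a_5, chosen arbitrarily
f = sum(a[n] * x**n for n in range(N))
b = [sp.expand(f**d).coeff(x, n) for n in range(N)]
print(b)
```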


linear algebra - How to find eigenvalues of the matrix

This is a question from our end-semester exam:




How to find the eigenvalues of the given matrix:




$$M=\begin{bmatrix}
5&1&1&1&1&1\\
1&5&1&1&1&1\\
1&1&5&1&1&1\\
1&1&1&5&1&1\\
1&1&1&1&4&0\\
1&1&1&1&0&4\\
\end{bmatrix}$$



I know that $4$ is an eigenvalue of $M$ with multiplicity at least $3$, since $M-4I$ has $4$ identical rows (so its rank is at most $3$).



Is there any way to find all eigenvalues of this matrix? I could find only $3$ out of $6$.
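Not a derivation, but a quick numerical check is a good safety net here; the numpy sketch below suggests the spectrum is $4$ with multiplicity four together with $6\pm2\sqrt3$ (consistent with the trace, $4\cdot4+12=28$):

```python
# Build M and compute its spectrum; M is symmetric, so eigvalsh applies.
import numpy as np

M = np.full((6, 6), 1.0)
np.fill_diagonal(M, 5.0)
M[4, 4] = M[5, 5] = 4.0
M[4, 5] = M[5, 4] = 0.0
print(np.linalg.eigvalsh(M))   # approx [2.536, 4, 4, 4, 4, 9.464]
```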

sequences and series - Help with an infinite sum of exponential terms?



I've been trying to calculate the mean squared displacement of a particle confined to a one-dimensional box, and I managed to get an answer in terms of an infinite series of the basic form
$$
\sum_{n=1}^\infty\frac{(-1)^n}{n^2}\exp(-an^2)\;\;.

$$
I can't figure out if this series has a simple closed form; I can't find it in any table of series, nor can I seem to expand the exponential and collect like terms without running into $\sum(-1)^n$ terms. Does this have a closed form, or is this as far as I can go? Thanks in advance.


Answer



By using the Jacobian Theta function the resulting series can be placed into the form
\begin{align}
f(a) = \sum_{n=1}^{\infty} \frac{(-1)^{n}}{n^{2}} \, e^{- a \, n^{2}} = \frac{a}{2} - \frac{\pi^{2}}{12} - \frac{1}{2} \, \int_{0}^{a} \theta\left(\frac{1}{2}, \frac{i u}{\pi}\right) \, du.
\end{align}



This is derived as follows. Differentiate $f(a)$ to obtain
$$f^{'}(a) = \sum_{n=1}^{\infty} (-1)^{n+1} \, e^{-a \, n^{2}}.$$

Now,
$$ \theta(x, it) = 1 + 2 \, \sum_{n=1}^{\infty} e^{- \pi \, t \, n^{2}} \, \cos(2 n \pi x) $$
which yields
$$ f^{'}(a) = \frac{1}{2} \, \left( 1 - \theta\left(\frac{1}{2}, \frac{i \, a}{\pi}\right) \right).$$
Integrating with respect to $a$ yields
$$f(a) = \frac{a}{2} - \frac{1}{2} \, \int_{0}^{a} \theta\left(\frac{1}{2}, \frac{i \, u}{\pi}\right) \, du + c_{0}$$
Since $f(0) = - \frac{1}{2} \, \zeta(2) = -\frac{\pi^{2}}{12}$, we get $c_{0} = -\frac{\pi^{2}}{12}$, and the presented result follows.
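The key identification can be sanity-checked numerically with mpmath, using $\theta\bigl(\tfrac12,\tfrac{ia}{\pi}\bigr)=\vartheta_3\bigl(\tfrac\pi2,e^{-a}\bigr)$; a sketch with the arbitrary choice $a=\tfrac12$:

```python
# Check the step f'(a) = (1 - theta(1/2, i*a/pi))/2 at a = 1/2.
from mpmath import mp, nsum, exp, jtheta, pi

mp.dps = 25
a = mp.mpf(1) / 2
lhs = nsum(lambda n: (-1)**(int(n) + 1) * exp(-a * n**2), [1, mp.inf])
rhs = (1 - jtheta(3, pi / 2, exp(-a))) / 2
print(lhs, rhs)   # the two values agree to working precision
```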


Monday 22 August 2016

linear algebra - What does a function of matrices do to the eigenvalues of matrices in its domain? Two examples and request for generalization if possible



I think, for example, that if $\lambda$ is an eigenvalue of a matrix $A$, then $\lambda^2$ is an eigenvalue for $A^2$ and that $\frac{1}{\lambda}$ is an eigenvalue for $A^{-1}$ provided $A$ is invertible.



Is there a class of matrix-valued functions $f$ for which the eigenvalues of $f(A)$ are $\widetilde{f}(\lambda)$, where $\widetilde{f}$ is the "analogous" scalar function? I am not sure how to state the last part precisely, which is also why it would be great if someone could direct me towards the terms to look up.


Answer



Given any real analytic function in a neighborhood of $0,$ we get $f(A)$ defined as long as some induced norm for $A$ is smaller than the radius of convergence for the Taylor series of $f.$ Or, of course, if the radius is infinite, as in $e^x.$



However, the Cayley–Hamilton theorem says that $f(A)$ can be rewritten as a polynomial in $A$ of degree less than $n$.




You then just need to work out your question for polynomials applied to the various types of Jordan normal forms.



Notice that this does not directly apply to $1/x,$ which gives $A^{-1}$ only if $A$ is actually invertible. However, it does apply to $1/(1+x),$ or $(I+A)^{-1}$ when $A$ is near the $0$ matrix.
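For a concrete illustration of this spectral mapping behaviour, take $f=\exp$ (whose series converges for every matrix) and compare numerically; a sketch:

```python
# Eigenvalues of expm(A) versus exp of the eigenvalues of A.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
lhs = np.sort_complex(np.linalg.eigvals(expm(A)))
rhs = np.sort_complex(np.exp(np.linalg.eigvals(A)))
print(np.allclose(lhs, rhs))   # True, up to roundoff
```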


proof writing - Sum of Binomial Coefficients?

Question: Trying to find a proof of the following equation.




For any $m,n\in \mathbb{N}^0$,
$$\sum_{k=0}^{m} \binom{n+k}{k} = \binom{n+m+1}{m}.$$



I know that Vandermonde's identity might be useful but not sure where to start.
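Before hunting for a proof, it is easy to confirm the identity numerically for small parameters; a throwaway check:

```python
# Brute-force verification for small m, n (not a proof).
from math import comb

print(all(
    sum(comb(n + k, k) for k in range(m + 1)) == comb(n + m + 1, m)
    for n in range(10) for m in range(10)
))   # True
```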

Sunday 21 August 2016

real analysis - Continuity/differentiability at a point and in some neighbourhood of the point

For a function $f: U \to \mathbb{R}$ where $U$ is a subset of $\mathbb{R}$, it seems that continuity at a point doesn't imply that there is a neighbourhood of the point on which the function is continuous. Similarly, differentiability at a point doesn't seem to imply that there is a neighbourhood of the point on which the function is differentiable. I was wondering if there are some counterexamples to confirm the above?



Added:



What are some necessary and/or sufficient conditions for continuity/differentiability at a point and in some neighbourhood of the point to be equivalent?



Can the case of continuity be generalized to mappings between topological spaces?




Thanks and regards!

improper integrals - Show that $\int_{-\infty}^{\infty}\frac{x^2\,dx}{(x^2+1)^2(x^2+2x+2)}=\frac{7\pi}{50}$

Show that



$$\int_{-\infty}^{\infty}\frac{x^2dx}{(x^2+1)^2(x^2+2x+2)}=\frac{7\pi}{50} $$



So I figured since it's an improper integral I should change the limits




$$\lim_{m_1\to-\infty}\int_{m_1}^{0}\frac{x^2dx}{(x^2+1)^2(x^2+2x+2)}+ \lim_{m_2\to\infty}\int_{0}^{m_2}\frac{x^2dx}{(x^2+1)^2(x^2+2x+2)}$$



I'm however not sure how to evaluate this. Any help would be great - thanks.
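As a quick check that the target value is right before setting up partial fractions or residues (assuming SciPy is available):

```python
# Numerical value of the integral versus 7*pi/50.
import numpy as np
from scipy.integrate import quad

f = lambda x: x**2 / ((x**2 + 1)**2 * (x**2 + 2*x + 2))
val, err = quad(f, -np.inf, np.inf)
print(val, 7 * np.pi / 50)   # both approx 0.4398
```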

Help with functional equation $F(x,x)+F(x,c-x)-F(c-x,x)-F(c-x,c-x)=0$



How can we find functions $F$ for which there exists a $c$ such that $$F(x,x)+F(x,c-x)-F(c-x,x)-F(c-x,c-x)=0 \text{ for all } x\,?$$



Several quadratic polynomials in $x,y$ satisfy the above property. I'm trying to generate more examples of functions that satisfy it. I will be very grateful for any other examples.



The property says there is one $c$ that works for all $x$ but given $F$ we have freedom to find a $c$ that works. For example:





  1. If $F(x,y)=Ax^2+By^2+Cxy+Dx+Ey+F$ and $A+C\neq 0$ we set $c=\frac{-2D}{A+C}$ and one can easily verify $F$ satisfy the functional equation.

  2. Also if $F(x,y)=(A+By)x^2+Cy^2+Dxy+Ex+Fy+H$ there are two values of $c$.

  3. As Jeb pointed out in the comments, if $F(x,y)$ is even in $x$ and also even in $y$, then $c=0$ works.



My first idea was to try to transform the equation into a PDE by differentiating wrt $x$ but we do not get a PDE because we are evaluating $F$ at different points: $x$ and $c-x$. Honestly, I'm not even sure of what tags to use.


Answer



For any function, $F(x,y)$, define the function $g(x,c)$ satisfying




$$
g(x,c) = F(x,x)+F(x,c-x)-F(c-x,x)-F(c-x,c-x)
$$



We therefore require that $\exists c$ s.t. $\forall x,\ g(x,c)=0$.



If we let $x=z+\frac{c}2$ and $\hat F(x,y)=F(x+\frac{c}2,y+\frac{c}2)$, then we have
$$
\hat g(z,c)=g(z+\frac{c}2,c)=\hat F(z,z)+\hat F(z,-z)-\hat F(-z,z)-\hat F(-z,-z)

$$
Therefore, without loss of generality, we can consider the case of $c=0$, for which we require that $\hat g(z,0)=0$. In this case, we may split the even and odd components of our function in each of the two dimensions, thus giving us
$$
\hat F(x,y) = A(x,y)+B(x,y)+C(x,y)+D(x,y)
$$
satisfying
$$A(x,y)=A(-x,y)=A(x,-y)\\B(x,y)=-B(-x,y)=B(x,-y)\\C(x,y)=C(-x,y)=-C(x,-y)\\D(x,y)=-D(-x,y)=-D(x,-y)$$
From which (letting $\hat g(x,c)=\hat g(x,0)=G(x)$), we may determine that
$$\begin{align}
G(x)=&A(x,x)+A(x,-x)-A(-x,x)-A(-x,-x)\\&+B(x,x)+B(x,-x)-B(-x,x)-B(-x,-x)\\&+C(x,x)+C(x,-x)-C(-x,x)-C(-x,-x)\\&+D(x,x)+D(x,-x)-D(-x,x)-D(-x,-x)\\

=&4B(x,x)
\end{align}$$
Therefore, the requirement is that $B(x,x)=0$. Recall that $B(x,y)$ represents the component of $\hat F(x,y)$ that is even in $y$ and odd in $x$. This provides us with a "natural" method of creating a solution. Given a function $\hat F(x,y)$, we can find $B(x,y)$ as
$$
B(x,y)=\frac{\hat F(x,y)-\hat F(-x,y)+\hat F(x,-y)-\hat F(-x,-y)}4
$$
and therefore, we may define a modified function
$$
\tilde F(x,y)=\begin{cases}\frac{3\hat F(x,x)+\hat F(-x,x)-\hat F(x,-x)+\hat F(-x,-x)}4 & x=y\\ \hat F(x,y) & \text{otherwise} \end{cases}
$$

Note that this leads to a result identical to that given by Ewan Delanoy; it is merely expressed in a different manner. For a more "natural" solution, we can require that $B(x,y)=0$, and thus define
$$
\tilde F(x,y) = \frac{3\hat F(x,y)+\hat F(-x,y)-\hat F(x,-y)+\hat F(-x,-y)}4
$$
From here, we may choose any $c$, and shift the function so that
$$
F(x,y)=\tilde F\left(x-\frac{c}2,y-\frac{c}2\right)
$$
and we have a solution for a given $c$. Notice that, for
$$

F(x,y)=Ax^2+By^2+Cxy+Dx+Ey+F
$$
with $c=\frac{-2D}{2A+C}$ we have
$$
\hat F(x,y) = Ax^2+By^2+Cxy-\frac{2BD-CD+E(2A+C)}{2A+C}y+\frac{C^2F+4ACF+4A^2F-CDE-2ADE+BD^2-AD^2}{(2A+C)^2}
$$
which, you can see, does not have the $x$ term, which is the only term in a bivariate quadratic polynomial that is even in $y$ and odd in $x$. Note the $2A+C$ on the denominator of $c$ - the value given in the question is incorrect.
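The corrected value of $c$ is easy to verify symbolically; a small sympy sketch:

```python
# g(x,c) vanishes identically for c = -2D/(2A+C), but not for the
# question's c = -2D/(A+C) (unless A*D = 0).
import sympy as sp

x, A, B, C, D, E, F0 = sp.symbols('x A B C D E F')
F = lambda u, v: A*u**2 + B*v**2 + C*u*v + D*u + E*v + F0
g = lambda c: F(x, x) + F(x, c - x) - F(c - x, x) - F(c - x, c - x)

print(sp.simplify(g(-2*D / (2*A + C))))   # 0
print(sp.simplify(g(-2*D / (A + C))))     # nonzero in general
```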


proof verification - Prove divisibility in modular arithmetic



I know that $a$, $b$, $n$, and $s$ are all integers and that $as \equiv b \pmod n$. I want to prove that $\gcd(a,n)$ divides $b$. I think I have most of the pieces figured out, but I am not sure how to complete the proof. All $k_i$ below are integers. From $as \equiv b \pmod n$, I know that $as = k_1 n + b$. Let $\gcd(a,n) = d$; then $d \mid a$ and $d \mid n$, so $d \mid as$ and $d \mid k_1 n$. Then $as = k_2 d$ and $k_1 n = k_3 d$. From there, $as = k_3 d + b$ and so $(as)/(k_3 d) = b$. I see that this isn't what I'm trying to prove. How do I continue, or am I even on the right track?


Answer



You've got $as = k_2 d$ but you haven't used it.




You can go about this with much less pain if you don't bother with these $k_i$. Just use the fact that $as + kn = b$ for some integer $k$: since $d \mid as$ and $d \mid kn$, it follows that $d \mid (as + kn) = b$.


Complex contour integration of $\frac{e^{iz}}{z}$

While calculating $\int_0^\infty \frac{\sin(x)}{x} dx $ I integrated the complex function $f(z) = \frac{e^{iz}}{z}$ over the contour $C = [-R,R] \cup \gamma_R $, where $\gamma_R = R e^{it}$ for $0 \leq t \leq \pi$. I had some trouble showing that $\int_{\gamma_R} f(z)\,dz \rightarrow 0$ as $R \rightarrow \infty$. Is it possible to show this without using Jordan's lemma, using the ML estimate instead?

limits - Find $\lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{x^2}$. Is my approach correct?



Find:
$$
L = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{x^2}
$$



My approach:




Since the above limit has the indeterminate form $\frac{0}{0}$, we might want to try L'Hôpital's rule, but that would lead to a more complicated limit which is also of the form $\frac{0}{0}$.



What I tried is:
$$
L = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{1-\frac{\sin(x)}{x}}\frac{1}{x^2}\left(1-\frac{\sin(x)}{x}\right)
$$
Then, if the limits
$$
L_1 = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{1-\frac{\sin(x)}{x}},

$$



$$
L_2 = \lim_{x\to0}\frac{1}{x^2}\left(1-\frac{\sin(x)}{x}\right)
$$
exist, then $L=L_1L_2$.



For the first one, by making the substitution $u=1-\frac{\sin(x)}{x}$, we have
$$
L_1 = \lim_{u\to u_0}\frac{\sin(u)}{u},

$$
where
$$
u_0 = \lim_{x\to0}\left(1-\frac{\sin(x)}{x}\right)=0.
$$
Consequently,
$$
L_1 = \lim_{u\to0}\frac{\sin(u)}{u}=1.
$$




Moreover, for the second limit, we apply L'Hôpital's rule twice and find $L_2=\frac{1}{6}$.



Finally, $L=1\cdot\frac{1}{6}=\frac{1}{6}$.



Is this correct?


Answer



In a slightly different way, using the Taylor expansion, as $x \to 0$,
$$
\sin x=x-\frac{x^3}6+O(x^5)
$$ gives

$$
1-\frac{\sin x}x=\frac{x^2}6+O(x^4)
$$ then
$$
\sin \left( 1-\frac{\sin x}x\right)=\frac{x^2}6+O(x^4)
$$ and




$$
\frac{\sin \left( 1-\frac{\sin x}x\right)}{x^2}=\frac16+O(x^2)

$$




from which one may conclude easily.


algebra precalculus - How many days will it take to complete the work for the first time, working on alternate days, given the following conditions?


If A can build a wall in 21 days and B can destroy it in 28 days, how many days will it take for the wall to be completed for the first time if they work on alternate days, i.e., A works on the first day, B on the second, and so on?





My Approach:



Work Done by A in 1 day = 1/21



Work Done by B in 1 day = -1/28



Net work done together in 2 days = 1/21 - 1/28 = 1/84



In one day = 1/168





Q: Is my approach right? Please correct me if I am wrong.
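The net rate of $1/84$ per two-day cycle is right, but halving it to $1/168$ per day hides the fact that the wall can be finished partway through one of A's turns, before B gets a chance to destroy anything. A day-by-day simulation of one reading of the problem (assumptions flagged in the comments) locates the first completion at day $161$:

```python
# Assumes A works on odd days, B on even days, and the wall counts as
# complete the first moment the accumulated work reaches 1.
from fractions import Fraction

work, day = Fraction(0), 0
while work < 1:
    day += 1
    work += Fraction(1, 21) if day % 2 == 1 else Fraction(-1, 28)
print(day)   # 161
```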


number theory - Finding the last 4 digits of a huge power




I know this is more of an 'AoPS' type of question, but here we go: I went to a math competition last year, and there was one problem that I clearly didn't solve. It recently came back to my mind, and I want to know how to go about such a problem:



Find the last 4 digits of the number: $$2^{{10}^{2018}}$$




My intuition is that one should probably use modular arithmetic here; the first things that came to my mind were the Chinese remainder theorem and binomial sums, but I wasn't able to do much with them, unfortunately...
I've read through the "How do I compute $a^b$ (mod c) by hand?" question, but most of the answers rely on $a$ and $c$ being coprime, which in my case, $(2,10^4)=2$, is not true. The answers cover a few cases where $a$ and $c$ are not coprime, but nothing very similar to my case...


Answer



You have $10000 = 2^4 \cdot 5^4 = 16\cdot 625.$ You need to find $2^{10^{2018}} \pmod{10000}$, and the Chinese Remainder Theorem will do this nicely. First



$$2^{10^{2018}} \equiv 0 \pmod{16}.$$



Second, note that $\phi(625)=500$, so since $500$ divides $10^{2018},$



$$2^{10^{2018}} \equiv 1 \pmod{625}.$$




Then CRT gives $9376$ for the final answer.
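If you want to double-check the CRT computation, the full calculation is feasible directly, since Python's three-argument pow does fast modular exponentiation:

```python
print(pow(2, 10**2018, 10000))   # 9376
```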


calculus - Simple limit with asymptotic approach. Where's the error?



Simple calculus question about a limit.



I don't understand why I'm wrong. I have to calculate
$$ \lim_{x \rightarrow 0} \frac{2x - \sqrt[3]{8 - x^2}\sin x}{1 - \cos\sqrt{x^3}}

$$



Using asymptotic equivalents and L'Hôpital's rule, I would write the following steps...



$$ = \lim_{x \rightarrow 0} \frac{x \, (2 - \sqrt[3]{8 - x^2})}{x^3/2}
= \lim_{x \rightarrow 0} \frac{\frac{2}{3}\frac{\sqrt[3]{8 - x^2}}{8-x^2}x}{x}
= \frac{1}{6}
$$



But the answer should be $\frac{5}{6}$. Thank you for your help.



Answer



The mistake lies at the beginning:
$$ \lim_{x \rightarrow 0} \frac{2x - \sqrt[3]{8 - x^2}(x-\frac{x^3}{6})}{1 - \cos\sqrt{x^3}} =\frac56$$



$$ \lim_{x \rightarrow 0} \frac{2x - \sqrt[3]{8 - x^2}\:x}{1 - \cos\sqrt{x^3}} =\frac16$$



In the denominator, $1-\cos(x^{3/2})$ is equivalent to $\frac12 x^3$. Thus one cannot neglect the $x^3$ terms in the numerator, so the equivalent used for $\sin(x)$ must not be $x$ but $x-\frac{x^3}{6}$. This was the trap of the exercise.
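sympy confirms both values, which makes the trap easy to see; a sketch:

```python
# The correct limit (with sin x expanded to third order) is 5/6;
# replacing sin x by x changes the answer to 1/6.
import sympy as sp

x = sp.symbols('x')
den = 1 - sp.cos(sp.sqrt(x**3))
print(sp.limit((2*x - sp.cbrt(8 - x**2) * sp.sin(x)) / den, x, 0))  # 5/6
print(sp.limit((2*x - sp.cbrt(8 - x**2) * x) / den, x, 0))          # 1/6
```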


Saturday 20 August 2016

combinatorics - Proving $\sum_{t=1}^j \sum_{r=1}^{t} (-1)^{r+t} \binom{j-1}{t-1} \binom{t-1}{r-1} f(r) = f(j)$



I've run into the following identity and am trying to prove it:





Let $j \in \mathbb{N}$ and $f:\mathbb{N} \to \mathbb{R}$, then
$$
\sum_{t=1}^j \sum_{r=1}^{t} (-1)^{r+t} \binom{j-1}{t-1} \binom{t-1}{r-1} f(r) = f(j)
$$




I've so far tried to find some connection with the multinomial theorem, since the product of the two binomials in the expression is just the multinomial coefficient $\binom{j-1}{k_1,k_2,k_3}$. So perhaps something like $(f(j)-1+1)^{j-1}$, but it does not quite fit. I am missing something; any ideas how to proceed?



Edit: Further attempt: collecting the "coefficient" of $f(r)$ and using $\binom{j-1}{t-1}\binom{t-1}{r-1}=\binom{j-1}{r-1}\binom{j-r}{t-r}$ yields
$$
\sum_{t=r}^j(-1)^{r+t}\binom{j-1}{t-1}\binom{t-1}{r-1} = \binom{j-1}{r-1}\sum_{t=r}^j (-1)^{r+t}\binom{j-r}{t-r}.
$$
It should be enough to show that this is $1$ when $j=r$ and $0$ otherwise. The first case is simple:
$$
\sum_{t=r}^r (-1)^{r+t}\binom{r-1}{t-1}\binom{t-1}{r-1} = (-1)^{2r} \binom{r-1}{r-1} = 1,
$$
but how do I show it is equal to $0$ in the other cases ($j\neq r$)?


Answer



We seek to verify that



$$\sum_{k=1}^n \sum_{q=1}^k (-1)^{k+q}

{n-1\choose k-1} {k-1\choose q-1} f(q) = f(n).$$



Exchange sums to obtain



$$\sum_{q=1}^n \sum_{k=q}^n (-1)^{k+q}
{n-1\choose k-1} {k-1\choose q-1} f(q)
\\ = \sum_{q=1}^n f(q) \sum_{k=q}^n (-1)^{k+q}
{n-1\choose k-1} {k-1\choose q-1}.$$



Now we get for the inner coefficient




$${n-1\choose k-1} {k-1\choose q-1}
= \frac{(n-1)!}{(n-k)! (q-1)! (k-q)!}
= {n-1\choose q-1} {n-q\choose n-k}.$$



This yields for the sum



$$\sum_{q=1}^n f(q) {n-1\choose q-1} (-1)^q
\sum_{k=q}^n {n-q\choose n-k} (-1)^k
\\ = \sum_{q=1}^n f(q) {n-1\choose q-1}

\sum_{k=0}^{n-q} {n-q\choose n-q-k} (-1)^k
\\ = \sum_{q=1}^n f(q) {n-1\choose q-1}
\sum_{k=0}^{n-q} {n-q\choose k} (-1)^k.$$



We get



$$\sum_{q=1}^n f(q) {n-1\choose q-1} [[n-q = 0]]
= f(n) {n-1\choose n-1} = f(n).$$
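For what it's worth, a brute-force check of the identity for small $j$ and an arbitrary test function:

```python
# Numerical verification (not a proof) of the double-sum identity.
from math import comb

def lhs(j, f):
    return sum((-1)**(r + t) * comb(j - 1, t - 1) * comb(t - 1, r - 1) * f(r)
               for t in range(1, j + 1) for r in range(1, t + 1))

f = lambda r: r**3 + 2*r + 1      # arbitrary test function
print(all(lhs(j, f) == f(j) for j in range(1, 12)))   # True
```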


elementary number theory - Solve for huge linear congruence



How do I solve a congruence involving a very large power? For example, $47^{27} \equiv x \pmod{55}$.



My idea is to first break this into $47^{27} \equiv x \pmod{5}$ and $47^{27} \equiv x \pmod{11}$; then, by Fermat's little theorem, I can reduce this to $47^3 \equiv x \pmod{5}$

and $47^7 \equiv x \pmod{11}$.



However, I don't know how to continue.


Answer



We can do a series of reductions to simplify the problem. So, we have:




  • $47^1 \pmod{55} = 47$

  • $47^2 \pmod{55} = 9$

  • $47^3 \pmod{55} = 9 \times 47 \pmod{55} = 38$


  • $47^4 \pmod{55} = (47^2)^2 \pmod{55} = 9^2 \pmod{55} = 26$

  • $47^5 \pmod{55} = 47^2 \times 47^3 \pmod{55} = 9 \times 38 \pmod{55} = 12$



Now, using this approach, how can we reduce the problem for $47^{27} \pmod{55}$?



You can also look at other approaches, such as modular exponentiation by repeated squaring (and Montgomery reduction, among others).
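To make the modular-exponentiation suggestion concrete, here is a Python sketch, both with the built-in three-argument pow and by explicit repeated squaring:

```python
print(pow(47, 27, 55))        # 53

# The same computation by binary exponentiation (27 = 11011 in base 2).
result, base, e = 1, 47, 27
while e:
    if e & 1:
        result = result * base % 55
    base = base * base % 55
    e >>= 1
print(result)                 # 53
```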


real analysis - Finite positive measure




I currently can't find any approach to this question. In particular, how can I relate a finite positive measure to an ordinary integral?



Suppose $\mu$ is a finite positive measure, let $f$ be measurable, and let $g$ be an increasing $C^1$ function. I am trying to prove the following property: $\int g(f)\,d\mu = \int_0^\infty \mu\{x:\ |f(x)|>t\} g'(t)\,dt$.


Answer



Just use Fubini:
\begin{align*}
\int_0^\infty \mu\{x: |f(x)| > t\} g'(t) dt &= \int_0^\infty \int 1_{\{|f| > t\}}(x) \, g'(t) d\mu(x) dt \\
&= \int \int_0^\infty 1_{\{|f| > t\}}(x) \, g'(t) dt d\mu(x)\\
&= \int \int_0^{|f(x)|} g'(t) dt d\mu(x)\\
&= \int g(|f(x)|) - g(0) d\mu(x)

\end{align*}



So the identity holds provided $g(0) = 0$ and the left-hand side reads $\int g(|f|)\,d\mu$ rather than $\int g(f)\,d\mu$.


limits - Evaluate $\lim_{x\to \pi/2} \frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}}$


Evaluate

$$ \lim_{x\to \pi/2} \frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}}$$




I tried to solve this by L'Hôpital's rule, but that doesn't give a solution. I would appreciate it if you could give a clue.

Convergent Sequence Terminology



What is the following sequence classified as? I don't want to make anybody solve it; I just need to know where to begin looking to solve it.
$$\alpha_1 = \sqrt{20}$$
$$\alpha_{n+1} = \sqrt{20 + \alpha_n}$$



I am supposed to prove that it converges to 5; however, if I could just get a little terminology help, it would be more than appreciated!




Note: I updated the terminology and added the initial value.



Thanks!


Answer



First, it's not a series, it's a sequence. Fixed in the original.



Second, it's a recursively defined sequence.



A sequence is "recursively defined" if you specify some specific values and then you explain how to get the "next value" from the previous one; much like induction. Here, you are saying how to get the "next term", $\alpha_{n+1}$, if you already know the value of the $n$th term, $\alpha_n$.




Once you know the first value, then the sequence is completely determined by that first value and the "recurrence rule" $\alpha_{n+1}=\sqrt{20+\alpha_n}$.



Now some hints:




  • Show the sequence is increasing.

  • Show the sequence is bounded.

  • Conclude the sequence converges.

  • Once you know it converges, take limits on both sides of the recursion to try to figure out what it converges to.
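Following the hints above, it can also help to simply watch the recursion numerically before writing the proof; a throwaway sketch:

```python
# Iterate a_{n+1} = sqrt(20 + a_n) from a_1 = sqrt(20); the values
# increase toward the fixed point x = sqrt(20 + x), i.e. x = 5.
from math import sqrt

a = sqrt(20)
for _ in range(8):
    print(a)
    a = sqrt(20 + a)
```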



Friday 19 August 2016

limits - Showing the sequence converges to the square root




For any $a > 0$, I have to show the sequence $x_{n+1} = \frac 12\left(x_n + \frac{a}{x_n}\right)$



converges to the square root of $a$ for any $x_1>0$.



If I assume the limit exists (denoted by $x$), then



$x = \frac 12\left(x + \frac{a}{x}\right)$, which can be solved to give $x^2 = a$.



How could I show that it does exist?


Answer




As mentioned in the comments, we need to show that the sequence is monotonic and bounded.



First, we observe that
$$
x_n-x_{n+1}=x_n-\frac12\Bigl(x_n+\frac a{x_n}\Bigr)=\frac1{2x_n}(x_n^2-a).
$$
Secondly, we obtain that
\begin{align*}
x_n^2-a
&=\frac14\Bigl(x_{n-1}+\frac a{x_{n-1}}\Bigr)^2-a\\

&=\frac{x_{n-1}^2}4-\frac a2+\frac{a^2}{4x_{n-1}^2}\\
&=\frac14\Bigl(x_{n-1}^2-2a+\frac{a^2}{x_{n-1}^2}\Bigr)\\
&=\frac{1}{4}\Bigl(x_{n-1}-\frac a{x_{n-1}}\Bigr)^2\\
&\ge0.
\end{align*}
Hence, $x_n\ge x_{n+1}$ and $x_n$ is bounded from below since $x_n^2\ge a$ for each $n\ge2$.



A monotonic and bounded sequence converges. Denote the limit of the sequence by $x=\lim_{n\to\infty}x_n$. Then we have that
$$
x=\frac12\Bigl(x+\frac ax\Bigr)\quad\iff\quad x=\sqrt a.

$$
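As a footnote, this recursion is exactly Heron's (Babylonian) method for square roots, and in practice the convergence just established is very fast; a quick sketch with $a=2$ and a deliberately poor starting value:

```python
# Heron iteration x <- (x + a/x)/2 converges quadratically to sqrt(a).
a, x = 2.0, 10.0
for _ in range(8):
    x = (x + a / x) / 2
    print(x)   # settles at 1.41421356... = sqrt(2)
```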


calculus - Let $f$ be a function such that $f(ab)=f(a)+f(b) $ with $f(1)=0$ and derivative of $ f$ at $1 $ is $1$.

Let $f$ be a function such that $f(ab)=f(a)+f(b)$ with $f(1)=0$ and derivative of $f$ at $1$ is $1$



How can I show that $f$ is continuous at every positive number and that the derivative of $f$ is $\frac{1}{x}$?

The steps of simplifying a fraction?

So I'm in an Adult Education class for my GED and I'm trying hard to study my math, which is the only subject I have trouble with. I only have "barely" a 6th grade education in math, so I'm having a lot of trouble fully understanding it all.



I'm trying to learn how to simplify fractions on Khan Academy, which is the current task we're working on in the class. I am not getting any of it. I'm trying to answer the question



Simplify to lowest terms.

36/60


I tried to Google the answer to see if I could find an explanation of how to do this, and found one on Y/A, but that question was how to simplify 60/36 instead. I assume the steps are the same.





You're looking to get rid of the common factors, and inspection is
probably easiest here.



First try 6 as a factor, so



(60/6) /(36/6) = 10/6



10/6 is even, so divide by 2




(10/2) / (6/2) = 5/3



OK, that's pretty simple, and both sides are prime, so the only thing
you could do further is change



5/3 = (3 + 2)/3 = 3/3 + 2/3 = 1 2/3



5/3 or 1 2/3 - take your pick as to the simplest!





I'm still not understanding it.



I understand that:




  • Find the number that goes into both 36 and 60

  • Multiply the number and count how many times it goes into 36/60

  • ???




I'm just not sure what to do after that. What is the next step?
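The next step is simply to repeat: keep finding a common factor and dividing it out of the top and bottom until the only common factor left is 1. For 36/60 that looks like 36/60 → 18/30 → 9/15 → 3/5, and dividing once by the greatest common divisor (12 here) does it all in one go. If a computer check helps while practising, a tiny Python sketch:

```python
# Simplify 36/60 by dividing top and bottom by their gcd.
from math import gcd

num, den = 36, 60
g = gcd(num, den)                 # 12
print(num // g, '/', den // g)    # 3 / 5
```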

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How can I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...