Saturday, 30 November 2013

sequences and series - Why does the sum of inverse squares equal \pi^2/6?




I've heard that 1+1/4+1/9+1/16+1/25+... converges to \pi^2/6. This was very surprising to me and I was wondering if there was a reason that it converges to this number?




I am also confused why \pi would be involved. If someone could provide a proof of this or an intuitive reason/derivation, I would appreciate it. I am only able to understand high school maths however (year 12).


Answer



Some of the proofs in the given link are somewhat technical and I'll try to give an accessible version of a variant of one of them.



Consider the function f(x):=\frac{\sin(\pi\sqrt{x})}{\pi\sqrt{x}}.



This function has a root at every perfect square x=n^2, and it can be shown to equal the infinite product of the binomials for the corresponding roots



p(x):=\left(1-\frac{x}{1^2}\right)\left(1-\frac{x}{2^2}\right)\left(1-\frac{x}{3^2}\right)\left(1-\frac{x}{4^2}\right)\cdots




(obviously, p(0)=f(0)=1 and p(n^2)=f(n^2)=0.)



If we expand this product to the first degree, we get



1-\left(\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\cdots\right)x+\cdots



On the other hand, the Taylor expansion to the first order is



f(0)+f'(0)x+\cdots=1-\frac{\pi^2}{6}x+\cdots, hence the claim follows by identifying the coefficients of x.




The plot shows the function f in blue, the linear approximation in red, and the products of the first 4, 5, and 6 binomials, showing that they coincide better and better.



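As a quick sanity check (my addition, not part of the original answer), here is a minimal Python sketch comparing a partial sum of the series against \pi^2/6; the cutoff of 100000 terms is an arbitrary choice:

    # Partial sums of 1 + 1/4 + 1/9 + ... approach pi^2/6.
    import math

    partial = sum(1.0 / n**2 for n in range(1, 100001))
    print(partial)         # ~1.644924 (the tail adds only about 1/100000)
    print(math.pi**2 / 6)  # 1.6449340668...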


linear algebra - Eigenvalues of a Matrix Using Diagonal Entries



I just started learning about complex eigenvalues and eigenvectors, and one example in the book I am using considers the matrix A=\begin{bmatrix}0&-1\\1&0\end{bmatrix}. The book then says that the eigenvalues are the roots of the characteristic equation \lambda^2+1=0. But from an earlier section I learned that the eigenvalues of a triangular matrix are the entries on the main diagonal. A is triangular when I use the row interchange operation on the matrix, and it becomes \begin{bmatrix}1&0\\0&-1\end{bmatrix}. The diagonal entries are 1 and -1, but according to the book the eigenvalues are i and -i.



When given a matrix A, can I not use row operations to get it into a row equivalent matrix which is in triangular form and list the diagonal entries as eigenvalues?


Answer



Consider the matrix product
Av=\begin{pmatrix}a_1\\a_2\\\vdots\\a_n\end{pmatrix}v=\begin{pmatrix}a_1v\\a_2v\\\vdots\\a_nv\end{pmatrix}=\lambda v,
where a_1,\dots,a_n are the rows of A, compared to

\begin{pmatrix}0&1&\cdots&0\\1&0&\cdots&0\\\vdots&&\ddots&\\0&0&\cdots&1\end{pmatrix}Av=\begin{pmatrix}a_2\\a_1\\\vdots\\a_n\end{pmatrix}v=\begin{pmatrix}a_2v\\a_1v\\\vdots\\a_nv\end{pmatrix}\neq qv\ \text{for any scalar } q,



so you cannot reuse any eigenvectors.






So what about the eigenvalues? We have
\det(A-\lambda I)=0,
and if B equals A with two rows interchanged we have
\det(B-\lambda I)=0,
but the rows of \lambda I have NOT been interchanged, so the \lambda's have basically been attached to different positions in different row vectors. I know none of this is very deep or general in nature, but it explains what is going on in your specific example.
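To make this concrete, here is a small numpy sketch (my own illustration, not from the original answer) showing that a row interchange changes the eigenvalues:

    # Eigenvalues of A versus A with its two rows interchanged.
    import numpy as np

    A = np.array([[0.0, -1.0], [1.0, 0.0]])
    B = A[[1, 0], :]  # rows swapped: [[1, 0], [0, -1]]

    print(np.linalg.eigvals(A))  # [0.+1.j 0.-1.j], i.e. i and -i
    print(np.linalg.eigvals(B))  # [ 1. -1.], the diagonal entries of B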


derivatives - Why are the local extrema of a log-transformed function equal to local extrema of the original function?



I am studying maximum likelihood and to simplify taking the derivative of the likelihood function, it is often transformed by the natural log before taking the derivative.



I have read in other posts that this is because the logarithm is a monotonic function, so its extrema will be the same as the original function. However, I do not understand why this is the case. Can someone explain intuitively why the transformation does not affect the local extrema?


Answer



Let f(x) be a positive function and suppose that x_0 is a local maximum of f(x) in the interval [a,b]. This means that for any y\in[a,b], f(y)\le f(x_0).



The logarithm is a monotonically increasing function: if z\le w, then \log z\le \log w. So for any y\in[a,b], since f(y)\le f(x_0), we have \log f(y) \le \log f(x_0). Hence, x_0 is also a local maximum for \log f(x).




The other direction can be proven by noting that the inverse of log is also monotonically increasing.
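A tiny numerical illustration of this invariance (my addition; the particular function is an arbitrary positive example):

    # The argmax of a positive function equals the argmax of its log.
    import numpy as np

    x = np.linspace(-3.0, 3.0, 10001)
    f = np.exp(-(x - 1.2)**2) + 0.1  # arbitrary positive function

    print(x[np.argmax(f)])           # ~1.2
    print(x[np.argmax(np.log(f))])   # the same point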


Friday, 29 November 2013

ordinary differential equations - If F=m\dfrac{dv}{dt}, why is it incorrect to write F\,dt=m\,dv?




My university lecturer told me that:





If F=m\dfrac{dv}{dt}, it's incorrect to write F\,dt=m\,dv\tag{1}, but it is okay to write \int F\,dt=\int m\,dv for Newton's second law.




But never explained why (1) is mathematically incorrect.



My high school teacher told me that:




Derivatives with respect to one independent variable can be treated as fractions.





So this implies that (1) is valid.



This is clearly a contradiction as my high school teacher and university lecturer cannot both be correct. Or can they?



Another example of this misuse of derivatives uses the specific heat capacity c which is defined to be c=\frac{1}{m}\frac{\delta Q}{dT}\tag{2}



Now in the same vein another lecturer wrote that \delta Q=mc\,dT by rearranging (2).



Another contradiction of the first lecturer. Is this really allowed, or, if it's invalid, which mathematical 'rule' has been violated here?







EDIT:



In my question here I have used formulae that belong to Physics but these were just simple examples to illustrate the point. My question is much more general and applies to any differential equation in mathematics involving the treatment of derivatives with respect to one independent variable as fractions.



Specifically; Why is it 'strictly' incorrect to rearrange them without taking the integral of both sides?


Answer



It is possible that your lecturer is telling you that, on its own, the expression dt is meaningless, whereas \int...dt does mean something quite specific, i.e. an operator or instruction to integrate with respect to t.




In contrast, \delta t does mean something specific, i.e. a small increment in the value of t.



However, most people are fairly casual about this sort of thing.


probability - Finding a distribution for n tosses of a fair coin



I am trying to solve the problem:




Consider a sequence of n tosses of a fair coin. Let X denote the number of heads, and Y denote the number of isolated heads, that come up. (A head is an "isolated" head if it is immediately preceded and followed by a tail, except in position 1, where a head need only be followed by a tail, and position n, where the head need only be preceded by a tail.) Additionally, let X_i = 1 if the ith coin toss results in heads, and X_i = 0 otherwise, i = 1, \ldots, n. Similarly, Y_j = 1 if the jth coin toss results in an isolated head, and Y_j = 0 otherwise, j = 1, \ldots, n.



How do I find the distribution of Y \mid X = 2?




Here I think X \sim \mathrm{Bin}(n,1/2) and Y \sim \mathrm{Bin}(n,1/8). I couldn't figure out the solution of this problem. Any help would be highly appreciated.


Answer




You either have 0 isolated heads or 2 isolated heads.



There are \binom{n}{2} ways to place exactly 2 heads among the n coin tosses.



Of these, n-1 choices give you non-isolated heads (the two heads adjacent).



Hence



Pr(Y=0|X=2)=\frac{n-1}{\binom{n}{2}}=\frac{2}{n}




Pr(Y=2|X=2)=1-Pr(Y=0|X=2)=\frac{n-2}{n}
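One can confirm this by brute force; the following Python sketch (my addition) enumerates all placements of exactly two heads for a small n:

    # Exhaustive check of Pr(Y=0 | X=2) = 2/n.
    from itertools import combinations
    from math import comb

    n = 10
    adjacent = sum(1 for i, j in combinations(range(n), 2) if j == i + 1)
    print(adjacent, comb(n, 2))   # n-1 = 9 adjacent pairs out of C(n,2) = 45
    print(adjacent / comb(n, 2))  # 0.2
    print(2 / n)                  # 0.2, matching the formula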


integration - Elementary way to calculate the series \sum\limits_{n=1}^{\infty}\frac{H_n}{n2^n}

I want to calculate the series of the Basel problem \displaystyle{\sum_{n=1}^{\infty}\frac{1}{n^2}} by applying the Euler series transformation. With some effort I got that




\displaystyle{\frac{\zeta (2)}{2}=\sum_{n=1}^{\infty}\frac{H_n}{n2^n}}.



I know that series like the \displaystyle{\sum_{n=1}^{\infty}\frac{H_n}{n2^n}} are evaluated here, but the evaluations end up with some values of the \zeta function, like \zeta (2),\zeta(3).



First approach: Using the generating function of the harmonic numbers and integrating term by term, I concluded that



\displaystyle{\sum_{n=1}^{\infty}\frac{H_n}{n2^n}=\int_{0}^{\frac{1}{2}}\frac{\ln (1-x)}{x(x-1)}dx},



but I can't evaluate this integral by any real-analytic method.




First question: Do you have any hints or ideas to evaluate it with real-analytic methods?



Second approach: I used the fact that \displaystyle{\frac{H_n}{n}=\sum_{k=1}^{n}\frac{1}{k(n+k)}} and then, I changed the order of summation to obtain



\displaystyle{\sum_{n=1}^{\infty}\frac{H_n}{n2^n}=\sum_{k=1}^{\infty}\frac{2^k}{k}\left(\sum_{m=2k}^{\infty}\frac{1}{m2^m}\right)}.



To proceed I need to evaluate the



\int_{0}^{\frac{1}{2}}\frac{x^{2k-1}}{1-x}dx,




since \displaystyle{\sum_{m=2k}^{\infty}\frac{1}{m2^m}=\int_{0}^{\frac{1}{2}}\frac{x^{2k-1}}{1-x}dx}.



Second question: How can I calculate this integral?



Thanks in advance for your help.
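Not an answer to either question, but a numerical sanity check (my addition) confirms that the integral from the first approach does equal \zeta(2)/2 = \pi^2/12:

    # Check the generating-function integral against pi^2/12.
    import math
    from scipy.integrate import quad

    val, err = quad(lambda x: math.log(1 - x) / (x * (x - 1)), 0, 0.5)
    print(val)              # ~0.8224670
    print(math.pi**2 / 12)  # 0.8224670334...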

Thursday, 28 November 2013

number theory - Prove that for any integer k \ne 0, \gcd(k, k+1) = 1



I'm learning to do proofs, and I'm a bit stuck on this one.
The question asks to prove for any positive integer k \ne 0, \gcd(k, k+1) = 1.



First I tried: \gcd(k,k+1) = 1 = kx + (k+1)y, but I couldn't get anywhere.



Then I tried assuming that \gcd(k,k+1) \ne 1 , therefore k and k+1 are not relatively prime, i.e. they have a common divisor d s.t. d \mid k and d \mid k+1 \implies d \mid 2k + 1




Actually, it feels obvious that two integers next to each other, k and k+1, could not have a common divisor greater than 1. I don't know, any help would be greatly appreciated.


Answer



Let d be \gcd(k, k+1); then k=rd and k+1=sd, so 1=(s-r)d, and hence d \mid 1.


real analysis - How to define 0^0?











According to Wolfram Alpha:




0^0 is indeterminate.





According to google:
0^0=1



According to my calculator: 0^0 is undefined



Is there consensus regarding 0^0? And what makes 0^0 so problematic?


Answer



This question will probably be closed as a duplicate, but here is the way I used to explain it to my students:



Since x^0=1 for all non-zero x, we would like to define 0^0 to be 1. But ...




since 0^x = 0 for all positive x, we would like to define 0^0 to be 0.



The end result is that we can't have all the "rules" of indices playing nicely with each other if we decide to choose one of the above options, so it might be better if we decided that 0^0 should just be left as "undefined".


Wednesday, 27 November 2013

linear algebra - What is the number of real solutions of the equation |x-3|^{3x^2-10x+3}=1?

I did solve it, and I got four solutions, but the book says there are only 3.



I considered the cases | x - 3 | = 1 or 3x^2 -10x + 3 = 0.



Between the two cases I got x = 2,\ 3,\ \frac13, and 4.




Am I wrong? Is 0^0 = 1 or NOT?



Considering the fact that : 2^2 = 2 \cdot2\cdot 1



2^1 = 2\cdot 1



2^0 = 1



0^0 should be 1 right?

elementary number theory - Solutions for x^2+y^2=2007 and reference request.



I have never formally studied number theory, it is not a part of my course work, and what I have learnt is reading Wikipedia or the answers here. This question was on a test and I tried to use quadratic residues to solve this.




Find the number of solutions in integers to:



x^2+y^2=2007



We can observe that 2007\equiv 0\pmod3



x^2\equiv0,1\pmod{3}, y^2\equiv0,1\pmod{3}



Since x^2+y^2\equiv 0\pmod 3 forces both residues to be 0, we conclude that x and y are divisible by 3.




Let x=3x', y=3y'



(x')^2+(y')^2=223



Since 223\equiv 3\pmod4 but x^2,y^2\equiv 0,1\pmod{4} (so a sum of two squares is 0, 1 or 2 \pmod 4), we conclude there are no solutions.



Is this argument correct? Also how else could this question have been solved? (Considering the fact that we don't have modular arithmetic in our syllabus, and therefore the teacher was probably expecting something else)



Also where can I study number theory?



Answer



2007\equiv3\equiv-1\pmod4



But \displaystyle a\equiv0,\pm1,2\pmod4\implies a^2\equiv0,1\pmod4



So, what are the possible values of \displaystyle x^2+y^2\pmod 4?
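A brute-force check (my addition, not part of the answer) confirms there are no solutions; |x| can be at most 44, since 45^2 = 2025 > 2007:

    # Brute-force search for integer solutions of x^2 + y^2 = 2007.
    solutions = [(x, y) for x in range(-45, 46) for y in range(-45, 46)
                 if x * x + y * y == 2007]
    print(solutions)  # [] -- no solutions, as the mod-4 argument predicts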


Tuesday, 26 November 2013

power series - Calculate \sum\limits_{n=0}^{\infty} \frac{x^{3n}}{(3n)!}




\sum\limits_{n=0}^{\infty} \frac{x^{3n}}{(3n)!} should be calculated using complex numbers, I think; the Wolfram answer is:




\frac{1}{3} (e^x + 2 e^{-x/2} \cos(\frac{\sqrt{3}x}{2}))



How to approach this problem?


Answer



We have that, writing f(x)=\sum\limits_{n=0}^{\infty} \frac{x^{3n}}{(3n)!},



f'(x)=\frac{d}{dx}\sum\limits_{n=0}^{\infty} \frac{x^{3n}}{(3n)!}=\sum\limits_{n=1}^{\infty} \frac{x^{3n-1}}{(3n-1)!}



f''(x)=\frac{d}{dx}\sum\limits_{n=1}^{\infty} \frac{x^{3n-1}}{(3n-1)!}=\sum\limits_{n=1}^{\infty} \frac{x^{3n-2}}{(3n-2)!}




f'''(x)=\frac{d}{dx}\sum\limits_{n=1}^{\infty} \frac{x^{3n-2}}{(3n-2)!}=\sum\limits_{n=1}^{\infty} \frac{x^{3n-3}}{(3n-3)!}=f(x)



and f'''(x)=f(x) has solution



f(x)=c_1e^x+c_2e^{-x/2}\cos\left(\frac{\sqrt 3 x}{2}\right)+c_3e^{-x/2}\sin\left(\frac{\sqrt 3 x}{2}\right)



with the initial conditions f(0)=1, f'(0)=0, f''(0)=0. Solving these gives c_1=\frac13, c_2=\frac23, c_3=0, which recovers the closed form above.
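One can check the closed form numerically; here is a short sketch (my own, with an arbitrary test point x = 1.7):

    # Partial sums of sum x^(3n)/(3n)! versus the closed form.
    import math

    x = 1.7
    partial = sum(x**(3 * n) / math.factorial(3 * n) for n in range(20))
    closed = (math.exp(x)
              + 2 * math.exp(-x / 2) * math.cos(math.sqrt(3) * x / 2)) / 3
    print(partial, closed)  # agree to machine precision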


Sum of consecutive numbers



I was wondering if there is a way to figure out the number of ways to express an integer by adding up consecutive natural numbers.



For example, for N=3 there is one way to express it: 1+2 = 3.



I have no idea where to start so any help would be appreciated



Answer



The sum of the integers from 1 to n is \dfrac{n(n+1)}{2}. Hence, the sum of the integers from m+1 to n is simply \dfrac{n(n+1)}{2}-\dfrac{m(m+1)}{2}. So, if the sum of the integers from m+1 to n is N, then



\dfrac{n(n+1)}{2}-\dfrac{m(m+1)}{2} = N



(n^2+n) - (m^2+m) = 2N



(n-m)(n+m+1) = 2N



Hence, n-m and n+m+1 are complementary factors of 2N. Clearly, n-m is smaller than n+m+1, and since (n+m+1)-(n-m) = 2m+1, the factors have opposite parity.




For any f_1 and f_2 such that 2N = f_1f_2, f_1 > f_2 and f_1, f_2 have opposite parity, we can solve n+m+1 = f_1 and n-m = f_2 to get n = \dfrac{f_1+f_2-1}{2} and m = \dfrac{f_1-f_2-1}{2}.



Therefore, the number of ways to write N = (m+1)+(m+2)+\cdots+(n-1)+n is simply the number of ways to factor 2N into two distinct positive integers with opposite parity.



Suppose 2N = 2^{k_0+1}p_1^{k_1}p_2^{k_2}\cdots p_r^{k_r} where p_1,p_2,\ldots,p_r are distinct odd primes. There are (k_1+1)(k_2+1)\cdots(k_r+1) ways to divide the odd primes between f_1 and f_2. There are 2 ways to give all 2's to one of the factors f_1, f_2. However, we need to divide by 2 since this overcounts cases in which f_1 < f_2. Also, we need to subtract out the one trivial solution n = N and m = N-1. This leaves us with (k_1+1)(k_2+1)\cdots(k_r+1)-1 ways to factor 2N into two distinct positive integers with opposite parity.



Therefore, if N has prime factorization N = 2^{k_0}p_1^{k_1}p_2^{k_2}\cdots p_r^{k_r}, then there are (k_1+1)(k_2+1)\cdots(k_r+1)-1 ways to write N as the sum of two or more consecutive positive integers.
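A quick empirical check of the final count (my addition): brute naively searches for runs of consecutive integers summing to N, and formula counts odd divisors minus one, which equals (k_1+1)(k_2+1)\cdots(k_r+1)-1:

    # Compare brute-force counting with the odd-divisor formula.
    def brute(N):
        count = 0
        for m in range(N):          # candidate run: m+1, m+2, ..., n
            total, n = 0, m
            while total < N:
                n += 1
                total += n
            if total == N and n - m >= 2:  # at least two terms
                count += 1
        return count

    def formula(N):
        odd_divisors = sum(1 for d in range(1, N + 1, 2) if N % d == 0)
        return odd_divisors - 1

    for N in (3, 9, 15, 2013):
        print(N, brute(N), formula(N))  # the counts agree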


integration - Improper integral of sin(x)/x from zero to infinity




I was having trouble with the following integral:

\int_{0}^\infty \frac{\sin(x)}{x}dx. My question is, how does one go about evaluating this, since its existence seems fairly intuitive, while its solution, at least to me, does not seem particularly obvious.


Answer



Let I(s) be given by



I(s)=\int_0^\infty \frac{e^{-sx}\sin(x)}{x}\,dx \tag1



for s\ge 0. Note that \lim_{s\to \infty}I(s)=0 and that I(0)=\int_0^\infty \frac{\sin(x)}{x}\,dx is the integral of interest.







Differentiating I(s) as given by (1) (this is justified by uniform convergence of the "differentiated" integral for s\ge \delta>0) reveals



\begin{align} I'(s)&=-\int_0^\infty e^{-sx}\sin(x)\,dx\\\\ &=-\frac{1}{1+s^2} \tag 2 \end{align}



Integrating (2), we find that I(s)=\pi/2-\arctan(s) whence setting s=0 yields the coveted result



\int_0^\infty \frac{\sin(x)}{x}\,dx=\frac\pi2
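As a cross-check (my addition, not part of the answer), sympy evaluates the integral symbolically:

    # Symbolic confirmation of the Dirichlet integral.
    import sympy as sp

    x = sp.symbols('x')
    print(sp.integrate(sp.sin(x) / x, (x, 0, sp.oo)))  # pi/2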



elementary number theory - A simple modular arithmetic query.

Given a,b,c\in\Bbb N with \mathsf{gcd}(a,b)=\mathsf{gcd}(b,c)=\mathsf{gcd}(c,a)=1, we know that there are m_1,m_2,m_3\in\Bbb N such that a\equiv m_1a^2\bmod abc, ab\equiv m_2ab\bmod abc and b\equiv m_3b^2\bmod abc hold.



It is also easy to see there is a single m such that a\equiv ma\bmod ab,\quad b\equiv mb\bmod ab holds.




However how to find a single m coprime to c such that 1\equiv ma\bmod abc,\quad 1\equiv mb\bmod abc holds?



Failing that, how can one find a single m such that \ell_1 a\equiv ma^2\bmod abc, \quad \ell_2 ab\equiv mab\bmod abc,\quad\ell_3b\equiv mb^2\bmod abc hold, where 0<\ell_1,\ell_2,\ell_3<\log abc and at least one of \ell_1,\ell_2,\ell_3 is distinct?



If not, how small can we make \max(\ell_1,\ell_2,\ell_3), where a\nmid\ell_1 and b\nmid\ell_3 hold?

Proof derivative equals zero?



I know this must be wrong, but I am confused as to where the mathematical fallacy lies.




Here is the 'proof':



f'(x) = \lim_{h\to0}\frac{f(x+h)-f(x)}{h}



L'Hôpital's Rule (The previous limit was \frac{0}{0}):



f'(x) = \lim_{h\to 0}\frac{f'(x+h)-f'(x)}{1}




Plugging in h = 0:



f'(x) = f'(x+0)-f'(x)



Simplifying:



f'(x) = 0



I'm assuming my application of L'Hôpital's rule is fallacious, but it evaluates to an indeterminate form so isn't L'Hôpital's rule still valid?


Answer




Taking the derivative with respect to h gives:



f'(x) = \lim_{h \rightarrow 0} \frac{f'(x + h)}{1}



since f(x) is constant with respect to h.


integration - How to evaluate \int \sin(x)\arcsin(x)\,dx

I need to evaluate following integral



\int \sin(x)\arcsin(x) \ dx



Can anyone please help me? Thanks.

field theory - How many products can a K-vector space V have?



If I have an abelian group (V,+) and a field K, then I can make a vector space if I define a product \cdot\colon K \times V \longrightarrow V which is associative, distributive w.r.t. both the sum of V and the sum of K, and satisfies 1\cdot\vec{v}=\vec{v} for any \vec{v}\in V.



If I'm given an abelian group and a field, how many vector spaces can I make with them? Which is to say, how many products make V a vector space? I have no clue how I should approach this.




I know, though, that if we know of a valid product \lambda·\vec{v}, then \lambda * \vec{v} = \sigma(\lambda)·\vec{v} is also a valid product for any field automorphism \sigma on K... so maybe I should count all products "up to field isomorphism of K".


Answer



For prime fields, a finite dimensional vector space is uniquely determined by its abelian group structure. This includes finite fields of prime order and the rational numbers.



However, every finite dimensional vector space over the real numbers is isomorphic as an abelian group to any other, and is also isomorphic to a real vector space of countably infinite dimension. Also, a vector space of dimension mn over a field with p elements is isomorphic as an abelian group to a vector space of dimension n over a field with p^m elements. In particular, any vector space over a field F with a subfield E is also a vector space over E.



As you observed, field automorphisms also mess with uniqueness, but this is actually well studied. A map between vector spaces that is linear except possibly twisted by a field automorphism is called sesquilinear, and you can search for info on that on the internet.


Monday, 25 November 2013

real analysis - Can we identify absolutely continuous functions on (a,b) with values in X with W^{1,2}((a,b);X)?



A function f:(a,b)\to \mathbb{R} is measurable and absolutely continuous if and only if the weak derivative \frac{df}{dx}\in L^1(a,b) exists. The weak derivative coincides with the classical derivative almost everywhere.



Can we prove the same theorem for f:(a,b)\to X, where X is a Banach space?



In order to have differentiability almost everywhere of an absolutely continuous function f with values in an abstract Banach space, we have to assume that X is reflexive. I tried to copy the proof from the real-valued case, but I got stuck when it comes to integration by parts. Does classical integration by parts have the same form for absolutely continuous functions with values in Banach spaces?


Answer



Yes it is still true. To have integration by parts you have to show that the fundamental theorem of calculus continues to hold. You first prove that for T\in X^{\prime} the function w:=T\circ f belongs to AC([a,b]). In turn,
w is differentiable for \mathcal{L}^{1}-a.e. x\in\lbrack a,b]. It follows from the differentiability of f and the linearity of T that for
\mathcal{L}^{1}-a.e. x\in\lbrack a,b], \frac{w(x+h)-w(x)}{h}=T\Bigl(\frac{f(x+h)-f(x)}{h}\Bigr)\rightarrow T(f'(x)),
which shows that w^{\prime}(x)=T(f'(x)) for \mathcal{L}^{1}-a.e.
x\in\lbrack a,b].



Hence, by the fundamental theorem of calculus applied to the function w, we
have that
\begin{align*} T(f(x))-T(f(x_{0})) & =w(x)-w(x_{0})=\int_{x_{0}}^{x}w^{\prime}(t)\,dt=\int_{x_{0}}^{x}T(f'(t))\,dt\\ & =T{\Bigl(}\int_{x_{0}}^{x}f'(t)\,dt{\Bigr)}, \end{align*}
where you have to use the fact that the Bochner integral commutes with continuous linear functionals T. Hence,
T{\Bigl(}f(x)-f(x_{0})-\int_{x_{0}}^{x}f'(t)\,dt{\Bigr)}=0.
Taking the supremum over all T with \Vert T\Vert_{X^{\prime}}\leq1 and
using the fact that for x\in X, \Vert x\Vert=\sup_{\Vert T\Vert_{X^{\prime }}\leq1}T(x), we get
f(x)-f(x_{0})-\int_{x_{0}}^{x}f'(t)\,dt=0.


trigonometry - Showing that 0 < \cos(\theta) < \frac{\sin(\theta)}{\theta} < \frac{1}{\cos(\theta)}

Show that 0 < \cos (\theta)<\frac {\sin (\theta)}{\theta}<\frac {1}{\cos(\theta)}
for \theta\in(0,\pi/2).

complex numbers - find z that satisfies z^2=3+4i



Super basic question but some reason either I'm not doing this right or something is wrong.



The best route usually with these questions is to transform 3+4i to re^{it} representation.




Ok, so r^2=3^2+4^2 = 25, so r=5. And \frac{4}{3}=\tan(t), so that means t \approx 0.9273, and I'm not going to get an exact answer like that.



Another method would be to solve z^2-3-4i=0 with the quadratic formula; that gives z_0=\frac{\sqrt{12+16i}}{2} and z_1=\frac{-\sqrt{12+16i}}{2}



But now I have the same problem: 12+16i doesn't have a "pretty" polar representation, so it's difficult to find \sqrt{12+16i}



I want to find an exact solution, not approximate, and it should be easy since the answers are 2+i and -2-i



Edit:




Also, something else is weird here. I know that if z_0 is some root of a polynomial then its conjugate is also a root, but 2+i and -2-i are not conjugates. (Note that the conjugate-root theorem only applies to polynomials with real coefficients, which z^2-3-4i is not.)


Answer



Hint:



z^2=3+4i=5e^{it+2k\pi i}\;,\;\;k\in\Bbb Z\;,\;\;t=\arctan\frac43\implies



z=\sqrt5\,e^{\frac{it+2k\pi i}2}\;,\;\;k=0,1\;\;\text{(Why is it enough to take only these values of }k\text{?)}



A more basic approach: put\;z=a+bi\;,\;\;a,b\in\Bbb R\; , so that




3+4i=(a+bi)^2=(a^2-b^2)+2abi\implies\begin{cases}a^2-b^2=3\\{}\\2ab=4\implies b=\frac2a\end{cases}\;\;\implies



a^2-\frac4{a^2}=3\implies 0=a^4-3a^2-4=(a^2-4)(a^2+1)\implies a=\pm2



and thus



\;b=\pm\frac22=\pm1\;\implies a+bi=\begin{cases}\;\;\;2+i\\{}\\-2-i\end{cases}
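A quick check of both roots in Python (my addition); cmath returns the principal square root:

    # Verify the two square roots of 3 + 4i.
    import cmath

    z = cmath.sqrt(3 + 4j)
    print(z, -z)                       # (2+1j) and (-2-1j)
    print((2 + 1j)**2, (-2 - 1j)**2)   # both equal (3+4j)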


discrete mathematics - Proving an inequality by mathematical induction



I'm trying to solve a problem with inequalities using mathematical induction but I am stuck halfway through the process.

The problem: Use mathematical induction to establish the inequality -
(1 + \frac{1}{2})^n \ge 1 + \frac{n}{2} for n \in \mathbb{N}



Steps



1) n = 1, (1 + \frac{1}{2})^1 \ge 1 + \frac{1}{2} is TRUE



2) n = k, assume that (1 + \frac{1}{2})^k \ge 1 + \frac{k}{2} for n \in \mathbb{N}



3) Show the statement is true for k + 1




(1 + \frac{1}{2})^{k+1} = (1 + \frac{1}{2})^k * (1 + \frac{1}{2})



\ge (1 + \frac{k}{2}) * (1 + \frac{1}{2}) - using the assumption in step 2



My question is, how do I continue this problem? Or did I go wrong somewhere? I just can't figure out what the next step is.


Answer



Continue with:



(1 + \frac{k}{2}) * (1 + \frac{1}{2}) =




1 + \frac{k}{2} + \frac{1}{2} + \frac{k}{4} >



1 + \frac{k}{2} + \frac{1}{2}=



1 + \frac{k+1}{2}


calculus - Limits without L'Hopital's Rule

Evaluate the limit without using L'hopital's rule



a)\lim_{x \to 0} \frac {(1+2x)^{1/3}-1}{x}



I got the answer as l=\frac 23... but I used L'Hopital's rule for that... How can I do it another way?




b)\lim_{x \to 5^-} \frac {e^x}{(x-5)^3}



l=-\infty



c)\lim_{x \to \frac {\pi} 2} \frac{\sin x}{\cos^2x} - \tan^2 x



I don't know how to work with this at all



So basically I was able to find most of the limits through L'Hopitals Rule... BUT how do I find the limits without using his rule?

Sunday, 24 November 2013

Prove the following logarithm inequality.





If x, y \in (0, 1) and x+y=1, prove that x\log(x)+y\log(y) \geq \frac {\log(x)+\log(y)} {2}.




I transformed the LHS to \log(x^xy^y) and the RHS to \log(\sqrt{xy}), from where we get that x^xy^y \ge \sqrt{xy} because the logarithm is a monotonically increasing function. From there we can transform the inequality into x^{x-y}y^{y-x} \ge 1. So here I am stuck.



I could have started from some known inequalities too, like the inequalities between means.


Answer



Since x-y and \log x-\log y have the same sign, we have
(x-y)(\log x-\log y)\ge 0 or equivalently
x\log x+y\log y\ge y\log x+x\log y. Hence it holds that
2x\log x+2y\log y\ge (x+y)\log x+(x+y)\log y=\log x+\log y. This proves
y\log y+x\log x\ge \frac{\log x+\log y}{2}.




Note: As @Martin R pointed out, the result can be generalized to x+y=1\Longrightarrow\; xf(x)+yf(y)\ge \frac{f(x)+f(y)}{2} for any increasing function f:(0,1)\to\Bbb R.
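A small numerical spot-check of the inequality (my addition, with arbitrary random samples):

    # Spot-check x*log(x) + y*log(y) >= (log(x) + log(y))/2 when x + y = 1.
    import math, random

    for _ in range(5):
        x = random.uniform(0.01, 0.99)
        y = 1 - x
        lhs = x * math.log(x) + y * math.log(y)
        rhs = (math.log(x) + math.log(y)) / 2
        print(lhs >= rhs)  # True every time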


analysis - Is the support of a compactly supported function on Omega a proper subset of Omega?

Let d\in\mathbb N and \Omega\subseteq\mathbb R^d be open. Is there some continuous \phi:\Omega\to\mathbb R with compact support \operatorname{supp}\phi such that \operatorname{supp}\phi=\overline{\Omega}?

complex analysis - Evaluating \int_{-\infty}^{\infty} \frac{x^6}{(4+x^4)^2}\, dx using residues



I need help to solve the next improper integral using complex analysis:



\int_{-\infty}^{\infty} \frac{x^6}{(4+x^4)^2} dx




I have problems when I try to find residues for the function f = \displaystyle \frac{1}{(z^4+4)^2}.



This is what I tried.



\displaystyle \text{res}(f,\sqrt{2}e^{i\left(\frac{\pi}{4}+k\frac{\pi}{2} \right)}) = \lim_{z\to \sqrt{2}e^{i\left(\frac{\pi}{4}+k\frac{\pi}{2} \right)}} \left( \frac{\left(z-\sqrt{2}e^{i\left(\frac{\pi}{4}+k\frac{\pi}{2} \right)}\right)^2}{(z^4+4)^2}\right)'



with k\in\{0,1,2,3\}. What do you think about it?



I know there is a little more general problem involving this integral; for all a>0




\int_{-\infty}^{\infty} \frac{x^6}{(a^4+x^4)^2} dx= \frac{3\pi\sqrt{2}}{8a}



Edit.






I've had an idea: from the integration by parts
\int u dv = uv - \int vdu



and if we let

dv = \frac{4x^3 }{(4+x^4)^2}, \, u = \frac{x^3}{4}



with



\frac{dv}{dx} = -\frac{d}{dx} \frac{1}{4+x^4} = \frac{4x^3 }{(4+x^4)^2}



we get finally



\int_{-\infty}^{\infty} \frac{x^6}{(4+x^4)^2} dx = 0 + \frac{3}{4} \int_{-\infty}^{\infty} \frac{x^2}{1+x^4} dx




which I think is more easy to solve.



Anyway, if you know another idea or how to complete my first try will be welcome.


Answer



If you are right, then \int_{-\infty}^{\infty} \frac{x^2}{1+x^4} dx can be easily done by a semi-circle contour, computing the residues at (1+i)/\sqrt{2} and (-1+i)/\sqrt{2}. It is easy to check that the integral of \frac{z^2}{1+z^4} on the arc of the semi-circle goes to 0.



\frac{z^2}{1+z^4}=\frac{z^2}{(z-e^{i\pi/4})(z-e^{i3\pi/4})(z-e^{i5\pi/4})(z-e^{i7\pi/4})} shows that it is a simple pole at those residues.



\text{ Res}(\frac{z^2}{1+z^4},e^{i\pi/4})=\lim_{z\to e^{i\pi/4}}\frac{z^2(z-e^{i\pi/4})}{1+z^4}=\lim_{z\to e^{i\pi/4}}\frac{3z^2-2ze^{i\pi/4}}{4z^3}=(\frac{3}{4}-\frac{1}{2})e^{-i\pi/4}.




\text{ Res}(\frac{z^2}{1+z^4},e^{i3\pi/4})=\lim_{z\to e^{i3\pi/4}}\frac{z^2(z-e^{i3\pi/4})}{1+z^4}=\lim_{z\to e^{i3\pi/4}}\frac{3z^2-2ze^{i3\pi/4}}{4z^3}=(\frac{3}{4}-\frac{1}{2})e^{-i3\pi/4}.



I used L'Hospital's rule above because it seems simpler.



So the answer is \int_{-\infty}^{\infty} \frac{x^2}{1+x^4} dx=2\pi i\frac{1}{4}\left(\frac{1-i}{\sqrt{2}}+\frac{-1-i}{\sqrt{2}}\right)=\frac{\sqrt{2}}2\pi.



Edit: Something's wrong with the integration by parts; we do not seem to get the right answer. It is a 4 in the bottom and not 1. But we can do a further substitution to get it right.



\int_{-\infty}^{\infty} \frac{x^2}{4+x^4} dx=\frac{1}{\sqrt{2}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2}}\frac{(\frac{x}{\sqrt{2}})^2}{1+(\frac{x}{\sqrt{2}})^4} dx=\frac{1}{\sqrt{2}}\int_{-\infty}^{\infty} \frac{x^2}{1+x^4} dx.




So the final answer is \int_{-\infty}^{\infty} \frac{x^6}{(4+x^4)^2} dx=\frac{3}{4}\int_{-\infty}^{\infty} \frac{x^2}{4+x^4} dx=\frac{3}{8}\pi. You also have to argue that you are limiting r\to\infty in \left[\frac{x^3}{4(4+x^4)}\right]_{-r}^r which is why it is 0.
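As a final numerical cross-check (my addition):

    # Verify the original integral equals 3*pi/8.
    import numpy as np
    from scipy.integrate import quad

    val, err = quad(lambda x: x**6 / (4 + x**4)**2, -np.inf, np.inf)
    print(val)            # ~1.1780972
    print(3 * np.pi / 8)  # 1.1780972450...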


combinatorics - 6 cards are distributed between 3 people among other cards, what is the probability that someone will have at least 3 cards AND a specific card



I play a card game called "Whist". In this game, all four players get 13 cards each, and you have to estimate how many tricks you will win with one trump color.



Here is my scenario:

In my hand, I have 7 hearts : Ace, King, Jack, 10, ... (the rest doesn't matter).
My question is : what is the probability that I will win those 7 cards if the trump is Heart? The only way for me to lose the third trick is if somebody has 3 hearts or more AND the queen. This is because if the player who has the Queen has only one or two cards he will lose it when I play my Ace and my King (people have to follow your color).



So my question is:



I know there are 6 remaining hearts in the game, split among the 3 other players; what is the probability that one player has 3 or more of them, including the Queen?



I tried to solve it doing a probability tree but it quickly became complicated so I wonder which formula we can use.



Thanks in advance for your response!




(PS: I know that if for example I have Ace, King, Jack, 10, 8, etc, I could lose one trick if someone has 5 hearts and the 9 of hearts but I think if I am not mistaken that the probability of this is quite low (correct me if I am wrong))


Answer



One of the 3 opponents has the queen of hearts.



The question is: what is the probability that this opponent has at least 2 other hearts.



\sum_{k=2}^5\frac{\binom{12}{k}\binom{26}{5-k}}{\binom{38}{5}}



Here we are using the hypergeometric distribution, and the term \frac{\binom{12}{k}\binom{26}{5-k}}{\binom{38}{5}} is the probability that the opponent in possession of the queen of hearts received exactly k other hearts.




Observe that this opponent received 12 cards besides the queen of hearts, and the other two opponents together received 26 cards. Among these 38 cards there are 5 hearts.



You think of the opponent in possession of the queen of hearts as drawing 12 balls out of an urn containing 5 blue balls and 33 red balls. Then the question is: what is the probability that he draws at least 2 blue balls?



Here balls are cards and blue balls are hearts.
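The same number can be checked with scipy's hypergeometric distribution (a sketch I added; the answer itself does not use scipy):

    # Probability the queen's holder has at least 2 of the 5 remaining hearts.
    from math import comb
    from scipy.stats import hypergeom

    p_formula = sum(comb(12, k) * comb(26, 5 - k) for k in range(2, 6)) / comb(38, 5)
    p_scipy = hypergeom(38, 5, 12).sf(1)  # 38 cards, 5 hearts, 12 drawn; P(X >= 2)
    print(p_formula, p_scipy)             # both ~0.512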


integration - Evaluating \int_0^\pi \frac{\cos n\theta}{1 -2r\cos\theta+r^2}\, d\theta




How do I go about evaluating the following integral:



\int_0^\pi \frac{\cos{n\theta}}{1 -2r\cos{\theta}+r^2} d\theta



for n \in \mathbb{N}, and r \in (0,1)?



My tack has been to rewrite, using the exponential form of \cos, as:



\int_0^\pi \frac{e^{in\theta} + e^{-in\theta}}{2(1 - re^{i\theta})(1-re^{-i\theta})} d\theta




and letting z = e^{i\theta} and d\theta = \frac{1}{iz}dz, we get



\frac{1}{2i}\int_{|z|=1} \frac{z^n + z^{-n}}{(1 - rz)(z-r)}\, dz



This would have a singularity on the contour when z= e^{i\theta} = r or when rz = re^{i \theta} = 1. Since r \in (0,1), neither of these can occur. So we just need to integrate the above, but I can't see any way to do this.



Is my approach valid or along the right lines, and how can I proceed?


Answer



The contour integration way:




Residue theorem reveals within a second that



\int_0^\pi \frac{\cos{n\theta}}{1 -2r\cos{\theta}+r^2} d\theta=\pi\frac{r^n}{1-r^2}.



Note: To make everything simpler (compared with what you tried in your post), it is enough to use e^{i n\theta} in the numerator.



The real method way:



Exploit carefully the well-known series result (which can be proved by real methods)




\sum_{n=1}^{\infty} p^n \sin(n x)=\frac{p\sin(x)}{1-2 p \cos(x)+p^2}, |p|<1
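A numerical spot-check of the closed form (my addition, with arbitrary n and r):

    # Check the closed form pi * r^n / (1 - r^2).
    import numpy as np
    from scipy.integrate import quad

    n, r = 3, 0.5
    val, err = quad(lambda t: np.cos(n * t) / (1 - 2 * r * np.cos(t) + r**2),
                    0, np.pi)
    print(val)                        # ~0.5235988
    print(np.pi * r**n / (1 - r**2))  # pi/6 = 0.5235987...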


calculus - Dirichlet integral.





I want to prove \displaystyle\int_0^{\infty} \frac{\sin x}x \,\mathrm{d}x = \frac \pi 2, and \displaystyle\int_0^{\infty} \frac{|\sin x|}x \,\mathrm{d}x = \infty.



I found a proof on Wikipedia, but I can't understand it. I haven't learned differential equations, Laplace transforms, or even inverse trigonometric functions.

So please explain it simply.


Answer



About the second integral: Set x_n = 2\pi n + \pi / 2. Since \sin(x_n) = 1 and
\sin is continuous in the vicinity of x_n, there exist \epsilon, \delta > 0 so that \sin(x) \ge 1 - \epsilon for |x-x_n| \le \delta. Thus we have:
\int_0^{+\infty} \frac{|\sin x|}{x} dx \ge 2\delta\sum_{n = 0}^{+\infty} \frac{1 - \epsilon}{x_n} = \frac{2\delta(1-\epsilon)}{2\pi}\sum_{n=0}^{+\infty} \frac{1}{n + 1/4} \rightarrow \infty


Difference between the support of a discrete random variable and the atoms of its probability distribution



I'm confused about the difference between the support of a discrete random variable and the atoms of its probability distribution.



Suppose I have a discrete random variable X defined on the probability space (\Omega, \mathcal{F}, P) with support \mathcal{A}\subseteq \mathbb{R}. I am told that the distribution of X is H with c atoms, a_1,\ldots,a_c.



Question: Is \{a_1,\ldots,a_c\}=\mathcal{A}?



My thought is the following: they are different because, while \mathcal{A} could also contain a point b at which P(X=b)=0, we have P(X=a_i)>0 for i=1,\ldots,c.




Is this correct?


Answer



Below I describe a discrete probability distribution whose support is bigger than its set of atoms.



A point x is a member of the "support" of the probability distribution of X precisely if for every open neighborhood G of x, we have \Pr(X\in G)>0.



Now suppose
X = \begin{cases} 1/2 & \text{with probability }1/2, \\[6pt] 1/3 \text{ or } 2/3 & \text{with equal probabilities totalling }1/4, \\[6pt] 1/4 \text{ or } 3/4 & \text{with equal probabilities totalling }1/8, \\ & (\text{We skipped $2/4$ since it's not in lowest terms.}) \\[6pt] 1/5,\ 2/5,\ 3/5,\text{ or }4/5 & \text{with equal probabilities totalling }1/16, \\[6pt] 1/6\text{ or }5/6 & \text{with equal probabilities totalling }1/32, \\[6pt] 1/7,\ 2/7,\ 3/7,\ 4/7,\ 5/7,\text{ or }6/7 & \text{with equal probabilities totalling }1/64, \\[6pt] \text{and so on.} \end{cases}




Then the probability distribution of X is discrete since the probabilities of the atoms add up to 1, i.e. 100 percent of the probability is in point masses. The set of atoms is just the set of all rational numbers between 0 and 1.



But the support is the whole set [0,1], which contains 0 and 1 (which are not atoms) and every irrational number between 0 and 1. The reason is that for every open interval about each such number, the probability that X falls in that interval is positive.


calculus - Limit of a Recursive Sequence

I'm having a really hard time finding the limit of a recursive sequence -



\begin{align*} &a(1)=2,\\ &a(2)=5,\\ &a(n+2)=\frac12 \cdot \big(a(n)+a(n+1)\big). \end{align*}




I proved that the sequence is made up of a monotonically increasing subsequence and a monotonically decreasing subsequence, and I proved that the limit of the difference of these subsequences is zero, so by Cantor's Lemma the above sequence does converge. I manually found that it converges to 4, but I can't seem to find any way to prove it.



Any help would be much appreciated!
Thank you.
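No answer is recorded here, so for what it's worth, a small numerical check (my addition, not a proof): iterating the recurrence suggests the limit 4, and one standard trick, which the code also verifies, is that the quantity a(n+1) + a(n)/2 is invariant (it always equals 6, so a limit L must satisfy L + L/2 = 6, i.e. L = 4).

    # Iterate a(n+2) = (a(n) + a(n+1))/2 from a(1)=2, a(2)=5.
    a, b = 2.0, 5.0
    for _ in range(60):
        assert abs(b + a / 2 - 6.0) < 1e-12  # invariant: a(n+1) + a(n)/2 = 6
        a, b = b, (a + b) / 2
    print(b)  # 4.0 (to machine precision)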

Saturday, 23 November 2013

functions - Proving a bijection with absolute value




let



g(x) = \frac{x}{1-|x|} for all x \in (-1,1)



a) show that g is a bijection from (-1,1) to \mathbb{R}



Here is what I have done.



Say g(x)=y, so y = \frac{x}{1-|x|}
iff y(1-|x|)=x.

Therefore, for each y, there exists an x such that g(x)=y. So g is a surjection.



Suppose g(x)= g(z) for some x,z \in (-1,1), then
\frac{x}{1-|x|}=\frac{z}{1-|z|}
iff
x-z=x|z|-z|x|



Then there are four cases: in the cases x,z>0 and x,z<0, obviously x=z; for the other two cases, I am only concerned with figuring out one, because the other obviously follows, but I have gotten to the point where
x-z=x(2z) for x>0, z<0. Am I correct thus far? Where do I go next? Thank you for any help.


Answer




For the injection part, is \dfrac{x}{1-|x|}=\dfrac{z}{1-|z|} really possible if x and z have different signs? One of the two terms is \geq0 and the other \leq 0.



For the surjection part, I would advise you to separate the cases where y is positive, and y is negative.



If y is positive then x must be positive, so |x| = x. You have y = \dfrac{x}{1-x}, so y(1-x)-x = 0, thus -x(1+y)+y = 0 and x = \dfrac{y}{1+y}. You have now found an x that works. Note that the final step is legit because, as y is positive, (1+y) can't be 0.



Don't forget the case y < 0 now !


Complex integration parametric form




Evaluate \int_{\gamma(0;1)} \frac{\cos z}{z}\,dz. Write it in parametric form and deduce that \int^{2\pi}_0 \cos(\cos\theta)\cosh(\sin\theta)\,d\theta=2\pi



By Cauchy's integral formula, \int_{\gamma(0;1)} \frac{\cos z}{z}dz=2\pi i(\cos0) = 2\pi i
, but could anyone help with parametrization and deducing the above integral?


Answer



The circle is a parametrization over \theta \in [0,2 \pi]. Now let z=e^{i \theta} \implies dz = i z d\theta. Also note that



\cos{z} = \cos{(\cos{\theta}+i \sin{\theta})} = \cos{(\cos{\theta})} \cos{(i \sin{\theta})} - \sin{(\cos{\theta})} \sin{(i \sin{\theta})}




Use the fact that \cos{i x} = \cosh{x} and \sin{i x} = i \sinh{x} to get



\cos{z} = \cos{(\cos{\theta})} \cosh{(\sin{\theta})} - i \sin{(\cos{\theta})} \sinh{( \sin{\theta})}



Thus,



\oint_{\gamma(0,1)} dz \frac{\cos{z}}{z} = i \int_0^{2 \pi} d\theta \left [ \cos{(\cos{\theta})} \cosh{(\sin{\theta})} - i \sin{(\cos{\theta})} \sinh{( \sin{\theta})} \right ] = i 2 \pi



Equating real and imaginary parts, the sought-after result follows.
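And a numerical confirmation of the deduced integral (my addition):

    # Check that the real integral equals 2*pi.
    import numpy as np
    from scipy.integrate import quad

    val, err = quad(lambda t: np.cos(np.cos(t)) * np.cosh(np.sin(t)),
                    0, 2 * np.pi)
    print(val, 2 * np.pi)  # both 6.2831853...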


Friday, 22 November 2013

calculus - Can \lim_{h\to 0}\frac{b^h - 1}{h} be solved without circular reasoning?

In many places I have read that \lim_{h\to 0}\frac{b^h - 1}{h} is by definition \ln(b). Does that mean that this is unsolvable without using that fact or a related/derived one?



I can of course solve it with L'Hospital's rule, but that uses the derivative of the exponential function, which is exactly what I want to derive by solving this limit.




Since the derivative of the logarithm can be derived from the derivative of the exponential using the fact that they are inverses, deferring this limit to something that can be solved using the derivative of log seems like cheating as well.



Others have asked this before, but all the "non-L'Hospital" solutions seem to defer it to some other limit that they claim is obvious. For example, two solutions from Proving that \lim_{h\to 0 } \frac{b^{h}-1}{h} = \ln{b} use \lim_{x\to 0}\frac{\log_a(1+x)}{x}=\log_a e and \lim_{x\to 0}\frac{e^x-1}x=1,
neither of which is more obvious (to me) than the original.



On that same page, the first of the above two is "derived" using a Taylor expansion (if I am not mistaken), which (if I remember correctly) is based on derivatives (in this case of the logarithm), which is related to the derivative of exp as I mentioned above. So this seems to be circular reasoning too (a very large circle though).



So is this limit not solvable at all without using something that is based on what it is meant to prove? Can this only be defined to equal \ln(b), and numerically determined to some precision?

calculus - Finding continuity and differentiability of a multivariate function



Determine whether the following functions are differentiable and continuous, and whether their partial derivatives exist, at the point (0,0):



(a) f(x, y) = \sin x \sin(x + y) \sin(x − y)



(b)f(x,y)=\sqrt{|xy|}



(c)f(x, y) = 1 − \sin\sqrt{x^2 + y^2}




(d) f(x,y) = \begin{cases} \dfrac{xy}{x^2+y^2} & \text{if $x^2+y^2>0 $} \\ 0 & \text{if $x=y=0$} \end{cases}



(e) f(x,y) = \begin{cases} 1 & \text{if $x y \ne 0$} \\ 0 & \text{if $xy=0$} \end{cases}



(f)f(x,y) = \begin{cases} \dfrac{x^2-y^2}{x^2+y^2} & \text{if $x^2+y^2>0$} \\ 0 & \text{if $x=y=0$} \end{cases}



My try:




For (a), using the definition of the derivative for a multivariate function, the limit tends to 0, hence it's differentiable and its partial derivative exists and it's continuous.



For (b) I said that it is not differentiable, as, using the definition of the derivative for a multivariate function, the limit does not tend to 0. It is continuous, though, as the limit of the function f(x,y) tends to 0. Determining whether its partial derivatives exist is tricky because of the modulus sign in the function, and I'm unsure whether the modulus is differentiable in this case.



For part (c), it should be continuous but not differentiable at (0,0) because its partial derivatives do not exist at (0,0). As to why the partial derivatives do not exist: say we want to find f_x; we let y=0, and the expression f(x,y) becomes 1-\sin(|x|), which is not differentiable at 0, hence the partial derivatives cannot exist at (0,0).



For (d), this question is also tricky: although initially I thought its partial derivatives exist, now I think otherwise. If I want to differentiate the function with respect to x, for example, I would sub in the value y=0, making the numerator zero, and hence assume the derivative is 0. However, on closer look, there is still the denominator of x^2, and if x^2=0 the denominator becomes 0; since 0/0 is undefined, the partial derivatives do not exist. As for continuity, it is not continuous and hence not differentiable.



For (e): not differentiable, discontinuous, partial derivatives defined
(because "not continuous" means it's not differentiable; but I'm unsure of the partial derivatives portion, because it appears that the partial derivatives are 0, yet I also have the feeling that they do not exist.)




For (f) Not differentiable, Continuous, Partial derivatives defined. This question appears to be similar to question (d)



I have already attempted these questions many times, but I keep answering them incorrectly. I know that I must be missing out on some parts, especially since these are tricky questions which are not as simple as they might seem. Could anyone help me please? Thanks!


Answer



(a) It is a composition of differentiable functions, so it is differentiable and continuous, and the partial derivatives exist at (0,0).



(b) It is continuous, and we have that the partial derivatives are
f_x(0,0)=\displaystyle\lim_{t\to 0}\frac{f(t,0)-f(0,0)}{t}= \lim_{t\to 0}\frac{\sqrt{|t\cdot 0|}-\sqrt{|0\cdot0|}}{t}=0, and

f_y(0,0)=\displaystyle\lim_{t\to 0}\frac{f(0,t)-f(0,0)}{t}= \lim_{t\to 0}\frac{\sqrt{|0\cdot t|}-\sqrt{|0\cdot0|}}{t}=0. However, it is not differentiable, since \lim_{t\to0^+}\frac{f(t,t)-f(0,0)}{t}=1 while \lim_{t\to0^-}\frac{f(t,t)-f(0,0)}{t}=-1.



(c) It is continuous because it is a composition of continuous functions, but

f_x(0,0)=\displaystyle\lim_{t\to 0^+}\frac{f(t,0)-f(0,0)}{t}= \lim_{t\to 0^+}\frac{1 - \sin\sqrt{t^2 + 0^2}- (1 - \sin\sqrt{0})}{t}=-1, whereas
f_x(0,0)=\displaystyle\lim_{t\to 0^-}\frac{f(t,0)-f(0,0)}{t}= \lim_{t\to 0^-}\frac{1 - \sin\sqrt{t^2 + 0^2}- (1 - \sin\sqrt{0})}{t}=1,

so the partial derivative f_x is not defined at (0,0); analogously for f_y(0,0), and both do not exist.
Then it is not differentiable, because for a differentiable function the limits above would have to exist.



(d) It is not continuous, because for t>0, (t,t)\to (0,0) as t\to 0, but f(t,t)=1/2\neq 0=f(0,0). And it is not differentiable since it is not continuous. However,
f_x(0,0)=\displaystyle\lim_{t\to 0}\frac{f(t,0)-f(0,0)}{t}= \lim_{t\to 0}\frac{\dfrac{t\cdot 0}{t^2+0^2}-0}{t}=0 and
f_y(0,0)=\displaystyle\lim_{t\to 0}\frac{f(0,t)-f(0,0)}{t}= \lim_{t\to 0}\frac{\dfrac{0\cdot t}{0^2+t^2}-0}{t}=0.



(e) It is clearly not continuous, hence not differentiable, at (0,0), but

f_x(0,0)=\displaystyle\lim_{t\to0}\frac{f(t,0)-f(0,0)}{t}=0 and
f_y(0,0)=\displaystyle\lim_{t\to0}\frac{f(0,t)-f(0,0)}{t}=0, so both partial derivatives are defined at (0,0).



(f) It is not continuous, since \lim_{t\to 0}f(2t,t)=\lim_{t\to0}\dfrac{4t^2-t^2}{4t^2+t^2}=\frac{3}{5}\neq f(0,0), hence it is not differentiable at (0,0).
f_x(0,0)=\displaystyle\lim_{t\to0^+}\frac{f(t,0)-f(0,0)}{t} =\lim_{t\to 0^+}\frac{\dfrac{t^2-0^2}{t^2+0^2}-0}{t}=+\infty, and analogously for f_y(0,0); both are not defined at (0,0).


sequences and series - Prove \lim_{n \rightarrow \infty} \frac{b^n}{n^k}=\infty












I have this sequence with b>1 and k a natural number, which diverges:
\lim_{n \rightarrow \infty} \frac{b^n}{n^k}=\infty
I need to prove this with what I have learnt so far from my textbook; my simple attempt is this:



Since n^2\leq2^n for n>3, I said b^n\geq n^k, so it diverges. Is that right?



I am asking here not just to get the right answer, but to learn more wonderful steps and properties.


Answer



\lim_{n \rightarrow \infty} \frac{b^n}{n^k}=\infty




You can use the root test, too: \lim_{ n\to \infty}\sqrt[\large n]{\frac{b^n}{n^k}} = b>1



Therefore, the limit diverges.






The root test takes the \lim of the n-th root of the term: \lim_{n \to \infty} \sqrt[\large n]{|a_n|} = \alpha.



If \alpha < 1 the sum/limit converges.




If \alpha > 1 the sum/limit diverges.



If \alpha = 1, the root test is inconclusive.


Integration by substitution, but I cannot see how

I am trying to find an indefinite integral. The question suggests that it can be solved with integration by substitution, but I cannot see how. Multiplying out the brackets and integrating gives an eighth-degree result. Can anyone help here?




\int \left(x+4\right)\left(\frac{1}{3}x+8\right)^6\:dx
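No answer is recorded here, but for what it's worth, one substitution that works is u = \frac13 x + 8 (so x = 3u - 24, x + 4 = 3u - 20, and dx = 3\,du), which turns the integrand into a sum of two powers of u. A sympy sketch (my addition) confirms this:

    # The substitution u = x/3 + 8 turns the integrand into powers of u.
    import sympy as sp

    x, u = sp.symbols('x u')
    original = (x + 4) * (x / 3 + 8)**6
    transformed = sp.expand((3*u - 20) * u**6 * 3)     # integrand in terms of u
    print(transformed)                                 # 9*u**7 - 60*u**6
    F = sp.integrate(transformed, u).subs(u, x/3 + 8)  # back-substitute
    print(sp.expand(sp.diff(F, x) - original))         # 0, so F is an antiderivative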

Discrete Mathematics - Direct Proof

I am given



Prove the statement: for all integers n, if 5n is odd, then n is odd.




We know that this is not true for n=4 simply by building a table for values for n and 5n...



Is that all I need to do? I've shown a table up to n=4 and, by contradiction of the original statement, n is even and 5n is even.

fake proofs - Why is i^3 (the complex number "i") equal to -i instead of i?




i^3=iii=\sqrt{-1}\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)(-1)}=\sqrt{-1}=i



Please take a look at the equation above. What am I doing wrong, given that i^3 should be -i, not i?


Answer



We cannot say that \sqrt{a}\sqrt{b}=\sqrt{ab} for negative a and b. If this were true, then 1=\sqrt{1}=\sqrt{\left(-1\right)\cdot\left(-1\right)} = \sqrt{-1}\sqrt{-1}=i\cdot i=-1. Since this is false, we have to say that \sqrt{a}\sqrt{b}\neq\sqrt{ab} in general when we extend it to accept negative numbers.
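The same point, checked mechanically in Python (my addition); cmath takes principal square roots:

    # sqrt(a)*sqrt(b) != sqrt(a*b) once negative numbers are involved.
    import cmath

    print(cmath.sqrt(-1) * cmath.sqrt(-1))  # (-1+0j), while sqrt(1) = 1
    print((1j)**3)                          # (-0-1j), i.e. i^3 = -i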


Thursday, 21 November 2013

Complex Functions Concept Questions

My Engineering Mathematics teacher has a very novel method of teaching. While he thinks that it's all well and good to learn the analytical side of math, he stresses the theory more than the application. Thus he gave us these questions, which we must answer without a single equation. It's new and I sure enjoy the concept, but I'm not sure if I got it right. Could someone check my answers for me? Thanks!



1. Why there are n solutions to the nth root of a complex number?




Because if the solution is some number w to the power of n (a series of natural numbers) then for each value of w there is one and only one solution value.



2. If a complex function is analytic, the function needs to satisfy the Cauchy-Riemann equation. Outline the proof process of the necessary part of this argument.



If a complex function is to be considered analytic, it must have a derivative at all points in its domain. Thus, if we assume a function has a derivative, then, according to the fundamental theorem of calculus, it can be expressed in terms of a limit. We can then divide the limit into real and imaginary components, since it is on a complex plane. And since the derivative exists, the limit also exists. This means that we can take the limit in any direction. If we approach it in a horizontal direction and in a vertical direction, we get two different equations for the derivative of the function. And since they are the meant to be equal, if we equate the real and imaginary terms of both equations, we get the two Cauchy-Riemann equations, thus showing that for a function to be analytic, it must satisfy the Cauchy-Riemann equations. This is the necessary condition for a function to be considered analytic.



3. Outline the proof process of the sufficient part of the argument in problem 2.



The sufficient part of the argument outlined in problem 13-2 is that the partial derivative of the real part u of the function with respect to x is equal to the partial derivative of the imaginary part v of the function with respect to y. Additionally, the partial derivative of u with respect to y should be equal to the negative partial derivative of v with respect to x. Also, all partial derivatives of the function must be continuous. If we supply these substitutions in the Taylor series representation of the function and rearrange, then we can get the fundamental calculus definition of the derivative of the function with respect to a path z. If we substituted so that all partial derivatives are in respect of x, then we get the derivative of the function in the x-direction. The opposite is true if we substituted so that all partial derivatives are in respect of y. And since all the partial derivatives are continuous, the derivative of the function in both cases must exist. Thus, the function is differentiable, thus proving it’s analytic quality. Note that although the Cauchy-Riemann equations are satisfied, it is not compulsory that the function is analytic until the partial derivatives are continuous.




4. Describe the overall procedure of defining various complex functions: exponential, trigonometric, hyperbolic, logarithm, and general power. Discuss differences and similarities of the complex functions compared to real functions.



From what I can observe, the general method of defining a complex function is to replace the x in the function’s real counterpart with a complex number. This method, armed with Euler’s equation, it becomes a simple matter or algebraic manipulation to get the complex functions. The complex exponential function is very similar to the real exponential function in that its derivative is itself and is even equal to the real function if the complex number has no imaginary part. Complex trigonometric functions, along with the hyperbolic functions, are also similar in that if the complex number has no imaginary part, the complex counterparts work exactly the same. Also, the complex trigonometric functions and the hyperbolic functions are entire, have the same derivative as their real counterparts and even hold the same general formulas as the real counterparts. However, the complex logarithmic and general power functions vary from their real counterparts. This is due to the fact that a complex exponential function has an infinite number of solutions, not allowing us to define a complex logarithmic function as we would a real logarithmic function. Thus, complex logarithmic functions differ in that while positive real values yield the same results, negative numbers do not. The same concept applies to complex general power functions (except that in this case the complex logarithmic function has infinitely many solutions) and thus some exponential laws cannot be carried out with complex power functions.

algebra precalculus - Factorising equation with power 3



When you have a polynomial, for example x^2 + 10x + 25, and you are asked to factorise it, I know that two numbers that multiply to make 25 and add to make 10 are 5 and 5.




So it becomes (x + 5) (x + 5).



However, when you have a polynomial like x^3 - 4x^2 + 3x, what is a rule you can use for this? I know that this becomes x(x-1)(x-3) = 0 to find x, but I would not know how to do this and would put in unnecessary effort. So is there a rule to factorise these types of polynomials?


Answer



In general, the answer is yes, but extremely complicated. For most problems you'd face, the rational roots theorem should suffice if you can't see a clear way to factor.



In this case, we see that x=0 is a root, thus



x^3 - 4 x^2 + 3 x = x(x^2 - 4 x + 3)




And the rest is a quadratic.
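If a computer algebra system is available, it can confirm such factorizations; a small sympy sketch (my addition):

    # Factor the cubic symbolically.
    import sympy as sp

    x = sp.symbols('x')
    print(sp.factor(x**3 - 4*x**2 + 3*x))  # x*(x - 3)*(x - 1)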


linear algebra - Change of basis matrix - part of a proof



I'm trying to understand a proof from Comprehensive Introduction to Linear Algebra (page 244)



I can't really figure out what steps have been taken to get from eq. 1. to eq. 2. It's just overcomplicated there. Different proofs of that theorem are very easy and obvious for me, multiplication of matrix by its inverse produces an identity matrix etc. But this particular case here is causing problems for me. Now I'm not trying to understand why this theorem is true, because I already do. The thing I want is to understand step from eq. 1. to eq. 2. Thank you for help.




Here's what I'm talking about


Answer



The collection of equations



a_i = \sum_{j=1}^np_{ij}b_j\qquad i = 1, \dots, n



is equivalent to the matrix equation a = Pb where a^T = (a_1, \dots, a_n), b^T = (b_1, \dots, b_n) and P is the n\times n matrix with (i, j)^{\text{th}} element p_{ij}.



If P is invertible, then we can rewrite this as b = P^{-1}a. Denoting the (i, j)^{th} element of P^{-1} by p^{-1}_{ij} (not great notation in my opinion), the matrix equation b = P^{-1}a is equivalent to the collection of equations




b_i = \sum_{j=1}^np^{-1}_{ij}a_j\qquad i = 1, \dots, n.



Swapping the roles of the indicies i and j, you get the following collection of equations:



b_j = \sum_{i=1}^np^{-1}_{ji}a_i\qquad j = 1, \dots, n.


limits - Is the derivative of an exponential function a^x always greater than the derivative of a polynomial x^n as x approaches infinity?



with n and a being any constants > than 1.



I have tried taking \lim\limits_{x \to \infty} a^x / x^n, and L'Hopital's rule is telling me that x^n can always be reduced to a constant with multiple iterations, so the limit is always infinity, and a^x always grows faster than x^n.


Answer



Your argument using L'Hopital's rule is correct; you just need to add the condition a>1.



For a>1 it's true that a^x is much larger than x^n. To see this, rewrite the quotient as an exponential:

\lim_{x\to \infty} \frac{a^x}{x^n}=\lim_{x\to \infty} e^{\displaystyle x\ln(a)-n\ln(x)} =e^{+\infty}=+\infty



because linear functions eventually dominate logarithmic functions, so the exponent tends to +\infty.


arithmetic - (-32)^{\frac{2}{10}}\neq(-32)^{\frac{1}{5}}?

Is exponentiation by rational numbers defined only for simple fractions?




(-32)^{\frac{2}{10}}=\sqrt[10]{(-32)^2}=\sqrt[10]{1024}=\pm2 (and 8 other complex roots)



(-32)^{\frac{1}{5}}=\sqrt[5]{(-32)^1}=\sqrt[5]{-32}=-2 (and 4 other complex roots)



How do you settle this conflict?

elementary number theory - Compute gcd(a+b, 2a+3b) if gcd(a,b) = 1

A question from a problem set is asking to compute the value of \gcd(a+b, 2a+3b) if \gcd(a,b) = 1, or if it isn't possible, prove why.



Here's how I ended up doing it:



\gcd(a,b) = 1 implies that for some integers x, and y, that ax+by = 1.



Let d = gcd(a+b, 2a+3b). This implies:



\implies d \mid 2(2a+3b) - 4(a+b) = 2b\cdots (1)




\implies d \mid 6(a+b) - 2(2a+3b) = 2a\cdots (2)



Statement (1) implies that d divides 2by for some integer y



Statement (2) implies that d divides 2ax for some integer x



This implies that d divides 2(ax+by) = 2, which implies:



\gcd(a+b, 2a+3b) =\text{ either 1 or 2}




Thus the result is not generally determinable as it takes 2 possible values.



Are my assumptions and logic correct? If not, where are the errors?



Thank you!

calculus - Prove Bernoulli inequality if h>-1




Qi) Prove Bernoulli's inequality If h> -1,



then (1+h)^n \geq 1+nh



Qii) why is this Trivial is h>0



Something I have always been lucky with is having a lot of intuition for most theorems and lemmas that I see stated, but this one is a bit different. For one, looking at the equality sign makes my head hurt. I would like to hope that there is only equality when h=0, or when n=1, or when n=2 and h = - \dfrac {1} {2}, but it really makes my head hurt thinking about greater values of n where h is negative. I feel like there should be one value h that makes it true for each n \in \mathbb{N}. Can anyone tell if I am missing values for n and h that result in an equality occurring? Perhaps it wouldn't even be all that difficult to prove their existence using contradiction and induction; it may even be possible to define them using the binomial theorem.



As for Qii), this is fairly obvious: expanding the left side we get \text{crap} + nh + 1, but unlike when h<0, the sum of the crap is > 0, so subtracting 1+nh from both sides we get \text{crap} > 0 \geq 0, which is obviously true.




Im not a big fan of approaching things inductively unless i absolutely have to especially on an inequality.



Although I wouldn't mind someone pointing out why my induction proof went a little wrong; I only made an attempt at proving it inductively below so you guys won't think I am lazy / so someone won't write down a proof by induction.
If it's possible, could someone show me how to prove Qi) without using induction?



Proving Qi) inductively is fairly easy. Base case n=1: we have (1+h) \geq (1+h), thus the base case is true. Assume it is true for k; we want to show it is true for (k+1).



Now we want to show that (1+h) (1+h)^k \geq 1+(k+1)h is true.



This reduces to (1+h)(1+h)^k \geq (1+hk) + h. Using the case k, we can reduce this to the inequality (1+h) \geq h, which is always true for all h. That would seem to imply the inequality holds for all n\in \mathbb{N} and \forall h \in \mathbb{R}, but that's actually not true (it would only be safe if n were even): the reduction multiplies the case k through by (1+h), which preserves the inequality only when (1+h) is positive; if (1+h) < 0 the left side can be bigger in magnitude but actually negative. Luckily for me, h > -1 is a condition.



Answer



I'll assume you mean n is an integer.



Here's how one can easily go about a proof by induction. The proof for n=1 is obvious.



Assume the case is established for n then, (1+h)^{n+1}=(1+h)^n(1+h)\geq(1+nh)(1+h)=1+(n+1)h+nh^2\geq 1+(n+1)h



The hypothesis h\geq-1 is used in the first \geq, where we need the factor (1+h) to be nonnegative.



There is a way I know to prove it without induction that extends to real numbers instead of only integers.




Let f(x)=x^\alpha-\alpha x+\alpha-1.



f'(x)=\alpha\left(x^{\alpha-1}-1\right)



f'(x)=0 at x=1, and for \alpha<0 or \alpha>1 it is straightforward to check that f(x) has a local minimum there (for x>0)



At x=1, f(1)=0, so f(x)\geq0 for all x>0



Set x=(1+h)




Now, you have:



(1+h)^\alpha-\alpha-\alpha h+\alpha-1=(1+h)^\alpha-\alpha h-1\geq 0 and we can deduce Bernoulli's Inequality as a special case.
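A quick numerical spot-check of the inequality (a Python sketch; the grid of test values is my own choice):

    # Check (1+h)^a >= 1 + a*h on a small grid with a >= 1 and h > -1.
    for a in (1, 2, 3.5, 7):
        for h in (-0.9, -0.5, 0.0, 0.3, 2.0):
            assert (1 + h)**a >= 1 + a*h - 1e-12, (a, h)
    print("inequality holds on the whole grid")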


number systems - Proof that every repeating decimal is rational



Wikipedia claims that every repeating decimal represents a rational number.



According to the following definition, how can we prove that fact?



Definition: A number is rational if it can be written as \frac{p}{q}, where p and q are integers and q \neq 0.


Answer




Suppose that the decimal is x=a.d_1d_2\ldots d_m\overline{d_{m+1}\dots d_{m+p}}, where the d_k are digits, a is the integer part of the number, and the vinculum (overline) indicates the repeating part of the decimal. Then



10^mx=10^ma+d_1d_2\dots d_m.\overline{d_{m+1}\dots d_{m+p}}\;,\tag{1} and



10^{m+p}x=10^{m+p}a+d_1d_2\dots d_md_{m+1}\dots d_{m+p}.\overline{d_{m+1}\dots d_{m+p}}\tag{2}\;.



Subtract (1) from (2):



10^{m+p}x-10^mx=(10^{m+p}a+d_1d_2\dots d_md_{m+1}\dots d_{m+p})-(10^ma+d_1d_2\dots d_m)\;.\tag{3}




The righthand side of (3) is the difference of two integers, so it’s an integer; call it N. The lefthand side is \left(10^{m+p}-10^m\right)x, so



x=\frac{N}{10^{m+p}-10^m}=\frac{N}{10^m(10^p-1)}\;,



a quotient of two integers.



Example: x=2.34\overline{567}. Then 100x=234.\overline{567} and 100000x=234567.\overline{567}, so



99900x=100000x-100x=234567-234=234333\;, and




x=\frac{234333}{99900}=\frac{26037}{11100}\;.
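One can confirm the worked example mechanically (a Python sketch using the standard fractions module); note that Fraction reduces the result further, to lowest terms:

    from fractions import Fraction

    # N and the denominator come straight from the subtraction step above.
    N = 234567 - 234                       # = 234333
    x = Fraction(N, 10**2 * (10**3 - 1))   # m = 2, p = 3
    print(x)         # 8679/3700, the fully reduced form of 234333/99900
    print(float(x))  # 2.3456756756756757...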


calculus - How can something be proved unsolvable?




My question specifically deals with certain real indefinite integrals such as \int e^{-x^2} {dx} \ \ \text{and} \ \ \int \sqrt{1+x^3} {dx} Books and articles online have only ever said that these cannot be expressed in terms of elementary functions. I was wondering how this could be proved? I know this is a naive way of thinking, but it seems to me like these are just unsolved problems, not unsolvable ones.


Answer



The trick is to make precise the meaning of "elementary": essentially, these are functions, which are expressible as finite combinations of polynomials, exponentials and logarithms. It is then possible to show (by algebraically tedious disposition of cases, though not necessarily invoking much of differential Galois theory - see e.g. Rosenthal's paper on the Liouville-Ostrowski theorem) that functions admitting elementary derivatives can always be written as the sum of a simple derivative and a linear combination of logarithmic derivatives. One consequence of this is the notable criterion that a (real or complex) function of the form x\mapsto f(x)e^{g(x)}, where f, g are rational functions, admits an elementary antiderivative in the above sense if and only if the differential equation y'+g'y=f admits a rational solution. The problem of showing that e^{x^2} and the lot have no elementary indefinite integrals is then reduced to simple algebra. In any case, this isn't an unsolved problem and there is not much mystery to it once you've seen the material.
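A computer algebra system makes the phenomenon tangible: SymPy's integrator can only express the antiderivative of e^{-x^2} by introducing a new, non-elementary name (erf) for it. (A sketch; SymPy is my assumption, not part of the answer.)

    import sympy as sp

    x = sp.symbols('x')
    # The result is written with erf, which is *not* an elementary function.
    print(sp.integrate(sp.exp(-x**2), x))  # sqrt(pi)*erf(x)/2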


Wednesday, 20 November 2013

Locating possible complex solutions to an equation involving a square root



Note after reviewing answers: this question illustrates in a non-trivial way how the choice of how to compute the square root in the complex plane can make a real difference to the answer you get.




In this question finding all the solutions came down to solving:



ab-1=\sqrt{a^2+b^2+1} subject to (a+1)(b+1)=1



The condition implies a^2+b^2+1=(ab-1)^2, but this leaves open the possibility of \sqrt {a^2+b^2+1}=1-ab rather than ab-1.



The solution and discussion on the original question was getting rather long, and dealt mainly with real roots, so I've extracted this sub-question on complex roots.



Could someone please explain what constraints can be put on the complex solutions of this equation - the values of a and b - which would be consistent with the square root operation. It would help if answers were elementary and didn't assume a prior knowledge of complex square roots.



Answer



Every nonzero complex number z has two "square roots". You might call one \sqrt{z} and the other -\sqrt{z}, but which is which? A particular choice (for all z in some region of the complex plane) is called a branch of the function \sqrt{z}. There is no single branch that is continuous everywhere: if you start at some point with a particular choice of \sqrt{z} and travel in a loop around 0, keeping \sqrt{z} continuous along your path, then when you come back to the start \sqrt{z} will have the negative of its original value. A curve \Gamma in whose complement your branch is continuous is called a branch cut.



One popular choice, called the "principal branch", is to make the real part of \sqrt{z} always \ge 0. The branch cut in this case is the negative real axis.
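Python's cmath.sqrt implements exactly this principal branch (a small illustration of mine, not part of the original answer); note the jump across the negative real axis:

    import cmath

    # Principal branch: the result always has nonnegative real part.
    for z in (4, -4, 1j, -1 + 0.0001j, -1 - 0.0001j):
        print(z, cmath.sqrt(z))
    # The last two inputs straddle the branch cut: the results are near +i
    # and -i respectively, showing the discontinuity along the cut.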


calculus - Integrating \int_0^1 \sqrt{\frac{\log(1/t)}{t}}\,\mathrm{d}t = \sqrt{2\pi}



I'd like to evaluate the integral



\int_0^1 \sqrt{\frac{\log(1/t)}{t}} \,\mathrm{d}t.



I know that the value is \sqrt{2\pi} but I'm not sure how to get there.




I've tried a substitution of u = \log(1/t), which transforms the integral into



\int_0^\infty \sqrt{u e^{-u}} \,\mathrm{d}u.



This seems easier to deal with. But where do I go from here? I'm not sure.


Answer



The function \Gamma(x) is defined as



\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} \,\mathrm{d}t.




The general integral on the left below can be transformed into the gamma function with the substitution u = bt, like so:



\int_0^\infty t^{x-1} e^{-bt} \,\mathrm{d}t = \int_0^\infty \left( \frac{u}{b} \right)^{x-1} \frac{e^{-u}}{b} \,\mathrm{d}u = b^{-x} \Gamma(x).



The integral in the question is of this form with x = \tfrac32 and b = \tfrac12, so it equals \left(\tfrac12\right)^{-3/2}\,\Gamma\!\left(\tfrac32\right) = 2\sqrt{2}\cdot\tfrac{\sqrt{\pi}}{2} = \sqrt{2\pi}, the desired result.
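A numerical cross-check of the original integral (a sketch assuming SciPy is available; not part of the answer):

    import numpy as np
    from scipy.integrate import quad

    # The integrable singularity at t = 0 is handled fine by quad.
    val, err = quad(lambda t: np.sqrt(np.log(1/t) / t), 0, 1)
    print(val, np.sqrt(2*np.pi))  # both ~ 2.5066282746...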


real analysis - \lim_{p\rightarrow\infty}\|x\|_p = \|x\|_\infty given \|x\|_\infty = \max(|x_1|,|x_2|)

I have seen the proof done different ways, but none using the norm definitions provided.



Given:
||x||_p = (|x_1|^p+|x_2|^p)^{1/p} and ||x||_\infty = max(|x_1|,|x_2|)



Prove:
\lim_{p\rightarrow\infty}\|x\|_p = \|x\|_\infty




I have looked at the similar questions:
The l^{\infty}-norm is equal to the limit of the l^{p}-norms, and Limit of \|x\|_p as p\rightarrow\infty, but they both seem to use quite different approaches (we have not covered homogeneity, so that is out of the question, and the other uses a different definition for the infinity norm).
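Not a proof, but a numerical illustration of the statement (a Python sketch with my own sample vector):

    # ||x||_p for increasing p, with x = (3, 4); the values decrease toward
    # max(|x1|, |x2|) = 4.
    x1, x2 = 3.0, 4.0
    for p in (1, 2, 4, 10, 50, 200):
        print(p, (abs(x1)**p + abs(x2)**p) ** (1/p))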

sequences and series - Different methods to compute \sum\limits_{k=1}^\infty \frac{1}{k^2} (Basel problem)



As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem)
\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.
However, Euler was Euler and he gave other proofs.



I believe many of you know some nice proofs of this, can you please share it with us?



Answer



OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from the book" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9
(EDIT: ...which is actually the proof that I read in Aigner & Ziegler).



When 0 < x < \pi/2 we have 0<\sin x < x < \tan x and thus
\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.
Note that 1/\tan^2 x = 1/\sin^2 x - 1.
Split the interval (0,\pi/2) into 2^n equal parts, and sum
the inequality over the (inner) "gridpoints" x_k=(\pi/2) \cdot (k/2^n):
\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.

Denoting the sum on the right-hand side by S_n, we can write this as
S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.



Although S_n looks like a complicated sum, it can actually be computed fairly easily. To begin with,
\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.
Therefore, if we pair up the terms in the sum S_n except the midpoint \pi/4 (take the point x_k in the left half of the interval (0,\pi/2) together with the point \pi/2-x_k in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into 2^{n-1} parts. And the midpoint \pi/4 contributes with 1/\sin^2(\pi/4)=2 to the sum. In short,
S_n = 4 S_{n-1} + 2.
Since S_1=2, the solution of this recurrence is
S_n = \frac{2(4^n-1)}{3}.
(For example like this: the particular (constant) solution (S_p)_n = -2/3 plus the general solution to the homogeneous equation (S_h)_n = A \cdot 4^n, with the constant A determined by the initial condition S_1=(S_p)_1+(S_h)_1=2.)




We now have
\frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.
Multiply by \pi^2/4^{n+1} and let n\to\infty. This squeezes the partial sums between two sequences both tending to \pi^2/6. Voilà!


multivariable calculus - Change linear plot to 100% plot in Wolfram Alpha

Recently I have used this input for WolframAlpha:



Plot (formula1), (formula2), (formula3), {a, 0, 50}



It generates a "Linear Plot" like the picture on the left. Is there a way to make it generate a "100%" plot like the picture on the right, so that the curves are summed and each formula is shown as the percentage it contributes to the total?

[Figure: mock-up of a linear plot (left) versus a 100% stacked plot (right), generated using Microsoft Word 2013.]



The picture isn't from WolframAlpha; I used other software to generate the mock-up.



I tried the region function, but apparently it doesn't work with 3 formulas.
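If stepping outside WolframAlpha is acceptable, a normalized stacked plot is easy to build elsewhere; here is a matplotlib sketch (the three formulas are placeholders of mine, not the asker's):

    import numpy as np
    import matplotlib.pyplot as plt

    a = np.linspace(0.1, 50, 500)
    f1, f2, f3 = a, np.sqrt(a), np.log1p(a)   # placeholder formulas
    total = f1 + f2 + f3

    # Each curve is drawn as its percentage share of the running total.
    plt.stackplot(a, 100*f1/total, 100*f2/total, 100*f3/total,
                  labels=["formula1", "formula2", "formula3"])
    plt.ylabel("% of total")
    plt.legend()
    plt.show()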

Tuesday, 19 November 2013

Mathematical induction inequality involving sines

Let $0


The inequality holds true for n=1. I assumed that it holds true for n=k. But I was unable to prove that it holds true for n=k+1. Please help me.

linear algebra - How to prove two diagonalizable matrices are similar iff they have the same eigenvalues with the same multiplicities.

\Rightarrow: To start with, according to the similarity theorem, similar matrices have the same eigenvalues, with the same multiplicities.



\Leftarrow: If two diagonalizable matrices have the same eigenvalues with the same multiplicities, they are similar to the same diagonal matrix D, and since similarity is an equivalence relation, they are similar to each other.



However, for the second part, since we are not sure that \dim(E_{\lambda_n}) equals the multiplicity of \lambda_n, how do we make sure that the two matrices are diagonalizable? If diagonalizability is not assured, my second part is wrong.



Could you offer me any hint?

calculus - Show that \lim_{n\to\infty}\frac{\ln(n!)}{n} = +\infty





Show that
\lim_{n\to\infty}\frac{\ln(n!)}{n} = +\infty




The only way I've been able to show this is by using Stirling's approximation:
n! \sim\sqrt{2\pi n}\left(\frac{n}{e}\right)^n




Let:
\begin{cases} x_n = \frac{\ln(n!)}{n}\\ n \in \Bbb N \end{cases}



So we may rewrite x_n as:

x_n \sim \frac{\ln(2\pi n)}{2n} + \frac{n\ln(\frac{n}{e})}{n}



Now using the fact that \lim(x_n + y_n) = \lim x_n + \lim y_n :
\lim_{n\to\infty}x_n = \lim_{n\to\infty}\frac{\ln(2\pi n)}{2n} + \lim_{n\to\infty}\frac{n\ln(\frac{n}{e})}{n} = 0 + \infty = +\infty



I'm looking for another way to show this, since Stirling's approximation has not yet been introduced at the point I took the exercise from.



Answer



Another way to show \lim_{n\to\infty}\frac{\ln(n!)}{n}=\infty is to consider the following property of logarithms: \log(n!)=\log(n)+\log(n-1)+\cdots+\log(2)>\frac{n}{2}\log\left(\frac n2\right), since each of the \frac n2 largest factors contributes at least \log\left(\frac n2\right). Now \frac{\log (n!)}{n}>\frac{\log(n/2)}{2}, and as n\to\infty this clearly diverges to +\infty.
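The bound is easy to see numerically (a sketch of mine; math.lgamma computes \ln(n!)=\ln\Gamma(n+1) without overflow):

    import math

    # log(n!)/n versus the lower bound log(n/2)/2 from the answer.
    for n in (10, 10**3, 10**6):
        print(n, math.lgamma(n + 1) / n, math.log(n / 2) / 2)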


random - How to get the probability of drawing from a box only items that were not picked before?



I have a box which contains n unused items. From them I randomly pick k items, where k < n. Those k items become used once I pick them, and then I put them back into the box. How do I find the probability that the second time, all k items I pick are unused (i.e., none of those I already picked before)?



I already know that the number of possible ways of picking k from n is \frac{n!}{k!(n-k)!},
and that the number of possible ways of picking k from (n-k) is \frac{(n-k)!}{k!((n-k)-k)!}.


Answer



There are indeed \binom{n}{k} ways of picking k items out of n.



The number of possible ways of picking k from n-k is \binom{n-k}{k} (so not \frac{n!}{k!(n-2k)!}).




These "ways" are equiprobable so the event of picking the second time k items that are not picked the first times has probability:\frac{\binom{n-k}{k}}{\binom{n}{k}}


continuity - Can unbounded discontinuous functions be locally bounded?




Consider the function f(x) = \frac{x^3}{1+x^3}. [Figure: plot of f, blowing up on either side of x=-1.]



Obviously this function is discontinuous at x = -1, therefore discontinuous on \mathbb{R}. Moreover, it is unbounded near that same point. Now, I would not say that this function is locally bounded either, as not every set f(A) is bounded for A a neighbourhood of some x_0 \in \mathbb{R}. Is this reasoning correct? Can an unbounded discontinuous function be locally bounded?


Answer



If I understand your definition correctly, the function f(x) = \begin{cases}x - 1 & x < 0\\ 0 & x = 0\\ x+1 & x>0\end{cases}



fits the bill.




If you want a rational function, then no can do, because if p, q are polynomials with no common factors, then f(x)=\frac{p(x)}{q(x)} is continuous if q(x)\neq 0 and is unbounded around x if q(x)=0.


trigonometry - Limit of \lim\limits_{x \to \frac{5\pi}{2}^+} \frac{5x - \tan x}{\cos x}

So I have the following problem:

\lim \limits_{x \to \frac{5\pi}{2}^+} \frac{5x - \tan x}{\cos x}




I can't figure out how to get the limit. I tried splitting it up to:



\lim \limits_{x \to \frac{5\pi}{2}^+} \Big(\frac{5x}{\cos x} - \frac{\tan x}{\cos x}\Big)



I'm lost and unsure of what to do next. I'm taking a Calc 1 class and we have not yet gotten to L'Hôpital's rule or other such methods (and I am not sure how I could incorporate those ideas anyway).

linear algebra - Find a basis of the subspace of \mathbb{R}^4 generated by the vectors



Find a basis of the subspace of \mathbb{R}^4 generated by the vectors v_1=(1,1,2,0),\ v_2=(-1,0,1,0),\ v_3=(2,-2,0,0),\ v_4=(0,0,-1,2).



First of all, I wrote these vectors as rows of a matrix, then applied the following transformations to reduce the matrix to row echelon form:
R_2+R_1 and R_3-2R_1,

then R_3+4R_2, and finally \frac{1}{10}R_3.
Then I took the nonzero rows of the row echelon form, B=\{(1,1,2,0),(0,1,3,0),(0,0,1,0)\}, which forms a basis.
Am I right here?


Answer



Using the vectors given in the problem, we can define a set: A = \{v_1, v_2, v_3, v_4\}



Remember the definition of a Basis set:



Given: V is a vector space and B is a subset of V, we say that B is a basis of V IFF V = span(B) and B is linearly independent.




Row-reducing the matrix whose columns are the vectors of A yields:
\begin{bmatrix}1&-1&2&0\\1&0&-2&0\\2&1&0&-1\\0&0&0&2\end{bmatrix}
\sim
\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}
Since each column contains a leading "1" (a pivot), we can say that the set A is linearly independent.



This result implies that the set A itself satisfies the definition of a basis, and the subspace it generates is all of \mathbb{R}^4:



B = \{v_1,v_2,v_3,v_4\}
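As a numerical cross-check (a sketch assuming NumPy; not part of the original answer), the rank of the matrix built from v_1,\dots,v_4 is indeed 4, so the asker's echelon form lost a pivot somewhere:

    import numpy as np

    # Rows are v1..v4 from the question (rank is unaffected by transposing).
    A = np.array([[1, 1, 2, 0],
                  [-1, 0, 1, 0],
                  [2, -2, 0, 0],
                  [0, 0, -1, 2]])
    print(np.linalg.matrix_rank(A))   # 4 -> the four vectors form a basis
    print(round(np.linalg.det(A)))    # 16, nonzero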


Monday, 18 November 2013

Functional equation f(x+y)-f(x)-f(y)=\alpha(f(xy)-f(x)f(y)) is solvable without regularity conditions




I was reviewing this question and got motivated to solve this general problem:




Find all functions f:\mathbb R\to\mathbb R such that for all real numbers x and y,
f(x+y)-f(x)-f(y)=\alpha(f(xy)-f(x)f(y))\tag0
where \alpha is a nonzero real constant.




I found out that it can be solved in a similar way to what I did in my own answer to that question. It was interesting for me that we don't need any regularity conditions like continuity. As I didn't find any questions about the same functional equation on the site, I thought it might be useful to post my own answer to it.




I appreciate any other ways to solve the problem or any insights helping me understand why there's no need of regularities while some functional equations like Cauchy's need such additional assumptions to have regular solutions.


Answer



First, letting y=1 in (0) and rearranging the terms, we get:
f(x+1)=f(1)+(1+\alpha-\alpha f(1))f(x)\tag1
\therefore\:f(2)=(2+\alpha-\alpha f(1))f(1)\tag2
Next, substituting x+1 for x in (1) and using (1) and (2) we have:
f(x+2)=f(2)+(1+\alpha-\alpha f(1))^2f(x)\tag3
Next, letting y=2 in (0) and using (3) we get:
\left((1+\alpha-\alpha f(1))^2-1\right)f(x)=\alpha(f(2x)-f(2)f(x))

\therefore\:f(2x)=\left(2(1-f(1))+\alpha(1-f(1))^2+f(2)\right)f(x)
Thus using (2) we have:
f(2x)=(2+\alpha-\alpha f(1))f(x)\tag4
Now, substituting 2x for x and 2y for y in (0) and using (4), we'll get:
\beta(f(x+y)-f(x)-f(y))=\alpha\beta^2(f(xy)-f(x)f(y))
where \beta=2+\alpha-\alpha f(1). Multiplying (0) by \beta and subtracting the last equation, we'll have:
\beta(\beta-1)(f(xy)-f(x)f(y))=0\tag5
If \beta=0 then by (4) we conclude that f is the constant zero function. So this case can only happen when \alpha=-2.



If \beta=1 then by (1) we conclude that f is the constant 1+\frac1\alpha function.




If \beta\neq0 and \beta\neq1 then by (0) and (5) we have:
f(xy)=f(x)f(y)\tag6
f(x+y)=f(x)+f(y)\tag7
By letting y=x in (6) we conclude that f(x) is nonnegative for nonnegative x. Combining with (7) we find out that f is increasing. An increasing additive function is of the form f(x)=kx. So by (6) we conclude that f is the constant zero function or the identity function.


real analysis - If f\colon\mathbb R\to\mathbb R is continuous and satisfies f(x+y) = f(x) + f(y) and f(1)=1, then f(x)=x for all x.




Let
f: \mathbb{R} \rightarrow \mathbb{R} be a continuous function such that:
f(x+y) = f(x) + f(y)

f(1) = 1. Show that f(x) = x. I have been having trouble approaching this problem. I have shown, through a system of equations, that f(x+y) = x + y, but that's about as far as I can get. I appreciate any help anyone has to offer!


Answer



See that



f(0)=f(0)+f(0)=2f(0)\implies f(0)=0



\begin{align}f(a+b+c+\dots+z) & =f(a)+f(b+c+\dots+z) \\&=f(a)+f(b)+ f(c+\dots+z)\\&=\dots\\&=f(a)+f(b)+f(c)+\dots+f(z)\end{align}







For natural numbers x, we have



f(x)=f(\underbrace{1+1+1+\dots+1}_x)=\underbrace{f(1)+f(1)+\dots+f(1)}_x=1+1+1+\dots+1=x



f(x)=x\ \forall\ x\in\mathbb N






For positive rational numbers,




1=f(1)=f(\underbrace{\frac1x+\frac1x+\frac1x+\dots+\frac1x}_x)=f(\frac1x)+f(\frac1x)+\dots+f(\frac1x)=xf(\frac1x)



1=xf(\frac1x)\implies f(\frac1x)=\frac1x\ \forall\ x\in\mathbb N



f(\frac yx)=f(\frac1x+\frac1x+\dots+\frac1x)=f(\frac1x)+f(\frac1x)+\dots+f(\frac1x)=yf(\frac1x)=\frac yx\\f(\frac yx)=\frac yx\ \forall\ (x,y)\in\mathbb N\times\mathbb N



f(x)=x\ \forall\ x\in\mathbb Q^+







For positive real numbers: every real number is the limit of a Cauchy sequence of rational numbers, which, since f(x) is continuous, gives us



f(x)=x\ \forall\ x\in\mathbb R^+






Finally, for all negative numbers, we have



0=f(0)=f(x-x)=f(x)+f(-x)




0=f(x)+f(-x)\implies f(-x)=-f(x)=-x


real analysis - Does there exist a scalar function g(\mathbf{x}) that satisfies g(\mathbf{x}+\mathbf{f}(\mathbf{x}))= g(\mathbf{x})\det(I+\mathbf{f}'(\mathbf{x}))?

Given a vector-valued function \mathbf{f}:\mathbb{R}^n\rightarrow\mathbb{R}^n, what is a scalar function g:\mathbb{R}^n\rightarrow\mathbb{R} that satisfies the following: g(\mathbf{x} +\mathbf{f}(\mathbf{x}))=g(\mathbf{x})\det(I+\mathbf{f}'(\mathbf{x})), where \mathbf{f}'(\mathbf{x}) is the Jacobian matrix of \mathbf{f}(\mathbf{x}) and I is the n\times n identity matrix.



If a solution can't be found for arbitrary \mathbf{f}, what structure can one impose on \mathbf{f} for there to exist a particular function g that satisfies the above condition?




One particular example of a function g that satisfies the above is all I'm after (i.e., I don't need the most general solution).



Alternatively: Can one prove that there does not exist a function g that satisfies the above?

estimation - A limit involving the exponential function





\lim_{n\rightarrow\infty}\frac{1+\frac{n}{1!}+\cdots+\frac{n^n}{n!}}{e^n}=\frac12



Taking the first n+1 terms of the Taylor series of e^n as the numerator, is the limit true or false? How can one prove it?


Answer



Assuming that we work with

a_n=e^{-n}\sum_{k=0}^n \frac{n^k}{k!}, then by the definition of the (upper) incomplete gamma function,
a_n=\frac{\Gamma (n+1,n)}{n \Gamma (n)}.
We have the relation \Gamma (n+1,n)=n \,\Gamma (n,n)+e^{-n}\, n^n which makes
a_n=\frac{ n^{n-1}}{e^n\,\Gamma (n)}+\frac{\Gamma (n,n)}{\Gamma (n)} The first term tends to 0 when n becomes large; to prove it, take its logarithm and use Stirling approximation to get
\log\left(\frac{ n^{n-1}}{e^n\,\Gamma (n)} \right)=-\frac{1}{2} \log \left({2 \pi n}\right)-\frac{1}{12 n}+O\left(\frac{1}{n^{5/2}}\right)



For the second term, if you look here, you will notice the asymptotics
\Gamma(n,n) \sim n^n e^{-n} \sqrt{\frac{\pi}{2 n}} So, neglecting the first term, we have, for large n
a_n\sim \frac{ n^n e^{-n} }{\Gamma(n)}\sqrt{\frac{\pi}{2 n}} Take logarithms and use Stirling approximation to get

\log(a_n)=-\log (2)-\frac{1}{12 n}+O\left(\frac{1}{n^{5/2}}\right) Continue with Taylor
a_n=e^{\log(a_n)}=\frac{1}{2}-\frac{1}{24 n}+O\left(\frac{1}{n^{2}}\right)
If you use the better asymptotics given in the link \Gamma(n,n) = n^n e^{-n} \left [ \sqrt{\frac{\pi}{2 n}} - \frac{1}{3 n} + O\left ( \frac{1}{n^{3/2}} \right ) \right ] doing the same, you should end with
a_n=\frac 12-\frac{1}{3 \sqrt{2 \pi n} }-\frac{1}{24 n}+O\left(\frac{1}{n^{3/2}}\right)
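A high-precision check of both the limit and the correction term (a sketch of mine, assuming mpmath is available):

    from mpmath import mp, mpf, exp, sqrt, pi

    mp.dps = 50
    for n in (10, 100, 1000):
        nn = mpf(n)
        term, s = mpf(1), mpf(1)
        for k in range(1, n + 1):
            term *= nn / k          # term = n^k / k!
            s += term
        a = s / exp(nn)             # a_n = e^{-n} * sum_{k<=n} n^k/k!
        print(n, a, mpf(1)/2 - 1/(3*sqrt(2*pi*nn)))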


real analysis - How to find \lim_{h\rightarrow 0}\frac{\sin(ha)}{h}

How do I find \lim_{h\rightarrow 0}\frac{\sin(ha)}{h} without L'Hôpital's rule? I know when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...