Tuesday 31 December 2019

real analysis - Epsilon-Delta Differentiability



Say we have the function $$f: \mathbb{R} \rightarrow \mathbb{R}, \quad x \mapsto x^2$$



I understand how to prove f is differentiable using $$ f'(c) = \lim_{h \rightarrow 0} \tfrac{f(c+h) - f(c)}{h}$$
by substitution. But how would you prove differentiability using the epsilon-delta definition of limits:
$$\forall \epsilon>0 \,\, \exists \delta>0 \text{ s.t. } 0<|x-c|< \delta \implies \left|\tfrac{f(x) - f(c)}{x-c} - L \right| < \epsilon$$

Then $$f'(c) = L$$


Answer



Hint:
$$\left|\frac{f(x)-f(c)}{x-c}-2c\right|=\left|\frac{x^2-c^2}{x-c}-2c\right|=\left|\frac{(x+c)(x-c)}{x-c}-2c\right|=\left|x-c\right|$$
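Given $\epsilon>0$, one may therefore take $\delta=\epsilon$: whenever $0<|x-c|<\delta$, the chain above gives $\left|\frac{f(x)-f(c)}{x-c}-2c\right|=|x-c|<\epsilon$, so $f'(c)=L=2c$.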


calculus - limit of Hölder norms: $\sup\limits_{x\in [a,b]} f(x) = \lim\limits_{n\rightarrow\infty} \left(\int_a^b (f(x))^n \;dx\right)^{\frac{1}{n}}$





Show that $$\sup_{x\in [a,b]} f(x) = \lim_{n\rightarrow\infty} \left(\int_a^b (f(x))^n \;dx\right)^{\frac{1}{n}}$$ for $f$ continuous and positive on $[a,b]$. I can show that the LHS is greater than or equal to the RHS, but I can't show the other direction.


Answer



Let $F = \sup_{\xi\in[a,b]} f(\xi)$. If $F=0$, the equality is trivial. So we may suppose $F>0$.



Choose $\epsilon \in (0,F)$, and let $A_\epsilon = \{x | f(x) \geq F-\epsilon\}$. Since $f$ is continuous, we have $mA_\epsilon > 0$ ($m$ is the Lebesgue measure). Then the following is true (work from the middle towards the left or right to get the relevant bound):



$$mA_\epsilon (F-\epsilon)^n \leq \int_{A_\epsilon} (F-\epsilon)^n dx \leq I^n_n = \int_a^b (f(x))^n dx \leq \int_a^b F^n dx \leq (b-a) F^n$$



This gives $\sqrt[n]{mA_\epsilon} (F-\epsilon) \leq I_n \leq \sqrt[n]{b-a}F$.




We have $\lim_{n \to \infty}\sqrt[n]{mA_\epsilon} = 1$ and $\lim_{n \to \infty}\sqrt[n]{b-a} = 1$, hence $\limsup_{n \to \infty} I_n \leq F$ and $\liminf_{n \to \infty} I_n \geq F-\epsilon$. Since this is true for $\epsilon$ arbitrarily close to $0$, we have $\liminf_{n \to \infty} I_n \geq F$, from which the desired result follows.



(Note: If you wish to avoid the Lebesgue measure, note that $A_\epsilon$ could be taken to be any non-empty interval such that $f(x) \geq F-\epsilon$ for $x$ in this interval. Then $mA_\epsilon$ would be replaced by the length of this interval. The same reasoning still applies.)
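A quick numerical illustration of this convergence (my sketch; the choice $f(x)=1+\sin x$ on $[0,2]$, with $\sup f = 2$, is arbitrary):

import numpy as np

a, b = 0.0, 2.0
x = np.linspace(a, b, 200001)
f = 1.0 + np.sin(x)            # continuous, positive; sup f = 2 at x = pi/2
M = f.max()
for n in (1, 10, 100, 1000):
    # Riemann-sum approximation of (integral of f^n)^(1/n);
    # factoring out the max avoids overflow in f**n
    I_n = M * (np.mean((f / M) ** n) * (b - a)) ** (1.0 / n)
    print(n, I_n)              # tends to 2 as n grows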


calculus - Solving a trig limit without L'Hopital: $\lim_{x \to 0} \frac{\tan^2 (\pi x)}{2(\pi x)^2}$



I'm supposed to solve this limit without using L'Hôpital's rule.



I find the indeterminate form $\frac{0}{0}$, which tells me that L'Hôpital's rule is an option, but since we haven't seen derivatives yet I'm not allowed to use it.




Previously I already tried substituting $\pi x$ for $t$ and dividing numerator and denominator by $\pi x$ both without success.



$$\lim_{x \to 0} \frac{\tan^2 (\pi x)}{2(\pi x)^2}$$


Answer



If one knows that
$$
\lim_{x\to 0} \frac{\sin x}{x}=1
$$ then one may write, as $x \to 0$,
$$
\frac{\tan^2(\pi x)}{2\pi^2 x^2}=\frac1{2}\cdot\left(\frac{\sin (\pi x)}{\pi x}\right)^2\cdot \frac1{\cos^2 (\pi x)}

$$ and conclude easily since $\displaystyle \frac1{\cos^2 (\pi x)} \to 1$ as $x \to 0$.
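Both remaining factors tend to $1$, so the limit equals $\frac{1}{2}$.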


multivariable calculus - Double integral in polar coordinates



Use polar coordinates to find the volume of the given solid inside the sphere $$x^2+y^2+z^2=16$$ and outside the cylinder $$x^2+y^2=4$$
When I try to solve the problem, I keep getting the wrong answer, so I don't know if it's an arithmetic error or if I'm setting it up incorrectly. I've been setting up the integral like so: $$\int_{0}^{2\pi}\int_{2}^4\sqrt{16-r^2}\,r\,dr\,d\theta$$
Is that the right set up? If so then I must have made an arithmetic error, if it's not correct, could someone help explain to me why it's not that? Thanks so much!


Answer



It's almost correct. Recall that the integrand is usually of the form $z_\text{upper}-z_\text{lower}$, where each $z$ defines the lower and upper boundaries of the solid. As it is currently set up, you are treating the sphere as a hemisphere, where your lower boundary is the $xy$-plane. Hence, you need to multiply by $2$, since we are technically doing:

$$
\left(\sqrt{16-r^2} \right) - \left(-\sqrt{16-r^2} \right)
$$
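Carrying the corrected setup through, for reference:
$$2\int_{0}^{2\pi}\int_{2}^{4}\sqrt{16-r^2}\,r\,dr\,d\theta = 4\pi\left[-\tfrac{1}{3}(16-r^2)^{3/2}\right]_{2}^{4} = \frac{4\pi}{3}\cdot 12^{3/2} = 32\sqrt{3}\,\pi.$$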


Slightly equal functions

Can there exist two elementary functions $f(x)$ and $g(x)$ defined everywhere on the real axis such that,
\begin{align} f(x)&=g(x)\qquad \text{if} \quad a\le x\le b\\
f(x)&\neq g(x)\qquad \text{if} \quad x<a \ \text{or} \ x>b\end{align}
where $f(x)$ and $g(x)$ are not piecewise defined functions, and $a\ne b$.



If yes, give example. If no, give proof.



Also, would it make any difference if the functions need not be elementary?



Edit: It seems there is a lot of confusion due to my inability to put the question precisely. Please refer to the links.
Elementary functions http://en.wikipedia.org/wiki/Elementary_function
Piecewise defined function http://en.wikipedia.org/wiki/Piecewise




I have also added the 'defined everywhere' condition.

linear algebra - Eigenvalues without any calculations

Question is from Intro to Linear Algebra (5th Ed) by Gilbert Strang, Chapter 6-39.



Without writing down any calculations, can you find the eigenvalues of this matrix? Also find $A^{2017}$.



$$ A = \begin{bmatrix}
110 & 55 & -164\\
42 & 21 & -62\\
88 & 44 & -131
\end{bmatrix} $$



Obviously one of the eigenvalues is $0$. Not sure how to find the rest without calculation.
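One route that needs almost no arithmetic (a sketch, since no answer was posted): the first column is twice the second, so $A$ is singular and $0$ is an eigenvalue. The trace is $110+21-131=0$, so the remaining eigenvalues are $\pm\lambda$, and the sum of the principal $2\times 2$ minors, $0+22-23=-1$, equals the product $-\lambda^2$, so $\lambda=1$. The eigenvalues $0,1,-1$ are distinct, so $A$ is diagonalizable, and since $0^{2017}=0$, $1^{2017}=1$, $(-1)^{2017}=-1$, we get $A^{2017}=A$.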

Monday 30 December 2019

calculus - Differential Notation Magic in Integration by u-Substitution





I'm really confused now. I always thought that the differential notation $\frac{df}{dx}$ was just that, a notation.



But somehow when doing integration by u-substitution I'm told that you can turn something like this $\frac{du}{dx} = 2x\;$ into this $\;du = 2x\ dx$.



But how is that even possible? I understand that the notation comes from the fact that $\frac{du}{dx}$ actually means the limit of the difference in $u$ over the difference in $x$, with $\Delta x$ approaching $0$.




$$u'(x) = \frac{du}{dx} = \frac{du(x)}{dx} = \lim_{\Delta x\to 0} \frac{u(x+\Delta x)\ -\ u(x)}{(x+\Delta x) - x} = \lim_{\Delta x\to 0} \frac{u(x+\Delta x)\ -\ u(x)}{\Delta x}$$



So if $\frac{df}{dx}$ is just a notation for the limit mentioned above, then what is the underlying argument to say that you can treat $\frac{du}{dx}$ as if it were an actual fraction?



Appreciate the help =)


Answer



It is really just a notation. And the trick with the substitution e.g. $du = 2xdx$ does not have any mathematical meaning, it is just a convenient way of memorizing the integration by substitution rule/law/theorem:



$$\int_a^b f(\phi(t)) \phi'(t) dt = \int_{\phi(a)}^{\phi(b)} f(x)dx $$




Going from left to right you might want to make the substitution $x=\phi(t)$. Our mnemonic tells us that $\frac{dx}{dt} = \phi'(t)$, or in other words that you have to replace $\phi'(t)dt$ with $dx$ if you replace $\phi(t)$ with $x$. If you look again at the equation above you see that this mnemonic does a nice job, so we do not have to memorize the whole equation.



I do use the mnemonic but still I always keep this equation in mind when doing so.


Sunday 29 December 2019

real analysis - Prove that if $f$ is differentiable at $x_0$, then $f$ satisfies Lipschitz condition at $x_0$




A function $f:(a,b)\rightarrow \mathbb{R}$ satisfies a Lipschitz condition at $x_0 \in (a,b)$ if $\exists M >0$ and $\mu >0$ for which the following is true:



if $y\in(a,b)$ and $| x_0 -y|<\mu$, then $|f(x_0) -f(y)|\leq M|x_0-y|$






ROAD MAP TO PROBLEM (provided by professor)



a) Why must there exists a $\delta >0$ for which the following is true:




if $y\in(a,b)$ and $0<| x_0 -y|<\delta$, then $| \frac{f(x_0) -f(y)}{x_0 -y} -f'(x_0)|<1$



b) show (briefly) that for all $y \in (a,b)$
$$|f(x_0)-f(y)|=|\frac{f(x_0) -f(y)}{x_0 -y}-f'(x_0)+f'(x_0)||x_0 -y|$$



c) Prove that there exists a number $M>0$ for which, if $y\in (a,b)$ and $0<|x_0-y|<\delta$, then $$|f(x_0)-f(y)|\leq M|x_0 -y|$$ Hint: $M$ will depend on the value of $f'$ at $x_0$



d) Explain ( briefly), why, if $y=x_0$, then $|f(x_0) - f(y)| \leq M|x_0 -y|$







Given that $f$ is differentiable, and thus continuous, I can somewhat see how they got to (a); however, after (a) I'm confused as to how to look at the problem.


Answer



Assume that $f'(x_0) \neq 0$



For $\epsilon=1$ there exists $\delta>0$ such that $\frac{|f(x)-f(x_0)|}{|x-x_0|} <1+|f'(x_0)|$ for all $x \in(x_0-\delta,x_0+\delta)$, $x \neq x_0$.



So for $M=1+|f'(x_0)|$ you have the conclusion.





We used that if $\lim_{x \to x_0}g(x)=l$ then $\lim_{x \to x_0}|g(x)|=|l|$.



definition - How do we define the sum of a bi-infinite series?




If we have a (usual) series of the form
$S = \sum_{n=1}^{\infty} a_n$
then we define $S$ to be the limit as $N$ goes to infinity of the partial sums $S_N = \sum_{n=1}^{N} a_n,$
provided the limit exists. If we instead have a bi-infinite series of the form
$$
S = \sum_{n=-\infty}^{\infty} a_n
$$
then how do we define this sum? Is it
$$
\lim_{N \to \infty} \sum_{n=-N}^N a_n

$$
or
$$
\lim_{N \to \infty} \lim_{M \to \infty} \sum_{n = -M}^N a_n
$$
or something else? Can you also refer me to any standard textbook that deals with bi-infinite series? Thanks!


Answer



Bi-infinite summation is summation over $\Bbb Z$. In general (I think that I read about this in Professor Tao's Analysis book), one defines a sum over an arbitrary set $X$ as follows.



Let $X$ be any set and $F(X)$ be the set of all finite subsets of $X$. Let $f: X \to \Bbb R$ be nonnegative.




$$\sum_{x \in X} f(x) := \sup\{\sum_{x \in A} f(x): A \in F(X)\}$$



Now if $f$ is not necessarily nonnegative, then in case $\sum_{x \in X} |f(x)|$ is finite,



$$\sum_{x \in X} f(x) := \sum_{x \in X} f^+(x) - \sum_{x \in X} f^-(x)$$



Where $f^+(x)= \max\{f(x), 0\}$ and $f^-(x) = \max\{-f(x), 0\}$.



Otherwise $\sum_{x \in X} f(x)$ is undefined.




In the case $X = \Bbb Z$, it's easy to check that when $\sum_{x \in \Bbb Z} f(x)$ is defined, we have that the limit:



$$\lim_{N \to \infty} \sum_{n=-N}^N f(n)$$



exists and equals that sum.


Saturday 28 December 2019

statistics - Mean and mode of a Beta random variable



A continuous random variable is said to have a $Beta(a,b)$ distribution if its density is given by



$f(x) = (1 / \text{B}(a,b))x^{a-1} (1-x)^{b-1}$ if $0 < x < 1$



Find the mean, variance, and mode if $a = 3, b = 5.$




This is throwing me off with the beta distribution. I'm not sure if it changes the way I solve for the mean, variance, and mode.



My approach is that since mean of a continuous random variable which is basically the Expected value of X (EX), so we can just do $\int x * f(x) dx$. We can find $ f(x) = (105) x^{2} (1-x)^{4} $ and then $\int x *f(x) dx = 1/105 \int_{0}^{1} x^3 * (1-x)^4 dx = 105 * \beta(4,5) = 105 * ((6*24 )/ 40320)$


Answer



Your method is OK, but you have the wrong constant term; your density function does not integrate to 1 over $(0,1).$ Look at Wikipedia
for 'beta distribution'. You should get $E(X) = \alpha/(\alpha + \beta) = 3/8.$



The mode is the value of $x$ (here $x = 1/3$) at which $f(x)$ achieves its maximum in $(0,1).$ You can find it using
differential calculus.




The figure below shows the density function of this distribution. The mean
is at the solid red line and the mode is at the dotted green line.



[figure omitted]


Help to prove/disprove a statement for infinite series

Is the following assertion true? Justify your answer.



Suppose that we have series $\sum_{k=p}^∞ a_k$ and $\sum_{k=p}^∞ b_k$. Suppose also that $a_k = b_k$ for all but finitely many k. Then $\sum_{k=p}^∞ a_k$ converges if and only if $\sum_{k=p}^∞ b_k$ converges.




I'm really struggling to either prove/disprove this statement. Could someone please get me on the right track of starting? Is it related to subsequences or am I looking into the wrong section of this topic?

reference request - Overview of basic results on cardinal arithmetic

Are there some good overviews of basic formulas about addition, multiplication and exponentiation of cardinals (preferably available online)?

real analysis - Discontinuous derivative.

Could someone give an example of a ‘very’ discontinuous derivative? I myself can only come up with examples where the derivative is discontinuous at only one point. I am assuming the function is real-valued and defined on a bounded interval.

integration - $\lim_{A \to \infty} \int_0^{A} \int_0^{\infty} \sin(x) e^{-xt}\,dt\,dx$




I would like to compute the following integral:



$$\lim_{A \to \infty} \int_0^{A} \int_0^{\infty} \sin(x) e^{-xt}dtdx \qquad (1)$$



I would like to swap the order of integration because then the integral becomes easier (I could evaluate it by taking its imaginary part as was done here, or I could integrate by parts twice) to get that:



$$\lim_{A \to \infty} \int_0^{A} \int_0^{\infty} \sin(x) e^{-xt}dxdt = \lim_{A \to \infty}\int_0^A\dfrac{1}{1+t^2}dt=\frac{\pi}{2}$$



In order to do that I need to apply Fubini-Lebesgue, because the function $\sin(x)e^{-xt}$ also takes negative values, but I can't show that




$$\lim_{A \to \infty} \int_0^{A} \int_0^{\infty} |\sin(x)| e^{-xt}dtdx < \infty$$



I tried to approximate the function in this way:



$$|\sin(x)e^{-xt}|\le e^{-xt}$$



but



$$\lim_{A \to \infty} \int_0^{A} \int_0^{\infty}e^{-xt}dtdx = \infty$$




How can I use Fubini-Lebesgue to evaluate $(1)$? Any hint would be really appreciated, thank you!



I also tried to bound



$$\lim_{A \to \infty} \int_0^{A} \int_0^{\infty} |\sin(x)| e^{-xt}dtdx \le \lim_{A\to \infty}A$$



but $\lim_{A\to \infty} A = \infty$, hence again this argument does not work.


Answer



How about, since $x\geq 0$ the following holds for all $t\geq 0$,




$$|\sin(x)e^{-tx}|\leq xe^{-tx}$$
then,
$$\int_0^A\int_0^{\infty}xe^{-tx}dtdx=\int_0^Ax\cdot\frac{1}{x}dx=A$$


Friday 27 December 2019

elementary set theory - Notation on proving injectivity of a function $f:A^{B\;\cup\; C}\to A^B\times A^C$



I'm trying to prove that for any cardinal numbers $a,b,c$, the following holds:
$a ^ {b + c} = a ^ b a ^ c$ i.e. that there exists a bijective function $ f : A ^ {B \:\: \cup \:\: C} \rightarrow A^B \times A^C $




This is only part of the proof sketch I have (proving $f$ is injective), and I'd like to know if it is well written, since I believe it has flaws.






Let $f: \{ g \:\:\: | g:B \cup C \rightarrow A \} \rightarrow \{ \langle g,h\rangle | \:\:\: g: B \rightarrow A \wedge h : C \rightarrow A \}$ such that



$f ( g_{b} \cup g_{c}) = \langle g_{b},g_{c}\rangle$.



Now,




$f( g_{b1} \cup g_{c1}) = f ( g_{b2} \cup g_{c2} ) \implies \langle g_{b1},g_{c1}\rangle = \langle g_{b2},g_{c2}\rangle$ and therefore $f$ is injective.






Questions:




  1. Does that prove that $f$ is injective? I think it does not, since
    $f ( g_{b} \cup g_{c} ) = f ( g_{c} \cup g_{b}) \implies \langle g_{b},g_{c}\rangle = \langle g_{c},g_{b}\rangle (\bot)$

  2. Is there an alternative way to define $f$? It's difficult for me to define it in terms of properties of elements of its domain.




Side note:



The title says "kinds" because the function domain and image sets are sets of sets, but I may be mistaken using that word, if so, please edit accordingly.


Answer



You can think of $g : B \cup C \to A$ as a couple of functions $g_B : B \to A$ and $g_C : C \to A$ such that for all $x \in B \cap C$, $g_B(x) = g_C(x)$. Therefore, letting $f : \Gamma \to \Delta$ (with
$$
\Gamma = \{ g \,\, | \,\, g : B \cup C \to A \}, \qquad \Delta = \{ \langle g_B, g_C\rangle \, \, | \, \, g_B : B \to A, \, \, g_C : C \to A \}
$$

obviously), you can see that a natural way to do this is to restrict the domain of $g : B \cup C \to A$ to just $B$ (or $C$), hence the map $f$ could naturally be defined by $g_B(x) = g(x)$ for all $x \in B$, $g_C(x) = g(x)$ for all $x \in C$ and $f(g) = \langle g_B, g_C \rangle$. In this manner the notation is more clear and more obviously well-defined.



This function $f$ is injective because of the following. Suppose that $f(g_1) = \langle g_B^1, g_C^1 \rangle = \langle g_B^2, g_C^2 \rangle =f(g_2)$. Thus for all $x \in B \cup C$, $x$ is either in $B$ or $C$, suppose $B$. Thus $g_1(x) = g_B^1(x) = g_B^2(x) = g_2(x)$. The case $x \in C$ is similar. Therefore $g_1 = g_2$.



I can't think of a natural way to define $ F : \Delta \to \Gamma$ right now; it'll require some thinking, I guess. I am not familiar with the usual abuses of rigor in this context, but my problem is that I don't want to assume $B \cap C = \varnothing$. If we can assume that, there is an obvious way back (i.e. you can show that this $f$ is bijective, easily). If someone could comment on this question I'm asking, I'd love to read it.


real analysis - Does the series $\sum_{n\ge1}\frac{\ln\left(\frac{n+1}n\right)}{\sqrt n}$ converge?



Could you please give me some hint how to decide about convergence of the series
$\sum_{n\ge1}\frac{\ln\left(\frac{n+1}n\right)}{\sqrt n}$ ?




I tried using comparison test:
$\frac{\ln\left(\frac{n+1}n\right)}{\sqrt n}\ge \frac {\ln\left(\frac1n\right)}{\sqrt n}=-\frac {\ln(n)}{\sqrt n}$.
The series $\sum_{n\ge1}\frac{\ln(n)}{\sqrt n}$ diverges by the integral test, but for the comparison test all compared sequences must be non-negative, and $\ln\left(\frac1n\right)\le0$ for all $n$.



Thanks.


Answer



Yes it does, by asymptotic comparison:



$$\frac{\ln\left(\frac{n+1}n\right)}{\sqrt n}\sim_\infty\frac{1}{n\sqrt n}$$




Remark: for the comparison test the general term of the series must have constant sign, i.e. it must not be an alternating series (no matter whether positive or negative).


abstract algebra - What do the elements of the field $\mathbb{Z}_2[x]/(x^4+x+1)$ look like? What is its order?



Background: I'm looking at old exams in abstract algebra. The factor ring described was described in one question and I'd like to understand it better.



Question: Let $F = \mathbb{Z}_2[x]/(x^4+x+1)$. As the polynomial $x^4+x+1$ is irreducible over $\mathbb{Z}_2$, we know that $F$ is a field. But what does it look like? By that I am asking if there exists some isomorphism from $F$ into a well-known field (or where it is straightforward to represent the elements) and about the order of $F$.



In addition: is there something we can in general say about the order of fields of the type $\mathbb{Z}_2[x]/p(x)$ (with $p(x)$ being irreducible in $\mathbb{Z}_2[x]$)?



Answer



The elements of $F$ are $\{ f(x) + (x^4 + x + 1) \mid f(x) \in \mathbb{Z}_2[x], \deg f < 4 \}$. There are $2^4$ of them. Any field of order $2^4$ is isomorphic to $F$.



In general, if $p(x) \in \mathbb{Z}_2[x]$ is irreducible of degree $k$, then $\mathbb{Z}_2[x]/(p(x))$ is a field of order $2^k$.



There is a notation that makes this field more convenient to work with. Let $\alpha = x + (x^4 + x + 1) \in F$. Then for $f(x) \in \mathbb{Z}_2[x]$, $f(\alpha) = f(x) + (x^4 + x + 1)$. So, for example, we can write the element $x^2 + 1 + (x^4 + x + 1)$ as $\alpha^2 + 1$. In this notation,



$$F = \{ f(\alpha) \mid f(x) \in \mathbb{Z}_2[x], \deg f < 4 \}.$$



An isomorphic field is the nimber field of nimbers less than 16. The representation of the elements is simpler, but I'm finding nim-multiplication to be harder than polynomial multiplication (maybe there's a trick to it that I don't know).



Thursday 26 December 2019

sequences and series - How do I evaluate this sum: $\sum_{n=1}^{\infty}\frac{(-1)^{n^2}}{(i\pi)^{n}}$?

I'm interested to know how to evaluate this sum: $$\sum_{n=1}^{\infty}\frac{{(-1)}^{n^2}}{{(i\pi)}^{n}}$$ I have tried to evaluate it using two partial sums, for odd integers $n$ and even integers $n$, but I can't, since it's an alternating series. I would also like to know if it's a well-known series, and what about its value: is it real or complex?



Note: Wolfram Alpha showed that it is a convergent series, by the root test



Thank you for any help
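One observation that unlocks the sum (a sketch, since no answer was posted): $n^2$ and $n$ always have the same parity, so $(-1)^{n^2}=(-1)^n$, and since $-1/(i\pi)=i/\pi$ the series is geometric with ratio $i/\pi$, $|i/\pi|=1/\pi<1$:
$$\sum_{n=1}^{\infty}\frac{(-1)^{n^2}}{(i\pi)^{n}}=\sum_{n=1}^{\infty}\left(\frac{i}{\pi}\right)^{n}=\frac{i/\pi}{1-i/\pi}=\frac{i}{\pi-i}=\frac{-1+i\pi}{\pi^2+1},$$
a complex value.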

self learning - Are basic trigonometry functions ( sine, cosine, tangent ) intuitive or memorized?

First, I'm really sorry for this somewhat vague and possibly just silly question. I also apologize if the following context runs a bit long. But please trust me that I'm asking with total sincerity and that my end goal is to find a starting point to grasp the one area of basic high school math that has always been just out of reach.



To begin: I always hated math as a young child (multiplication tables, carry-the-one, borrow-from-left, etc), and it was only later when math actually started to get interesting that I realized I didn't hate math, I hated rote memorization of tables and blind-faith rules/functions/tricks that were easy to forget under pressure.



Somewhere in middle school (pre-algebra, pre-geometry), things started to click. I don't want to suggest I was a math genius by any stretch, but I found that if I was keeping up conceptually with one module, the intro into the next had a nice "Oh, yes, of course! That is the next logical step!" feeling, so that I got to be the obnoxious kid who rolled his eyes at anyone struggling with a specific concept, thinking that it was so obvious that if we already know that the volume of a cylinder is the area of the circle on top times the height, why wouldn't a cone be one-third of that?



As I continued through high school, things got trickier, but overall either things made sense, or eventually made sense if I carefully retraced my steps to see where I got lost, and occasionally, things made no damn sense until there was an awesome pop in my head, like realizing that matrices weren't really crazy and magical, they were just a way of lining up all of the variables in such a way that they could be easily dealt with all at once.



Then we got to trigonometry functions. I basically bowed out at this point, took some honorably low B and C grades, knowing I just wasn't getting something, and never took any formal math classes ever again.




Two things I've learned since then:




  1. You aren't really borrowing or carrying any one's, it's just a handy way of treating the top number as 10 + itself.


  2. Unless you are taking math classes in college meant for math majors, you always use a calculator to get the actual numbers when figuring sine, cosine, and tangent.




That second part is crucial to my question. 20 years after hitting this math wall, I find out that it's not just easier to use a calculator for these functions, it's pretty much required (unless you've got your grandfather's slide rule, but this is basically the same idea, look it up).



So my questions are:





  1. Are the trigonometric functions inherently something you just accept and learn and find a place in your brain for so that further concepts derive intuitively from that starting point, or do these concepts derive in a clear and somewhat intuitive (or at least straightforward) way from lower-level concepts that I've managed to not quite fit together on every 3-5 year skimming I attempt on the topic?


  2. I'm sure, since there are those who can actually provide proofs or calculate the functions without a calculator, that these functions were not just "found" or "dreamt of" or were the ramblings of one insane genius who could only provide the ratios, not the reasoning. So I get that they are arrived at from lower-level math. So what I really want to know is if my struggle to understand it the same way I had come to understand every other math concept is basically where I'm going wrong, like my long overdue discovery that calculators were essential to the process.


  3. (really sub 2) - If it really is just "hard" or "learned" or "work", I can accept that. Learning Latin verbs was hard and I knew I was getting low marks because I wasn't doing the work. But if there is some natural progression, and anyone can take a decent guess at some specific connection/concept that is most likely the "missing piece" (either because it usually is, or because this all sounds too familiar, or because you've got a knack for figuring out what's wrong just from senseless ramblings), I welcome any feedback or suggestions.




Note that a big part of my initial apology (and reason for wording things as I have) is that I'm not looking for a tutor or a drawn-out lesson in trig (not here, at least), only some validation that either I am missing something that should make this easier than I'm making it sound, or that it actually is hard and my mistake is expecting every time to spot that thing I was missing.



Thanks, as always.

Show convergence in C-norm implies convergence in $L_p$-norm



I attempted to start with the $L_p$ norm and raise it to the power of $p$ but got stuck because I realized that I have no idea how to eliminate the integrand.






$L_p$ norm:
$||f||_p = ||f||_{L_p[a,b]} = (\int_{a}^{b}~|f(x)|^p~~dx)^{\frac{1}{p}}$




$\\$






C-norm:
$||f|| = ||f||_{C[a,b]} = \max\limits_{t \in [a,b]} |f(t)|$


Answer



We have $|f_n(t)-f(t)| \le ||f_n-f||$ for all $f,f_n \in C$ , all $n \in \mathbb N$ and all $t \in [a,b]$. Hence




$|f_n(t)-f(t)|^p \le ||f_n-f||^p$ for all $f,f_n \in C$ , all $n \in \mathbb N$ and all $t \in [a,b]$.



This gives



$ \int_a^b|f_n(t)-f(t)|^p dt \le \int_a^b ||f_n-f||^p dt =(b-a)||f_n-f||^p$.



Therefore



$||f_n-f||_p \le (b-a)^{1/p}||f_n-f||$.




Conclusion: $||f_n-f|| \to 0$ implies $||f_n-f||_p \to 0$ .


elementary set theory - How to show equinumerosity of the powerset of $A$ and the set of functions from $A$ to ${0,1}$ without cardinal arithmetic?

How to show equinumerosity of the powerset of $A$ and the set of functions from $A$ to $\{0,1\}$ without cardinal arithmetic?



Not homework, practice exercise.
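One standard route (a hint, since no answer was posted): send each subset to its indicator function,
$$\Phi:\mathcal{P}(A)\to \{0,1\}^A,\qquad \Phi(S)=\chi_S,\qquad \chi_S(a)=\begin{cases}1 & a\in S\\ 0 & a\notin S,\end{cases}$$
with inverse $g\mapsto g^{-1}(\{1\})$; verifying that the two maps are mutually inverse shows $\Phi$ is a bijection, with no cardinal arithmetic involved.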

integration - Closed form for $ int_0^infty {frac{{{x^n}}}{{1 + {x^m}}}dx }$



I've been looking at




$$\int\limits_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$$



It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example:



$$\displaystyle \int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^3}}}dx = } \frac{\pi }{3}\frac{1}{{\sin \frac{\pi }{3}}} = \frac{{2\pi }}{{3\sqrt 3 }}$$



$$\int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^4}}}dx = } \frac{\pi }{4}$$



$$\int\limits_0^\infty {\frac{{{x^2}}}{{1 + {x^5}}}dx = } \frac{\pi }{5}\frac{1}{{\sin \frac{{2\pi }}{5}}}$$




So I guess there must be a closed form - the use of $\Gamma(x)\Gamma(1-x)$ first comes to my mind because of the $\dfrac{{\pi x}}{{\sin \pi x}}$ appearing. Note that the arguments are always the ratio of the exponents, like $\dfrac{1}{4}$, $\dfrac{1}{3}$ and $\dfrac{2}{5}$. Is there any way of finding it? I'll work on it and update with any ideas.






UPDATE:



The integral reduces to finding



$$\int\limits_{ - \infty }^\infty {\frac{{{e^{a t}}}}{{{e^t} + 1}}dt} $$




With $a =\dfrac{n+1}{m}$ which converges only if



$$0 < a < 1$$



Using series I find the solution is




$$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} $$





Can this be put in terms of the digamma function or something of the sort?


Answer



I would like to make a supplementary calculation on BR's answer.



Let us first assume that $0 < \mu < \nu$ so that the integral
$$ \int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx $$
converges absolutely. By the substitution $x = \tan^{2/\nu} \theta$, we have
$$ \frac{dx}{1+x^{\nu}} = \frac{2}{\nu} \tan^{(2/\nu)-1} \theta \; d\theta. $$
Thus
$$ \begin{align*}

\int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx
& = \int_{0}^{\frac{\pi}{2}} \frac{2}{\nu} \tan^{\frac{2\mu}{\nu}-1} \theta \; d\theta \\
& = \frac{1}{\nu} \beta \left( \frac{\mu}{\nu}, 1 - \frac{\mu}{\nu} \right) \\
& = \frac{1}{\nu} \Gamma \left( \frac{\mu}{\nu} \right) \Gamma \left( 1 - \frac{\mu}{\nu} \right) \\
& = \frac{\pi}{\nu} \csc \left( \frac{\pi \mu}{\nu} \right),
\end{align*} $$
where the last equality follows from Euler's reflection formula.


Linear independence in construction of Jordan canonical form basis for nilpotent endomorphisms



I am proving by construction that there is some basis in which a nilpotent endomorphism has a Jordan canonical form with only ones on the supradiagonal. I'll put down what I have already and stop where my problem is, so that you can think about it the way I am.



What I want to prove is:




Theorem



Let $T\in\mathcal{L}(V)$ be an $r$-nilpotent endomorphism, where $V(\mathbb{C})$ is a finite-dimensional vector space. There is some basis of $V$ in which the matrix representation of $T$ is a block diagonal matrix, and the blocks have the form
\begin{align*}
\left( \begin{array}{cccccc}
0 &1 &0 &0 &\dots &0\\
0 &0 &1 &0 &\dots &0\\
0 &0 &0 &1 &\dots &0\\
\vdots &\vdots &\vdots &\vdots &\ddots &\vdots\\

0 &0 &0 &0 &\dots &1\\
0 &0 &0 &0 &\dots &0
\end{array}
\right)
\end{align*}
that is, blocks that have null entries except for the ones-filled supradiagonal.



Proof



First, if $T$ is an $r$-nilpotent endomorphism then $T^{r}=0_{\mathcal{L}(V)}$. Since $U_{1}=T(V)\subseteq V=id(V)=T^{0}(V)=U_{0}$, we get $U_{2}=T^{2}(V)=T(T(V))\subseteq T(V)=U_{1}$, and if we suppose that $U_{k}=T^{k}(V)\subseteq T^{k-1}(V)=U_{k-1}$ we conclude that $U_{k+1}=T^{k+1}(V)=T(T^{k}(V))\subseteq T(T^{k-1}(V))=T^{k}(V)=U_{k}$. Thus we have proven by induction over $k$ that $U_{k}=T^{k}(V)\subseteq T^{k-1}(V)=U_{k-1}$, and since $T^{r}=0_{\mathcal{L}(V)}$ and $U_{k}=T(U_{k-1})$, we have $\{0_{V}\}=U_{r}\subseteq U_{r-1}\subseteq\dots\subseteq U_{1}\subseteq U_{0}=V$; we have also shown that the $U_{k}$ are $T$-invariant subspaces and that $U_{r-1}\subseteq\ker T$.




In the same manner, let $W_{0}=\ker T^{0}=\ker id=\{0_{V}\}$ and $W_{k}=\ker T^{k}$. It is easy to see that $T(W_{0})=T(\{0_{V}\})=\{0_{V}\}$, therefore $W_{0}\subseteq W_{1}$; moreover $T^{2}(W_{1})=T(T(W_{1}))=T(\{0_{V}\})=\{0_{V}\}$, therefore $W_{1}\subseteq W_{2}$. Then, supposing $W_{k-1}\subseteq W_{k}$, we see that $T^{k+1}(W_{k})=T(T^{k}(W_{k}))=T(\{0_{V}\})=\{0_{V}\}$, and therefore $W_{k}\subseteq W_{k+1}$. We conclude that we have the chain of nested spaces $\{0_{V}\}=W_{0}\subseteq W_{1}\subseteq\dots\subseteq W_{r-1}\subseteq W_{r}=V$, since $W_{r}=\ker T^{r}=\ker 0_{\mathcal{L}(V)}=V$.



Since we have a chain of nested spaces in which the largest is $V$ itself, if we choose a basis for the smallest non-trivial of them (Supposing $U_{r}\neq U_{r-1}$), that is $U_{r-1}$, we can climb the chain constructing bases for the larger spaces by completing the basis we already have, which is always possible.



Now, since $U_{r-1}\subseteq\ker T$, every vector in $U_{r-1}$ is an eigenvector for the eigenvalue $0$, so every basis we choose for $U_{r-1}$ is a basis of eigenvectors. To complete this basis $\{u_{i}^{(r-1)}\}$ to a basis of $U_{r-2}$ (Supposing $U_{r-1}\neq U_{r-2}$) we can remember that $T(U_{r-2})=U_{r-1}$, so every vector in $U_{r-1}$ has a preimage in $U_{r-2}$. Then there are some $u_{i}^{(r-2)}\in U_{r-2}$ (maybe many for each $i$ since we don't know $T$ is injective) such that $T(u_{i}^{(r-2)})=u_{i}^{(r-1)}$. Note that for fixed $i$ it is not possible that $u_{i}^{(r-2)}=u_{i}^{(r-1)}$, since $u_{i}^{(r-1)}$ is an eigenvector associated to the eigenvalue $0$, as is every vector in $U_{r-1}$, being linear combinations of the basis vectors. Since these preimages are not unique, we choose one and only one for every $i$. It only remains to see that they are linearly independent: take a null linear combination $\alpha_{i}u_{i}^{(r-1)}+\beta_{i}u_{i}^{(r-2)}=0_{V}$ and apply $T$ on both sides: $\alpha_{i}T(u_{i}^{(r-1)})+\beta_{i}T(u_{i}^{(r-2)})=\sum_{i}\alpha_{i}0_{V}+\beta_{i}u_{i}^{(r-1)}=\beta_{i}u_{i}^{(r-1)}=0_{V}$. Since the last sum is a null linear combination of linearly independent vectors (they form a basis for $U_{r-1}$), it implies that $\beta_{i}=0$ for every $i$. The initial expression then takes the form $\alpha_{i}u_{i}^{(r-1)}=0_{V}$, and $\alpha_{i}=0$ for every $i$ by the same argument. We conclude that they are linearly independent.



At this moment we have $\{u_{i}^{(r-1)},u_{i}^{(r-2)}\}$, a linearly independent set of vectors in $U_{r-2}$. If $\dim U_{r-2}=2\dim U_{r-1}$, then we have finished the construction; if not (i.e. $\dim U_{r-2}\geq 2\dim U_{r-1}+1$), then we have to choose $u_{j}^{(r-2)}$ with $j=\dim U_{r-1}+1,\dots, \dim U_{r-2}$ that complete the set to a basis of $U_{r-2}$. Again, as in the construction of the $u_{i}^{(r-2)}$, we remember that $T(U_{r-2})=U_{r-1}$. Therefore, every vector we choose will have, under $T$, the form $T(v_{j}^{(r-2)})=\mu_{ji}u_{i}^{(r-1)}$. But since we want them to be linearly independent from the $u_{i}^{(r-1)}$ and $u_{i}^{(r-2)}$, we can choose them from $\ker T$: that is, we can set $u_{j}^{(r-2)}=v_{j}^{(r-2)}-\mu_{ji}u_{i}^{(r-2)}$, and applying $T$ we obtain $T(u_{j}^{(r-2)})=T(v_{j}^{(r-2)})-\mu_{ji}T(u_{i}^{(r-2)})=\mu_{ji}u_{i}^{(r-1)}-\mu_{ji}u_{i}^{(r-1)}=0_{V}$. Then we only need to see that they are linearly independent of the others. Take, again, a null linear combination $\alpha_{i}u_{i}^{(r-1)}+\beta_{i}u_{i}^{(r-2)}+\gamma_{j}u_{j}^{(r-2)}=0_{V}$. First we apply $T$ on both sides: $\alpha_{i}T(u_{i}^{(r-1)})+\beta_{i}T(u_{i}^{(r-2)})+\gamma_{j}T(u_{j}^{(r-2)})=\sum_{i}\alpha_{i}0_{V}+\beta_{i}u_{i}^{(r-1)}+\sum_{j}\gamma_{j}0_{V}=\beta_{i}u_{i}^{(r-1)}=0_{V}$, and therefore $\beta_{i}=0$ for every $i$ since $\{u_{i}^{(r-1)}\}$ is a basis. The initial expression then takes the form $\alpha_{i}u_{i}^{(r-1)}+\gamma_{j}u_{j}^{(r-2)}=0_{V}$. Note that we have two sets of vectors that are in $\ker T$...



This is the point where I don't see a way to say that the $\alpha_{i},\gamma_{i}=0$ for every $i$ in order to say that they are linearly independent. Any kind of help (hints more than everything else) will be good.



Answer



Mostly, you are both on the right track and everything you say is correct, though there are a few spots where a bit more thought could let you be sharper. Let me discuss them first.



You note along the way that "(Supposing $U_r\neq U_{r-1}$)". In fact, we know that for each $i$, $0\leq i\lt r$, $U_{i+1}\neq U_i$. The reason is that if we have $U_{i+1}=U_i$, then that means that $U_{i+2}=T(U_{i+1}) = T(U_i) = U_{i+1}$, and so we have reached a stabilizing point; since we know that the sequence must end with the trivial subspace, that would necessarily imply that $U_i=\{\mathbf{0}\}$. But we are assuming that the degree of nilpotence of $T$ is $r$, so that $U_i\neq\{\mathbf{0}\}$ for any $i\lt r$; hence $U_{i+1}\neq U_i$ is a certainty, not an assumption.



You also comment parenthetically: "(maybe many for each $i$ since we don't know $T$ is injective)". Actually, we know that $T$ is definitely not injective, because $T$ is nilpotent. The only way $T$ could be both nilpotent and injective is if $\mathbf{V}$ is zero dimensional. And since every vector of $U_{r-1}$ is mapped to $0$, it is certainly the case that the restriction of $T$ to $U_i$ is not injective for any $i$, $0\leq i\lt r$.



As to what you are doing: suppose $u_1,\ldots,u_t$ are the basis for $U_{r-1}$, and $v_1,\ldots,v_t$ are vectors in $U_{r-2}$ such that $T(v_i) = u_i$. We want to show that $\{u_1,\ldots,u_t,v_1,\ldots,v_t\}$ is linearly independent; you can do that the way you did before: take a linear combination equal to $\mathbf{0}$,
$$\alpha_1u_1+\cdots+\alpha_tu_t + \beta_1v_1+\cdots+\beta_t v_t = \mathbf{0}.$$
Apply $T$ to get $\beta_1u_1+\cdots + \beta_tu_t=\mathbf{0}$ and conclude the $\beta_j$ are zero; and then use the fact that $u_1,\ldots,u_t$ is linearly independent to conclude that $\alpha_1=\cdots=\alpha_t=0$.




Now, this may not be a basis for $U_{r-2}$, since there may be elements of $\mathrm{ker}(T)\cap U_{r-2}$ that are not in $U_{r-1}$.



The key is to choose what is missing so that they are linearly independent from $u_1,\ldots,u_t$. How can we do that? Note that $U_{r-1}\subseteq \mathrm{ker}(T)$, so in fact $U_{r-1}\subseteq \mathrm{ker}(T)\cap U_{r-2}$.
So we can complete $\{u_1,\ldots,u_t\}$ to a basis for $\mathrm{ker}(T)\cap U_{r-2}$ with some vectors $z_1,\ldots z_s$.



The question is now is how to show that $\{u_1,\ldots,u_t,v_1,\ldots,v_t,z_1,\ldots,z_s\}$ are linearly independent. The answeer is: the same way. Take a linear combination equal to $0$:
$$\alpha_1u_1+\cdots +\alpha_tu_t + \beta_1v_1+\cdots +\beta_tv_t + \gamma_1z_1+\cdots+\gamma_s z_s = \mathbf{0}.$$
Apply $T$ to conclude that the $\beta_i$ are zero; then use the fact that $\{u_1,\ldots,u_t,z_1,\ldots,z_s\}$ is a basis for $\mathrm{ker}(T)\cap U_{r-2}$ to conclude that the $\alpha_i$ and the $\gamma_j$ are all zero as well.




And now you have a basis for $U_{r-2}$. Why? Because by the Rank-Nullity
Theorem applied to the restriction of $T$ to $U_{r-2}$, we know that
$$\dim(U_{r-2}) = \dim(T(U_{r-2})) + \dim(\mathrm{ker}(T)\cap U_{r-2}).$$
But $T(U_{r-2}) = U_{r-1}$, so $\dim(T(U_{r-2})) = \dim(U_{r-1}) = t$; and $\dim(\mathrm{ker}(T)\cap U_{r-2}) = t+s$, since $\{u_1,\ldots,u_t,z_1,\ldots,z_s\}$ is a basis for this subspace. Hence, $\dim(U_{r-2}) = t+t+s=2t+s$, which is exactly the number of linearly independent vectors you have.



You want to use the same idea "one step up": you will have that $u_1,\ldots,u_t,z_1,\ldots,z_s$ is a linearly independent subset of $U_{r-3}\cap\mathrm{ker}(T)$, so you will complete it to a basis of that intersection; after adding preimages to $z_1,\ldots,z_s$ and $v_1,\ldots,v_t$, you will get a "nice" basis for $U_{r-3}$. And so on.


Wednesday 25 December 2019

trigonometry - Modulus of tangent of complex number



I need to find real, imaginary parts of $\tan(x+yi)$ and the modulus of it. I have:
$$\operatorname{Re}(\tan(x+yi))={\frac{\sin2x}{\cos2x+\cosh2y}}$$
and
$$\operatorname{Im}(\tan(x+yi))={\frac{\sinh2y}{\cos2x+\cosh2y}}$$



I know that $|Z|={\sqrt{\operatorname{Re}^2+\operatorname{Im}^2}}$. But when I calculate with the results I've got, I don't get the actual answer on the book, which is $${\sqrt{{\frac{\cosh2y-\cos2x}{\cosh2y+\cos2x}}}}$$


Answer




You can observe that
\begin{align}
\sin^22x+\sinh^22y
&=1-\cos^22x+\cosh^22y-1\\
&=\cosh^22y-\cos^22x\\
&=(\cosh2y+\cos2x)(\cosh2y-\cos2x)
\end{align}
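so that, completing the computation,
$$|\tan(x+yi)|^2=\frac{\sin^22x+\sinh^22y}{(\cos2x+\cosh2y)^2}=\frac{\cosh2y-\cos2x}{\cosh2y+\cos2x},$$
which is the book's answer after taking square roots.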


linear algebra - Matrix with zeros on diagonal and ones in other places is invertible



Hi guys, I am working on this and I am trying to prove to myself that $n \times n$ matrices with zeros on the diagonal and ones everywhere else are invertible.



I ran some cases, looked at the determinant, and came to the conclusion that the determinant is given by $\det(A)=(-1)^{n+1}(n-1)$. To prove this I use induction.




For $n=2$ we have $A=\begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}$, $\det(A)=-1$, and my formula gives the same thing: $(-1)^{3}(2-1)=-1$.



Now assume it holds for the $n \times n$ case, i.e. $\det(A)=(-1)^{n+1}(n-1)$.



Now I need to show it for a matrix $B$ of size $(n+1) \times (n+1)$. I am not sure how; I was thinking of taking the determinants of the $n \times n$ minors, but maybe someone can help me. Also, is there an easier way to see this is invertible other than the determinant? I am curious.


Answer




This is easy to calculate by row reduction:



Add all rows to first:
$$\det(A) =\det \begin{bmatrix}
0 & 1 & 1 &...&1 \\
1 & 0 & 1 &...&1 \\
1 & 1 & 0 &...&1 \\
... & ... & ... &...&... \\
1 & 1 & 1 &...&0 \\
\end{bmatrix}=\det \begin{bmatrix}

n-1 & n-1 & n-1 &...&n-1 \\
1 & 0 & 1 &...&1 \\
1 & 1 & 0 &...&1 \\
... & ... & ... &...&... \\
1 & 1 & 1 &...&0 \\
\end{bmatrix} \\
=(n-1)\det \begin{bmatrix}
1 & 1 & 1 &...&1 \\
1 & 0 & 1 &...&1 \\
1 & 1 & 0 &...&1 \\

... & ... & ... &...&... \\
1 & 1 & 1 &...&0 \\
\end{bmatrix}=(n-1)\det \begin{bmatrix}
1 & 1 & 1 &...&1 \\
0 & -1 & 0 &...&0 \\
0 & 0 & -1 &...&0 \\
... & ... & ... &...&... \\
0 & 0 & 0 &...&-1 \\
\end{bmatrix}$$




where in the last row operation I subtracted the first row from each other row.



This shows
$$\det(A)=(n-1)(-1)^{n-1}$$
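As for an easier way to see invertibility without the determinant (a note, not part of the original answer): $A=J-I$, where $J$ is the all-ones matrix. $J$ has eigenvalues $n$ (once) and $0$ ($n-1$ times), so $A$ has eigenvalues $n-1$ (once) and $-1$ ($n-1$ times); for $n\ge 2$ none of these is zero, so $A$ is invertible, and the product of the eigenvalues recovers $\det(A)=(n-1)(-1)^{n-1}$.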


elementary number theory - Congruence relationships.

I need to prove (or disprove but I don't think that's the case) that:




if $ab \equiv 0$ (mod $n$), then $ a\equiv 0$ (mod $n$) or $b\equiv0$ (mod $n$)





I know that $ab\equiv 0$ (mod $n$) $\Longleftrightarrow n|ab$, so if that's true then $n$ must divide either $a$ or $b$, but I don't know how to prove it.



Any assistance is much appreciated.
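A quick sanity check before hunting for a proof (a note, since no answer was posted): the statement is false for composite $n$. For example, with $n=6$, $a=2$, $b=3$ we have $ab=6\equiv 0 \pmod 6$, yet neither $2$ nor $3$ is $\equiv 0 \pmod 6$. The statement does hold when $n$ is prime, by Euclid's lemma.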

calculus - What would $ab$ be when $f(x)$ has a limit at $+infty$



Reviewing Calculus, I am facing the problem:




if



$$f(x)= \begin{cases} \sqrt[3]{x^2-8x^3}+ax+b,& \text{if } x\in\mathbb Q\\\\x\sin\big(\frac{1}{x}\big),& x\in \mathbb R-\mathbb Q \end{cases}$$
has a limit at $ +\infty$, what would $ab$ be?





I doubted whether I could treat this function like other piecewise functions with some known domains (like $-7<x$), and here is my question.




Let the functions $f_1(x)$ and $f_2(x)$ have limits in $\mathbb R$ as $x\to +\infty$. Then the function:



$$f(x)= \begin{cases} f_1(x),& x\in\mathbb Q\\\\f_2(x),& x\in \mathbb R-\mathbb Q \end{cases}$$
has a limit at $+\infty$ if $\lim_{x\to +\infty}f_1(x)=\lim_{x\to +\infty}f_2(x)$





May I ask someone to explain this hint? Thanks.


Answer



It's easy to compute that



$$\lim_{x\to\infty} x\sin \frac 1x = \lim_{x\to\infty} \frac{\sin \frac1x}{\frac 1x} = \lim_{t\to 0^+} \frac{\sin t}t = 1. \tag{1}$$



So for $\lim_{x\to \infty} f(x)$ to exist we must have that



$$\lim_{x\to\infty} \sqrt[3]{-8x^3+x^2} + ax + b = 1. \tag{2} $$




You'll see that you'll have to pick $a$ such that the limit in $(2)$ even exists and $b$ such that it has the right value. Think about what happens if the limit in $(2)$ exists but doesn't equal $1$. Can you see why $\lim_{x\to\infty}f(x)$ doesn't exist then?
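One way to finish, spelling the hint out: for the limit in $(2)$ to exist at all we need $a=2$, since $\sqrt[3]{x^2-8x^3}\sim -2x$ as $x\to\infty$; then, using $u+v=\frac{u^3+v^3}{u^2-uv+v^2}$ with $u=\sqrt[3]{x^2-8x^3}$ and $v=2x$,
$$\sqrt[3]{x^2-8x^3}+2x=\frac{x^2}{u^2-2xu+4x^2}\to\frac{1}{12},$$
so $b=1-\frac{1}{12}=\frac{11}{12}$ and $ab=\frac{11}{6}$.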


Tuesday 24 December 2019

Elementary question on modular arithmetic



I know this is a very simple and dumb question; I just cannot come to understand it. The problem is:




Why and how does this happen in mathematics?



$$-5 \pmod 4 = 3$$



I know how to get this for positive numbers, but how does it work for negative ones.



I need the explanation of what happens in the background when solving this. Is it the distance from $0$ to $4$?


Answer



Since you seem to be using "mod" as a binary operator rather than as the context for a congruence relation, let's define "mod" precisely: assuming $b > 0$,
$$ a \bmod b = a - b\lfloor a/b \rfloor$$




That is, $a \bmod b$ denotes the distance to $a$, from the largest multiple of $b$ that is not greater than $a$. If you imagine the "number line" with the multiples of $b$ all marked out, then $a \bmod b$ is the distance to the point $a$ from the closest marked point on its left.



In your particular case, of $-5 \bmod 4$, note that the list of all integer multiples of $4$ is: $$\dots, -20, -16, -12, -8, -4, 0, 4, 8, 12, 16, 20, 24, \dots$$
In this list, the largest number (multiple of $4$) that is to the left of $-5$ is $-8$. And the distance from $-8$ to $-5$ is $3$; that is why we say that $-5 \bmod 4 = 3$.
(This is exactly the same way we would calculate $5 \bmod 4$: in the list, the largest number that is to the left of $5$ is $4$, and the distance from $4$ to $5$ is $1$, so we say $5 \bmod 4 = 1$.)
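This floor-based convention is the one implemented by languages whose remainder takes the sign of the divisor; a quick illustration in Python:

print(-5 % 4)                 # 3: Python's % follows the sign of the divisor
print(-5 - 4 * (-5 // 4))     # 3: same as a - b*floor(a/b)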


integration - On $int_0^1left(int_0^inftyfrac{operatorname{gd}(x+y)}{exp(x+y)}dxright)dy$, being $operatorname{gd}(u)$ the Gudermannian function



While I was playing with the Wolfram Alpha online calculator, to create double integrals involving negative exponentials and the so-called Gudermannian function, denoted in this post as $\operatorname{gd}(u)$, I wondered whether it should be possible to get the closed form of $$\int_0^1\left(\int_0^\infty\frac{\operatorname{gd}(x+y)}{e^{x+y}}dx\right)dy.\tag{1}$$
I believe that $(1)$ doesn't have a very nice closed form (I was trying to define integrals involving these functions with a nice closed form).




Question. Can you justify/calculate the closed-form of $(1)$? Many thanks.




Answer



I used Wolfram Cloud Sandbox



In[1] := Integrate[Integrate[Gudermannian[x+y]/Exp[x+y],{x,0,Infinity}],{y,0,1}]//Simplify//InputForm
Out[1]//InputForm= 1 - Pi^2/24 - Gudermannian[1]/E + Log[2/(1 + E^2)] - PolyLog[2, -E^2]/2
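For readers without Mathematica, the closed form can be checked numerically with mpmath (a sketch; $\operatorname{gd}$ is implemented here via the identity $\operatorname{gd}(u)=2\arctan(\tanh(u/2))$):

import mpmath as mp

gd = lambda u: 2 * mp.atan(mp.tanh(u / 2))   # Gudermannian function
numeric = mp.quad(lambda y: mp.quad(lambda x: gd(x + y) * mp.exp(-(x + y)),
                                    [0, mp.inf]), [0, 1])
closed = (1 - mp.pi**2 / 24 - gd(1) / mp.e + mp.log(2 / (1 + mp.e**2))
          - mp.polylog(2, -mp.e**2) / 2)
print(numeric, closed)   # both approximately 0.59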

Monday 23 December 2019

sequences and series - prove $\lim_{n \rightarrow \infty} \frac{b^n}{n^k}=\infty$











I have this sequence, with $b>1$ and $k$ a natural number, which diverges:
$$\lim_{n \rightarrow \infty} \frac{b^n}{n^k}=\infty$$
I need to prove this with what I have learnt so far from my textbook; my simple step is this:



Since $n^2\leq2^n$ for $n>3$, I said $b^n\geq n^k$, so it diverges. Is that right?



I am asking here not just to get the right answer, but to learn more wonderful steps and properties.


Answer




$$\lim_{n \rightarrow \infty} \frac{b^n}{n^k}=\infty$$



You can use the root test, too: $$\lim_{ n\to \infty}\sqrt[\large n]{\frac{b^n}{n^k}} = b>1$$



Therefore, the sequence diverges to $\infty$.






The root test takes the $\lim$ of the $n$-th root of the term: $$\lim_{n \to \infty} \sqrt[\large n]{|a_n|} = \alpha.$$




If $\alpha < 1$ the sum/limit converges.



If $\alpha > 1$ the sum/limit diverges.



If $\alpha = 1$, the root test is inconclusive.


Sum of series: $1^k+2^k+3^k+...+n^k =?$

Is there any better way to compute this sum than brute force?
$1^k+2^k+3^k+\dots+n^k=$ formula?
Here $k$ and $n$ are natural numbers.
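There is a classical closed form, Faulhaber's formula: the sum is a polynomial of degree $k+1$ in $n$ whose coefficients involve the Bernoulli numbers. A computer algebra system produces it directly; a small sketch with sympy:

from sympy import symbols, summation, factor

n, i = symbols('n i', integer=True, positive=True)
for k in range(1, 5):
    # summation returns the closed-form polynomial in n
    print(k, factor(summation(i**k, (i, 1, n))))
# k=1: n*(n+1)/2,  k=2: n*(n+1)*(2*n+1)/6,  k=3: n**2*(n+1)**2/4, ...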

real analysis - How do people sense that Cauchy-Schwarz inequality will be used in this question?



I was reading this question on this website.




Let $\{ a_n\} $ be a sequence of non-negative real numbers such that
the series $ \sum_{n=1}^{\infty}a_n $ is convergent.




If $p$ is a real number such that the series $\sum{\frac{\sqrt{a_n}}{n^p}}$ diverges, then



(A) $p$ must be strictly less than $\frac{1}{2} $



(B) $p$ must be strictly less than or equal to $\frac{1}{2} $



(C) $p$ must be strictly less than or equal to 1 but can be greater
than $\frac{1}{2} $




(D) $p$ must be strictly less than 1 but can be greater than or equal
to $\frac{1}{2} $.




I spent a lot of time thinking about the problem, but after a lot of mental struggle I gave up. I looked at the answers posted, and it was a simple application of the Cauchy-Schwarz inequality.




Being a bit more explicit, the Cauchy-Schwarz inequality and the
assumptions imply




$$\infty = \sum_{n=1}^\infty \frac{a_n^{1/2}}{n^p} \leq \left (
\sum_{n=1}^\infty a_n \right )^{1/2} \left ( \sum_{n=1}^\infty
\frac{1}{n^{2p}} \right )^{1/2}.$$




In short, my question is: what are some red flags to notice which signal that the Cauchy-Schwarz inequality can be used to solve the problem?



I know the theorem and proof as well.



P.S. Sorry for my English. Feel free to edit.



Answer



You've got an arbitrary convergence series $\sum a_n$, and you're trying to figure out the convergence of $\sum \frac{\sqrt{a_n}}{n^p}$. Two things strike me about that series.



Firstly, the terms are related to $\sqrt{a_n}$, rather than just $a_n$. That will change convergence, in particular, it will make convergence slower (or make convergence not happen at all). If we want to relate this series back to $\sum a_n$, it'd be really handy if there was some inequality that naturally involved looking at the sum of squared terms, such as the Cauchy-Schwarz inequality.



Secondly, we're taking two simple series, and multiplying them term-by-term, sort of like an infinite dot product. Working with that is hard. If there was an inequality that could separate the term-by-term product of series, we might have something to work with. The Cauchy-Schwarz inequality might help there.
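(To finish the thought: the Cauchy-Schwarz bound in the question shows the series converges whenever $2p>1$, so divergence forces $p\le\frac12$; and $p=\frac12$ can actually occur, e.g. for $a_n=\frac{1}{n\log^2 n}$, where $\sum\frac{\sqrt{a_n}}{n^{1/2}}=\sum\frac{1}{n\log n}$ diverges. Hence option (B).)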


Sunday 22 December 2019

calculus - does the infinite series $\sum^{\infty}_{n=1} (-1)^n \frac {\log(n)}n$ converge?



does the infinite series $\sum^{\infty}_{n=1} (-1)^n \frac {\log(n)}n$ converge?



For this one I tried absolute convergence: I applied the integral test, but I realized that $\log^2(x)/2$ does not converge as $x\to\infty$, so that won't work. Any help? Also, I know the limit of $a_n$ as $n$ approaches $\infty$ is $0$; however, I am not sure whether the sequence is non-increasing.


Answer



The sequence $\frac{\log n}{n}$ is decreasing for $n>2$ because the function $x\mapsto{\log x\over x}$ is decreasing in $(e,\infty)$:
$$f'(x)={1-\log x\over x^2}.$$
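Since the terms $\frac{\log n}{n}$ thus decrease to $0$, the alternating series (Leibniz) test applies and the series converges, though, as the failed absolute-convergence attempt shows, not absolutely.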



Rational approximation for irrational number



Sometimes I go into some subject in class (high school level) and I have to explain to my students how to approximate an irrational number by a sequence of rationals. The problem is that I should explain it at a high school level. What I usually do is take $\pi$ as an example and take the sequence:
\begin{align}
& 3,1=31/10\\

& 3,14=314/100 \\
& 3,141=3141/1000 \\
& 3,1415=31415/10000\\
&\vdots
\end{align}
I think that approach is intuitive and the students feel satisfied with it. I was trying to figure out another way to explain the rational approximation but I couldn't find any. My question is: does anyone know another way to explain that approximation at a high school level?



Thanks in advance.


Answer



Newton's method for square roots can be a good candidate due to its simplicity.




For example, for approximating $\sqrt2$ in a few iterations, you can ask your students where they think $\sqrt2$ is located. Between $1.41$ and $1.42$; then you can start with $x_0=1.41$:



$g(x) = x - \frac{(x^2 - 2)}{2x}$



$g(1.41) = 1.41 - \frac{(1.41^2 - 2)}{2\cdot1.41} = 1.4142198581...$



$g(1.4142198581) = 1.4142135623 \approx \sqrt2$
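The iteration is easy to demonstrate in a few lines (a minimal sketch in Python):

def newton_sqrt(a, x0, steps=4):
    """Approximate sqrt(a) by iterating g(x) = x - (x^2 - a) / (2x)."""
    x = x0
    for _ in range(steps):
        x = x - (x * x - a) / (2 * x)
        print(x)
    return x

newton_sqrt(2, 1.41)   # 1.4142198581..., then 1.4142135623..., converging quadratically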


elementary number theory - If $p not = 5$ is an odd prime, prove that either $p^2+1$ or $p^2-1$ is divisible by $10$?



I was able to find a solution to this problem, but only by using a couple of extra tools that appear later in the book$^{1}$. So far the book has only covered basic divisibility, $\gcd$, and the fundamental theorem of arithmetic; it did not cover modular arithmetic, and although we did cover the division algorithm, we did not cover divisibility rules (i.e. if a number ends in $5$ or $0$, then it is divisible by $5$). Is there any way of proving this with only the above tools? (I will point out what I used from future chapters in my solution.)




My Solution



Suppose $10 \nmid p^2-1 = (p+1)(p-1)$. Then $5 \nmid (p+1)$ and $5 \nmid (p-1)$.



Odd primes can only have last digits of $1, 3, 7, 9$ (I used the divisibility rule that a number ending in $0$ or $5$ is divisible by $5$, which is in the next chapter). Since $5 \nmid (p+1)$ and $5 \nmid (p-1)$, the last digit of $p$ is either $3$ or $7$. If we write $p$ as $10n+3$ or $10n+7$, then square and add $1$, we get a multiple of $10$. (The fact that any integer with a last digit of $k$ can be written as $10n+k$ is also something from a future chapter)






Elementary Number Theory by David Burton, 6th ed., Section 3.1, #10


Answer




If $p\neq 5$ is an odd prime, its square $p^2$ is also odd, thus $p^2-1$ and $p^2+1$ are both even.



Now, since an odd prime $p\neq 5$ must (as you mention in your post) be: $$p\equiv1,3,7 \textrm{ or }9 \mod 10$$ its square will be
$$
p^2\equiv1,-1,-1\textrm{ or }1 \mod 10
$$
which answers your question.


Saturday 21 December 2019

How to solve this store-prize probability problem?



The probability that a store will have exactly k customers on any given day is
$$P_K(k)=\frac{1}{5}\left(\frac{4}{5}\right)^k, \quad k=0,1,2,3....$$
Every day, out of all the customers who purchased something from the store that day, one
is randomly chosen to win a prize. Every customer that day has an equal chance of being
chosen. Assume that no customer visits this store more than once a day, and further assume
that the store can handle an infinite number of customers.





  • (c). Given a customer who has won a prize, what is the probability that he was in the store
    on a day when it had a total of exactly $k$ customers?


  • (d).Since the store owner’s birthday is in July, he decides to celebrate by giving two prizes each
    day in the month of July. At the end of each day, after one customer is randomly chosen to
    win the first prize, another winner is randomly chosen from the remaining $k - 1$ customers,
    and given the second prize. No one can win both prizes.
    Let $X$ denote the customer number of the first winner, and let $Y$ denote the customer
    number of the second winner.

    Determine the joint pdf $P(x,y\mid k)$, for $1 \le x, y \le k$.




--
My answer for (c) is $P=\frac{1}{5}(\frac{4}{5})^k\frac{1}{k}$; I'm not sure if it is right.
For part (d), I don't really know how to do that.


Answer



Let's call the event of winning a prize $pr$.



You're right that $P(pr \land K=k) = \frac{1}{k}\frac{1}{5}(\frac{4}{5})^k$ for $k > 0$:

first we need $k$ customers, and then there is a $\frac{1}{k}$ chance that this customer won it. Also $P(pr \mid K=k) = \frac{1}{k}$.



But the asked-for probability is $P(K=k\mid pr)$, so this reeks of Bayes' law, which comes down to (in this case, where the different $K=i$ are the mutually disjoint events):



$$P(K=k|pr) = \frac{P(pr|K=k)P(K=k)}{\sum_{i=0}^\infty P(pr|K=i)P(K=i)}$$



and the numerator is your $\frac{1}{k}\frac{1}{5}(\frac{4}{5})^k$



and the denominator is (the term for $i=0$ equals $0$: no customers, no prize)
$$ \sum_{i=1}^{\infty} \frac{1}{i} \frac{1}{5}(\frac{4}{5})^i$$.




Now it's just calculus. Remember the series $$\sum_{n\ge 1} \frac{1}{n} x^n = \sum_{n \ge 0} \left(\int x^{n}\, dx\right) =\int \left(\sum_{n \ge 0} x^n\right) dx$$ etc.
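Carrying that through: $$\sum_{i=1}^{\infty} \frac{1}{i}\,\frac{1}{5}\left(\frac{4}{5}\right)^i = \frac{1}{5}\left(-\ln\left(1-\frac{4}{5}\right)\right)=\frac{\ln 5}{5},$$ so $P(K=k\mid pr)=\dfrac{(4/5)^k}{k\ln 5}$.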



As to (d), this seems simpler. $P(X=x, Y=y \mid K=k)$ is only non-zero for $k \ge 2$, $x \neq y$ and $1 \le x,y \le k$, and in that case it equals $\frac{1}{k}\frac{1}{k-1}$ (there is a $1$ in $k$ chance that $x$ will be chosen as winner $1$, and then a $1$ in $k-1$ chance that $y \neq x$ will be chosen as winner number $2$). Note that given $k$ there are $k(k-1)$ possible winner pairs, all of which have the same chance.


Friday 20 December 2019

definite integrals - Real-Analysis Methods to Evaluate $\int_0^\infty \frac{x^a}{1+x^2}\,dx$, $|a|<1$





In THIS ANSWER, I used straightforward contour integration to evaluate the integral $$\bbox[5px,border:2px solid #C0A000]{\int_0^\infty \frac{x^a}{1+x^2}\,dx=\frac{\pi}{2}\sec\left(\frac{\pi a}{2}\right)}$$for $|a|<1$.




An alternative approach is to enforce the substitution $x\to e^x$ to obtain



$$\begin{align}
\int_0^\infty \frac{x^a}{1+x^2}\,dx&=\int_{-\infty}^\infty \frac{e^{(a+1)x}}{1+e^{2x}}\,dx\\\\
&=\int_{-\infty}^0\frac{e^{(a+1)x}}{1+e^{2x}}\,dx+\int_{0}^\infty\frac{e^{(a-1)x}}{1+e^{-2x}}\,dx\\\\
&=\sum_{n=0}^\infty (-1)^n\left(\int_{-\infty}^0 e^{(2n+1+a)x}\,dx+\int_{0}^\infty e^{-(2n+1-a)x}\,dx\right)\\\\
&=\sum_{n=0}^\infty (-1)^n \left(\frac{1}{2n+1+a}+\frac{1}{2n+1-a}\right)\\\\

&=2\sum_{n=0}^\infty (-1)^n\left(\frac{2n+1}{(2n+1)^2-a^2}\right) \tag 1\\\\
&=\frac{\pi}{2}\sec\left(\frac{\pi a}{2}\right)\tag 2
\end{align}$$



Other possible ways forward include writing the integral of interest as



$$\begin{align}
\int_0^\infty \frac{x^a}{1+x^2}\,dx&=\int_{0}^1 \frac{x^{a}+x^{-a}}{1+x^2}\,dx
\end{align}$$




and proceeding similarly, using $\frac{1}{1+x^2}=\sum_{n=0}^\infty (-1)^nx^{2n}$.




Without appealing to complex analysis, what are other approaches one can use to evaluate this very standard integral?




EDIT:




Note that we can show that $(1)$ is the partial fraction representation of $(2)$ using Fourier series analysis. I've included this development for completeness in the appendix of the solution I posted on THIS PAGE.




Answer



I'll assume $\lvert a\rvert < 1$. Letting $x = \tan \theta$, we have



$$\int_0^\infty \frac{x^a}{1 + x^2}\, dx = \int_0^{\pi/2}\tan^a\theta\, d\theta = \int_0^{\pi/2} \sin^a\theta \cos^{-a}\theta\, d\theta$$



The last integral is half the beta integral $B((a + 1)/2, (1 - a)/2)$. Thus



$$\int_0^{\pi/2}\sin^a\theta\, \cos^{-a}\theta\, d\theta = \frac{1}{2}\frac{\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right)}{\Gamma\left(\frac{a+1}{2} + \frac{1-a}{2}\right)} = \frac{1}{2}\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right)$$




By Euler reflection,



$$\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right) = \pi \csc\left[\pi\left(\frac{1+a}{2}\right)\right] = \pi \sec\left(\frac{\pi a}{2}\right)$$



and the result follows.



Edit: For a proof of Euler reflection without contour integration, start with the integral function $f(x) = \int_0^\infty u^{x-1}(1 + u)^{-1}\, du$, and show that $f$ solves the differential equation $y''y - (y')^2 = y^4$, $y(1/2) = \pi$, $y'(1/2) = 0$. The solution is $\pi \csc \pi x$. On the other hand, $f(x)$ is the beta integral $B(x,1-x)$, which is equal to $\Gamma(x)\Gamma(1-x)$. I believe this method is due to Dedekind.
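As a quick numerical sanity check of the closed form (a sketch, not part of the original answer):

import numpy as np
from scipy.integrate import quad

for a in (0.0, 0.3, 0.9):
    val, _ = quad(lambda x: x**a / (1 + x**2), 0, np.inf)
    print(a, val, np.pi / 2 / np.cos(np.pi * a / 2))   # the two columns agree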


number theory - Prime factor of $A=14^7+14^2+1$




Find a prime factor of $A=14^7+14^2+1$. Obviously without just computing it.


Answer



Hint: I've seen the 3rd cyclotomic polynomial too many times.




$$
\begin{aligned}
x^7+x^2+1&=(x^7-x^4)+(x^4+x^2+1)\\
&=x^4(x^3-1)+\frac{x^6-1}{x^2-1}\\
&=x^4(x-1)(x^2+x+1)+\frac{(x^3-1)(x^3+1)}{(x-1)(x+1)}\\
&=x^4(x-1)(x^2+x+1)+(x^2+x+1)(x^2-x+1)\\
&=(x^2+x+1)\bigl(x^5-x^4+x^2-x+1\bigr)
\end{aligned}
$$
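A quick arithmetic check of the hint (the factor $x^2+x+1$ evaluated at $x=14$ is $211$):

A = 14**7 + 14**2 + 1                      # 105413701
p = 14**2 + 14 + 1                         # 211
assert A % p == 0
assert all(p % d for d in range(2, 15))    # 211 is prime: no divisor <= sqrt(211)
print(A, "=", p, "*", A // p)              # 105413701 = 211 * 499591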


Thursday 19 December 2019

Is this proof of $x<y \iff x^n<y^n$ correct?



Claim: $x<y \iff x^n<y^n$ for all $n\in\mathbb N$. Edit: $x,y>0$




Proof:
Since that which is to be proved is biconditional we must prove both that $x<y \implies x^n<y^n$ and that $x^n<y^n \implies x<y$.

First we prove that $x<y \implies x^n<y^n$, by induction on $n$. The base case $n=1$ is the hypothesis itself.

Let us assume that the statement holds for some $n=k$. We seek to show that it is also true for $n=k+1$.



We can say that $$x^{k+1} - y^{k+1} \equiv (x-y)(x^k+x^{k-1}y +...+xy^{k-1} + y^k)$$ and since $(x-y)<0$, and $(x^k+x^{k-1}y +...+xy^{k-1} + y^k) > 0$, we see that $x^{k+1} - y^{k+1}$ must be negative. Therefore we arrive at the desired result:



$$x^{k+1} - y^{k+1}<0.$$

Thus for all $n\in \mathbb N$, $x<y \implies x^n<y^n$.

Now we must prove that $x^n<y^n \implies x<y$. By contraposition,

$$(x^n<y^n \implies x<y) \iff (y\le x \implies y^n\le x^n)$$

Since $x$ and $y$ are arbitrary numbers, we have already proven this. $$\tag*{$\blacksquare$}$$



--




The above is my attempt at the proof; I would like for someone to confirm whether it is correct or not. For the line where I state the equivalence $x^{k+1} - y^{k+1} \equiv (x-y)(x^k+x^{k-1}y +...+xy^{k-1} + y^k)$, I must make clear that I have had to use this as a given and would appreciate if someone could point out why this is obvious or how to go about proving this, almost as a lemma for the proof. My final request is that if there is any problem with my actual proofwriting, I ask that you point it out (e.g. is this a usual style, or is it unusual and hard to follow, etc).



Edit: It was pointed out that I forgot to include that $x$ and $y$ are positive, so I have included this.


Answer



Hint:



For $n=1$, $x<y$ holds by hypothesis.

Now assume that for some $n$,




$$x^n<y^n.$$

Then by the rule of multiplication of inequalities,



$$x\cdot x^n<y\cdot y^n\;,$$

so that



$$x^{n+1}<y^{n+1}.$$


Now try the contrapositive.


abstract algebra - If $G$ and $G'$ are isomorphic groups, then $|Aut(G)| = |$all isomorphisms $G \to G'|$.

I'm having trouble proving this.



If $G$ is finite (which is not given) with $|G| = N$, then $Aut(G)$ can be seen as a subgroup of a symmetric group of order $N!$, so there are at most $N!$ possible isomorphisms $G \to G'$. So intuitively I understand why $|Aut(G)| = |$all isomorphisms $G \to G'|$.



My first thought was to try to find a bijection between $Aut(G)$ and $H = \{$all isomorphisms $G \to G'\}$ by using a fixed $f: G \to G'$. But now I'm stuck.



I'm not looking for a proof, but rather some more insight/help with the thought process.

Wednesday 18 December 2019

algebra precalculus - Help creating a product scoring algorithm based on reviews?

I'm working on a review site and I need to create a scoring algorithm to give each product a score, similar to Trustpilot's TrustScore, out of 10 based on a number of factors. The factors I need to take into account are:




  • The star ratings between 1 and 5 left on the product

  • The age of the star ratings (ratings should hold less weight the older they are)

  • The type of rating (verified reviews should have higher scoring)

  • A Bayesian average should be used to prevent bias for low vote counts




I'm by no means a mathematician so I really don't know where to start with this, so if anyone could offer any guidance or examples, that would be great.



Update with answers to some questions




  1. A verified review should score twice as high as an unverified review

  2. A product with a single 5 star review on its own would have a score of 5; however, as per the last constraint, an initial average should be applied to all reviews to prevent this bias for new products. I would do this by adding the score of 7 additional reviews to the calculation with an average rating (so on the scale of 1 to 5, would be a rating of 3) so everyone essentially starts "average".

  3. For the age factor, I see the worth of a score diminishing by, say, 10% per year (would be handy for this to be pluggable, i.e. change to 15% if we found this was too slow).


  4. The score will get recalculated when a new review is left, based on that review's date (but if this could be pluggable to change to a recalculate-per-day system, that would be handy)

  5. Old verified reviews should have more worth than a new unverified review, but should probably be affected by the age sliding scale, so eventually there will come a tipping point where a new unverified review will weigh more than the old verified review because it's so old. (A code sketch implementing these rules follows below.)
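Putting the constraints above together, here is a minimal, illustrative Python sketch. All names are made up, and the decay rate, verified weight, and prior values are just the pluggable defaults described in the updates, not a definitive design:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Review:
    stars: float          # 1..5
    created: datetime
    verified: bool

def product_score(reviews, now=None, yearly_decay=0.10,
                  verified_weight=2.0, prior_count=7, prior_mean=3.0):
    # Weighted Bayesian average on the 1-5 scale, mapped to a 10-point score.
    now = now or datetime.now()
    weight_sum = float(prior_count)        # prior: 7 pseudo-reviews...
    score_sum = prior_count * prior_mean   # ...each at the 3-star average
    for r in reviews:
        years = (now - r.created).days / 365.25
        w = (1.0 - yearly_decay) ** years  # worth decays ~10% per year
        if r.verified:
            w *= verified_weight           # verified counts twice as much
        weight_sum += w
        score_sum += w * r.stars
    return 2.0 * score_sum / weight_sum    # 1-5 average -> 10-point scale

# One fresh verified 5-star review: (7*3 + 2*5)/(7+2) = 3.44 on the 1-5 scale,
# about 6.9 out of 10 -- the prior keeps a single review from dominating.
print(product_score([Review(5, datetime.now(), True)]))

Recomputing on every new review (or once per day) is then just a matter of when you call the function with a fresh "now".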

real analysis - $\int^\infty_0 \frac{\cos(x)}{\sqrt{x}}\,dx$ Evaluate using Fresnel Integrals



$\int^\infty_0 \frac{\cos(x)}{\sqrt{x}}\,dx$ Evaluate using Fresnel Integrals



(For reference the $\cos$ Fresnel integral is $\int^\infty_0 \cos(x^2)\, dx = \frac{\sqrt{2 \pi}}{4}$)




I've tried integration by parts but just ended up getting $-x\cos(x)$ for my final integration which doesn't help.



I suppose we want to some how get $\cos(u^2)$ into the integrand, but I'm stupid and can't figure out how.



Mathematica says the answer is $\frac{\sqrt{2\pi}}{2}$



Any help would be appreciated!


Answer



Using Fresnel Integrals




Substituting $x=u^2$, we get
$$
\int_0^\infty\frac{\cos(x)}{\sqrt{x}}\mathrm{d}x
=2\int_0^\infty\cos(u^2)\,\mathrm{d}u
$$
As shown in this answer,
$$
\int_0^\infty\cos(u^2)\,\mathrm{d}u=\sqrt{\frac\pi8}
$$
Therefore,

$$
\int_0^\infty\frac{\cos(x)}{\sqrt{x}}\mathrm{d}x=\sqrt{\frac\pi2}
$$






Alternate Approach



As a check, we can use contour integration to show that since $\frac{e^{iz}}{\sqrt{z}}$ has no singularities in the plane minus the negative real axis, we have
$$

\begin{align}
\int_0^\infty\frac{\cos(x)}{\sqrt{x}}\mathrm{d}x
&=\mathrm{Re}\left(\int_0^\infty\frac{e^{ix}}{\sqrt{x}}\mathrm{d}x\right)\\
&=\mathrm{Re}\left(\frac{1+i}{\sqrt2}\int_0^\infty\frac{e^{-x}}{\sqrt{x}}\mathrm{d}x\right)\\
&=\frac1{\sqrt2}\Gamma\left(\frac12\right)\\
&=\sqrt{\frac\pi2}
\end{align}
$$
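Both routes can also be cross-checked with mpmath's quadosc, which is designed for oscillatory integrals; a sketch, assuming mpmath is available (the zeros argument tells it how the oscillation of $\cos(u^2)$ is spaced):

from mpmath import mp, cos, sqrt, pi, quadosc, inf

mp.dps = 15
# After x = u^2 the integral is 2 * int_0^oo cos(u^2) du
val = 2 * quadosc(lambda u: cos(u**2), [0, inf], zeros=lambda n: sqrt(pi * n))
print(val)           # 1.25331413731550...
print(sqrt(pi / 2))  # agrees with sqrt(pi/2)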


Tuesday 17 December 2019

functional equations - Real Analysis Proofs: Additive Functions

I'm new here and could really use some help please:




Let $f$ be an additive function. So for all $x,y \in \mathbb{R}$, $f(x+y) = f(x)+f(y)$.




  1. Prove that if there are $M>0$ and $a>0$ such that if $x \in [-a,a]$, then $|f(x)|\leq M$, then $f$ has a limit at every $x\in \mathbb{R}$ and $\lim_{t\rightarrow x} f(t) = f(x)$.


  2. Prove that if $f$ has a limit at each $x\in \mathbb{R}$, then there are $M>0$ and $a>0$ such that if $x\in [-a,a]$, then $|f(x)| \leq M$.




If necessary, the proofs should involve the $\delta$-$\varepsilon$ definition of a limit.







The problem had two previous portions to it that I already know how to do. However, you can reference them to do the posted portions of the problem. Here they are:



(a) Show that for each positive integer $n$ and each real number $x$, $f(nx)=nf(x)$.



(b) Suppose $f$ is such that there are $M>0$ and $a>0$ such that if $x\in [−a,a]$, then $|f(x)|\le M$. Choose $\varepsilon > 0$. There is a positive integer $N$ such that $M/N < \varepsilon$. Show that if $|x-y|\le a/N$, then $|f(x)-f(y)|\le M/N<\varepsilon$.

Series of infinite terms where individual terms are multiplied by the order of the term



I would like to know what the equation is for a series of infinite terms which are multiplied by the order of the terms:
$$
\sum_{i=0}^{\infty} \sum_{j=0}^{\infty}(ij)
a^ib^j
$$

$a$ and $b$ are both fractions.
Thanks to the answers provided on the question " Simple approximation to a series of infinite terms ", I assume that this simplifies to:
$$
\sum_{i=0}^{\infty} ia^i \cdot \sum_{j=0}^{\infty}jb^j
$$
A simple formula similar to the answers provided in the previous question would be much appreciated.


Answer



Assuming that $a$ and $b$ are constants with an absolute value less than 1.



Looking at each summation individually, we know from the geometric (Neumann) series that




$\displaystyle \sum_{i = 0}^{\infty} a^i = \dfrac{1}{1-a} $



Differentiating the above series term by term (valid for $|a| < 1$), we have



$\displaystyle f'(a) = \sum_{i = 0}^{\infty} ia^{i-1} = \dfrac{1}{(1-a)^2} $



After multiplying by $a$ on each side we get



$af'(a) = \displaystyle \sum_{i = 0}^{\infty} ia^i = \dfrac{a}{(1-a)^2}$




We can do the same with



$bf'(b) = \displaystyle \sum_{j = 0}^{\infty} jb^j = \dfrac{b}{(1-b)^2}$



Thus



$\displaystyle \sum_{i = 0}^{\infty} \sum_{j = 0}^{\infty} (ij)a^ib^j = \dfrac{ab}{(1-a)^2(1-b)^2}$
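A quick numerical check of the closed form by truncating the double sum (the values of $a$ and $b$ here are arbitrary; any $|a|,|b|<1$ works):

def closed_form(a, b):
    return a * b / ((1 - a) ** 2 * (1 - b) ** 2)

def truncated(a, b, N=2000):
    # the double sum factors into two one-dimensional sums
    return (sum(i * a**i for i in range(N)) *
            sum(j * b**j for j in range(N)))

for a, b in [(0.5, 0.25), (0.9, 0.1)]:
    print(truncated(a, b), closed_form(a, b))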


Monday 16 December 2019

real analysis - How to construct a bijection from $(0, 1)$ to $[0, 1]$?












I wonder if I can cut the interval $(0,1)$ into three pieces:
$(0, \frac{1}{3})\cup(\frac{1}{3},\frac{2}{3})\cup(\frac{2}{3},1)$, in which I'm able to map the points $\frac{1}{3}$ and $\frac{2}{3}$ to $0$ and $1$ respectively.
Now the remaining question is how to build a bijective mapping from those three intervals to $(0,1)$.



Or maybe my method just goes in the wrong direction. Any correct approaches?


Answer



Consider the sequence
$$
\frac 12, \frac 13, \frac 14, \frac 15, \dots, \frac 1n, \dots
$$

Define $f : (0,1) \to [0,1]$ by mapping every point that is not in this sequence to itself, and then map the above sequence to the corresponding points in this one:
$$
0, 1, \frac 12, \frac 13, \frac 14, \dots.
$$
In other words, map $\frac 12$ to $0$, $\frac 13$ to $1$, and then map $\frac 1n$ to $\frac 1{n-2}$ for $n \ge 4$.
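For concreteness, the map can be written out directly; a tiny Python sketch (Fraction is only used so tests like $x = \frac 1n$ are exact, and this is illustration rather than part of the proof):

from fractions import Fraction

def f(x):
    # the bijection (0,1) -> [0,1] described above
    if x == Fraction(1, 2):
        return Fraction(0)
    if x == Fraction(1, 3):
        return Fraction(1)
    if x.numerator == 1 and x.denominator >= 4:
        return Fraction(1, x.denominator - 2)   # 1/n -> 1/(n-2)
    return x                                    # everything else is fixed

print([f(Fraction(1, n)) for n in range(2, 8)])  # [0, 1, 1/2, 1/3, 1/4, 1/5]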



The reason why you can map some set into some bigger set bijectively is precisely because they are infinite, so you must exploit this fact. If you don't, you have no chance.



Now to answer your actual question, the trick you try to use doesn't feel relevant to me; I'm not saying there is absolutely no way it could work, because I actually know there is, since your set (the union of the three intervals) and the interval $(0,1)$ have the same cardinality. The problem with your idea is that I don't think a construction will naturally come out of it. In general, to map a set bijectively into a bigger one you must be "moving things around", so I expect any fairly understandable construction involving your idea to be similar to the one I've shown you.




Hope that helps,


probability theory - Expected value of visits in a state of a discrete Markov chain





Let




  • $X=(X_n)_{n\in\mathbb N_0}$ be a Markov chain with values in an at most countable Polish space $E$ and $\mathcal E$ be the Borel $\sigma$-algebra on $E$

  • $(\operatorname P_x)_{x\in E}$ be the distributions of $X$


  • $N(y)=\sum_{n\in\mathbb N_0}1_{\left\{X_n=y\right\}}$ be the number of visits of $X$ in $y\in E$



Clearly, $$\operatorname E_x[N(y)]=\sum_{n\in\mathbb N_0}\operatorname P_x[X_n=y]\;.$$



I've read that it holds $$\operatorname E_x[N(y)]=\sum_{k\in\mathbb N}\operatorname P_x[N(y)\ge k]\;,$$ but I don't understand why this is true. Is it a typo, and is what's really meant "$=$" instead of "$\ge$"?


Answer



Here's a formula with uses in lots of places: If $N$ is a non-negative integer-valued random variable, then $E[N] =\sum_{k=1}^\infty P[N\ge k]$. To see this write
$$
\sum_{k=1}^\infty P[N\ge k]=\sum_{k=1}^\infty \sum_{j=k}^\infty P[N=j]=\sum_{j=1}^\infty \sum_{k=1}^j P[N=j]=\sum_{j=1}^\infty j\cdot P[N=j]=E[N]

$$
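A quick Monte Carlo illustration of the tail-sum formula (the Binomial(10, 0.3) choice is arbitrary; its mean is 3):

import random

samples = [sum(random.random() < 0.3 for _ in range(10)) for _ in range(200_000)]
mean = sum(samples) / len(samples)
tails = sum(sum(s >= k for s in samples) / len(samples) for k in range(1, 11))
print(mean, tails)   # both close to 3.0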


Sunday 15 December 2019

calculus - Is it possible for a continuous function to have a nowhere-continuous derivative?

This is motivated by a question I saw elsewhere that asks whether there is a real-valued function on an interval that contains no monotone subintervals.



Edit: Note that I am asking for a function whose derivative exists but is not continuous anywhere.

Saturday 14 December 2019

proof verification - Prove Exponential series from Binomial Expansion

I am trying to prove the exponential series:



$$\exp(x) = \sum_{k=0}^{\infty} \dfrac{x^k}{k!}$$




From the definition of the exponential function $$\exp(x) \stackrel{\mathrm{def}}{=} \lim_{n\to\infty} \left(1+\dfrac{x}{n}\right)^n$$



I've tried a Binomial expansion of $\exp(x)$ like :
$$\begin{split}
\exp(x) &= \lim_{n\to\infty} \sum_{k=0}^{n} \binom{n}{k}\dfrac{x^k}{n^k}\\
&= 1 + \lim_{n\to\infty} \sum_{k=1}^{n}\left(\dfrac{x^k}{k!}\times \dfrac{n!}{(n-k)!\times n^k}\right)\\
&= 1 + \lim_{n\to\infty} \sum_{k=1}^{n}\dfrac{x^k}{k!}\prod_{j=1}^{k}\left(\dfrac{n-(j-1)}{n}\right)\\
&= 1 + \lim_{n\to\infty} \sum_{k=1}^{n}\dfrac{x^k}{k!}\prod_{j=1}^{k}\left(1-\dfrac{j-1}{n}\right)\\
\end{split}$$




Here is my problem. If I apply the limit, obtain :
$$\lim_{n\to\infty} \dfrac{j-1}{n} = (j-1) \times \lim_{n\to\infty}\dfrac{1}{n} = 0$$



But $j$ approaches $k$, which approaches $n$, so $j$ approaches infinity... and the limit is indeterminate: $\infty \times 0 = \,?$



How to evaluate this indeterminate form?



Thanks in advance.

Friday 13 December 2019

find sum of n terms of series $\sum \cos(n\theta)$




Use the result $1 + z + z^2+\cdots+z^n=\frac{z^{n+1}-1}{z-1}$ to sum the series to $n$ terms



$1+\cos\theta+\cos2\theta+...$



also show that partial sums of series $\sum \cos (n\theta)$ is bounded when $0<\theta<\pi/2$



My attempt




so z can be written as $e^{i\theta}$ which means:



$1+ \cos \theta + \cos 2\theta +\cdots+\cos n\theta + i(\sin \theta+\sin 2\theta+\cdots+\sin n\theta)=\frac{z^{n+1}-1}{z-1}$



After this... I don't know.


Answer



Remember that
$$
e^{it}=\cos t+i\sin t\;\;\;\;\forall t\in\Bbb C

$$
and that
$$
\sum_{j=0}^{n}z^j=\frac{1-z^{n+1}}{1-z}\;\;\forall z\in\Bbb C,\;\;z\ne 1.
$$
Thus
$$
\sum_{j=0}^{n}\cos(j\theta)=
\sum_{j=0}^{n}\Re{(e^{ij\theta})}=
\Re\left(\sum_{j=0}^{n}(e^{ij\theta})\right)=

\Re\left(\frac{1-e^{i\theta(n+1)}}{1-e^{i\theta}}\right)
$$
The last term I wrote can be handled easily in order to be written explicitly and get the results you wanted.
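To make that last step concrete, here is a small numerical check of the real-part identity; note the closed form has modulus at most $\frac{2}{|1-e^{i\theta}|}$ independently of $n$, which is exactly the boundedness asked about for $0<\theta<\pi/2$:

import cmath, math

def partial_sum(theta, n):
    return sum(math.cos(j * theta) for j in range(n + 1))

def closed_form(theta, n):
    z = cmath.exp(1j * theta)
    return ((1 - z ** (n + 1)) / (1 - z)).real   # valid whenever z != 1

theta = 0.7
for n in (5, 50, 500):
    print(partial_sum(theta, n), closed_form(theta, n))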


Are there any bases which represent all rationals in a finite number of digits?



In base 10, 1/3 cannot be represented in a finite number of digits. Examples exist in many other bases (notably base 2, as it's relevant to computing). I'm wondering: does there exist any base in which every rational number can be represented in a finite number of digits? My intuition is that the answer is no. If so, what is a proof of this?


Answer




Your intuition is correct: for instance, for every base $b > 2$, $\frac{1}{b-1}$ is not going to have a finite representation; instead it has the repeating representation
$\frac{1}{b-1} = 0.\overline{1}_b$. E.g., $\frac{1}{9} = 0.\overline{1}$ in base 10. (For $b=2$, note that $\frac{1}{3} = 0.\overline{01}_2$ is likewise non-terminating.)


real analysis - Proving limit relations with the exponential function

Prove the following limit relations:


$$\lim_{x\to0} (1+x)^{1/x} = e$$

$$\lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^n = e^x$$


I'm not sure how to prove this as I'm not really sure what tools I have to prove it. I know by definition that the two limit relations are true, but any advice as to how to solve this specific problem/similar problems would be very appreciated!

Thursday 12 December 2019

algebra precalculus - Why does $2+2=5$ in this example?




I stumbled across the following computation proving $2+2=5$



[image: a step-by-step calculation purporting to prove $2+2=5$]



Clearly it doesn't, but where is the mistake? I expect that it's a simple one, but I'm even simpler and don't really understand the application of the binomial form to this...


Answer



The error is in the step where the derivation goes from
$$\left(4-\frac{9}{2}\right)^2 = \left(5-\frac{9}{2}\right)^2$$

to
$$\left(4-\frac{9}{2}\right) = \left(5-\frac{9}{2}\right)$$



In general, if $a^2=b^2$ it is not necessarily true that $a=b$; all you can conclude is that either $a=b$ or $a=-b$. In this case, the latter is true, because $\left(4-\frac{9}{2}\right) = -\frac{1}{2}$ and $\left(5-\frac{9}{2}\right) = \frac{1}{2}$. Once you have written down the (false) equation $-\frac{1}{2} = \frac{1}{2}$ it is easy to derive any false conclusion you want.


sequences and series - Prove: If $\sum a_n$ converges then $\sum \frac{1}{a_n}$ diverges




Prove: If $\sum a_n$ converges, then $\sum \frac{1}{a_n}$ diverges.




I want to prove this statement. I've been trying to find a way but I couldn't.
let's say $a_n$ converges to $a$ then what should i do? Can i prove it like a sequence. For example $\forall\epsilon >0, \exists N>0 \ if \ n>N \ then \ |a-L|< \epsilon$ . I don't think i can apply this to series.


Answer




If $\sum a_n$ converges, we must have $\lim\limits_{n\to\infty}a_n=0$ (otherwise it would have been divergent by the divergence/limit test). Therefore, if we consider $\sum\frac{1}{a_n}$ and take the limit of its terms, we see that $$\lim_{n\to\infty}\left|\frac{1}{a_n}\right|=\infty\;,$$ so the terms do not tend to $0$ and the series diverges by the same test.


linear algebra - Determinant of a specially structured matrix ($a$'s on the diagonal, all other entries equal to $b$)




I have the following $n\times n$ matrix:



$$A=\begin{bmatrix} a & b & \ldots & b\\ b & a & \ldots & b\\ \vdots & \vdots & \ddots & \vdots\\ b & b & \ldots & a\end{bmatrix}$$



where $0 < b < a$.



I am interested in the expression for the determinant $\det[A]$ in terms of $a$, $b$ and $n$. This seems like a trivial problem, as the matrix $A$ has such a nice structure, but my linear algebra skills are pretty rusty and I can't figure it out. Any help would be appreciated.


Answer



Add row 2 to row 1, add row 3 to row 1,..., add row $n$ to row 1, we get
$$\det(A)=\begin{vmatrix}

a+(n-1)b & a+(n-1)b & a+(n-1)b & \cdots & a+(n-1)b \\
b & a & b &\cdots & b \\
b & b & a &\cdots & b \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
b & b & b & \ldots & a \\
\end{vmatrix}$$
$$=(a+(n-1)b)\begin{vmatrix}
1 & 1 & 1 & \cdots & 1 \\
b & a & b &\cdots & b \\
b & b & a &\cdots & b \\

\vdots & \vdots & \vdots & \ddots & \vdots \\
b & b & b & \ldots & a \\
\end{vmatrix}.$$
Now add $(-b)$ of row 1 to row 2, add $(-b)$ of row 1 to row 3,..., add $(-b)$ of row 1 to row $n$, we get
$$\det(A)=(a+(n-1)b)\begin{vmatrix}
1 & 1 & 1 & \cdots & 1 \\
0 & a-b & 0 &\cdots & 0 \\
0 & 0 & a-b &\cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \ldots & a-b \\

\end{vmatrix}=(a+(n-1)b)(a-b)^{n-1}.$$
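A quick numerical check of the formula with NumPy (illustrative values only):

import numpy as np

def det_formula(a, b, n):
    return (a + (n - 1) * b) * (a - b) ** (n - 1)

a, b = 5.0, 2.0
for n in (2, 4, 7):
    A = b * np.ones((n, n)) + (a - b) * np.eye(n)
    print(np.linalg.det(A), det_formula(a, b, n))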


numerical methods - Computing (on a computer) the first few (non-trivial) zeros of the zeta function of a number field

If $K$ is a number field, whose Galois closure over the rationals has degree 24 or so, and whose discriminant is around $163^4$, then what is a numerically efficient way of computing the first few zeros of its zeta function on the critical line?



I tried in pari but pari seems to choke on zetakinit.




I tried in magma and got much further. I can create the number field, and use the LSeries command to compute some form of the $L$-function. I can now evaluate the $L$-function at pretty much any point I want on the critical line, and use things like LSetPrecision to warn magma that I'm going up the critical line. I have no feeling for these things though; I don't even know how far I might expect to look up the line for the first, say, five zeros. The main problem I have though is that I'm just naively evaluating the function at some random points, and each evaluation might take a minute, and I evaluate the function at a point and it's non-zero and now I don't even know whether to move up or down.



Are there any other computer algebra packages that might be able to help me out?

Wednesday 11 December 2019

complex analysis - Real part and imaginary part of an expression under sqrt

How can one find the real and imaginary parts of the expression below, where $a$ and $b$ are real numbers?



$$\sqrt{a+ i b}$$

algebra precalculus - Is this an incorrect proof of $cot (x)+tan(x)=csc(x)sec(x)$?

If you input the trig identity:
$$\cot (x)+\tan(x)=\csc(x)\sec(x)$$
Into WolframAlpha, it gives the following proof:



Expand into basic trigonometric parts:
$$\frac{\cos(x)}{\sin(x)} + \frac{\sin(x)}{\cos(x)} \stackrel{?}{=} \frac{1}{\sin(x)\cos(x)}$$
Put over a common denominator:



$$\frac{\cos^2(x)+\sin^2(x)}{\cos(x)\sin(x)} \stackrel{?}{=} \frac{1}{\sin(x)\cos(x)}$$




Use the Pythagorean identity $\cos^2(x)+\sin^2(x)=1$:



$$\frac{1}{\sin(x)\cos(x)} \stackrel{?}{=} \frac{1}{\sin(x)\cos(x)}$$



And finally simplify into



$$1\stackrel{?}{=} 1$$



The left and right side are identical, so the identity has been verified.




However, I take some issue with this. All this is doing is manipulating a statement that we don't know the veracity of into a true statement. And I've learned that any false statement can prove any true statement, so if this identity was wrong you could also reduce it to a true statement.



Obviously, this proof can be easily adapted into a proof by simply manipulating one side into the other, but:



Is this proof correct on its own? And can the steps WolframAlpha takes be justified, or is it completely wrong?

Tuesday 10 December 2019

sequences and series - Studying the convergence of $ U_{n+1} = sqrt{1 + U_n} $

How can I study the convergence of $\begin{cases} U_0 \geqslant -1 \\ \forall n \in \mathbb{N},\ U_{n+1} = \sqrt{1 + U_n} \end{cases}$?



My attempt to find the general term using Newton's method came to naught.

Probability that any outcome of a dice roll happens more than X times out of Y trials



I'm trying to determine the probability that a person experiences a "lucky number" when rolling a single, fair, 6-sided die over a set of rolls in a single trial. A "lucky number" in this case is any face of the die that occurs visibly more often than one would normally expect. If you roll a six-sided die 100 times, you expect each of the faces 1, 2, 3, 4, 5, and 6 to come up about 16.6 times, on average.



For example: you roll a six-sided die in 100 independent trials; what is the probability that some side of the die comes up at least 33 times over the course of the 100 trials? It doesn't matter if the roll was 1, 2, 3, 4, 5, or 6, just that the same result happened at least 33 times out of the 100 trials.



How would I calculate this?



Thanks.



Answer



The chance that $1$ comes up exactly $33$ times in $100$ comes from the binomial distribution. The chance of success is $\frac 16$ and failure is $\frac 56$, so it is ${100 \choose 33}(\frac 16)^{33}(\frac 56)^{67}\approx 0.00003$. If we sum from $33$ to $100$ we get the chance of at least $33$ ones, which is about $0.00005$ per Alpha. You can multiply this by $6$ to get the chance for any number, as it is very unlikely we double-count by having at least $33$ of two different numbers. So the chance of a "lucky number" happening by chance is about $0.0003$, or one in $3300$. Pretty unlikely, but rarer things happen all the time.
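The figures above are easy to reproduce exactly from the binomial tail; a short sketch:

from math import comb

def binom_tail(n, p, k):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

t = binom_tail(100, 1/6, 33)
print(t)       # ~5e-05: one fixed face appearing at least 33 times
print(6 * t)   # ~3e-04: union bound over the six faces, roughly 1 in 3300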


Monday 9 December 2019

limits - Discuss the differentiability of $e^{-|x|}$?

Discuss the differentiability of $e^{-|x|}$ ?






I tried splitting into two domains, $x\ge 0$ and $x<0$:




$e^{-|x|} = e^{-x}$ for $x\ge 0$



and



$e^{-|x|} = e^{x}$ for $x<0$



By using the definition of differentiability,



$$\lim_{x \to 0^+}\frac{f(x)-f(0)}{x-0} = \lim_{x \to 0^+}\frac{e^{-x}-1}{x} = -1$$




and similarly



$$\lim_{x \to 0^-}\frac{f(x)-f(0)}{x-0} = \lim_{x \to 0^-}\frac{e^{x}-1}{x} =+1$$



Hence, I can say that it is not differentiable at $x=0$.






Is my understanding right, or am I missing something?
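A quick numerical look at the one-sided difference quotients matches the $-1$ and $+1$ computed above (a tiny sketch):

import math

f = lambda x: math.exp(-abs(x))
for h in (1e-2, 1e-4, 1e-6):
    print(h, (f(h) - f(0)) / h, (f(-h) - f(0)) / (-h))   # -> -1 and +1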

Sunday 8 December 2019

real analysis - Bijection f: [0, 1] to (0, 1)?

My solution for this function is let 0 map to 1/2, 1 to 1/3, and then for any other values map them to that value + 1/2. Is this the correct solution for this mapping?

real analysis - Dedekind cut for the square root

Can somebody give me hints to solve these?




  1. What is the Dedekind cut $(A, B)$ for $\sqrt 2$ ?


  2. What is the Dedekind cut $(C, D)$ for
    $\sqrt 2 + \sqrt 3$ ?

  3. In $\mathbb R[x]/(x^2 + 1)$, what is the value of $[x^3 + x^2 + x + 1]$?
    (Just simplify $x^3 + x^2 + x + 1$)
    where $[p]$ means the equivalence class that has $p$ as an element;
    $[p]$ is the set of polynomials $q$ in $\mathbb R[x]$ such that
    $x^2 + 1$ divides $p - q$.
    $\mathbb R[x]$ is the set of polynomials whose coefficients are all real numbers.

Saturday 7 December 2019

divisibility - Numbers divisible by all of their digits: Why don't 4's show up in 6- or 7- digit numbers?



For reasons I'll explain below the question if you're interested, I stumbled across a peculiar phenomenon involving numbers divisible by their digits.



I'm concerned with numbers that are divisible by all of their digits, and do not have any zeros or repeated digits.




Ex: 175, 9867312, 1



Not: 111, 105



There are 548 such numbers: 105 of them have 6 or more digits. (There can't be any with more than 7 digits, for reasons I'll leave you to discover.*) For some reason, though, no six- or seven- digit numbers have any 4's in their digits.
Why?



I know why there can't be any 8 digit numbers: 1+2+3+4+6+7+8+9 = 40, which is not divisible by 3, so 9, 6, and 3 won't divide. But why don't 4's show up past 6 digits?







Explanation for why I have this question:



*About two years ago I was in a TI-Basic programming competition which required me to write the following program:



A number is said to be "digisible" if it meets the following three conditions:
- It has no 0;
- All digits which compose it are different from each other;
- It is divisible by each of the digits that compose it.

You will have to make a program that asks a positive integer greater than or equal to 10 and displays 1 if it is digisible, 0 if it is not.



This was a fun challenge to create and super-optimize in TI-Basic. TI-Basic is really slow, so it wasn't possible to check all numbers for "digisible"-ness. However, in the past year I learned Java, which is speedy-fast. So I returned to the problem and made a program to list out ALL of the digisible numbers.



Looking at all of the Digisibles, I noticed some cool things, some of which makes sense, others which I couldn't find an explanation for. This question is one that I couldn't find an explanation for, but highly suspect one exists.



Hint: 5's do not show up past 5 digit numbers.


Answer



As you point out, a 7-digit number cannot have a 5 in it (otherwise it would have to end in 5, but it would also have to contain one of 2, 4, 6, or 8, so it would have to be even). Thus a 7-digit number $x$ contains 7 of 1, 2, 3, 4, 6, 7, 8, 9. We need only identify the missing digit. If the missing digit is not 9, then 9 is among the digits, so the number must be divisible by 9; hence the missing digit must be 4 ($40-4 = 36$). If the missing digit is 9, then the digits are 1, 2, 3, 4, 6, 7, 8. But the sum here is $40-9=31$, which is not divisible by 3, even though 3 is among the digits. So this cannot happen.




Thus any 7-digit number of this type must consist of the digits 1, 2, 3, 6, 7, 8, 9.
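As a brute-force confirmation, enumerating all 7-digit numbers of this type (a few seconds in plain Python) shows each of them uses exactly the digits 1, 2, 3, 6, 7, 8, 9:

def digisible(m):
    s = str(m)
    return ('0' not in s and len(set(s)) == len(s)
            and all(m % int(c) == 0 for c in s))

seven = [m for m in range(1_000_000, 10_000_000) if digisible(m)]
print(len(seven))
print({frozenset(str(m)) for m in seven})   # one digit set: {1,2,3,6,7,8,9}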


group theory - Is there an operational isomorphism from $(\mathbb{Z},+)$ to $(\mathbb{Q}^{+},\cdot)$?



Let $\left(\mathbb{Z},+\right)$ and $\left(\mathbb{Q}^{+},\cdot\right)$ be groups (let the integers be a group with respect to addition and the positive rationals be a group with respect to multiplication). Is there a function $\phi\colon\mathbb{Z}\mapsto\mathbb{Q}^{+}$ such that:





  • $\phi(a)=\phi(b) \implies a=b$ (injection)

  • $\forall p\in\mathbb{Q}^{+} : \exists a\in\mathbb{Z} : \phi(a)=p$ (surjection)

  • $\phi(a+b) = \phi(a)\cdot\phi(b)$ (homomorphism)



? If so, provide an example. If not, disprove.


Answer



If such an isomorphism existed it would of course be onto, so there would exist some $n \in \mathbb{Z}$ such that $\phi(n) = \frac{1}{2}$, for example. But $\phi(n) = \phi(1)^n$ (for $n>0$ this is $\phi(1+\cdots+1) = \phi(1)\cdots\phi(1)$, and the cases $n\le 0$ follow from $\phi(0)=1$ and $\phi(-n)=\phi(n)^{-1}$). So $\phi(1) = \left(\frac{1}{2}\right)^{1/n}$, which is rational only when $n=\pm 1$; in either case $\phi(1)\in\{\frac{1}{2},2\}$. Thus, for any $m \in \mathbb{Z}$, $\phi(m) = \phi(1)^m$ is a power of $2$, so clearly $\phi$ is not onto since we only achieve powers of two in the image (e.g. $3$ is never attained). So such an isomorphism cannot exist.


Friday 6 December 2019

decimal expansion - I'm puzzled with 0.99999











After reading all the kind answers for this previous question question of mine,
I wonder... How do we get a fraction whose decimal expansion is the simple $0.\overline{9}$?



I don't mean to look like kidding or joking (of course, one can teach math with fun so it becomes more interesting), but this series has really raised a flag here, because $\frac{9}{9}$ won't solve this case, although it solves for all other digits (e.g. $0.\overline{8}=\frac{8}{9}$ and so on).




Thanks!
Beco.


Answer



The number $0.9999\cdots$ is in fact equal to $1$, which is why you get $\frac{9}{9}$. See this previous question.



To see it is equal to $1$, you can use any number of ideas:




  1. The hand-wavy but convincing one: Let $x=0.999\cdots$. Then $10x = 9.999\cdots = 9 + x$. So $9x = 9$, hence $x=1$.


  2. The formal one. The decimal expansion describes an infinite series. Here we have that

    $$ x = \sum_{n=1}^{\infty}\frac{9}{10^n}.$$
This is a geometric series with common ratio $\frac{1}{10}$ and initial term $\frac{9}{10}$, so
    $$x = \sum_{n=1}^{\infty}\frac{9}{10^n} = \frac{\quad\frac{9}{10}}{1 - \frac{1}{10}} = \frac{\frac{9}{10}}{\quad\frac{9}{10}\quad} = 1.$$




In general, a number whose decimal expansion terminates (has a "tail of 0s") always has two decimal expansions, one with a tail of 9s. So:
$$\begin{align*}
1.0000\cdots &= 0.9999\cdots\\
2.480000\cdots &= 2.4799999\cdots\\
1938.01936180000\cdots &= 1938.019361799999\cdots

\end{align*}$$
etc.
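As a small check, the partial sums of the geometric series above visibly approach $1$; each equals $1-10^{-N}$ exactly:

from fractions import Fraction

for N in (1, 3, 6, 12):
    print(sum(Fraction(9, 10**n) for n in range(1, N + 1)))   # 9/10, 999/1000, ...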


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...