Wednesday 31 July 2019

polynomials - Prove $x^n-1$ is divisible by $x-1$ by induction





Prove that for all natural numbers $x$ and $n$, $x^n - 1$ is divisible by $x-1$.



So here's my thoughts:
It is true for $n=1$; then, assuming it is true for $n-1$, I want to prove that it is also true for $n$.



Then, using long division, I get:



$x^n -1 = x(x^{n-1} -1 ) + (x-1)$



So the left side is divisible by $x-1$ by hypothesis; what about the right side?



Answer



So first, you can't assume that the left-hand side is divisible by $x-1$. For the right-hand side, however, we have that $x-1$ divides $x-1$, and by the induction hypothesis $x-1$ divides $x^{n-1}-1$. So what can you conclude about the left-hand side?
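As a quick sanity check of both the long-division identity and the divisibility claim (a sketch added for illustration, not part of the original exchange), one can test a range of values:

```python
# Verify x^n - 1 = x*(x^(n-1) - 1) + (x - 1) and the divisibility claim
# for a range of natural numbers x >= 2 and n >= 1.
for x in range(2, 20):
    for n in range(1, 15):
        assert x**n - 1 == x * (x**(n - 1) - 1) + (x - 1)  # the identity
        assert (x**n - 1) % (x - 1) == 0                   # divisibility
print("identity and divisibility hold for all tested x, n")
```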


functional analysis - Is the nuclear norm equal to the Hilbert-Schmidt norm for a particular operator?



Suppose that $\varphi,\gamma\in L^2([0,1],\mathbb C)$ and define the operator $\varphi\otimes\gamma:L^2([0,1],\mathbb C)\to L^2([0,1],\mathbb C)$ by setting $\varphi\otimes\gamma=\langle\cdot,\gamma\rangle\varphi$. This is an integral operator with the kernel given by $\varphi(x)\overline\gamma(y)$ for each $x,y\in [0,1]$. Hence, the Hilbert-Schmidt norm of this operator is given by

$$
\|\varphi\otimes\gamma\|_{HS}=\biggl(\int_0^1\int_0^1|\varphi(x)\overline\gamma(y)|^2dxdy\biggr)^{1/2}=\|\varphi\|\|\gamma\|.
$$
I want to find the singular values of this operator so that I can obtain the nuclear norm. Since the adjoint of $\varphi\otimes\gamma$ is given by $\gamma\otimes\varphi$, we have that
$$
(\varphi\otimes\gamma)^*(\varphi\otimes\gamma)(f)=\langle f,\gamma\rangle\|\varphi\|^2\gamma
$$
and if we set $f=\gamma/\|\gamma\|$, we obtain
$$
(\varphi\otimes\gamma)^*(\varphi\otimes\gamma)(\gamma/\|\gamma\|)=\|\varphi\|^2\|\gamma\|^2\,\gamma/\|\gamma\|.

$$
The composition of operators $(\varphi\otimes\gamma)^*(\varphi\otimes\gamma)$ has one eigenfunction $\gamma/\|\gamma\|$ with non-zero eigenvalue $\|\varphi\|^2\|\gamma\|^2$. Hence, the operator $\varphi\otimes\gamma$ has one non-zero singular value given by $\|\varphi\|\|\gamma\|$, and the nuclear norm of $\varphi\otimes\gamma$ is equal to $\|\varphi\|\|\gamma\|$, which is also the Hilbert-Schmidt norm of this operator.



Are my calculations correct? Does the composition of operators $(\varphi\otimes\gamma)^*(\varphi\otimes\gamma)$ have only one non-zero eigenfunction and the corresponding non-zero eigenvalue? Are the Hilbert-Schmidt and the nuclear norms really equal for this operator?



Any help is much appreciated!


Answer



Yes. The operator $T=(\varphi\otimes\gamma)^*(\varphi\otimes\gamma)$ you are looking at is rank one and positive. Concretely, your operator is
$$
Tf=\|\varphi\|^2\,\langle f,\gamma\rangle\,\gamma={\|\varphi\|^2}{\|\gamma\|^2}\,\langle f,\gamma'\rangle\,\gamma',

$$
where $\gamma'=\gamma/\|\gamma\|$.
Written this way, $T$ is a scalar multiple of the projection operator onto the span of $\gamma$. So its spectrum consists of $0$ and $\|\varphi\|^2\|\gamma\|^2$. So, for the HS norm,
$$
\|\varphi\otimes\gamma\|_{\rm HS}=\left(\text{Tr}\,((\varphi\otimes\gamma)^*(\varphi\otimes\gamma))\right)^{1/2}=(0+\|\varphi\|^2\|\gamma\|^2)^{1/2}=\|\varphi\|\,\|\gamma\|.
$$
For the operator norm,
$$
\|\varphi\otimes\gamma\|=\|(\varphi\otimes\gamma)^*(\varphi\otimes\gamma)\|^{1/2}=\|\varphi\|\|\gamma\|.
$$
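For a numerical sanity check (my own sketch, not part of the original answer), one can discretize $L^2([0,1])$ on a uniform grid and compare the nuclear, Hilbert-Schmidt, and operator norms of the rank-one operator; the test functions below are arbitrary choices:

```python
import numpy as np

# Midpoint grid on [0,1]; in the coordinates u = sqrt(h) * f, the L^2 inner
# product becomes the standard one on C^N.
N = 500
x = (np.arange(N) + 0.5) / N
h = 1.0 / N

phi = np.exp(2j * np.pi * x)            # arbitrary test functions
gamma = x**2 + 1j * np.sin(3 * x)

# Rank-one operator (phi ⊗ gamma) f = <f, gamma> phi, as a matrix.
A = np.outer(np.sqrt(h) * phi, np.conj(np.sqrt(h) * gamma))

s = np.linalg.svd(A, compute_uv=False)
norm_phi = np.sqrt(h * np.sum(np.abs(phi) ** 2))
norm_gamma = np.sqrt(h * np.sum(np.abs(gamma) ** 2))

print(s.sum(), np.sqrt((s**2).sum()), s[0])  # nuclear, HS, operator norms
print(norm_phi * norm_gamma)                 # all ≈ ||phi|| ||gamma||
```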



calculus - Using the definition of limit to evaluate $\lim_{n \to \infty}\frac{3n+1}{n}$



I am trying to understand the limit definition.
For the sequence $\dfrac{3n+1}{n}$, I can divide by the highest power of $n$ in the denominator and see that $3$ may be the limit.



Using the limit definition: $$\left|\dfrac{3n+1}{n}-3\right|<\epsilon \implies \left|\dfrac{3n+1-3n}{n}\right|<\epsilon \implies \dfrac{1}{n}< \epsilon \implies n > \dfrac{1}{\epsilon},$$ which means that for whatever $\epsilon$, as small as I want, there will be such an $N$.

Now if I take a wrong limit like $2$ I get $$\left|\frac{3n+1}{n}-2\right|<\epsilon \implies \left|\frac{3n+1-2n}{n}\right|<\epsilon \implies\frac{n+1}{n}<\epsilon$$




Now, because I cannot isolate $n$, does that mean that there is no such $N$?

Answer



In general, $x_n \to \ell$ as $n \to \infty$ if and only if $x_n - \ell \to 0$ as $n \to \infty$.



So the way to find out if you have the correct limit is:




Do we have $x_n - \ell \to 0$?








In your example, $x_n = \frac{3n+1}n$ and $\ell = 3$. Notice that $x_n - \ell = \frac 1n \to 0$ as $n \to \infty$. This means that for any $\epsilon > 0$, we can find $N$ such that for every $n \ge N$, $\frac 1n \lt \epsilon$.



But $x_n - 2 = \frac{n+1}n \to 1 \ne 0$, so $2$ cannot be the limit.
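To make the $\epsilon$–$N$ bookkeeping concrete (a small sketch of my own), for $x_n = \frac{3n+1}{n}$ one can take $N = \lfloor 1/\epsilon \rfloor + 1$, since $|x_n - 3| = \frac1n < \epsilon$ as soon as $n > 1/\epsilon$:

```python
import math

# For each epsilon, N = floor(1/epsilon) + 1 works; spot-check a stretch of n.
for eps in [0.1, 0.01, 0.001]:
    N = math.floor(1 / eps) + 1
    assert all(abs((3 * n + 1) / n - 3) < eps for n in range(N, N + 10000))
    print(f"eps={eps}: N={N} works")
```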


Tuesday 30 July 2019

Asymptotic approximation to incomplete elliptic integral of third kind at a pole - determine constant



While studying a physics problem, I found that, asymptotically, the incomplete elliptic integral of the third kind (using the Mathematica conventions, where it is called EllipticPi),



$\Pi(n;\phi|m)=\int_0^{\sin\phi} \frac{dt}{(1-nt^2)\sqrt{(1-mt^2)(1-t^2)}},$



which seems to diverge logarithmically when $\sin^2 \phi$ approaches $1/n$, indeed approaches the logarithm in the following way:




$\lim_{\epsilon \rightarrow 0} \left[ 2(n-1)\Pi\left(n;\arcsin \sqrt{\frac{1-\epsilon}{n}}|(2-n)n\right) +\log\epsilon\right]=C(n)$



where $C(n)$ is a constant. I checked this identity numerically (could it be a new discovery?), and it might be independent of the value of $m$.



I need to know the constant $C(n)$ in terms of some known functions (or constants from integrations independent of $n$) if possible, for the value of $m$ given.
Mathematica will not help. Is there any chance to do it? I have no training in this apart from undergraduate analysis and use elliptic integrals for the first time here.



(By the way, for large $n$ I can see from the graph that $C(n)$ is of the form $a+\log(n+b)$; for $n$ around unity this is no longer an exact fit, though it remains similar, including the divergence to minus infinity for small $n$.)




Recent Edit:



Putting the elliptic integral and the logarithm into the same integral and, as user8268 suggested, splitting them into two non-diverging integrals, I now have problems with the more complicated one; see my comment to the answer. Can anybody tell me whether this is an elliptic integral of the third kind, or whether Mathematica is right when it gives me a mixture of them (including ugly roots, making evaluation of the limit impossible)? And if so, must there be a calculation mistake, or might the advice given not be correct?



notsoimportantdetails(
Background:
This comes from wanting to evaluate an improper integral (§) that I physically know, and numerically see, to converge. In turn this integral comes from the difference of two diverging integrals. Mathematica cannot do the improper integral of the combination, but it can do the indefinite integral. But then it splits up the integrand again, and the result is the elliptic integral and the logarithm, each diverging when taking the limit. Mathematica cannot do the limit and, as I said, astonishingly does not even know that there is a divergence for the elliptic integral. When the elliptic integral and the logarithm are put together into one integral (use $-\log\epsilon=\int_0^{\sqrt{\frac{1-\epsilon}{n}}}\frac{2tn}{1-t^2n}dt$ or transform the elliptic integral to obtain (o) below), we are more or less back where we started, I suppose; my version of Mathematica even hangs up on the indefinite evaluation now.



Edit: Since maybe all I have done so far is making it more complicated, the original problem is to evaluate the integral (§) of
$\frac{1-\sqrt{1-\frac{r_0 (r_0-2 m)}{x (x-2 m)}}}{\left(1-\frac{2 m}{x}\right) \sqrt{1-\frac{r_0 (r_0-2 m)}{x (x-2 m)}}}$

from $r_0$ to infinity, where $r_0>2m$. This has the obvious split into the two divergent integrals.



For your reference, the combined integral (o) I talked about in the end above reads
$C(n)=\lim_{\epsilon\rightarrow 0}\int_1^\epsilon\frac{1 - n + \sqrt{(1 + n (-1 + t) - 2 t) (-1 + t) (-1 + n + t)}}{t \sqrt{(1 + n (-1 + t) - 2 t) (-1 + t) (-1 + n + t)}}\,dt$, where the integration will solve the problem by providing $C(n)$; $\epsilon$ can actually be replaced by $0$ right away since the limit exists.



notsoimportantdetails)


Answer



I finally after all this time found the answer by simply using the second identity on http://functions.wolfram.com/EllipticIntegrals/EllipticPi3/17/01/ which generally expresses the incomplete elliptic integral of the 3rd kind with one term of an incomplete integral of the 3rd kind, one term of an incomplete integral of the 1st kind and one logarithm, which cancels the logarithm in my limit.
The result hence indeed has the form suggested by user8268.



calculus - What is the value of $\lfloor{100N}\rfloor$

What is the value of $\lfloor{100N}\rfloor$ where $\displaystyle N= \lim\limits_{a\,\rightarrow\,\infty}\sum\limits_{x=-a}^{a}\frac{\sin x}{x}$.



This is a part of a bigger problem that I was solving. I need the exact integer value of $\lfloor{100N}\rfloor$ .
I was only able to simplify $\displaystyle \lim\limits_{a\,\rightarrow\,\infty}\sum\limits_{x=-a}^{a}\frac{\sin x}{x}= 1+2\left(\lim\limits_{a\,\rightarrow\,\infty}\sum\limits_{x=1}^{a}\frac{\sin x}{x}\right)$. I don't know how to proceed further. Please help
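As a numerical sanity check (a sketch added here; the original question has no answer recorded), the partial sums can be computed directly, and they appear to settle near $\pi$:

```python
import math

# Approximate N = 1 + 2 * sum_{k=1}^{K} sin(k)/k for growing K.
for K in [10**3, 10**4, 10**5, 10**6]:
    s = sum(math.sin(k) / k for k in range(1, K + 1))
    print(K, 1 + 2 * s)
print("pi =", math.pi)
```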

discrete mathematics - How to solve a congruence with Euclidean algorithm with exponents

I need help solving this particular congruence with Euclidean algorithm. I, in general, don't know how to solve congruences with exponents so I don't even know how to start.



\begin{align*}
x \equiv 6^{29} \;(\bmod\; 7)
\end{align*}



I know how to solve the ones without exponents, so it would be great if someone could do a step by step solution for this example and explain it to me a little bit better. I want to use Euclidean algorithm because that's how I learnt to do all other congruences and Chinese remainder theorem. (This is how i do them: https://www.youtube.com/watch?v=4-HSjLXrfPs)




Any help is much appreciated.
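Since no answer is recorded here, a quick computational check (my own sketch): Python's built-in pow performs fast modular exponentiation, and reducing the base first, $6 \equiv -1 \pmod 7$, makes the hand computation immediate:

```python
# 6 ≡ -1 (mod 7), so 6^29 ≡ (-1)^29 ≡ -1 ≡ 6 (mod 7).
print(pow(6, 29, 7))   # fast modular exponentiation -> 6
print((-1) ** 29 % 7)  # the same reduction done by hand -> 6
```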

Monday 29 July 2019

probability - What is the rationale behind the evaluation of the Expectation operator?



We know that the expectation operator is defined for a random variable $x$, as such:




$$
\mathbb{E} \left\{x\right\} = \int_{-\infty}^{\infty} x \: p_x(x) \; \mathrm{d}x
$$



where $p_x(x)$ is the PDF of the random variable $x$.



If there is an arbitrary(?) function $f$ acting on the random variable $x$, then the expected value of this function can also be written as:



$$
\mathbb{E}\left\{f(x) \right\} = \int_{-\infty}^{\infty} f(x) \: p_x(x) \: \mathrm{d}x

$$



My questions are: In many algorithms that I study (statistical in nature), one often finds oneself taking the expected value of some entity that is a function of the random variable $x$. In the reverse case, one can also find oneself poking around and manipulating the probability density function of $x$, and then 'taking it back' into an expression using the expectation operator.



Upon evaluating the expected value of $x$, however ($\mathbb{E}[x]$), I often come across this estimation formula:



$$
\mathbb{E}\left\{x\right\} \approx \frac{1}{N}\sum_{n=1}^{N} x[n]
$$




and similarly,



$$
\mathbb{E}\left\{f(x)\right\} \approx \frac{1}{N}\sum_{n=1}^{N} f(x[n])
$$
where each $x[n]$ is an individual realization of the random variable $x$.



My question is, why is this formula true, and how did it come about? Every book I read seems to just include it as if it fell from the sky one day and no explanation is given as to why it is true.



Could someone please give an intuitive and mathematical explanation for why - and more importantly, how this happens to be true? What is the history/rationale behind it?




Many thanks.


Answer



As you are aware, the expected value of a random variable $X$ is computed as:



$\mathbb{E}\{X\} = \int_{-\infty}^{\infty}z f_{X}(z)dz$, where $f_X$ is the probability density function (PDF) of $X$.



Note that the above integral may not converge (e.g. if $X$ is a Cauchy random variable), and the PDF may not exist either. But let's assume we are not running into such problems, which is indeed the case in most practical settings.



Often, we will not know the PDF, or even its functional form. In such a case, one resorts to estimating the PDF from realizations of the random variable. That is, assume we have $N$ realizations of $X$ - $\{x_i\}_{i=1}^N$. Let us consider the following estimate:




$\hat{f}_X(z) = \frac{1}{N}\sum_{i=1}^N \delta(z-x_i)$, where $\delta(.)$ is the Dirac delta function.



So essentially, we are treating the random variable as a uniform discrete random variable, and putting a mass of $1/N$ over each of the observed values. The estimate of the expectation of $X$ becomes:



$\hat{\mathbb{E}}_N\{X\} = \int_{-\infty}^{\infty}z \frac{1}{N}\sum_{i=1}^N \delta(z-x_i)dz$



$= \frac{1}{N}\sum_{i=1}^N \int_{-\infty}^{\infty}z \delta(z-x_i)dz$



By the sifting property of the delta function, you will note that the integral becomes $x_i$. Hence, we arrive at the following expression for the sample expectation:




$\hat{\mathbb{E}}_N\{X\} = \frac{1}{N}\sum_{i=1}^N x_i$



Note that the $x_i$'s are themselves random, since another set of draws from $X$ is likely to give a different set of realizations. Thus the above estimate of the expected value is itself a random variable, dependent on the draws $\{x_i\}_{1}^N$ and the number of draws $N$. Since all $x_i$'s are distributed identically (with PDF $f_X$) and are independent draws, the Laws of Large Numbers tell us that $\hat{\mathbb{E}}_N\{X\} \rightarrow \mathbb{E}\{X\}$ with probability $1$ (almost surely), and in probability, as $N \rightarrow \infty$.
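To see the sample-mean estimator at work, here is a minimal Monte Carlo sketch (my own illustration, with an arbitrary choice of distribution and of $f$):

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ Exponential(1), so E[X] = 1 and E[f(X)] = E[X^2] = 2 exactly.
for N in [100, 10_000, 1_000_000]:
    x = rng.exponential(1.0, size=N)
    print(N, x.mean(), (x**2).mean())  # sample means -> 1 and 2 as N grows
```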


real analysis - Proving that $\lim_{n\to\infty}(\int_{a}^{b}f(x)^n dx)^{1/n} = \max_{x\in [a,b]}f(x)$




I'm trying to prove the following statement:



$$\lim_{n\to\infty}\left(\int_{a}^{b}f(x)^ndx\right)^{1/n} = \max_{x\in [a,b]}f(x)$$



where $[a,b] \subset \mathbb{R} $ and $f$ is non-negative and continuous.



I've tried to prove it in a similar way that we prove that $\displaystyle\lim_{n\to\infty}(a^n+b^n)^{1/n} = b$ if $b>a$.




However, I'm stuck in the end with an iterated limit of the form $\displaystyle\lim_{n\to\infty} \lim_{\epsilon\to 0 } \left(\epsilon^{1/n}\max_{x\in [a,b]}f(x)\right)$.



Is this last expression equal to $\displaystyle\max_{x\in [a,b]}f(x)$? If not, could anyone please give me a hint as to how to go about this proof?


Answer



Here it is, an elementary proof. Let $M=\max \{f(x): x\in [a,b]\}$. If $f\equiv 0$ the statement is trivial, so assume $f$ is not identically zero; since $f$ is non-negative, then $M>0$. For $n\geq 1$, define $u_n= \left(\int_{a}^{b} f(x)^n dx \right)^{1/n}$. By monotonicity of the integral we have
$$ u_n\leq \left(\int_{a}^{b} M^n dx \right)^{1/n}= M(b-a)^{1/n}.$$
Let $c\in [a,b]$ such that $f(c)=M$. Then by continuity of $f$, for any $\epsilon \in (0,2M)\; \exists [s,t]\subset [a,b] $ ($[s,t]$ is a neighborhood of $c$) such that for all $x\in [s,t]$ we have $f(x)\geq M-{\epsilon \over 2}$.



Hence for any $n\geq 1$




$$ u_n \geq \left(\int_{s}^{t} f(x)^n dx \right)^{1/n}\geq \left(\int_{s}^{t}\left( M-{ \epsilon \over 2 }\right)^n dx \right)^{1/n}= (M-{\epsilon \over 2})(t-s)^{1/n}.$$



Thus we can say that $$\forall \epsilon \in (0,2M) \, \exists [s,t]\subset [a,b] \,\text{ s.t }\, \forall n\geq 1\; (M-{\epsilon \over 2})(t-s)^{1/n}\leq u_n\leq M(b-a)^{1/n}.$$



On the other hand $\lim_{n\to \infty} M(b-a)^{1/n}= M$ and $\lim_{n\to \infty } (M-{\epsilon \over 2})(t-s)^{1/n}=M-{\epsilon \over 2},$ so



$\exists n_1 \geq 1, \forall n\geq n_1, \; M(b-a)^{1/n}<M+\epsilon$, and $\exists n_2 \geq 1, \forall n\geq n_2, \; (M-{\epsilon \over 2})(t-s)^{1/n}>M-\epsilon$.



Let $n_0=\max\{n_1,n_2\}$. For $n\geq n_0$, $M-\epsilon<u_n<M+\epsilon$, and thus we have shown that
$$ \forall \epsilon >0 , \;\exists n_0\geq 1,\; \forall n\geq n_0,\; |u_n-M|<\epsilon.$$




Thus $\lim_{n\to \infty} u_n=M.$
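A quick numerical illustration of the result (my own sketch, with the arbitrary choice $f(x)=\sin x$ on $[0,\pi]$, where $M = 1$):

```python
import numpy as np

x = np.linspace(0, np.pi, 200001)
h = x[1] - x[0]
for n in [1, 5, 25, 125, 625]:
    u_n = (np.sum(np.sin(x) ** n) * h) ** (1.0 / n)  # rectangle-rule quadrature
    print(n, u_n)                                    # tends to max f = 1
```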


Equal functions with non-equal definitions

Suppose we have two functions $f, g: \mathbb{D} \rightarrow \mathbb{R}$ where $\mathbb{D} \subseteq \mathbb{R}$. Also, $\forall x \in \mathbb{D}: f(x) = g(x)$.



My question is: Is it possible for two such functions (i.e. can such a pair of functions exist) to have their definitions (i.e. the expressions that describe the relationship between $x$ and $f(x)$ resp. $g(x)$) not equal?



By "definitions not equal" I mean that one expression cannot be transcribed to the other one.



In other, less mathematical words: can there be two functions that are different but give equal values for every point of their domain?



If something is unclear, please comment and I'll update the question accordingly.




EDIT: To exclude trivial examples, let $\mathbb{D}$ be a nontrivial interval or a union of a finite number of nontrivial intervals.



EDIT 2: To narrow it down further, let $\mathbb{D}$ be induced by the definitions of the functions only, i.e. it covers as much of $\mathbb{R}$ as possible while the functions are still defined. E.g. if $f(x) = \frac{1}{x}$ and $g(x) = \frac{2}{x}$ then $\mathbb{D} = \mathbb{R} \setminus \{0\}$ (I know, in this case they are not equal in values, but it's just an illustration of what I want from the domain).

calculus - Differentiable and continuous

Is it true that a function which is not continuous at a point will not be differentiable at that point? Graphically it seems so, but can we prove this formally?
Also, if the above statement is incorrect, can we still disprove it?



Note that we know "All differentiable functions are continuous, but not all continuous functions are differentiable". But I am asking about the contrapositive of this, in some sense.

Sunday 28 July 2019

real analysis - Constructing a bijection from (0,1) to the irrationals in (0,1)



How does one construct a bijection from (0,1) to the irrationals in (0,1)?



Or if I am getting my notation right, can you provide an explicit function $f:(0,1)\rightarrow(0,1)\backslash\mathbb{Q}$ such that $f$ is a bijection?


Answer



(1) Choose an infinite countable set of irrational numbers in $(0,1)$, call them $(r_n)_{n\geqslant0}$.




(2) Enumerate the rational numbers in $(0,1)$ as $(q_n)_{n\geqslant0}$.



(3) Define $f$ by $f(q_n)=r_{2n+1}$ for every $n\geqslant0$, $f(r_n)=r_{2n}$ for every $n\geqslant0$, and $f(x)=x$ for every irrational number $x$ which does not appear in the sequence $(r_n)_{n\geqslant0}$.



Let me suggest you take it from here and show that $f$ is a bijection between $(0,1)$ and $(0,1)\setminus\mathbb Q$.


real analysis - Is this a Darboux function?



Let $f(x)=x$ if $0\leq x\leq 1$ and $f(x)=x-\frac{1}{2}$ if $1<x\leq 2$. Its range is an interval, so I think $f$ is a Darboux function.

Questions

  • Am I right? Is $f$ a Darboux function?

  • If yes, then why do authors give complicated examples of Darboux functions like $g(x)=\sin\left(\frac{1}{x}\right)$ if $x>0$ and $g(0)=0$, or $h(x)=\left(x^2\sin\left(\frac{1}{x}\right)\right)'$, which are hard to draw near $x=0$? Even worse when they mention the Conway base-13 function.

Answer



It doesn't satisfy the intermediate value property:



    $$f(\frac{9}{10}) = \frac{9}{10}$$




    $$f(\frac{11}{10}) = \frac{6}{10}$$



    But there is no $x \in [\frac{9}{10}, \frac{11}{10}]$ such that $f(x) = \frac{8}{10}$


functions - Show that $\frac{1}{1+k}=\frac{\frac{1}{k}}{1+\frac{1}{k}}\leq \ln(1+\frac{1}{k})\leq\frac{1}{k}$




    Prove the following:
    $$\frac{1}{1+k}=\frac{\frac{1}{k}}{1+\frac{1}{k}}\leq \ln(1+\frac{1}{k})\leq\frac{1}{k}$$




I know I could prove it by induction if the values were natural numbers. However, the "problem" for me is that they're real.


    Answer



    For all $x \in \mathbb{R}$,
    $$e^x \geq 1 + x$$
    Taking log on both sides we get,
    $$\ln (1 + x) \leq x, \forall x > -1$$
Substituting $x = \frac{1}{k}$, $k \notin [-1, 0]$, we get,
    $$\displaystyle{\ln \left(1 + \frac{1}{k}\right) \leq \frac{1}{k}}$$
Substituting $x = \frac{-1}{k + 1}$, $k \notin [-1, 0]$, we get,

    $$\ln \left(\frac{k}{k + 1}\right) \leq \frac{-1}{k + 1}\Rightarrow \ln \left(1 + \frac{1}{k}\right) \geq \frac{1}{k + 1} $$
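A quick numerical spot-check of the chain of inequalities (my own sketch; the samples are positive reals, consistent with the restriction $k \notin [-1,0]$):

```python
import math

for k in [0.1, 0.5, 1, 2, 10, 1000]:
    lo, mid, hi = 1 / (1 + k), math.log(1 + 1 / k), 1 / k
    assert lo <= mid <= hi, (k, lo, mid, hi)
print("1/(1+k) <= ln(1+1/k) <= 1/k holds at all sampled k")
```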


    Analysis Problem on Differentiability

    Prove that there is no differentiable function $ f(x)$ defined on $(-\infty, \infty)$ such that $f'(0) = 1$, but $f'(x) \geq 2$ for $x \ne 0.$



So I use proof by contradiction: suppose there exists a function $f(x)$ with those properties; using the definition of the derivative as a limit gives me



$f'(0) = \lim_{h\rightarrow0} \frac {f(0+h)-f(0)}{h}=1$ and $f'(x_0) = \lim_{ h\to 0} \frac {f(x_0+h)-f(x_0)}{h} \geq 2$ for $x_0 \in \mathbb R$ but $x_0\neq 0$. I don't know how to come up with a contradiction. Please help.

    Thursday 25 July 2019

number theory - Proving that $19\mid 5^{2n+1}+3^{n+2} \cdot 2^{n-1}$





How can I prove that $$5^{2n+1}+3^{n+2} \cdot 2^{n-1} $$ can be divided by $19$ for any positive integer $n$? What modulus should I choose?


    Answer



    You can prove this by induction.






    First, show that this is true for $n=1$:



    $5^{2\cdot1+1}+3^{1+2}\cdot2^{1-1}=19\cdot8$




    Second, assume that this is true for $n$:



    $5^{2n+1}+3^{n+2}\cdot2^{n-1}=19k$



    Third, prove that this is true for $n+1$:



    $5^{2(n+1)+1}+3^{n+1+2}\cdot2^{n+1-1}=$



    $5^{2+2n+1}+3^{1+n+2}\cdot2^{1+n-1}=$




    $5^{2}\cdot5^{2n+1}+3^{1}\cdot3^{n+2}\cdot2^{1}\cdot2^{n-1}=$



    $5^{2}\cdot5^{2n+1}+3^{1}\cdot2^{1}\cdot3^{n+2}\cdot2^{n-1}=$



    $25\cdot5^{2n+1}+\color\green{6}\cdot3^{n+2}\cdot2^{n-1}=$



    $25\cdot5^{2n+1}+(\color\green{25-19})\cdot3^{n+2}\cdot2^{n-1}=$



    $25\cdot5^{2n+1}+25\cdot3^{n+2}\cdot2^{n-1}-19\cdot3^{n+2}\cdot2^{n-1}=$




    $25\cdot(\color\red{5^{2n+1}+3^{n+2}\cdot2^{n-1}})-19\cdot3^{n+2}\cdot2^{n-1}=$



    $25\cdot\color\red{19k}-19\cdot3^{n+2}\cdot2^{n-1}=$



    $19\cdot25k-19\cdot3^{n+2}\cdot2^{n-1}=$



    $19\cdot(25k-3^{n+2}\cdot2^{n-1})$







    Please note that the assumption is used only in the part marked red.
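The claim is also easy to confirm computationally (a sketch for reassurance, separate from the proof):

```python
# Check 19 | 5^(2n+1) + 3^(n+2) * 2^(n-1) for n = 1..1000.
assert all((5 ** (2 * n + 1) + 3 ** (n + 2) * 2 ** (n - 1)) % 19 == 0
           for n in range(1, 1001))
print("divisible by 19 for all tested n")
```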


    real analysis - Functional equation $g(x+y) = g(x)g(y)$







    Let $g: \mathbf{R} \to \mathbf{R}$ be a function which is not identically zero and which satisfies the functional equation $g(x+y)=g(x)g(y)$




    Suppose $a>0$, show that there exists a unique continuous function satisfying the above, such that $g(1)=a$.

elementary number theory - Find $6^{1000} \bmod 23$





    Find $6^{1000} \mod 23 $




    Having just studied Fermat's theorem I've applied $6^{22}\equiv 1 \mod 23 $, but now I am quite clueless on the best way to proceed.



    This is what I've tried:




    Raising everything to the $4th$ power I have $$6^{88} \equiv 1 \mod 23 $$ $$6^{100} \equiv 6^{12} \mod 23 $$ $$6^{1000}\equiv 6^{120} \mod 23 $$ $$6^{1000} \equiv 6^{10} \mod 23$$ How do I simplify now the right hand side of the congruence ?


    Answer



    We may exploit the fact that $1000\equiv 10\pmod{22}$ to get:
    $$ 6^{1000}\equiv 6^{10} \equiv (6^5)^2 \equiv 2^2 \equiv\color{red}{4}\pmod{23}.$$
    As an alternative, we may notice that $a^{11}\pmod{23}$ is the Legendre symbol $\left(\frac{a}{23}\right)$, so:



    $$ 6^{11} \equiv \left(\frac{2}{23}\right)\cdot\left(\frac{3}{23}\right) \equiv 1\cdot 1\equiv 1\pmod{23} $$
    gives $6^{1001}\equiv 1\pmod{23}$ and by multiplying both sides by the inverse of $6$ we get $4$ as above.
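Both routes are easy to confirm numerically (my own sketch):

```python
print(pow(6, 1000, 23))           # -> 4
print(pow(6, 10, 23))             # 1000 ≡ 10 (mod 22), so this is also 4
print(pow(pow(6, 5, 23), 2, 23))  # (6^5)^2 ≡ 2^2 ≡ 4 (mod 23)
```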


    polynomials - Prove two bases are dual in a finite field.

Let $K$ be a finite field, $F=K(\alpha)$ a finite simple extension of degree $n$, and $f \in K[x]$ the minimal polynomial of $\alpha$ over $K$. Let $\frac{f(x)}{x-\alpha}=\beta_0+\beta_1 x+\cdots +\beta_{n-1}x^{n-1}\in F[x]$ and $\gamma=f'(\alpha)$.



Prove that the dual basis of $\left\{ 1,\alpha ,\cdots ,\alpha^{n-1} \right\}$ is $\left\{ \beta_0\gamma^{-1},\beta_1\gamma^{-1},\cdots ,\beta_{n-1}\gamma^{-1} \right\}$.



I came across this exercise in "Finite Fields" by Lidl & Niederreiter (Exercise 2.40), and I do not know how to calculate with Definition 2.30, which is:



Definition 2.30 Let $K$ be a finite field and $F$ a finite extension of $K$. Then two bases $\left\{ \alpha_1,\alpha_2,\cdots ,\alpha_m \right\}$ and $\left\{ \beta_1,\beta_2,\cdots ,\beta_m \right\}$ of $F$ over $K$ are said to be dual bases if for $1\le i,j\le m$ we have
$$\mathrm{Tr}_{F/K}\left( \alpha_i\beta_j \right)=\begin{cases} 0 & \text{for } i\neq j, \\ 1 & \text{for } i=j. \end{cases}$$



I think $\gamma = f'(\alpha) = \lim_{x\to \alpha}\frac{f(x)-f(\alpha)}{x-\alpha} = \beta_0+\beta_1 \alpha +\cdots+\beta_{n-1}\alpha^{n-1}$, using $f(\alpha)=0$.



    How can I continue? The lecturer did not teach the "dual bases" section.

calculus - Limit of $(\cos{(xe^x)} - \ln(1-x) -x)^{\frac{1}{x^3}}$



    So I had the task to evaluate this limit



    $$ \lim_{x \to 0} (\cos{(xe^x)} - \ln(1-x) -x)^{\frac{1}{x^3}}$$




    I tried transforming it to:



    $$ e^{\lim_{x \to 0} \frac{ \ln{(\cos{xe^x} - \ln(1-x) -x)}}{x^3}}$$



    So I could use L'hospital's rule, but this would just be impossible to evaluate without a mistake. Also, I just noticed this expression is not of form $\frac{0}{0}$.



Any solution is good (I would like to avoid Taylor series, but if that's the only way then that's okay).



    I had this task on a test today and I failed to do it.



    Answer



First notice that $$\cos (xe^x) = \sum_{n=0}^{\infty}(-1)^n \frac{(xe^x)^{2n}}{(2n)!}$$



and $$\ln (1 - x) = -\sum_{n=1}^{\infty} \frac{x^n}{n}$$



Thus $$\cos (xe^x) - \ln (1 - x) = \sum_{n=0}^{\infty}(-1)^n \frac{(xe^x)^{2n}}{(2n)!} +\sum_{n=1}^{\infty}\frac{x^n}{n} = 1 + x - \frac{2x^3}{3} + O(x^4) $$



    Therefore we have



    $$\ln (1 - \frac{2x^3}{3}) = - \frac{2x^3}{3} - \frac{2x^6}{9} - O(x^9)$$




    Finally



    $$\begin{align}\lim_{x \to 0} \frac{\ln (\cos xe^x - \ln (1 - x) - x)}{x^3} &= \lim_{x \to 0} -\frac{2}{3} - \frac{2x^3}{9} - O(x^6) =\color{red}{ -\frac{2}{3}}\end{align}$$



    Now you may find your limit.
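A numerical check of the claimed limit $e^{-2/3}$ (a sketch of my own, evaluating the expression at small $x$):

```python
import math

for x in [0.1, 0.05, 0.01]:
    base = math.cos(x * math.exp(x)) - math.log(1 - x) - x
    print(x, base ** (1 / x**3))
print("exp(-2/3) =", math.exp(-2 / 3))  # ≈ 0.513417
```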


real analysis - Proving an alternating Euler sum: $\sum_{k=1}^{\infty} \frac{(-1)^{k+1} H_k}{k} = \frac{1}{2} \zeta(2) - \frac{1}{2} \log^2 2$



    Let $$A(p,q) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H^{(p)}_k}{k^q},$$
    where $H^{(p)}_n = \sum_{i=1}^n i^{-p}$, the $n$th $p$-harmonic number. The $A(p,q)$'s are known as alternating Euler sums.




    Can someone provide a nice proof that
    $$A(1,1) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} H_k}{k} = \frac{1}{2} \zeta(2) - \frac{1}{2} \log^2 2?$$





    I worked for a while on this today but was unsuccessful. Summation by parts, swapping the order of summation, and approximating $H_k$ by $\log k$ were my best ideas, but I could not get any of them to work. (Perhaps someone else can?) I would like a nice proof in order to complete my answer here.



    Bonus points for proving $A(1,2) = \frac{5}{8} \zeta(3)$ and $A(2,1) = \zeta(3) - \frac{1}{2}\zeta(2) \log 2$, as those are the other two alternating Euler sums needed to complete my answer.





    Added: I'm going to change the accepted answer to robjohn's $A(1,1)$ calculation as a proxy for the three answers he gave here. Notwithstanding the other great answers (especially the currently most-upvoted one, the one I first accepted), robjohn's approach is the one I was originally trying. I am pleased to see that it can be used to do the $A(1,1)$, $A(1,2)$, and $A(2,1)$ derivations.

    Answer



    $A(1,1)$:
    $$
    \begin{align}

    \sum_{n=1}^N\frac{(-1)^{n-1}}{n}H_n
    &=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\sum_{n=2}^N\frac{(-1)^{n-1}}{n}H_{n-1}\\
    &=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{n=2}^N\sum_{k=1}^{n-1}\frac{(-1)^{n-1}}{n}\left(\frac1k+\frac1{n-k}\right)\\
    &=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{n=2}^N\sum_{k=1}^{n-1}\frac{(-1)^{n-1}}{k(n-k)}\\
    &=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{k=1}^{N-1}\sum_{n=k+1}^N\frac{(-1)^{n-1}}{k(n-k)}\\
    &=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{k=1}^{N-1}\sum_{n=1}^{N-k}\frac{(-1)^{n+k-1}}{kn}\\
    &=\color{#00A000}{\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}}
    -\color{#0000FF}{\frac12\sum_{k=1}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=1}^{N-1}\frac{(-1)^{n-1}}{n}}\\
    &+\color{#C00000}{\frac12\sum_{k=1}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}}\tag{1}
    \end{align}

    $$
    where, using the Alternating Series Test, we have
    $$
    \begin{align}
    &\color{#C00000}{\frac12\left|\sum_{k=1}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}\right|}\\
    &\le\frac12\left|\sum_{k=1}^{N/2}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}\right|
    +\frac12\left|\sum_{k=N/2}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}\right|\\
    &\le\frac12\cdot1\cdot\frac2N+\frac12\cdot\frac2N\cdot1\\
    &=\frac2N\tag{2}
    \end{align}

    $$
    Applying $(2)$ to $(1)$ and letting $N\to\infty$, we get
    $$
    \sum_{n=1}^\infty\frac{(-1)^{n-1}}{n}H_n=\color{#00A000}{\frac12\zeta(2)}-\color{#0000FF}{\frac12\log(2)^2}\tag{3}
    $$
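A numerical confirmation of $(3)$ (my own sketch; subtracting half the last term is the usual trick for accelerating an alternating series):

```python
import math

target = math.pi**2 / 12 - math.log(2) ** 2 / 2  # (1/2)ζ(2) - (1/2)log²2
H, s, term = 0.0, 0.0, 0.0
N = 10**6
for n in range(1, N + 1):
    H += 1.0 / n
    term = (-1) ** (n - 1) * H / n
    s += term
print(s - term / 2)  # averaged consecutive partial sums
print(target)        # ≈ 0.5822405264650125
```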


    Wednesday 24 July 2019

    calculus - Homeomorphism between two sets

I am to prove that there is no homeomorphism between $(a,b)$ and $[a,b)$.



A homeomorphism is defined as a function that is bijective, continuous, and has a continuous inverse.



    How can I derive a contradiction assuming that there exists a homeomorphism between two sets?



One approach I take: if $f$ is a continuous map from $[a,b)$ to $(a,b)$, then $f^{-1}((a,b))$ must be an open set, but $[a,b)$ is not open.



I think this argument is more set-theoretic than one using the definition of continuity, and I doubt whether it completes the proof.

    calculus - Continuous function with a point of periodicity

I am having trouble with the following question.




    Suppose $f : [a, b] \to [a, b]$ is continuous and $x_0$ is a point having period $3$. How many points of period $5$ are there? How many orbits of period $5$? How many points of period $29$ are there?




I think there should be a specific function so that we can check the periodicity, but apparently there's nothing wrong with the wording of this problem, according to the class. Could anyone give me a clue? Might it be bifurcation?

    Tuesday 23 July 2019

integration - Is it true that $\frac{1}{\pi^{2n+1}} \int_0^{\theta} \ln^{2n}\left(\frac{\sin x}{\sin\left(\theta-x\right)}\right)\,dx$ is a rational...

I was trying to evaluate $\displaystyle \int_0^{\frac{\pi}{6}}\ln^2\left(2\sin x\right)\,dx$ in an elementary way (no complex variables), so I have considered:



    $\displaystyle \int_0^{\frac{\pi}{6}} \ln^2\left(\frac{\sin x}{\sin\left(\frac{\pi}{6}-x\right)}\right)\,dx$.



Using lindep, a function in PARI/GP, I have conjectured that this integral is equal to a rational times $\pi^3$*.
Then I have considered:




    $\displaystyle \frac{1}{\pi^5}\int_0^{\frac{\pi}{6}} \ln^4\left(\frac{\sin x}{\sin\left(\frac{\pi}{6}-x\right)}\right)\,dx,\frac{1}{\pi^7} \int_0^{\frac{\pi}{6}} \ln^6\left(\frac{\sin x}{\sin\left(\frac{\pi}{6}-x\right)}\right)\,dx$ and it seems that these integrals are rational numbers.



Then I have considered:



    $\displaystyle \frac{1}{\pi^5}\int_0^{\frac{\pi}{7}} \ln^4\left(\frac{\sin x}{\sin\left(\frac{\pi}{7}-x\right)}\right)\,dx,\frac{1}{\pi^7} \int_0^{\frac{\pi}{7}} \ln^6\left(\frac{\sin x}{\sin\left(\frac{\pi}{7}-x\right)}\right)\,dx$



The same things happen.



Then I have considered:




    $\displaystyle \frac{1}{\pi^3}\int_0^{\sqrt{2}} \ln^2\left(\frac{\sin x}{\sin\left(\sqrt{2}-x\right)}\right)\,dx$.



and lindep doesn't show that this number is rational (which is not a proof).



I have tested many more values ($\frac{\pi}{7}+\frac{1}{10000}$, for example).



    My question:



    is it true that:




    $0< \theta <\pi$, a real



for every natural number $n$,



$\displaystyle \frac{1}{\pi^{2n+1}} \int_0^{\theta} \ln^{2n}\left(\frac{\sin x}{\sin\left(\theta-x\right)}\right)\,dx$ is rational



if and only if $\theta=r\pi$ with $0< r<1$ rational.



*: I think I have a proof of this.




    PS:



    The idea of this came after reading: Evaluation of $\int_0^{\pi/3} \ln^2\left(\frac{\sin x }{\sin (x+\pi/3)}\right)\,\mathrm{d}x$
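For readers without PARI, the experiment can be reproduced roughly as follows (my own sketch using mpmath; lindep-style rationality detection is replaced by simply inspecting the printed digits):

```python
from mpmath import mp, quad, log, sin, pi, sqrt

mp.dps = 30

def probe(theta, n=1):
    # (1/pi^(2n+1)) * integral of ln^(2n)(sin x / sin(theta - x)) over (0, theta)
    f = lambda x: log(sin(x) / sin(theta - x)) ** (2 * n)
    return quad(f, [0, theta]) / pi ** (2 * n + 1)

print(probe(pi / 6))   # inspect: does this look rational?
print(probe(sqrt(2)))  # theta not a rational multiple of pi
```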

    Linear independence of orthogonal vectors.

Let $\{v_1,\dots,v_k\}$ be a linearly independent subset of a vector space $V$. Let $\{u_1,\dots,u_m\}$ be a linearly independent subset of the orthogonal complement $V^{\perp}$ (so, for example, $v_j \cdot u_i = 0$). I want to prove that the whole list $\{v_1,\dots,v_k, u_1,\dots,u_m\}$ is linearly independent.



    How would I go about doing this? Thanks!

radicals - Prove or disprove that $\sqrt{2+\frac{1}{n}}$ is irrational for $n \in \mathbb{Z}^+$

I have good reason to suspect that $\sqrt{2+\frac{1}{n}}$ is irrational for all $n \in \mathbb{Z}^+$, but a proof of this eludes me. I've tried proof by contradiction but have had no success. I've also tried induction, where clearly the base case is true, i.e. $\sqrt{2+1}=\sqrt{3}$ is irrational, but I haven't been able to show the induction step.
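Before hunting for a proof, it may be worth scanning small $n$ numerically (a sketch of my own, not part of the original question). Since $2+\frac1n=\frac{2n+1}{n}$ is in lowest terms, the square root is rational exactly when both $2n+1$ and $n$ are perfect squares:

```python
import math

def is_square(k):
    r = math.isqrt(k)
    return r * r == k

# n must itself be a square b^2 for the root to be rational; scan b.
hits = [b * b for b in range(1, 1001) if is_square(2 * b * b + 1)]
print(hits)  # e.g. n = 4: sqrt(2 + 1/4) = sqrt(9/4) = 3/2
```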

    probability theory - Convergence of Truncated Expectation of Order Statistics $E[Y_{k:N}|Y_{k:N}>v]rightarrow v$



    Setting
    Let $(X_i)_{i\leq N}$ be a set of i.i.d. random variables, with $X_i$ mapping to some interval $[a,b]$.
    Let $Y_{k:N}$ be the $k$th order statistic of this set and $v\in[a,b]$.
    Denote by $f_X,F_X$ the continuous pdf and the continuous CDF of $X_i$ and by $f_{Y_{k:N}}$ the pdf of $Y_{k:N}$



    Quantity of interest
    I am interested in the truncated expectation of the order statistic $$E[Y_{k:N}|Y_{k:N}>v].$$
    This can be written as $$E[Y_{k:N}|Y_{k:N}>v]=\frac{\int_v^\infty yf_{Y_{k:N}}(y)dy}{\int_v^\infty f_{Y_{k:N}}(y)dy}.$$
    Conjecture
    Computing this quantity in MATLAB, suggests that
    $$E[Y_{k:N}|Y_{k:N}>v]\underset{N\rightarrow\infty}{\rightarrow}v.$$
    Also my intuition is in line with this conjecture: For growing $N$, the support of $f_{Y_{k:N}}$ shrinks to a small region and we can predict $E[Y_{k:N}|Y_{k:N}>v]$ better. Furthermore, the probability of the next value being close to $v$ is large.




    However, I am missing a formal proof.
    Any ideas?


    Answer



We need to assume something. Assume $E|X| < \infty$ and that $F$ is increasing at $v$, i.e. for all $u>v$, $F(u) > F(v)$.



    For $u > v$ we have,
    $$
    P(Y_{k:n} > u | Y_{k:n} > v) = \frac{P(Y_{k:n}>u)}{P(Y_{k:n}>v)}.
    $$



Now $P(Y_{k:n}>x)$ is the probability that, out of $n$ tries, at most $k-1$ of the $X_i$ are below or equal to $x$. So if $N_{n,x} \sim \mathrm{Bin}(n, F(x))$ (binomially distributed) we have

$$
P(Y_{k:n}>x) = P(N_{n,x} < k).
$$
Now this probability is decreasing in $x$, and it is not hard to see that for a fixed $k$ we can write
$$
P(N_{n,x} < k) = C(x,n)\,n^{k-1}(1-F(x))^{n-k},
$$
with $C(x,n)$ bounded, and bounded below by some $C_0>0$ if $F(v) > 0$.
Hence,
$$
\frac{P(Y_{k:n}>u)}{P(Y_{k:n}>v)} = \frac{P(N_{n,u}<k)}{P(N_{n,v}<k)} \leq \frac{C(u,n)}{C(v,n)}\left(\frac{1-F(u)}{1-F(v)}\right)^{n-k} = \frac{C(u,n)}{C(v,n)}\,p^{n-k}
$$
if $F(v) > 0$, with $0\leq p <1$ due to the fact that $F$ is increasing at $v$. Hence, this goes to zero as $n$ goes to infinity. If $F(v)=0$, then we note $E(Y_{k:n}\mid Y_{k:n}>v)=E(Y_{k:n})$, and it is enough to observe that we still have
$$
P(Y_{k:n}>u) = P(N_{n,u}<k) = C(u,n)\,n^{k-1}(1-F(u))^{n-k} \leq C(u,n)\,n^{k-1}\,p^{n-k},
$$
again with $0\leq p <1$ due to the fact that $F$ is increasing at $v$. Because the geometric decrease is faster than the polynomial increase $n^{k-1}$, this goes to zero as $n$ goes to infinity.



This shows that most of the probability mass lies near $v$, so the expectation over any finite region above a fixed $u$ has a contribution that goes to zero; and because $E|X|$ is finite, the tail also goes to zero. We are left with essentially a delta measure at $v$, and the expectation is indeed $v$.
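A numerical illustration of the convergence (my own sketch, for $X \sim \mathrm{Uniform}(0,1)$, where the conditional density is known up to constants that cancel in the ratio):

```python
import numpy as np

# E[Y_{k:N} | Y_{k:N} > v] with density proportional to y^(k-1) (1-y)^(N-k) on (v, 1).
def truncated_mean(k, N, v, grid=200_000):
    y = np.linspace(v, 1, grid + 1)[1:-1]            # interior points only
    logw = (k - 1) * np.log(y) + (N - k) * np.log1p(-y)
    w = np.exp(logw - logw.max())                    # stable against underflow
    return float((y * w).sum() / w.sum())

for N in [10, 100, 1_000, 10_000]:
    print(N, truncated_mean(3, N, 0.5))              # tends to v = 0.5
```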


calculus - How do I take the limit as $n$ goes to $\infty$ of $\frac{\sqrt{n}}{\log(n)}$?



    How do take this limit:



    $$ \lim_{n\to\infty} \frac{\sqrt{n}}{\log(n)}$$



I have a feeling that it is infinity, but I'm not sure how to prove it. Should I use L'Hôpital's Rule?


    Answer



    Let $n = e^x$. Note that as $n \rightarrow \infty$, we also have $x \rightarrow \infty$. Hence, $$\lim_{n \rightarrow \infty} \frac{\sqrt{n}}{\log(n)} = \lim_{x \rightarrow \infty} \frac{\exp(x/2)}{x}$$
    Note that $\displaystyle \exp(y) > \frac{y^2}{2}$, $\forall y > 0$ (Why?). Hence, we have that $$\lim_{n \rightarrow \infty} \frac{\sqrt{n}}{\log(n)} = \lim_{x \rightarrow \infty} \frac{\exp(x/2)}{x} \geq \lim_{x \rightarrow \infty} \frac{\frac{x^2}{8}}{x} = \lim_{x \rightarrow \infty} \frac{x}{8} = \infty$$



    Monday 22 July 2019

    sequences and series - Is 1 divided by 3 equal to 0.333...?




I have been taught that $\frac{1}{3}$ is $0.333\ldots$. However, I believe that this is not true, as $1/3$ cannot actually be represented in base ten; even if you had infinitely many threes, as $0.333\ldots$ is supposed to represent, it would not be exactly equal to $1/3$, as $10$ is not divisible by $3$.



    0.333... = 3/10 + 3/100 + 3/1000...


This occurred to me during a discussion of one of Zeno's paradoxes, the race between Achilles and the Tortoise. The proposed solution stated that it would take Achilles $11\frac{1}{9}$ seconds to pass the tortoise, as $0.111\ldots = 1/9$. However, this isn't the case, as, no matter how many ones you add, $0.111\ldots$ will never equal precisely $\frac{1}{9}$.




    Could you tell me if this is valid, and if not, why not? Thanks!



    I'm not arguing that $0.333...$ isn't the closest that we can get in base 10; rather, I am arguing that, in base 10, we cannot accurately represent $\frac{1}{3}$


    Answer



Here is a simple argument that $1/3=0.3333\ldots$.



Let's denote $0.33333\ldots$ by $x$. Then



    $$x=0.33333.....$$
    $$10x=3.33333...$$




    Subtracting we get $9x=3$. Thus $x=\frac{3}{9}=\frac{1}{3}$.



    Since $x$ was chosen as $0.3333....$ it means that $0.3333...=\frac{1}{3}$.


    Sunday 21 July 2019

calculus - Prove $\lim_{x\to\infty}x^{\ln(x)} = \infty$



    I am trying to prove



    $$\lim_{x\to\infty}x^{\ln(x)} = \infty$$



    I am going to break this into two methods: one my professor mentioned and my method (which is where the question lies - skip ahead if you must!). Note that this is not homework, but simply an exercise my professor decided to do during notes the other day. The problem is taken from Stewart 7e Calculus (#70a in section 6.3 if you want to bust out your (e)-book).



    Method 1:




    Recall $\ln(e^x) = x, e^{\ln(x)} = x$. Thus we can write the original limit as
    $$\lim_{x\to\infty}\left(e^{\ln(x)}\right)^{\ln(x)} = \lim_{x\to\infty}e^{\left(\ln (x)\right)^2}$$



    He then let $u = \ln(x)$. As $x\to\infty$, then $u=\ln(x) \to\infty$. As $u\to\infty, v = u^2 \to\infty$. Also, as $v\to\infty, e^v \to\infty$. So, as $x\to\infty, e^{\left(\ln (x)\right)^2} \to\infty$. Thus it is sufficient to say $$\lim_{x\to\infty}x^{\ln(x)} = \infty \ \ \ \ \ \ \ \mathrm{Q.E.D.}$$



    Method 2 (my attempt):



    Let $t = \ln x$. As $x\to\infty, t\to\infty$ because the $\ln$ function is strictly increasing.




    $$\lim_{x\to\infty}x^{\ln(x)} \equiv \lim_{t\to\infty}x^t \tag{1}$$



    Does the last statement of line (1) make sense mathematically though since the limit is with the variable $t$, yet the argument contains an $x$ still?



    Since line (1) may not be formally correct, I decided to try to write $x$ in terms of $t$. Recall that I made the substitution $t = \ln x \implies e^t = e^{\ln x} = x$. Thus I rewrote the limit as



$$\lim_{t\to\infty}\left(e^t\right)^t$$ which diverges to $\infty$ as $t\to\infty$.



    As an added bonus, are there any other 'simple' proofs for this limit?


    Answer




    Your statement (1) does not really make sense for the reason you cite. Also note that "converges to $\infty$" is generally not correct; instead one usually says "diverges to $\infty$".



    Simple proof: Observe that for $x\geq e$, we have $x^{\ln x}>x$. Thus $\lim\limits_{x\to\infty} x^{\ln x}\geq \lim\limits_{x\to\infty} x$, which is clearly $\infty$.


radicals - Prove that ${\sqrt2}^{\sqrt2}$ is an irrational number without using a theorem.




    Prove that ${\sqrt2}^{\sqrt2}$ is an irrational number without using Gelfond-Schneider's theorem.




    I'm interested in this problem because I knew that ${\sqrt2}^{\sqrt2}$ is a transcendental number by Gelfond-Schneider's theorem. I've tried to prove that ${\sqrt2}^{\sqrt2}$ is an irrational number without using the Gelfond-Schneider's theorem, but I'm facing difficulty. I need your help.



    I crossposted to MO:

    https://mathoverflow.net/questions/138247


    Answer



    I'm posting an answer just to inform that the question has received an answer by Mark Sapir on MO.



    https://mathoverflow.net/questions/138247/prove-that-sqrt2-sqrt2-is-an-irrational-number-without-using-a-theorem


    sequences and series - Riemann $zeta(3)$ convergence with Cauchy



I'm an undergraduate freshman math student, and we were asked to prove that the sequence $a_{n} =\sum_{k=1}^{n} \frac{1}{k^3}$ converges (obviously, we weren't asked to calculate its limit). Our teacher hinted that we should prove that it's a Cauchy sequence. We don't know much, only Cauchy sequences, a few theorems about sequences, limits, monotonic sequences and such (the basic first few months of an undergraduate freshman course). I'm stuck. Any hints or ideas?




    Here's my attempt:



Let $\varepsilon > 0$. We need to find $N$ such that for all $m > n > N$, $|a_{m}-a_{n}| < \varepsilon$. Here $a_{m}-a_{n} = \sum_{k = n+1}^{m} \frac{1}{k^3}$.



    $\sum_{k = n+1}^{m} \frac{1}{k^3} < \frac{m-n}{(n+1)^3}$.
    But this leads nowhere.



    Note: We don't have to prove it by Cauchy, any solution (from the little we have learnt) will do.


    Answer




    For $k\geq 2$ we have $k^2\geq k+1$



    and



    $$\frac{1}{k^3}\leq \frac{1}{k(k+1)}$$



    but



    $$\sum_{k=2}^n\frac{1}{k(k+1)}=\sum_{k=2}^n (\frac{1}{k}-\frac{1}{k+1})$$




    $$=\frac{1}{2}-\frac{1}{n+1}\leq \frac{1}{2}$$



    thus the sequence of partial sums



$S_n=\sum_{k=2}^n\frac{1}{k^3}$ is increasing and bounded, and therefore convergent (adding the $k=1$ term does not affect convergence).
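Numerically, the partial sums level off quickly (a sketch; the limiting value is $\zeta(3) \approx 1.2020569$, which of course the proof does not need):

```python
for N in [10, 100, 10_000, 1_000_000]:
    print(N, sum(1 / k**3 for k in range(1, N + 1)))
```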


calculus - Calculating $\lim_{x\to0} \left\lfloor\frac{x^2}{\sin x \tan x}\right\rfloor$


    Find $$\lim_{x\to0} \left\lfloor\frac{x^2}{\sin x \tan x}\right\rfloor$$ where $\lfloor\cdot\rfloor$ is greatest integer function




I am a high school teacher. One of my students came up to ask about this limit.
For $\lfloor\frac{\sin x}{x}\rfloor$, I have used $\sin x < x$ (for $x>0$) via increasing/decreasing functions.




I tried to prove $x^2 > \sin x \tan x$ using increasing/decreasing
functions, but I am not getting it.
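A quick numerical look (my own sketch, not part of the original question) suggests the inequality actually runs the other way near $0$, i.e. $\sin x \tan x > x^2$, so the ratio approaches $1$ from below and its floor is $0$ there:

```python
import math

for x in [0.5, 0.1, 0.01, 0.001]:
    r = x**2 / (math.sin(x) * math.tan(x))
    print(x, r, math.floor(r))  # r is slightly below 1, so floor = 0
```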

finite fields - Roots of polynomials of the form $x^q-x-\alpha$ over $\mathbb F_{q^m}$



    Let $q$ be a prime power and $\mathbb F_q$ a finite field with $q$ elements.
    Moreover let $\mathbb F_{q^m}$ be an extension field. For every $\alpha \in \mathbb F_{q^m}$ define
    the polynomial

    $$ g_{\alpha}(x)=x^q-x-\alpha \in \mathbb F_{q^m}[x].$$
    My questions:




1. For which $\alpha$ (and $m$) does the polynomial
  $g_{\alpha}(x)$ have at least one root in $\mathbb F_{q^m}$?


2. What is the splitting field of $g_{\alpha}$?




    Here my thoughts.





    1. If $\beta$ is a root of $g_{\alpha}$, then the set of roots of $g_{\alpha}$ is
      $$ \beta+\mathbb F_q:=\{\beta+\lambda \,|\, \lambda \in \mathbb F_q \}.$$
      This means that if $g_{\alpha}$ has one root in $\mathbb F_{q^m}$ then all the roots belongs to $\mathbb F_{q^m}$.


    2. If $\alpha \neq 0$ then $\beta \notin \mathbb F_q$.




    Any help or reference will be appreciated.
    Thanks.




    Alessandro



    EDIT



It seems, by computation with Macaulay2, that for every $\alpha \in \mathbb F_{q^m}$ there are only two possibilities for the polynomial $g_{\alpha}$.




    1. $g_{\alpha}$ splits into linear factors.


2. There exist $\lambda \in \mathbb F_q$ and $\gamma_1,\ldots, \gamma_{q/p}\in \mathbb F_{q^m}$ (where $p$ is the characteristic of the field) such that

  $$g_{\alpha}(x)=\prod_{i=1}^{q/p} (x^p-\lambda x-\gamma_i). $$
      But I don't know how to prove it.



    Answer



    Partial answer and/or something that is too long to fit into a comment.



    Consider the relative trace function
    $$
    tr^{m}_1:\Bbb{F}_{q^m}\to\Bbb{F}_q,x\mapsto x+x^q+x^{q^2}+\cdots+x^{q^{m-1}}.
    $$

    We easily see (or know) that $tr^m_1$ is $\Bbb{F}_q$-linear, and surjective. Furthermore,
    $$
    tr^m_1(z^q)=tr^m_1(z)
    $$
    for all $z\in\Bbb{F}_{q^m}$. Therefore, again for all $z\in\Bbb{F}_{q^m}$, we have $tr^m_1(z^q-z)=0$. The mapping
    $$P:\Bbb{F}_{q^m}\to\Bbb{F}_{q^m},x\mapsto x^q-x$$
    is also $\Bbb{F}_q$-linear. Clearly $\operatorname{ker}(P)=\Bbb{F}_q$, so rank-nullity theorem implies that $\operatorname{im}(P)$ has dimension $m-1$. But we just saw that $\operatorname{im}(P)\subseteq \operatorname{ker}(tr^m_1)$. Therefore we get equality



    $$\operatorname{im}(P)=\operatorname{ker}(tr^m_1).$$
    Together with your observation this implies that





    $g_\alpha(x)$ has a zero in the field $\Bbb{F}_{q^m}$ if and only if $tr^m_1(\alpha)=0$.




    As you observed, when this holds all the $q$ zeros of $g_\alpha(x)$ will be in the field $\Bbb{F}_{q^m}$.






    The factorization you observed can be described as follows.

    Assume that $tr^m_1(\alpha)\neq0$. Let $z$ be a zero of $g_\alpha(x)$ in some extension field of $\Bbb{F}_{q^m}$. Let's consider the Galois conjugates of $z$ over $\Bbb{F}_{q^m}$. These are gotten by iterating the Frobenius automorphism $F:z\mapsto z^{q^m}$. But we know that $F(\alpha)=\alpha$ and that
    $$
    F(z)=z^{q^m}=z^{q^m}-g_\alpha(z)=z+\alpha.
    $$
    An easy induction then proves that for all $i=1,2,\ldots,p$ we have
    $$
    F^i(z)=z+i\cdot\alpha.
    $$
    In particular the lowest power of $F$ that maps $z$ to itself is $F^p$. Therefore the minimal polynomial of $z$ over $\Bbb{F}_{q^m}$ is
    $$

    m_z(x):=\prod_{i=0}^{p-1}(x-z-i\alpha).
    $$
    Let's denote $T(x)=x^p-x=\prod_{i=0}^{p-1}(x-i)\in\Bbb{F}_p[x]$. We see that
    $$
    T(\frac{x-z}\alpha)=\prod_{i=0}^{p-1}(\frac{x-z}{\alpha}-i)=\alpha^{-p}m_z(x).
    $$
    But $T$ is additive, i.e. $T(x+y)=T(x)+T(y)$ for all $x,y\in\overline{\Bbb{F}_q}$, so
    $$
    m_z(x)=\alpha^p T(\frac{x-z}\alpha)=\alpha^p T(x/\alpha)-\alpha^p T(z/\alpha)
    =x^p-\alpha^{p-1}x-\alpha^pT(z/\alpha).

    $$
    In particular $T(z/\alpha)$ must be an element of $\Bbb{F}_{q^m}$. These are the factors that you see in your Macaulay output. In particular your $\lambda=\alpha^{p-1}$.
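The trace criterion is small enough to verify exhaustively in a toy case (my own sketch for $q = p = 3$, $m = 2$, modeling $\Bbb{F}_9 = \Bbb{F}_3[t]/(t^2+1)$ as pairs $(a,b) \sim a+bt$):

```python
p = 3  # here q = p = 3 and m = 2

def add(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % p, (a * d + b * c) % p)  # using t^2 = -1

def frob(x):  # x -> x^p, the q-power map here
    r = (1, 0)
    for _ in range(p):
        r = mul(r, x)
    return r

F9 = [(a, b) for a in range(p) for b in range(p)]
for alpha in F9:
    has_root = any(add(frob(z), (-z[0] % p, -z[1] % p)) == alpha for z in F9)
    trace = add(alpha, frob(alpha))  # Tr: F_9 -> F_3 is x + x^3
    assert has_root == (trace == (0, 0))
print("g_alpha has a root in F_9 exactly when Tr(alpha) = 0")
```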


    Saturday 20 July 2019

sequences and series - Showing $\sum_{k=1}^{\infty} 1/k^2 = \pi^2/6$








I was reading my PDE book, and the following series appears:
$$\sum _{k=1}^{\infty} \dfrac{1}{k^2} = \dfrac{\pi^2}{6}$$
There, we prove that this series equals $\frac{\pi^2}{6}$ by methods of Fourier analysis, but...



Do you know another proof, perhaps simpler or more beautiful?

    real analysis - Convolution support and property



    I tried to answer a question from a user, as in the following link



    Properties of the solution of the schrodinger equation




As noted in the comments, I am aware that my answer is not satisfactory, but intuitively it seems right.



In other words, it is well known that if one of the two functions $f$ or $g$ has compact support, and we assume that their convolution is well defined, then we have that



    $$\mathrm{supp}(f \ast g) \subset \mathrm{supp}(f)+\mathrm{supp}(g)$$



My question is the reverse: if $f$ does not have compact support and $g$ has compact support, may we conclude that the convolution does not have compact support?


    Answer



    This is not true. It is even possible that the convolution is identically zero. Consider $f(x) \equiv 1$, which clearly does not have compact support, and let $g(x)$ be any odd function with compact support. Then




    $$(f*g)(x) = \int_{-\infty}^\infty g(y) \, dy = 0.$$
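This is easy to see numerically as well (a sketch with one concrete odd bump):

```python
import numpy as np

x = np.linspace(-1, 1, 2001)  # symmetric grid
g = x * (1 - x**2)            # odd, supported on [-1, 1]
h = x[1] - x[0]
print(g.sum() * h)            # (f*g)(x) = integral of g ≈ 0 for every x
```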


    Friday 19 July 2019

    algebra precalculus - Compute $ax^3+by^3$ given $ax^2+by^2$ and $ax+by$.



    Given are positive real numbers $A$ and $B$ and positive integers $a$ and $b$ such that
    $$ \begin{aligned} ax+by &= A\\ ax^2+by^2 &= B.\end{aligned} \tag{*}$$
    What are the possible values of

    $$ ax^3 + by^3 = C?$$



Think of it as $a+b$ numbers, $a$ of them equal to $x$ and $b$ of them equal to $y$, such that their sum and the sum of their squares are given, and the sum of their cubes is asked for. I'm not sure if the answer can be expressed as a rational function of $A$, $B$, $a$ and $b$, but I hope so.



    Note that one has the equation
    $$ (A-ax)^2 = (by)^2 = b(B-ax^2),$$
    thus
$$\begin{aligned} a(a+b)x^2 &= bB-A^2 + 2aAx,\\ b(a+b)y^2 &= aB-A^2 + 2bAy.\end{aligned} \tag{1}$$
Multiplying the first with $x$, the second with $y$, and adding gives
$$\begin{aligned} (a+b)C &= B(bx+ay)-A^2(x+y) + 2AB \\ &= B(a+b)(x+y)- B(ax+by) - A^2(x+y)+2AB \\&= \big(B(a+b)- A^2\big)\,(x+y) + AB. \end{aligned}$$




So this boils down to determining the possible values of $x+y$. Here I am a bit stuck. Note also that from (*) one can deduce (multiply the first equation by $x$ and by $y$, add, and subtract the second):
$$ (a+b)xy = A(x+y) - B.$$



One could simply solve the quadratic equations (1), verify which combination of solutions works for the original problem, and use this to calculate $C$. But actually I am interested in a solution that will work for higher-order cases as well (3 unknowns, where the sums of the first, second and third powers are known and the sum of fourth powers must be determined), so I am looking for an approach that avoids this. Maybe there is a quadratic equation with $x$ and $y$ as roots hidden somewhere?


    Answer



    Well, you have 3 equations,



    $$ax+by = A\tag{1}$$




    $$ax^2+by^2 = B\tag{2}$$



    $$ax^3+by^3 = C\tag{3}$$



    If you want to express C in terms of $a,b,A,B$, you can eliminate $x,y$ between $(1), (2), (3)$ to have an equation purely in $a,b,A,B,C$ (easily done in Mathematica using the Resultant command).



However, we get a quadratic in $C$. But we can use (1) and (2) to simplify its discriminant, and I ended up with,



$$C = \frac{A(-2A^2+3B(a+b))\pm ab\,(a-b)\,D^3}{(a+b)^2}$$




    where,



    $$D= (x-y) = \sqrt{\frac{-A^2+(a+b)B}{ab}}\tag{4}$$



    Hence, C is not a rational function of $a,b,A,B$, as you need to take square roots as shown by $(4)$.
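The closed form (with the sign convention $D = x-y$) can be verified symbolically (my own sketch using SymPy):

```python
import sympy as sp

a, b, x, y = sp.symbols('a b x y')
A = a * x + b * y
B = a * x**2 + b * y**2
C = a * x**3 + b * y**3
D = x - y  # one branch of the square root in (4)
expr = (A * (-2 * A**2 + 3 * B * (a + b)) - a * b * (a - b) * D**3) / (a + b) ** 2
print(sp.simplify(expr - C))  # -> 0
```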


    commutative algebra - What is wrong with my proof of a step in Artin's construction of algebraic closure?



    I'm working through Atiyah & MacDonald, and there's an exercise basically asking you to fill in a certain step in Artin's construction of an algebraic closure for a given field. The question is (this is not a quote from the book as I haven't got it here, but it's the same idea):




    Let $k$ be a field. Let $\Sigma$ be the set of irreducible monic polynomials over $k$. Then form the polynomial ring $A=k[\{x_f:f\in\Sigma\}]$, where the $x_f$ are indeterminates indexed by the irreducible polynomials in $\Sigma$. Define an ideal $\mathfrak{a}$ to be the ideal in $A$ generated by polynomials of the form $f(x_f)$, where $f\in\Sigma$. Show that $\mathfrak{a}$ is a proper ideal.





    (The question then goes on to describe how you can use this to construct an algebraic closure of $k$.)



    I think I've found an answer to this. However, it's a lot shorter than other answers I've seen online, so I think there must be a problem with it. Here it is:




    My answer



    It is sufficient to show that we never have an equation of the form:

    $$
    F_1f_1(x_{f_1})+\dots+F_nf_n(x_{f_n})=1
    $$



where $f_1,\dots,f_n\in\Sigma$ and $F_1,\dots,F_n\in A$. But we know that there is some field extension $K$ of $k$ in which the polynomials $f_i$ have roots $\alpha_i$. Working in $K$, we substitute $\alpha_i$ for $x_{f_i}$ in the above expression (and, say, $0$ for any other indeterminate occurring in the $F_j$), and obtain:
    $$0=1$$
    which is clearly impossible, since $k\subset K$, so $K$ is not the one-element field. It follows that $\mathfrak a$ is a proper ideal of $A$.




    What is wrong with this answer?



    Answer



    There is nothing wrong with your proof.
    In fact, this is exactly the same argument that Serge Lang gives in his Algebra
    at page 232 (following Artin's argument).


    sequences and series - Proof for Mean of Geometric Distribution

    I am studying the proof for the mean of the Geometric Distribution



    http://www.math.uah.edu/stat/bernoulli/Geometric.html

    (The first arrow on Point No. 8 on the first page).



It seems to be an arithmetico-geometric series (which I was able to sum using



    http://en.wikipedia.org/wiki/Arithmetico-geometric_sequence#Sum_to_infinite_terms)
However, I have not been able to find any site which uses this simple property above.



    Instead, they differentiate.



The way the differentiation works is: 1. You have $n x^{n-1}$, so you integrate that to get $x^n$, and add the differentiation to "balance". 2. You interchange the differentiation and summation (a slightly complicated topic). 3. Complete the summation (geometric series). 4. Complete the differentiation. 5. Get your answer.




    Questions:




Is there anything wrong with arriving at the formula the way I have done?
Isn't it better to use the arithmetico-geometric formula than to go through all that calculus just to convert an arithmetico-geometric series into a geometric one?
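For reference, here is the shift computation itself (my own sketch, using the support-$\{1,2,\dots\}$ convention with success probability $p$ and $q = 1-p$). Let $E=\sum_{n\ge1} n\,p\,q^{n-1}$. Then
$$qE=\sum_{n\ge1} n\,p\,q^{n} = \sum_{n\ge2}(n-1)\,p\,q^{n-1},$$
so subtracting,
$$E-qE=\sum_{n\ge1} p\,q^{n-1} = \frac{p}{1-q}=1,$$
and hence $E(1-q)=1$, i.e. $E=\frac{1}{p}$, with no differentiation or interchange of limits beyond the geometric series itself.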


    Thursday 18 July 2019

    modular arithmetic - Using Fermat's Little Theorem or Euler's Theorem to find the Multiplicative Inverse -- Need some help understanding the solutions here.





    The answers to multiplicative inverses modulo a prime can be found without using the extended Euclidean algorithm.
    a. $8^{-1}\bmod17=8^{17-2}\bmod17=8^{15}\bmod17=15\bmod17$
    b. $5^{-1}\bmod23=5^{23-2}\bmod23=5^{21}\bmod23=14\bmod23$
    c. $60^{-1}\bmod101=60^{101-2}\bmod101=60^{99}\bmod101=32\bmod101$
    d. $22^{-1}\bmod211=22^{211-2}\bmod211=22^{209}\bmod211=48\bmod211$




    The above is using Fermat's little theorem to find the multiplicative inverse of some modular functions. However, there is a final step just before arriving at the answer that I do not understand how to solve, except to solve it by factoring. Factoring takes a very long time.



    Basically, I don't see how the answers move from the third step to the fourth step aside from arriving at the answer by factoring. There has to be a better way using Fermat's Theorem or Euler's Theorem.


    Answer



    Bill's way seems great. Here's another approach (with the goal of finding easily reducible powers)



    \begin{align}

    8^{15}\pmod {17} &\equiv 2^{45}\\
    &\equiv 2\cdot (2^{4})^{11}\\
    &\equiv 2\cdot (-1)^{11} \tag{$16 \equiv -1$}\\
    &\equiv 15 \tag{$15 \equiv -2$}
    \end{align}






    \begin{align}
    5^{21}\pmod {23} &\equiv 5\cdot 5^{20}\\

    &\equiv 5\cdot 25^{10}\\
    &\equiv 5\cdot 2^{10} \tag{$25 \equiv 2$}\\
    &\equiv 5\cdot 32^2\\
    &\equiv 5\cdot 9^2 \tag{$32 \equiv 9$}\\
    &\equiv 5\cdot 12 \tag{$81 \equiv 12$}\\
    &\equiv 14 \tag{$60 \equiv 14$}
    \end{align}







\begin{align}
60^{99}\pmod {101} &\equiv 10^{99}\cdot 6^{99}\\
&\equiv 10\cdot 100^{49}\cdot 6^4 \cdot (6^5)^{19}\\
&\equiv (-10)\cdot(-6^4) \tag{$100 \equiv -1$, $6^5 = 7776 \equiv -1$}\\
&\equiv 12960\\
&\equiv 32
\end{align}







    \begin{align}
    22^{209}\pmod {211} &\equiv (2\cdot11)^{11\cdot19}\\
    &\equiv ?\\
    &\text{This is where the superiority of Bill's approach becomes obvious}
    \end{align}
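All four inverses are easy to confirm with fast modular exponentiation (a sketch):

```python
# Fermat: a^(p-2) ≡ a^(-1) (mod p) for prime p and gcd(a, p) = 1.
for a, p in [(8, 17), (5, 23), (60, 101), (22, 211)]:
    inv = pow(a, p - 2, p)
    print(a, p, inv, a * inv % p)  # last column is always 1
```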


    integration - What's wrong with my double integral for determining the area of an ellipse?



    It is parameterized as follows: $x = 2\cos(\theta)$ and $y = 3\sin(\theta)$.




    This is my double integral. It is not evaluating to the right answer. Why?



    $$\int_0^{2\pi} \int_0^{\sqrt{4\cos^2(\theta) + 9\sin^2(\theta)}} dr\,d\theta$$



I remember from calculus 3 that to get the area in polar coordinates, I just evaluate the integral in $dr$ and $d\theta$. I don't see what's wrong with the limits of integration. The radius runs from 0 to the formula, and the angle covers a full revolution.


    Answer



    In order to include a diagram, I'm turning my comment into an answer. As I said, the $\theta$ that appears in this parametrization is NOT the polar coordinates $\theta$. You can see this quite easily if you think about stretching a circle to make the ellipse. I've substituted $t$ in the parametrization and indicated the polar coordinates $\theta$ as well.



[figure: ellipse obtained by stretching a circle, with the parameter $t$ and the polar angle $\theta$ both indicated]




    To do this integral correctly in polar coordinates you must get the polar coordinates equation of the ellipse for starters:
    $$\frac{x^2}4 + \frac{y^2}9 = 1 \implies \frac{(r\cos\theta)^2}4 + \frac{(r\sin\theta)^2}9 = 1 \implies r = \frac1{\sqrt{\frac{\cos^2\theta}4+\frac{\sin^2\theta}9}}.$$
    Ugh! This will give us the integral
    $$\frac12\int_0^{2\pi} \frac1{\frac{\cos^2\theta}4+\frac{\sin^2\theta}9}\,d\theta,$$ which can be done, but this is not the right way to do this problem!
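    For contrast, here is the computation done the natural way with the given parametrization, via the Green's-theorem area formula (this check is my addition, not part of the original answer). Note also that the polar area element is $r\,dr\,d\theta$, so the original double integral is missing a factor of $r$ in any case:
    $$A=\frac12\oint(x\,dy-y\,dx)=\frac12\int_0^{2\pi}\bigl(2\cos t\cdot 3\cos t-3\sin t\cdot(-2\sin t)\bigr)\,dt=\frac12\int_0^{2\pi}6\,dt=6\pi.$$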


    Modular exponentiation problem




    $10^7 \pmod {77}$



    I tried repeated squaring, which worked but took many computations. I also tried Fermat's little theorem, but since $7 < 77$ I didn't know how to use it.



    Any simpler way to do this?


    Answer



    Not much effort saved, but work separately mod $11$ (easy, since $10\equiv -1\pmod{11}$) and mod $7$ (easy since by Fermat $10^7\equiv 10\pmod 7$). Then stitch the answers together using the Chinese Remainder Theorem. In this case that can be done by inspection: the answer is $10$.
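    A quick numerical confirmation of that stitching (my sketch, not part of the original answer):

    # 10^7 modulo 11, 7, and 77; CRT says the last is determined by the first two.
    print(pow(10, 7, 11))   # 10, since 10 = -1 (mod 11)
    print(pow(10, 7, 7))    # 3, since 10^7 = 10 = 3 (mod 7) by Fermat
    print(pow(10, 7, 77))   # 10, the unique residue mod 77 matching both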


    linear algebra - Determinant of a matrix in a block form




    Let $A, B, C$ be matrices with size $m \times m$, $n \times n$, and $n \times m$, respectively. If $\det(A) = 2$ and $\det(B) = 3,$ then find
    $$\det \begin{pmatrix} 0 & A \\ B & C \end{pmatrix} =\ldots $$



    I am stuck on this problem. I also wonder: how can we calculate the determinant of a matrix given in terms of submatrices (blocks)?
    Please, can anyone help me?


    Answer



    Hints.



    Step 1.

    $$
    \det \left(\begin{array}{cc} 0 & A \\ B & C\end{array}\right)
    =(-1)^{mn}\det \left(\begin{array}{cc} A & 0 \\ C & B\end{array}\right)
    $$



    Step 2.
    $$
    \det \left(\begin{array}{cc} A & 0 \\ C & B\end{array}\right)=\det A\cdot \det B
    $$




    Step 1 is obtained by $mn$ transpositions of columns (moving each of the $m$ columns of the block containing $A$ past the $n$ columns of the block containing $B$), and as many changes of sign.



    Step 2, is obtained using the Jordan forms of $A$ and $B$.
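    A quick numerical sanity check of the resulting identity $\det\begin{pmatrix}0&A\\B&C\end{pmatrix}=(-1)^{mn}\det A\cdot\det B$ (my sketch, not part of the original answer):

    import numpy as np

    # Random test with m = 2, n = 3.
    rng = np.random.default_rng(0)
    m, n = 2, 3
    A = rng.standard_normal((m, m))
    B = rng.standard_normal((n, n))
    C = rng.standard_normal((n, m))
    M = np.block([[np.zeros((m, n)), A], [B, C]])
    print(np.linalg.det(M))
    print((-1) ** (m * n) * np.linalg.det(A) * np.linalg.det(B))  # same value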


    combinatorics - Sum of infinite series $\sum_{n=50}^{\infty} \frac{1}{\binom{n}{50}}$




    Find the sum of the infinite series $$S=\sum_{n=50}^{\infty} \frac{1}{\binom{n}{50}}$$ My try:



    $$50S=\sum_{n=50}^{\infty} \frac{n-(n-50)}{\binom{n}{50}}$$ so



    $$50S=\sum_{n=50}^{\infty}\frac{n}{\binom{n}{50}}-\sum_{n=50}^{\infty}\frac{n-50}{\binom{n}{n-50}} $$ so



    $$50S=\sum_{n=50}^{\infty}\frac{n}{\binom{n}{50}}-\sum_{n=0}^{\infty}\frac{n}{{\binom{n+50}{50}}}$$



    Any clue on how to proceed further?


    Answer




    Hint:



    $$ \frac{1}{\binom{n}{50}} = \frac{50}{49} \bigg( \frac{1}{\binom{n-1}{49}} - \frac{1}{\binom{n}{49}} \bigg). $$
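    Carrying the hint through (the telescoping itself is my addition): since $1/\binom{n}{49}\to 0$ as $n\to\infty$,

    $$S=\frac{50}{49}\sum_{n=50}^{\infty}\left(\frac{1}{\binom{n-1}{49}}-\frac{1}{\binom{n}{49}}\right)=\frac{50}{49}\cdot\frac{1}{\binom{49}{49}}=\frac{50}{49}.$$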


    complex analysis - Why do we negate the imaginary part when conjugating?



    For $z=x+iy \in \mathbb C$ we all know the definition for the "conjugate" of $z$, $\bar{z}=x-iy$. Geometrically this is the reflection of $z$ across the $x$-axis (the real axis).




    My question is: couldn't we have defined $\underline{z}=-x+iy$ instead? (this is the reflection across the $y$-axis, i.e. the imaginary axis) Is there anything wrong with it?



    I agree that the formula $|z|^2=z \bar{z}$ looks better than $|z|^2=-z \underline{z}$, but are there any more serious problems?


    Answer



    No, the nice thing about conjugation is that it is an automorphism: $\overline{zw} = \bar z\bar w$ and $\overline{z+w}=\bar z+\bar w$.



    At heart, what conjugacy shows you is that, while $i$ and $-i$ are different complex numbers, they have exactly the same behavior. That's not surprising, because we define $i$ so that $i^2=-1$, but then $(-i)^2=-1$. How do we distinguish these two square roots? What if we defined the complex numbers as $a+bi+cj$ where $i+j=0$ and $i^2=j^2=-1$? Then which would be the "primary" square root of $-1$? There is really no way to tell. With positive real numbers, we can always pick the positive square root, but we can't define "positive" in the complex numbers. This yields a duality in the complex numbers, represented by conjugation.
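    A quick concrete check (my addition) of how $\underline{z}=-\bar z$ fails to be multiplicative:
    $$\underline{1\cdot 1}=\underline{1}=-1,\qquad \underline{1}\cdot\underline{1}=(-1)(-1)=1,$$
    so $\underline{zw}\neq \underline{z}\,\underline{w}$ in general, whereas $\overline{zw}=\bar z\,\bar w$ always holds.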


    Wednesday 17 July 2019

    complex numbers - Cartesian $-10i$ to Polar form



    I am trying to convert the following problem to polar form:



    $$z=-j10.$$



    Using this equation, where $|z|=r=\sqrt{x^2+y^2}$ and $\arg z=\theta=\arctan(y/x).$




    $$\eqalign{z&=|z|e^{j\arg z}\\ &=re^{j\theta}\\&=r\angle\theta.}$$



    I determined that $x = 0$ and $y = -10$. However, if I plug $x$ and $y$ into $\arctan(y/x)$, the result is undefined, since we're dividing by $0$. The expected solution is $10\angle{-90^\circ}$.



    Could someone give me some insight on how to convert the above cartesian to polar form?


    Answer



    From





    http://hotmath.com/hotmath_help/topics/polar-form-of-a-complex-number.html




    The polar of a complex number is given by:
    $$z = r(\cos(\theta) + i\sin(\theta))$$
    In your example:
    $$z = -10i$$
    $$r = \sqrt{0^2+(-10)^2} = 10$$
    $$\theta = \frac{3\pi}{2}$$
    $\arctan(y/x)$ is undefined here since $x=0$, so $\theta$ must be read off directly: the number lies on the negative imaginary axis (not in the third quadrant), hence $\theta = \frac{3\pi}{2}$, equivalently $-\frac{\pi}{2}$. So the polar form of our complex number is $$z = 10(\cos(\tfrac{3\pi}{2}) + i\sin(\tfrac{3\pi}{2}))$$
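    Python's cmath module gives the same answer, with the angle reported as $-\pi/2$ (a quick check of mine, not part of the original answer):

    import cmath

    # cmath.polar returns (r, phase) with phase in (-pi, pi].
    r, theta = cmath.polar(-10j)
    print(r, theta)              # 10.0 -1.5707963267948966, i.e. 10 at -90 degrees
    print(cmath.rect(r, theta))  # back to -10j, up to rounding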



    probability - How do we find this expected value?





    I'm just a little confused on this. I'm pretty sure that I need to use indicators for this but I'm not sure how I find the probability. The question goes like this:




    A company puts five different types of prizes into their cereal boxes, one in each box and in equal proportions. If a customer decides to collect all five prizes, what is the expected number of boxes of cereals that he or she should buy?





    I have seen something like this before and I feel that I'm close, I'm just stuck on the probability. So far I have said that $$X_i=\begin{cases}1 & \text{if the $i^{th}$ box contains a new prize}\\ 0 & \text{if no new prize is obtained} \end{cases}$$ I know that the probability of a new prize after the first box is $\frac45$ (because obviously the person would get a new prize with the first box) and then the probability of a new prize after the second prize is obtained is $\frac35$, and so on and so forth until the fifth prize is obtained. What am I doing wrong?! Or "what am I missing?!" would be the more appropriate question.


    Answer



    As the expected number of tries to obtain a success with probability $p$ is $\frac{1}{p}$, you get the expected number:
    $$1+\frac{5}{4}+\frac{5}{3}+\frac{5}{2}+5=\frac{12+15+20+30+60}{12}=\frac{137}{12}\approx 11.41$$
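    A simulation bears this out; here is a minimal sketch (my addition, not part of the original answer):

    import random

    # Monte Carlo estimate of the coupon-collector expectation for 5 prizes.
    def boxes_needed(n_prizes=5):
        seen, boxes = set(), 0
        while len(seen) < n_prizes:
            seen.add(random.randrange(n_prizes))  # each box holds a uniform prize
            boxes += 1
        return boxes

    trials = 100_000
    print(sum(boxes_needed() for _ in range(trials)) / trials)  # about 11.42 = 137/12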


    Tuesday 16 July 2019

    field theory - A proof of Artin's linear independence of characters

    I came up with a proof of Artin's linear independence of characters in field theory. The usual proof uses a clever trick devised by Artin. Since I'm not as clever as him, I prefer a proof which doesn't use a clever trick. Is this proof well-known? The proof consists of a few easy steps.



    Step 1.



    Let $K$ be a field. Let $A \neq 0$ be a not-necessarily-commutative associative unital $K$-algebra. Let $f_1,\dotsc,f_n$ be distinct $K$-algebra homomorphisms from $A$ to $K$. Let $\phi:A \to K^n$ be the map defined by $\phi(x) = (f_1(x),\dotsc,f_n(x))$. Then $\phi$ is surjective.




    The proof is an easy consequence of the Chinese remainder theorem.



    Step 2.



    Let $f_1,\dotsc,f_n$ be as above. There are elements $x_1,\dotsc,x_n$ of $A$ such that $f_j(x_i) = \delta(i, j)$ where $\delta(i, j)$ is Kronecker's delta.



    The proof is an easy consequence of Step 1.



    Step 3




    Let $K$ and $A$ be as above. Let $\text{Homalg}(A, K)$ be the set of $K$-algebra homomorphisms from $A$ to $K$. Let $\text{Hom}(A, K)$ be the set of $K$-linear maps from $A$ to $K$. Then $\text{Homalg}(A, K)$ is a linearly independent subset of $\text{Hom}(A, K)$.



    The proof is an easy consequence of Step 2.



    Step 4 (Artin's linear independence of characters)



    Let $K$ be a field. $K$ is regarded as a monoid by multiplication. Let $M$ be a not-necessarily-commutative monoid. Let $\text{Hom}(M, K)$ be the set of monoid homomorphisms. Let $K^M$ be the set of maps from $M$ to $K$. $K^M$ is regarded as a vector space over $K$. Then $\text{Hom}(M, K)$ is a linearly independent subset of $K^M$.



    The proof is an easy consequence of Step 3 if one considers the monoid algebra $K[M]$.
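    Spelling Step 4 out slightly (my addition): a monoid homomorphism $f\colon M\to K$ extends $K$-linearly to an algebra homomorphism $\tilde f\colon K[M]\to K$, distinct $f$'s give distinct $\tilde f$'s, and a linear relation $\sum_i c_i f_i=0$ in $K^M$ would give the same relation $\sum_i c_i \tilde f_i=0$ in $\text{Hom}(K[M],K)$, contradicting Step 3.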

    Monday 15 July 2019

    recreational mathematics - Why is 11 times the 7th term of a fibonacci series equal to the sum of 10 terms?




    Why is 11 times the 7th term of a fibonacci series equal to the sum of 10 terms?




    I was watching scam-school on youtube the other day and this number trick just astonished me. Can someone please explain why this works?




    After a lot of searching, I've been stumbling onto slightly complicated mathematical explanations. An explanation of a simpler nature, one that a child can understand, would be much appreciated.



    Also, can you extend this to find the sum of $n$ terms of a Fibonacci-type sequence?


    Answer



    @Claude Leibovici



    In fact, there is a different way to answer this question using characteristic polynomials.



    All Fibonacci-like sequences are associated with the same characteristic polynomial $x^2-x-1$ due to their common property : $$\psi_{n+2}-\psi_{n+1}-\psi_{n}=0.$$




    Let us define a new sequence in the following way :
    $$\chi_n:=(\psi_{n+1}+\psi_{n+2}+...+\psi_{n+10})-11 \psi_{n+7}. \tag{1}$$



    We want to show that, for any $n \geq 0$, $\chi_n=0$.



    This is an easy consequence of the fact that the characteristic polynomial of sequence $\chi_n$, i.e.,



    $$p(x):=(x+x^2+...+x^9+x^{10})-11x^7$$



    is divisible by $(x^2-x-1).$




    Precisely :



    $$p(x)=x(x^2 - x - 1)(x^7 + 2x^6 + 4x^5 - 4x^4 + x^3 - 2x^2 - 1).$$
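    For the simpler, child-friendly explanation the asker wanted (my addition; the answer above argues via the characteristic polynomial): write the ten terms out starting from $a$ and $b$:
    $$a,\; b,\; a+b,\; a+2b,\; 2a+3b,\; 3a+5b,\; 5a+8b,\; 8a+13b,\; 13a+21b,\; 21a+34b.$$
    Their sum is $55a+88b=11(5a+8b)$, and $5a+8b$ is exactly the seventh term.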


    Sunday 14 July 2019

    real analysis - Evaluate $\lim_{n \to \infty} ((15)^n +([(1+0.0001)^{10000}])^n)^{\frac{1}{n}}$




    Evaluate $\lim_{n \to \infty} ((15)^n +([(1+0.0001)^{10000}])^n)^{\frac{1}{n}}$ Here [.] denotes the greatest integer function.



    My try: I know how to solve this kind of problem: $\lim_{n \to \infty} (a^n + b^n)^{\frac{1}{n}}$ where $a, b \geq 0$. But here, how do I find $[(1+0.0001)^{10000}]$?



    Can anyone please help me out?



    Thank You.


    Answer



    Since for all $n\in \mathbb{N}$




    $$2\le\left(1+\frac{1}{n}\right)^n \le e < 3$$



    we have that



    $$\left((15)^n +\left[\left(1+\frac1{10000}\right)^{10000}\right]^n\right)^{\frac{1}{n}}= \left((15)^n +2^n\right)^{\frac{1}{n}}=15 \left(1 +(2/15)^n\right)^{\frac{1}{n}}\to 15$$
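    A one-line numerical confirmation of the bracketed value (my sketch; floating point is adequate here because $(1+10^{-4})^{10000}\approx 2.718$ is far from an integer):

    # floor((1 + 1/10000)^10000): just below e, so the floor is 2.
    print(int((1 + 1e-4) ** 10000))  # 2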


    elementary number theory - Using Extended Euclidean Algorithm for $85$ and $45$




    Apply the Extended Euclidean Algorithm of back-substitution to find
    the value of $\gcd(85, 45)$ and to express $\gcd(85, 45)$ in the form $85x + 45y$ for a pair of integers $x$ and $y$.





    I have no idea what is the difference between the regular EEA and the back-substitution EEA. The only thing that I have been taught is constructing the EEA table, for some values a, b. Can anyone help me explain this? Thanks a ton!


    Answer



    You’re probably intended to do the substitutions explicitly. You have



    $$\begin{align*}
    85&=1\cdot45+40\\
    45&=1\cdot40+5\\
    40&=8\cdot5\;.
    \end{align*}$$




    Now work backwards:



    $$\begin{align*}
    5&=1\cdot45-1\cdot40\\
    &=1\cdot45-1\cdot(1\cdot85-1\cdot45)\\
    &=(-1)\cdot85+2\cdot45\;.
    \end{align*}$$



    The tabular method is just a shortcut for this explicit back-substitution.
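    The same computation, mechanized; a minimal Python sketch of the extended algorithm (my illustration, not part of the answer):

    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    def extended_gcd(a, b):
        old_r, r = a, b
        old_s, s = 1, 0
        old_t, t = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
            old_t, t = t, old_t - q * t
        return old_r, old_s, old_t

    print(extended_gcd(85, 45))  # (5, -1, 2): 85*(-1) + 45*2 = 5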



    probability - Let $F_X(x):=P(X\leq x)$ be a distribution function of a random variable $X$. Prove that $F_X$ is right-continuous.

    Let $F_X(x):=P(X\leq x)$ be a distribution function of a random variable $X$.



    Prove that $F_X$ is right-continuous.



    I need to show that for every non-increasing sequence $x_n$ with $\lim x_n=x$ I will get:

    $$\lim_{n\to\infty}F_X(x_n)=F_X(x)$$




    How do I show this ? Any ideas ?

    linear algebra - What are the eigenvalues of a matrix whose elements all equal 1?




    As in the subject: given a matrix $A$ of size $n$ with all elements exactly equal to $1$.




    What are the eigenvalues of that matrix ?


    Answer



    Suppose $\,\begin{pmatrix}x_1\\x_2\\...\\x_n\end{pmatrix}\,$ is an eigenvector of such a matrix corresponding to an eigenvalue $\,\lambda\,$, then



    $$\begin{pmatrix}1&1&...&1\\1&1&...&1\\...&...&...&...\\1&1&...&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\\...\\x_n\end{pmatrix}=\begin{pmatrix}x_1+x_2+...+x_n\\x_1+x_2+...+x_n\\.................\\x_1+x_2+...+x_n\end{pmatrix}=\begin{pmatrix}\lambda x_1\\\lambda x_2\\..\\\lambda x_n\end{pmatrix}$$



    One obvious solution to the above is



    $$W:=\left\{\begin{pmatrix}x_1\\x_2\\..\\x_n\end{pmatrix}\;;\;x_1+...+x_n=0\right\}\,\,\,,\,\,\lambda=0$$




    For sure, $\,\dim W=n-1\,$ (no need to be a wizard to "see" this solution since the matrix is singular and thus one of its eigenvalues must be zero)



    Other solution, perhaps not as trivial as the above but also pretty simple, imo, is



    $$U:=\left\{\begin{pmatrix}x_1\\x_2\\..\\x_n\end{pmatrix}\;;\;x_1=x_2=...=x_n\right\}\,\,\,,\,\,\lambda=n$$



    Again, it's easy to check that $\,\dim U=1\,$ .



    Now, just pay attention to the fact that $\,W\cap U=\{0\}\,$ unless the dimension of the vector space $\,V\,$ we're working on is divisible by the characteristic of the base field (if you're used to real/complex vector spaces and you aren't sure what the characteristic of a field is, disregard the last comment)




    Thus, assuming this is the case, we get $\,\dim(W+U)=n=\dim V\Longrightarrow V=W\oplus U\,$ and we've thus found all the possible eigenvalues there are.



    BTW, as a side effect of the above, we get that our matrix is diagonalizable.
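    A quick numerical confirmation (my sketch, not part of the original answer):

    import numpy as np

    # Eigenvalues of the n x n all-ones matrix: 0 with multiplicity n-1, and n.
    n = 5
    eig = np.linalg.eigvalsh(np.ones((n, n)))  # the matrix is symmetric
    print(np.round(eig, 10))  # approximately [0, 0, 0, 0, 5]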


    real analysis - is it possible to prove that this number is not rational



    It is an idea I had when reading the proof that $(0,1)$ is uncountable. There the numbers in $(0,1)$ are written into a list in decimal expansion and then the diagonal is modified and the resulting number is a number not on the list.



    Now instead consider $S= \mathbb Q \cap (0,1)$. Like in the proof about $(0,1)$, write the numbers into a list in decimal expansion. Add $1$ to every digit on the diagonal and compute the remainder modulo $10$. I am trying to prove that this new diagonal number is not rational, but without using the knowledge that the rationals are countable. Here is the proof:



    There are two cases: A rational has either a finite expansion or is periodic. Let the digits in the expansions be called $a_{mn}$. Let the new diagonal number after the modification be $d_n$.




    In the first case: if $a_{nn}$ is finite then $d_n$ is finite and $d_n = 0$ for $n>N$ for some $N$. Then it is possible to find $n+1$ different periodic rationals with no $0$ in their expansions. But this is a contradiction. Therefore $a_{nn}$ cannot be finite.



    In the second case: if $a_{nn}$ is a periodic rational with period $c_1 c_2 \dots c_N$, then by the same argument as in the finite case this period cannot be constant ($N=1$). It is forced to contain all digits in $\{0, ..., 9\}$. But what now? Is it possible to finish this proof without using that the rationals are countable?


    Answer



    You start by explicitly assuming that $S=\mathbb Q\cap (0,1)$ is countable as you write all these numbers into a list.
    If your list does not contain all elements of $S$ it is well possible that the antidiagonal number is in fact one of the rationals not in the list.
    For example if $S$ consists only of those rational numbers having at least one $2$ in their decimal expansion, it might happen that the diagonal number is simply $0.222\ldots =\frac29$ and your antidiagonal number becomes $0.333\ldots=\frac13$, which is rational but not an element of $S$.
    On the other hand, if $S$ really contains all eventually periodic decimal expansions then it is clear that the antidiagonal is not eventually periodic as it differs from each single element of $S$.




    By the way, you should have a closer look at how you define your antidiagonal number: You might accidentally end up with $0.000\ldots=0$ or $0.999\ldots = 1$


    Saturday 13 July 2019

    trigonometry - Proving the identity $\csc x−\sin x = (\cot x)(\cos x)$



    We recently started Trigonometry and I was trying to solve this.



    $$\csc x − \sin x=(\cot x)(\cos x) $$



    So starting with LHS:
    \begin{align}
    \frac{1}{\sin x} − \sin x &= \frac{1 − (\sin x)^2}{ \sin x } \\

    &= \frac{(\cos x)^2 }{ \sin x }
    \end{align}



    I am stuck now and wanted to know how should I proceed. Is this much correct?


    Answer



    It's done. Just split the numerator as:

    $$ \frac{\cos x \cdot \cos x }{\sin x}$$

    $$= \frac{\cos x }{\sin x} \cdot \cos x $$

    $$= \cot x \cdot \cos x $$

    Proved!


    real analysis - Is this proof ok?

    My friend and I were trying to prove that a function can't be discontinuous precisely on the irrationals (equivalently, continuous precisely on the rationals). His proof was this:





    Let $f:\mathbb{R}\to\mathbb{R}$ be continuous on all of $\mathbb{Q}$. Then for any $\{x_n\}\subset\mathbb{Q}$ with $x_n\to r\in\mathbb{Q}$, we have $\lim_{n\to\infty}f(x_n)=f(r)$. Now choose $\{x_n\}$ such that $\{x_n\}$ is Cauchy. Then for any $\epsilon>0$ there exists an $N_0\in\mathbb{N}$ such that for all $n,\ m\ge N_0$, $|x_m-x_n|<\epsilon$.



    Now in the interval $(x_n,\ x_m)$ there's an irrational $\gamma$ between them. Therefore, $\gamma$ also satisfies the definition of continuity, and so $f(x)$ is continuous for at least one $\gamma\in\mathbb{Q}^c$ and so it can't be continuous on only the rationals.




    Something about this doesn't sit well with me, it feels so off, but I can't tell why. If it were this simple all along, what's with all the $F_\sigma$ and $G_\delta$ sets and Baire Spaces and Theorems and all this other complicated stuff?

    real analysis - Only removable/jump discontinuities

    Definitions



    Consider R with the usual metric and let $f:A \to R$ , $A \subset R$ be a function.



    We say that $f$ has a jump discontinuity at a point $p \in A $ iff
    $\lim_{x \to p^-}f(x)$ and $\lim_{x \to p^+}f(x) $ exists but have different values.



    We say $f$ has a removable discontinuity at a point $p \in A$ iff
    $\lim_{x \to p}f(x)$ exists but $\lim_{x \to p}f(x) \neq f(p) $ .



    Question




    I'm trying to find two functions with $R$ as co-domain that are everywhere discontinuous (discontinuous at every point of the domain) such that one has only jump discontinuities and the other has only removable discontinuities.
    Is there any pair of examples showing this is possible, or can we prove that this is not possible?
    $A$ is an arbitrary subset of $R$.



    Thanks a lot in advance.

    Friday 12 July 2019

    integration - solving $\int\frac{2x^3}{x^2+1} \, dx$





    $$\int\frac{2x^3}{x^2+1} \, dx$$





    $$u=x^2$$



    $$du=2x \, dx$$



    $$\int \frac{u}{u+1} \, du$$$$=\int \frac{u+1-1}{u+1}du$$$$=\int 1-\frac{1}{u+1} \, du$$



    How should I continue? Is there an algorithm for integrating rational functions?


    Answer



    Notice, here is another simple method, $$\int \frac{2x^3}{x^2+1}\ dx$$

    $$\int \frac{2x^3+2x-2x}{x^2+1}\ dx$$
    $$=\int \frac{2x(x^2+1)-2x}{x^2+1}\ dx$$
    $$=\int 2x\ dx-\int \frac{2x}{x^2+1}\ dx$$
    $$=2\int x\ dx-\int \frac{d(x^2+1)}{x^2+1}$$
    $$=\color{red}{x^2-\ln(x^2+1)+C}$$
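    To finish the asker's own substitution (my note; it agrees with the boxed result): $$\int\left(1-\frac{1}{u+1}\right)du=u-\ln|u+1|+C=x^2-\ln(x^2+1)+C,$$ where $|u+1|=u+1$ since $u=x^2\ge 0$.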


    Thursday 11 July 2019

    linear algebra - Eigenvalues of product of p.d. Matrix with upper-triangular Matrix

    Let $A$ be a positive definite matrix (positive eigenvalues). Let $B$ be an upper triangular matrix, with ones in its main diagonal (i.e. all its eigenvalues are 1). Is there anything I can say about the eigenvalues of $AB$ ? I would like to find a way to prove that $AB$ has positive eigenvalues, if that's true.
    Thanks.

    Wednesday 10 July 2019

    discrete mathematics - Using induction to verify the formula for a summation $\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}6$


    Problem 4. Use the principle of induction to verify:
    $$\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}6$$







    The base case is obviously easy, but I don't know how to prove the inductive case.
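    A sketch of the inductive step (my addition; it follows the same pattern as the next post): assuming the formula for $n$,
    $$\sum_{k=1}^{n+1}k^2=\frac{n(n+1)(2n+1)}{6}+(n+1)^2=\frac{(n+1)\bigl(n(2n+1)+6(n+1)\bigr)}{6}=\frac{(n+1)(n+2)(2n+3)}{6},$$
    which is the stated formula with $n$ replaced by $n+1$.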

    discrete mathematics - Proving the sum of the first $n$ natural numbers by induction




    I am currently studying proving by induction but I am faced with a problem.



    I need to solve by induction the following question.



    $$1+2+3+\ldots+n=\frac{1}{2}n(n+1)$$



    for all $n \geq 1$.




    Any help on how to solve this would be appreciated.






    This is what I have done so far.



    Show truth for $N = 1$



    Left Hand Side = 1




    Right Hand Side = $\frac{1}{2} (1) (1+1) = 1$



    Suppose truth for $N = k$



    $$1 + 2 + 3 + ... + k = \frac{1}{2} k(k+1)$$



    Proof that the equation is true for $N = k + 1$



    $$1 + 2 + 3 + ... + k + (k + 1)$$




    Which is Equal To



    $$\frac{1}{2} k (k + 1) + (k + 1)$$



    This is where I'm stuck, I don't know what else to do. The answer should be:



    $$\frac{1}{2} (k+1) (k+1+1)$$



    Which is equal to:




    $$\frac{1}{2} (k+1) (k+2)$$



    Right?



    By the way sorry about the formatting, I'm still new.


    Answer



    Basic algebra is what's causing the problems: you reached the point



    $$\frac{1}{2}K\color{red}{(K+1)}+\color{red}{(K+1)}\;\;\;\:(**)$$




    Now just factor out the red terms:



    $$(**)\;\;\;=\color{red}{(K+1)}\left(\frac{1}{2}K+1\right)=\color{red}{(K+1)}\left(\frac{K+2}{2}\right)=\frac{1}{2}(K+1)(K+2)$$


    galois theory - Showing Multiplicative Inverses of Polynomials of Algebraic Elements Are Also Polynomials





    1. Let $E$ be a field extension of $F$.

    2. Let $\alpha \in E$ be algebraic over $F$.

    3. Consider $a_0 + a_1 \alpha + \ldots + a_{n} \alpha^n = f(\alpha) \in F[\alpha]$.



    How can I show that $\frac{1}{a_0 + a_1 \alpha + \ldots + a_{n} \alpha^n} \in F[\alpha]$ (assuming the denominator is nonzero)?



    I have tried to think about the simpler case of just showing that $1/\alpha \in F[\alpha]$ but haven't been able to come up with anything. I feel like I'm missing something really obvious.



    Answer



    This follows from the fact that if $F$ is a field then $F[x]$ is a Euclidean domain with respect to the degree map.



    Let $\beta = a_0 + a_1\alpha + ... + a_n\alpha^n\neq 0$ be the element we are interested in finding the inverse of. Then there is a corresponding polynomial $\beta(x) = a_0 + a_1 x + ... + a_n x^n$ in $F[x]$.



    Let $m(x)$ be the minimal polynomial of $\alpha$ over $F$. Now we may assume the degree of $\beta(x)$ is less than the degree of $m(x)$, since if $\beta$ involved higher powers of $\alpha$ we could use the minimal polynomial to replace those powers with smaller powers of $\alpha$.



    Now the fact that $F[x]$ is a Euclidean domain, along with the fact that $\beta(x)$ and $m(x)$ must be coprime ($m(x)$ is irreducible and $\beta(\alpha)\neq 0$) tells us that there exist polynomials $a(x),b(x)\in F[x]$ such that:



    $a(x)\beta(x) + b(x)m(x) = 1$.




    Substituting $x=\alpha$ tells us that $a(\alpha)\beta = 1$ and so we have found our multiplicative inverse $a(\alpha)\in F[\alpha]$.
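    A tiny concrete instance (my illustration, not part of the original answer): take $F=\mathbb Q$, $\alpha=\sqrt2$, so $m(x)=x^2-2$, and $\beta=1+\sqrt2$, i.e. $\beta(x)=1+x$. Then $(x-1)(x+1)-(x^2-2)=1$, so $a(x)=x-1$, and indeed $(\sqrt2-1)(\sqrt2+1)=1$, giving $\beta^{-1}=\sqrt2-1\in\mathbb Q[\sqrt2]$.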


    Tuesday 9 July 2019

    matrices - Is there a way to extract the diagonal from a matrix with simple matrix operations



    I have a square matrix A. Is there a way I can apply operations like addition, subtraction, matrix multiplication, matrix inverse and transpose to get the diagonal of the matrix? For example, having:
    $$\begin{pmatrix}1&2\\3&4\end{pmatrix}$$

    I would like to get $(1,4)$.



    P.S. based on the conversation with mvw, here is a better description:



    I am on board of an alien space ship and the board computer allows only matrix operations but access to the individual matrix elements is blocked. I can only use addition, subtraction, matrix multiplication, matrix inverse and transpose. No access to individual row/column/element. I can only create matrices of any dimension $(1 x n)$, $(n x 1)$, $(n x 2n)$ that have all zeros or all ones. Is there a way for me to get a diagonal vector?


    Answer



    Note: This solution is not working for the updated question.
    $$
    D = \text{diag}(a_{11}, \ldots, a_{nn}) = \sum_{i=1}^n P_{(i)} A P_{(i)}
    $$

    where $P_{(i)}$ is the projection on the $i$-th coordinate:
    $$
    (P_{(i)})_{jk} = \delta_{ij} \delta_{jk} \quad (i,j,k \in \{1,\ldots,n\})
    $$
    and $\delta$ is the Kronecker delta ($1$ for same index values, otherwise $0$).



    Transforming the diagonal matrix $D$ into a row vector can be done by
    $$
    d = u^T D
    $$

    where each of the $n$ components of $u$ is $1$.
    $$
    u = (1,1,\ldots,1)^T
    $$
    Combining both gives
    $$
    d = \sum_i u^T P_{(i)} A P_{(i)} = \sum_i e_i^T A P_{(i)}
    $$
    where $e_i$ is the $i$-th canonical base vector.




    Example:



    octave> A, P1, P2, u
    A =
    1 2
    3 4

    P1 =
    1 0
    0 0


    P2 =
    0 0
    0 1

    u =
    1
    1

    octave> u'*(P1*A*P1+P2*A*P2)

    ans =
    1 4

    exponentiation - Calculate exponential limit involving trigonometric functions



    Calculate the following limit:
    $$\lim_{x \rightarrow 0} \left( \frac{\tan x}{x} \right) ^ \frac{1}{\sin^2 x}$$



    I know the result must be $\sqrt[3]{e}$ but I don't know how to get it. I've tried rewriting the limit as follows:




    $$\lim_{x \rightarrow 0} e ^ {\ln {\left( \frac{\tan x}{x} \right) ^ \frac{1}{\sin^2 x}}} = \lim_{x \rightarrow 0} e ^ {\frac{1}{\sin^2 x} \ln {\left( \frac{\tan x}{x} \right)}}$$



    From this point, I applied l'Hospital's rule but got $1$ instead of $\sqrt[3]{e}$.



    Thank you!


    Answer



    $$\lim_{x\to0}\frac{\log\frac{\tan x}x}{\sin^2x}\stackrel{l'H}=\lim_{x\to0}\frac{\frac x{\tan x}\frac{x\sec^2x-\tan x}{x^2}}{2\sin x\cos x}=\lim_{x\to0}\frac {\frac1{\sin x\cos x}-\frac1x}{2\sin x\cos x}=$$



    $$=\lim_{x\to0}\frac{x-\sin x\cos x}{\underbrace{2x\sin^2x\cos^2x}_{=\frac x2\sin^22x}}\stackrel{l'H}=\lim_{x\to0}\frac{\overbrace{1-\cos^2 x+\sin^2x}^{2\sin^2x}}{\frac12\sin^22x+\underbrace{x\sin2x\cos2x}_{=x\sin4x}}\stackrel{l'H}=\lim_{x\to0}\frac{2\sin2x}{2\sin4x+4x\cos4x}=$$




    $$\stackrel{l'H}=\lim_{x\to0}\frac{4\cos2x}{12\cos4x-16x\sin4x}=\frac4{12}=\frac13$$



    and the limit is $\;\;e^{1/3}\;$
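    A shorter route via power series (my addition, not in the original answer): since $\tan x = x + \frac{x^3}{3} + O(x^5)$,
    $$\ln\frac{\tan x}{x}=\ln\left(1+\frac{x^2}{3}+O(x^4)\right)=\frac{x^2}{3}+O(x^4),\qquad \sin^2 x = x^2+O(x^4),$$
    so the exponent $\frac{1}{\sin^2 x}\ln\frac{\tan x}{x}$ tends to $\frac13$ and the limit is $e^{1/3}$.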


    definite integrals - Integration question.



    The question is as follows




    For any real number $x$, let $\lfloor{x}\rfloor$ denote the greatest integer less than or equal to $x$. Let $f$ be a real valued function defined on the interval $[-10,10]$ by




    $f(x)=
    \begin{cases}
    x-\lfloor{x}\rfloor & \ \ \text{if} \ \lfloor{x}\rfloor \ \text{is odd} \\
    1+\lfloor{x}\rfloor-x & \ \ \text{if} \ \lfloor{x}\rfloor \ \text{is even}
    \end{cases}$



    Then the value of $\dfrac{\pi^2}{10}\displaystyle\int_{-10}^{10} f(x) \ \cos{\pi x} \ dx$ is




    My try--




    $f(x)$ can be rewritten as



    $f(x)=
    \begin{cases}
    \{x\} & \ \ \text{if} \ \lfloor{x}\rfloor \ \text{is odd} \\
    1-\{x\} & \ \ \text{if} \ \lfloor{x}\rfloor \ \text{is even}
    \end{cases}$



    Where $\{ \cdot\}$ denotes the fractional part function.




    The graph of $f(x)$ will be somewhat like this, from $-10$ to $10$.



    [Graph: the triangular wave $f$ on $[-10,10]$]



    So, $\displaystyle\int_{-10}^{10} f(x) \ dx=10 \times \dfrac{1}{2} \times 2 \times 1=10.$



    Then, using integration by parts, $$\displaystyle\int_{-10}^{10} \underbrace{f(x)}_{\text{2nd function}} \underbrace{\cos{\pi x}}_{\text{1st function}} \ dx=\left[\cos {\pi x} \cdot 10\right]_{-10}^{10}+\displaystyle\int_{-10}^{10}\pi\sin{\pi x} \cdot 10 \ dx \ \ \ \ \ \ \left(\text{because} \ \displaystyle\int_{-10}^{10} f(x) \ dx=10\right)$$



    But my answer does not match that given in the book.



    Answer



    It is easier to note that the answer is (do you see why? $f(x)\cos(\pi x)$ has period $2$, and $[-10,10]$ covers ten full periods)

    $10\cdot\displaystyle\int_{0}^{2} f(x)\cos(\pi x)\,dx=10\cdot\displaystyle\int_{0}^{1} (1-x)\cos(\pi x)\,dx+10\cdot\displaystyle\int_{1}^{2} (x-1)\cos(\pi x)\,dx$ (on $[0,1)$ the floor is even, so $f(x)=1-x$; on $[1,2)$ it is odd, so $f(x)=x-1$), and then use IBP (integration by parts) on that.




    EDIT: Your mistake is actually something else in your IBP. Let the integral of $f(x)$ be denoted $F(x)$:

    $\displaystyle\int_{-10}^{10} \underbrace{f(x)}_{\text{2nd function}} \underbrace{\cos{\pi x}}_{\text{1st function}} \ dx=\left[\cos {\pi x} \cdot F(x)\right]_{-10}^{10}+\displaystyle\int_{-10}^{10}\pi\sin{\pi x} \cdot F(x) \ dx $

    When you integrate by parts, the function that is integrated (rather than differentiated) is integrated indefinitely inside the square brackets and inside the remaining integral, not definitely across the whole range.



    $\displaystyle\int_{0}^{1} u'vdx=\left[uv\right]_{0}^{1}-\displaystyle\int_{0}^{1}uv'dx$

    Not $\displaystyle\int_{0}^{1} u'vdx=\left[v(\displaystyle\int_{0}^{1}udx)\right]_{0}^{1}-\displaystyle\int_{0}^{1}v'(\displaystyle\int_{0}^{1}udt)dx$


    real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

    How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hôpital's rule? I know when I use l'Hôpital I easily get $$ \lim_{h\rightarrow 0}...