Thursday 9 January 2020

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$



How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hôpital's rule?

I know that when I use l'Hôpital's rule I easily get

$$ \lim_{h\rightarrow 0}\frac{\cos(ah)a}{1} = a$$ but I don't know how to proceed without it.


Answer



Hint:



$$\frac{\sin(ha)}{h} = a\cdot\frac{\sin(ha)}{ha}$$



Also, remember what $$\lim_{x\to 0}\frac{\sin(x)}{x}$$ is equal to?
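As a quick numerical sanity check (my own minimal sketch, not part of the original answer; the value of $a$ is arbitrary):

import math

a = 2.5
for h in [10**-k for k in range(1, 7)]:
    # sin(h*a)/h should approach a as h -> 0
    print(h, math.sin(h * a) / h)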


summation - Equality of the sums $\sum\limits_{v=0}^k \frac{k^v}{v!}$ and $\sum\limits_{v=0}^k \frac{v^v (k-v)^{k-v}}{v!(k-v)!}$



How can one prove the equality

$$\sum\limits_{v=0}^k \frac{k^v}{v!}=\sum\limits_{v=0}^k \frac{v^v (k-v)^{k-v}}{v!(k-v)!}$$
for $k\in\mathbb{N}_0$?



Induction and generating functions don't seem to be useful.



The generating function of the right sum is simply $f^2(x)$ with $\displaystyle f(x):=\sum\limits_{k=0}^\infty \frac{(xk)^k}{k!}$,



but for the left sum I still don't know.



It is $\displaystyle f(x)=\frac{1}{1-\ln g(x)}$ with $\ln g(x)=xg(x)$ for $\displaystyle |x|<\frac{1}{e}$.



Answer



Recall the combinatorial class of labeled trees which is



$$\def\textsc#1{\dosc#1\csod}
\def\dosc#1#2\csod{{\rm #1{\small #2}}}\mathcal{T} = \mathcal{Z}\times \textsc{SET}(\mathcal{T})$$



which immediately produces the functional equation



$$T(z) = z \exp T(z)
\quad\text{or}\quad

z = T(z) \exp(-T(z)).$$



By Cayley's theorem we have



$$T(z) = \sum_{q\ge 1} q^{q-1} \frac{z^q}{q!}.$$



This yields



$$T'(z) = \sum_{q\ge 1} q^{q-1} \frac{z^{q-1}}{(q-1)!}
= \frac{1}{z} \sum_{q\ge 1} q^{q-1} \frac{z^{q}}{(q-1)!}

= \frac{1}{z} \sum_{q\ge 1} q^{q} \frac{z^{q}}{q!}.$$



The functional equation yields



$$T'(z) = \exp T(z) + z \exp T(z) T'(z)
= \frac{1}{z} T(z) + T(z) T'(z)$$



which in turn yields



$$T'(z) = \frac{1}{z} \frac{T(z)}{1-T(z)}$$




so that



$$\sum_{q\ge 1} q^{q} \frac{z^{q}}{q!}
= \frac{T(z)}{1-T(z)}.$$



Now we are trying to show that



$$\sum_{v=0}^k \frac{v^v (k-v)^{k-v}}{v! (k-v)!}
= \sum_{v=0}^k \frac{k^v}{v!}.$$




Multiply by $k!$ to get



$$\sum_{v=0}^k {k\choose v} v^v (k-v)^{k-v}
= k! \sum_{v=0}^k \frac{k^v}{v!}.$$



Start by evaluating the LHS.

Observe that when we multiply two
exponential generating functions of the sequences $\{a_n\}$ and
$\{b_n\}$ we get that




$$ A(z) B(z) = \sum_{n\ge 0} a_n \frac{z^n}{n!}
\sum_{n\ge 0} b_n \frac{z^n}{n!}
= \sum_{n\ge 0}
\sum_{k=0}^n \frac{1}{k!}\frac{1}{(n-k)!} a_k b_{n-k} z^n\\
= \sum_{n\ge 0}
\sum_{k=0}^n \frac{n!}{k!(n-k)!} a_k b_{n-k} \frac{z^n}{n!}
= \sum_{n\ge 0}
\left(\sum_{k=0}^n {n\choose k} a_k b_{n-k}\right)\frac{z^n}{n!}$$



i.e. the product of the two generating functions is the generating

function of $$\sum_{k=0}^n {n\choose k} a_k b_{n-k}.$$



In the present case we have
$$A(z) = B(z) = 1 + \frac{T(z)}{1-T(z)}
= \frac{1}{1-T(z)} $$
by inspection.




We added the constant term to account for the fact that $v^v=1$ when
$v=0$ in the convolution. We thus have




$$\sum_{v=0}^k {k\choose v} v^v (k-v)^{k-v}
= k! [z^k] \frac{1}{(1-T(z))^2}.$$



To compute this introduce



$$\frac{k!}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{k+1}} \frac{1}{(1-T(z))^2} \; dz$$



Using the functional equation we put $z=w\exp(-w)$ so that $dz = (\exp(-w)-w\exp(-w)) \; dw$ and obtain



$$\frac{k!}{2\pi i}
\int_{|w|=\gamma}
\frac{\exp((k+1)w)}{w^{k+1}} \frac{1}{(1-w)^2}
(\exp(-w)-w\exp(-w)) \; dw
\\ = \frac{k!}{2\pi i}
\int_{|w|=\gamma}
\frac{\exp(kw)}{w^{k+1}} \frac{1}{1-w} \; dw$$




Extracting the coefficient we get



$$k! \sum_{v=0}^k [w^v] \exp(kw) [w^{k-v}] \frac{1}{1-w}
= k! \sum_{v=0}^k \frac{k^v}{v!}$$



as claimed.


Remark. This all looks very familiar but I am unable to locate the
duplicate among my papers at this time.
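A quick exact check of the identity in rational arithmetic (my own sketch; note that Python's convention $0^0=1$ matches the one needed here):

from fractions import Fraction
from math import factorial

def lhs(k):
    return sum(Fraction(k**v, factorial(v)) for v in range(k + 1))

def rhs(k):
    # convention: 0**0 == 1, which Python follows
    return sum(Fraction(v**v * (k - v)**(k - v),
                        factorial(v) * factorial(k - v))
               for v in range(k + 1))

for k in range(8):
    assert lhs(k) == rhs(k)
print("identity verified for k = 0..7")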


elementary number theory - How does one show that for $k \in \mathbb{Z_+}, 3\mid 2^{2^k}+5$ and $7\mid 2^{2^k}+3, \forall\ k$ odd.




For $k \in \mathbb{Z_+},3\mid2^{2^k} +5$ and $7\mid2^{2^k} + 3, \forall \space k$ odd.




Firstly,




$k \geq 1$



I can see induction is the best idea:



Show for $k=1$:



$2^{2^1} + 5 = 9 , 2^{2^1} + 3 = 7$



Assume for $k = \mu$




so: $3\mid2^{2^\mu} + 5 , \space 7\mid2^{2^\mu} + 3$



Show for $\mu +2$



Now can anyone give me a hint on where to go from here? My problem is showing that $2^{2^{\mu+2}}+5$ is divisible by 3; I can't think of how to begin.


Answer



You have already shown that the base cases hold.



Assume $3\mid 2^{2^k}+5$. Then $2^{2^k}\equiv 1$ mod $3$. Hence:
$$2^{2^{k+1}}=2^{2^k\cdot 2}=\left(2^{2^k}\right)^2\equiv 1 \text{ mod } 3$$

Hence $3\mid 2^{2^{k+1}}+5$.



In the same way:



Assume $7\mid 2^{2^{k}}+3$. Then $2^{2^{k}}\equiv 4$ mod $7$. Hence:
$$2^{2^{k+2}}=\left(2^{2^k}\right)^4\equiv 4^4 \text{ mod } 7$$
And since $4^4=256=36\cdot 7+4$, we see that $256\equiv 4\text{ mod }7$. So $7\mid 2^{2^{k+2}}+3$.
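A quick computational check of both statements (my own sketch; Python's three-argument pow handles the enormous exponents cheaply):

for k in range(1, 21):
    # 3 | 2^(2^k) + 5 for every k >= 1
    assert (pow(2, 2**k, 3) + 5) % 3 == 0
    # 7 | 2^(2^k) + 3 for odd k only
    if k % 2 == 1:
        assert (pow(2, 2**k, 7) + 3) % 7 == 0
print("checked k = 1..20")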


summation - How can you prove that $1+ 5+ 9 + \cdots +(4n-3) = 2n^{2} - n$ without using induction?

Using mathematical induction, I have proved that



$$1+ 5+ 9 + \cdots +(4n-3) = 2n^{2} - n$$



for every integer $n > 0$.



I would like to know if there is another way of proving this result without using PMI. Is there any geometric solution to this problem? Also, are there problems where PMI is our only option?



Here is the way I have solved this using PMI.




Base Case: since $1 = 2 · 1^2 − 1$, the formula holds for $n = 1$.



Assuming that the
formula holds for some integer $k ≥ 1$, that is,



$$1 + 5 + 9 + \dots + (4k − 3) = 2k^2 − k$$



I show that



$$1 + 5 + 9 + \dots + [4(k + 1) − 3] = 2(k + 1)^2 − (k + 1).$$




Now if I use hypothesis I observe.



$$
\begin{align}
1 + 5 + 9 + \dots + [4(k + 1) − 3]
& = [1 + 5 + 9 + \dots + (4k − 3)] + 4(k + 1) −3 \\
& = (2k^2 − k) + (4k + 1) \\
& = 2k^2 + 3k + 1 \\
& = 2(k + 1)^2 − (k + 1)

\end{align}
$$



$\diamond$
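For a route that avoids induction entirely (not part of the original post, but standard): the left side is an arithmetic progression, so $\sum_{i=1}^n (4i-3) = 4\cdot\frac{n(n+1)}{2} - 3n = 2n^2 - n$. A quick numerical check of the formula:

def lhs(n):
    # sum of the arithmetic progression 1, 5, 9, ..., 4n-3
    return sum(4*i - 3 for i in range(1, n + 1))

for n in range(1, 100):
    assert lhs(n) == 2*n*n - n
print("formula verified for n = 1..99")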

calculus - What is wrong with treating $\dfrac{dy}{dx}$ as a fraction?




If you think about the limit definition of the derivative, $\dfrac{dy}{dx}$ represents $$\lim_{h\rightarrow 0}\dfrac {f(x+h)-f(x)}{h},$$ with $dy$ playing the role of the numerator and $dx$ the role of the denominator. So you have $\dfrac{\text{a number}}{\text{another number}} = \text{a fraction}$, so why can't you treat it as one? Thanks! (By the way, if possible please keep the answers at a calc AB level.)


Answer



The derivative, when it exists, is a real number (I'm restricting here to real-valued functions only for simplicity). Not every real number is a fraction (e.g., $\pi$ is not a fraction), but every real number is trivially a quotient of two real numbers (namely, $x=\frac{x}{1}$). So, in which sense is the derivative a fraction? Answer: it's not. And now, in which sense is the derivative a quotient of two numbers? Ahhh, let's try to answer that then: By definition $f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$. Well, that is not a quotient of two numbers, but rather it is a limit. A limit, when it exists, is a number. This particular limit is a limit of quotients of a particular form (still, not of fractions in general, but quotients of real numbers).



The meaning of the derivative $f'(x)$ is the instantaneous rate of change of the value of $f$ at the point $x$. It is defined as a certain limit. If you now intuitively think of $h$ as an infinitesimal number (warning: infinitesimals do not exist in $\mathbb R$, but they exist in certain extensions of the reals) then you can consider the single expression $\frac{f(x+h)-f(x)}{h}$. In systems where infinitesimals really do exist one can show that this single expression, when the derivative exists, is actually infinitesimally close to the actual derivative $f'(x)$. That is, when $h\ne 0$ is infinitesimal, $f'(x)-\frac{f(x+h)-f(x)}{h}$ is itself infinitesimal. One can then compute with this expression as if it were the derivative (with some care). This can be done informally, and to some extent this is how the creators of early calculus (prior to Cauchy) argued, or it can be done rigorously using any one of a number of different techniques to introduce infinitesimals into calculus. However, getting infinitesimals into the picture comes with a price. There are logical/set-theoretic issues with such models, rendering all of them not very explicit.


Wednesday 8 January 2020

real analysis - Sin(n) and cos(n) dense in $[-1,1]$

We know that $\sin(x)$ and $\cos(x)$ are two functions with values in the closed interval $[-1,1]$. How can I prove whether or not $X=\{\sin(n)\mid n\in\mathbb{N}\}$ and $Y=\{\cos(n)\mid n\in\mathbb{N}\}$ are dense in $[-1,1]$?

linear algebra - Given a Characteristic Polynomial of a Matrix...



This question contains three parts. I have already answered the first two. The last part is confusing me.




Suppose $A$ is a $4 \times 4$ matrix whose characteristic polynomial is $p(x) = (x - 1)(x + 2)^2(x - 3)$.



Part (a): Show that $A$ is invertible. Find the characteristic polynomial of $A^{-1}$.



We have that the roots of a characteristic polynomial are the eigenvalues of $A$. That is, $\lambda = -2, -2, 1, 3$ are our eigenvalues. The determinant of an $n \times n$ matrix is the product of its eigenvalues. Hence, $\det A = 12$. An $n \times n$ matrix is invertible if and only if its determinant is nonzero. Therefore, $A$ is invertible.



Since none of the eigenvalues are zero, we have that $\lambda$ is an eigenvalue of $A$ if and only if $\frac{1}{\lambda}$ is an eigenvalue of $A^{-1}$. Then, the characteristic polynomial for $A^{-1}$ is $q(x) = (x - 1)(x + 1/2)^2(x - 1/3)$.



Part (b): Find the determinant and trace of $A$ and $A^{-1}$.




This is easy since the determinant of an $n \times n$ matrix is the product of its eigenvalues and the trace of an $n \times n$ matrix is the sum of its eigenvalues.



Part (c): Express $A^{-1}$ as a polynomial in $A$. Explain your answer.



Not really sure what part (c) is getting at.


Answer



By the Cayley-Hamilton theorem, we have $(A-1)(A+2)^2(A-3)=0$, that is,
$A^4-9A^2-4A+12I=0$.
Multiply both sides by $A^{-1}$, and be amazed!
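Spelling out the hint (my own addition): multiplying $A^4-9A^2-4A+12I=0$ by $A^{-1}$ gives $A^{-1}=\frac{1}{12}\left(-A^3+9A+4I\right)$. A numerical sanity check on a matrix built to have this characteristic polynomial:

import numpy as np

# any matrix similar to diag(1, -2, -2, 3) has char poly (x-1)(x+2)^2(x-3)
rng = np.random.default_rng(0)
P = rng.standard_normal((4, 4))
A = P @ np.diag([1.0, -2.0, -2.0, 3.0]) @ np.linalg.inv(P)

inv_from_poly = (-np.linalg.matrix_power(A, 3) + 9*A + 4*np.eye(4)) / 12
print(np.allclose(inv_from_poly, np.linalg.inv(A)))  # True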


soft question - Most useful heuristic?

As opposed to the most harmful heuristics, what are the most useful heuristics which





  • are hand-waving,


  • are conducive to proper mathematical education, and


  • you have seen taught or taught yourself?




In this context:




  • Hand-waving means imprecise, intuitive, ambiguous, with a purpose of impressing or convincing.



  • Proper mathematical education means that a person can understand, use, discuss, and derive the learnt mathematical claims after finishing the education process to the levels (a) advertised by goals of the education process and at the same time (b) having, up to some allowed degree of ambiguity, the same, widely accepted meaning in the community. Example: "Real Calculus" could mean "basics of differentiation and integration over the functions $\mathbb{R}\to\mathbb{R}$".


  • Seen taught means you closely observed or participated as a learner in the educational process.


  • Taught yourself means you were a lecturer or an author of used educational material.


What's the formula to solve summation of logarithms?



I'm studying summation. Everything I know so far is that:



$\sum_{k=1}^n k = \frac{n(n+1)}{2}$



$\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6}$




$\sum_{k=1}^{n} k^3 = \frac{n^2(n+1)^2}{4}$



Unfortunately, I can't find, either in my book or on the internet, what the result of:



$\sum_{i=1}^n\log i$.



$\sum_{i=1}^n\ln i$.



is.




Can you help me out?


Answer



By using the fact that $$\log a + \log b = \log ab $$ then



$$ \sum_{i=1}^n \log i = \log (n!) $$



$$ \sum_{i=1}^n \ln i = \ln (n!) $$
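A quick numerical check (my own sketch; math.lgamma(n + 1) computes $\ln(n!)$ directly):

import math

n = 50
print(sum(math.log(i) for i in range(1, n + 1)))  # sum of ln i
print(math.lgamma(n + 1))                         # ln(n!), same value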


Tuesday 7 January 2020

summation - Prove the identity $\binom{2n+1}{0} + \binom{2n+1}{1} + \cdots + \binom{2n+1}{n} = 4^n$



I've worked out a proof, but I was wondering about alternate, possibly more elegant ways to prove the statement. This is my (hopefully correct) proof:



Starting from the identity $2^m = \sum_{k=0}^m \binom{m}{k}$ (easily derived from the binomial theorem), with $m = 2n$:




$2^{2n} = 4^n = \binom{2n}{0} + \binom{2n}{1} + \cdots + \binom{2n}{2n-1} + \binom{2n}{2n}$



Applying the property $\binom{m}{k} = \binom{m}{m-k}$ to the second half of the list of summands in RHS above:



$4^n = \binom{2n}{0} + \binom{2n}{1} + \cdots + \binom{2n}{n-1} + \binom{2n}{n} + \underbrace{\binom{2n}{n-1} +\cdots \binom{2n}{1} + \binom{2n}{0}}_{\binom{m}{k} = \binom{m}{m-k} \text{ has been applied}}$



Rearranging the above sum by alternately taking terms from the front and end of the summand list in RHS above (and introducing the term $\binom{2n}{-1} = 0$ at the beginning just to make explicit the pattern being developed):



$4^n = (\binom{2n}{-1} + \binom{2n}{0}) + (\binom{2n}{0} + \binom{2n}{1}) + \cdots + (\binom{2n}{n-1} + \binom{2n}{n})$




Finally, using the property $\binom{m}{k} + \binom{m}{k-1} = \binom{m+1}{k}$ on the paired summands, we get the desired result:



$4^n = \binom{2n+1}{0} + \binom{2n+1}{1} + \cdots + \binom{2n+1}{n}$


Answer



Why not just
$$ \begin{align} 2^{2n+1} &=\binom{2n+1}{0}+\cdots+\binom{2n+1}{n}+\binom{2n+1}{n+1}+\cdots+\binom{2n+1}{2n+1} \\
&=\binom{2n+1}{0}+\cdots+\binom{2n+1}{n}+\binom{2n+1}{n}+\cdots+\binom{2n+1}{0} \\
&=2\left[\binom{2n+1}{0}+\cdots+\binom{2n+1}{n}\right] \end{align} $$
Then divide both ends by $2$.
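A quick check of the identity in exact integer arithmetic (my own sketch):

from math import comb

for n in range(20):
    assert sum(comb(2*n + 1, k) for k in range(n + 1)) == 4**n
print("identity verified for n = 0..19")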



summation - Evaluate $\sum\limits_{k=1}^{n} \frac{k}{2^k}$

Evaluate $$\sum\limits_{k=1}^{n} \frac{k}{2^k}$$

sequences and series - Concerning the sum $\sum_{n = 1}^\infty \sin nx$



I recently came across this question and I posted an answer. It has been pointed out that my answer is incorrect. I cannot work out what is wrong with my reasoning. The answer I gave corresponds with the Abel and Cesàro sums, so perhaps $\sum$ is not the usual summation operator? Am I correct in asserting that if $x$ is in the upper half-plane, i.e., $\Im[x] > 0$, then $|e^{ix}| < 1$ and consequently
$$\sum_{n = 1}^\infty e^{inx} = \frac{e^{ix}}{1 - e^{ix}},$$
or is my argument flawed? Any help would be appreciated.



Answer



Because you assume that $x$ is not real, the imaginary part of $e^{inx}$ is not $\sin nx$.



Also, when computing the conjugate of $1-e^{ix}$, you don't get $1-e^{-ix}$ when $x$ is non-real, but rather $$1-e^{-i\bar x}$$


real analysis - How do I evaluate this sum $\sum\limits_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2n!}$?

How do I evaluate this sum:
$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2n!}$$



Note: The series converges by the ratio test. I have tried to use this sum: $$ \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}= \ln (2) $$ but I didn't succeed. Might there be other techniques which I don't know?



Thank you for any help

Monday 6 January 2020

Vector $p$-norm for square matrices is submultiplicative for $1 le p le 2$




I'm trying to prove that the vector $p$-norm for square matrices is submultiplicative for values of $p$ between $1$ and $2$. The vector $p$-norm for a square matrix $A$ is defined as



$\displaystyle \|A\|_p:=\left(\sum_{i=1}^n \sum_{j=1}^n |a_{ij}|^p \right)^{\frac{1}{p}}$



For $p=1$ and $p=2$ the result follows easily from the Cauchy-Schwarz inequality. Now consider $1<p<2$.

My approach is as follows:



Let $A=(a_{ij})$ and $B=(b_{ij})$ be two square $n \times n$ matrices and let $q=\frac{p}{p-1}$. Then




$\displaystyle \left(\|AB\|_p\right)^p=\sum_{i=1}^n \sum_{j=1}^n \left| \sum_{k=1}^n a_{ik}b_{kj} \right|^p \le \sum_{i=1}^n \sum_{j=1}^n \left( \sum_{k=1}^n \left| a_{ik}\right|^p\right) \left( \sum_{k=1}^n \left| b_{kj}\right|^q\right)^\frac{p}{q}$



Now, since $p<2$, we have that $q>2$ and as such $\frac{p}{q}<1$. Here comes my doubt:
can I assert that $\left( \sum_{k=1}^n \left| b_{kj}\right|^q\right)^\frac{p}{q} \le \sum_{k=1}^n \left| b_{kj}\right|^p$? Because if so, then the result would follow immediately.



Any help would be greatly appreciated!


Answer



The answer to your question follows from this: let $\|x\|_p = \left(\sum_{i=1}^n |x_i|^p \right)^{1/p}$. Then is $\|x\|_q \le \|x\|_p$ if $q \ge p$?



This is a very well-known result, but here is a proof.




Answer: yes. Let $M = \|x\|_p$. So $\|\frac1Mx\|_p = 1$. Therefore $|\frac{x_i}M|^p \le 1 \Rightarrow |\frac{x_i}M| \le 1 \Rightarrow |\frac{x_i}M|^q \le |\frac{x_i}M|^p$. Hence
$$ \|\frac1M x\|_q^q = \sum_{i=1}^n |\frac{x_i}M|^q \le \sum_{i=1}^n |\frac{x_i}M|^p = \|\frac1Mx\|_p^p = 1.$$
Therefore $\|x\|_q \le M = \|x\|_p$.
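A numerical illustration of both the norm monotonicity and the resulting submultiplicativity (my own sketch; pnorm is a helper name of mine):

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10)

def pnorm(v, p):
    return (np.abs(v)**p).sum()**(1/p)

# ||x||_q <= ||x||_p whenever q >= p
print(pnorm(x, 3.0) <= pnorm(x, 1.5))  # True

# entrywise p-norm of matrices is submultiplicative for 1 <= p <= 2
A, B = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
p = 1.5
print(pnorm((A @ B).ravel(), p) <= pnorm(A.ravel(), p) * pnorm(B.ravel(), p))  # True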


Sunday 5 January 2020

linear algebra - Reducing the Matrix to Reduced Row Echelon Form




Reduce the matrix $\begin{bmatrix}1&-1&-6\\4&-1&-15\\-2&2&12\end{bmatrix}$ to reduced row-echelon form




How is my answer incorrect?



I performed the row operations:




1) $R_2 = 4R_1 - R_2$



2) $R_3 = 2R_1 + R_3$



3) $R_2 = R_2 / (-3)$



4) $R_3 = R_3/18$



5) $R_2 = R_2 + 7R_3$




6) $R_1 = R_1 + -6R_3$



7) $R_1 = R_1 + R_2$



Which gives me the RREF of the matrix



$\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}$



So how in the world is my solution incorrect?


Answer




After Step 1:



$$\left[\begin{matrix}1 & -1 & -6 \\
0 & -3 & -9 \\
-2 & 2 & 12\end{matrix}\right]$$



After Step 2:



$$\left[\begin{matrix}1 & -1 & -6 \\
0 & -3 & -9 \\

0 & 0 & 0\end{matrix}\right]$$



After Step 3:



$$\left[\begin{matrix}1 & -1 & -6 \\
0 & 1 & 3 \\
0 & 0 & 0\end{matrix}\right]$$



All of the steps with $R_3$ are unnecessary since $R_3$ is all zeroes. Skip to Step 7:




$$\left[\begin{matrix}1 & 0 & -3 \\
0 & 1 & 3 \\
0 & 0 & 0\end{matrix}\right]$$
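A quick check with sympy confirms the rank-$2$ reduced form (my own sketch):

import sympy as sp

A = sp.Matrix([[1, -1, -6], [4, -1, -15], [-2, 2, 12]])
R, pivots = A.rref()
print(R)       # Matrix([[1, 0, -3], [0, 1, 3], [0, 0, 0]])
print(pivots)  # (0, 1)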


number theory - gcd as positive linear combination



Good evening,



I have a question concerning the Euclidean algorithm.



One knows that for $a_1 , \ldots , a_n \in \mathbb{N} $ and $k\in \mathbb{N} $ there exist some $\lambda_i \in \mathbb{Z}$ such that :




$$\gcd(a_1, \ldots, a_n) = \frac{1}{k}\sum_{i=1}^n \lambda_i a_i$$



Here is my question: can one find an $m_0 \in \mathbb{N}$ such that for every $m \geq m_0$ there are scalars $\mu_i \in \mathbb{N}$ such that:



$$\gcd(a_1, \ldots , a_n) = \frac{1}{m}\sum_{i=1}^n \mu_i a_i$$



Unfortunately I have only very rudimentary knowledge about number theory ...



With best regards
Mat



Answer



Let's say that $\gcd(a_1,\ldots,a_n)=d$ and $d=\displaystyle{\sum_{i=1}^{n}\lambda_ia_i}$ for some $\lambda_i\in \mathbb Z$.
Suppose that $s_i\in \mathbb N$ are sufficiently large such that $r_i=\lambda_i+s_ia_1a_2\ldots a_{i-1}a_{i+1}\ldots a_n>\dfrac{a_1}{d}|\lambda_i|$ and $\displaystyle{\sum_{i=1}^{n}r_ia_i}=m_0d$ for some $m_0\in \mathbb N$.
For all $r=0,1,\ldots,\dfrac{a_1}{d}-1$ we have $$(m_0+r)d=\displaystyle{\sum_{i=1}^{n}(r_i+r\lambda_i)a_i}$$ and $r_i+r\lambda_i>0$ for all $i.$
For $r\geq\dfrac{a_1}{d}$ if $r=q\dfrac{a_1}{d}+s$ with $q \in \mathbb N, \ s\in\left\{0,1,\ldots,\dfrac{a_1}{d}-1\right\}$ we have $$(m_0+r)d=(r_1+s\lambda_1+q)a_1+\displaystyle{\sum_{i=2}^{n}(r_i+s\lambda_i)a_i}$$
and $r_1+s\lambda_1+q>0, \ r_i+s\lambda_i>0$ for all $i\geq2.$
Therefore for every $m\geq m_0,\ \ md=\displaystyle{\sum_{i=1}^{n}\mu_i a_i}\Rightarrow d=\displaystyle{\frac{1}{m}\sum_{i=1}^{n}\mu_i a_i}$ for some $\mu_i\in \mathbb N$.
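A small brute-force illustration of the statement (my own sketch for $a=(4,6)$, $d=2$; representable is a helper name of mine): the only multiples $md$ that are not positive combinations turn out to be those with $m\in\{1,2,3,4,6\}$, so here $m_0=7$ works.

def representable(m, a, d):
    # is m*d = mu1*a1 + mu2*a2 with mu1, mu2 >= 1 ?
    a1, a2 = a
    t = m * d
    return any((t - mu1 * a1) > 0 and (t - mu1 * a1) % a2 == 0
               for mu1 in range(1, t // a1 + 1))

a, d = (4, 6), 2
bad = [m for m in range(1, 60) if not representable(m, a, d)]
print(bad)  # [1, 2, 3, 4, 6]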


Saturday 4 January 2020

Proof that the sum of the digits of a natural number is divisible by 3 iff the number is




I'm trying to prove that every natural number is divisible by three if and only if the sum of its digits is divisible by three.




First I proved by induction that $10^n-1$ is divisible by 9 (and therefore 3) so I could use this in the next step. Would this be valid...



Any natural number can be given a decimal representation.



Let $S\in \mathbb N$



Let $k_0,k_1,\ldots,k_n\in \{0,\ldots,9\}$



$S=k_0 +10k_1+100k_2+\cdots+k_n10^n$




$= 9k_1 + 99k_2 + \cdots + k_n(10^n-1) + (k_0+k_1+\cdots+k_n)$



All terms are divisible by 3 except possibly the sum of the digits of $S$. Therefore dividing $S$ by 3 leaves the same remainder as dividing the digit sum by 3.



Is this a valid proof of the claim? Thank you for your time


Answer



It's fine, except that after the last $=$ sign you should have written$$9k_1+99k_2+\cdots+\overbrace{99\ldots9}^{n\text{ times}}k_n+(k_0+k_1+\cdots+k_n).$$
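A quick empirical check of the claim (my own sketch):

def digit_sum(n):
    return sum(int(d) for d in str(n))

for n in range(1, 10000):
    # n and its digit sum leave the same remainder mod 3 (indeed mod 9)
    assert n % 3 == digit_sum(n) % 3
print("checked n = 1..9999")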


Cauchy functional equation three variables

If I have a function from $\mathbb{R}^3$ to $\mathbb{R}$ satisfying




$f(x_1,x_2,x_3)+f(y_1,y_2,y_3) = f(x_1+y_1,x_2+y_2,x_3+y_3)$



is it necessarily linear?



$f(z_1,z_2,z_3) = \lambda _1 z_1+\lambda _2 z_2+\lambda _3 z_3$



Wasn't sure if this was a direct consequence of Cauchy's theorem or not.

Finding limit of a sequence $a_{n+1}=\frac{1}{1+a_{n}}$




For the sequence,



$a_{n+1}=\frac{1}{1+a_{n}}\; \forall n \geq 1$,



EDIT: $a_{1}=1$



I tried to find the monotonicity by converting it into



$f(x)=\frac{1}{1+x}\ \implies\ f'(x)=-\frac{1}{(1+x)^2} < 0\ \forall\ x\geq 1 $




So the sequence is monotonically decreasing. But finding the limit of $f(x)$ resulted in



$\lim_{x\to\infty} f(x) = \frac{1}{1+\infty} = \frac{1}{\infty} = 0$



Is my assumption in converting $a_{n+1}$ to $f(x)$ incorrect? How can I find whether the sequence converges?



EDIT2: I understand that the $f(x)$ formulation is incorrect. How can I prove the sequence is monotonic and find its limit?


Answer



Note that your sequence is not monotonic, so if there is a limit, the sequence is oscillating around the limit: the even numbered terms are monotonically increasing and the odd ones are monotonically decreasing.




Sketch of the proof:



First prove by induction that the sequence $\{a_n\}$ is positive and that $a_n\leq 1,\,\forall n\in\mathbb N$. Then show that $\{a_{2k}\},\,k=1,2,..$ is monotonically increasing and that $\{a_{2k+1}\},\,k=0,1,2,..$ is monotonically decreasing (again by induction). Because both subsequences are bounded and monotonic, they are convergent to $0\leq L_1\leq 1$ and $0\leq L_2\leq 1$, respectively. Finally you show that both limits are equal from the equation
$$L_i=\frac{1}{1+\frac{1}{1+L_i}},\,i=1,2\Rightarrow L_i=\frac{-1+\sqrt{5}}{2}\in [0,1]$$
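Numerically the oscillating convergence is easy to watch (my own sketch); the limit is $\frac{\sqrt 5-1}{2}\approx 0.618$, as derived above:

a = 1.0
for n in range(12):
    a = 1 / (1 + a)
    print(n + 1, a)       # terms oscillate around and approach 0.6180...

print((5**0.5 - 1) / 2)   # 0.6180339887498949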


linear algebra - Characteristic polynomial of a unitary matrix.

Given a unitary matrix $A \in M_{n \times n} (\mathbb{R})$, how does one show that its characteristic polynomial $\triangle_A(t)$ satisfies the following:



$t^n \triangle_A(1/t) = \pm \triangle_A(t)$.



I see that the characteristic polynomial is essentially symmetric (or anti-symmetric). I have shown that the determinant of a unitary matrix is $\pm 1$ and that its eigenvalues all have modulus $1$. I feel that there is a connection between these properties and the structure of its characteristic polynomial.



If we are dealing with real numbers, it could happen that all of the eigenvalues are either 1 or -1. Then, using the binomial theorem, we would have $\triangle_A(t)= (t \pm 1)^n$, and we would obtain a symmetric or anti-symmetric polynomial.




However, I am stuck here.

sequences and series - Proving the closed form for $\sum_{k_1=0}^{\infty}\cdots\sum_{k_n=0}^{\infty}\frac{1}{a^{k_1+\cdots+k_n}}$, where $k_1 \neq\cdots\neq k_n$ and $a>1$?



I recently encountered a problem that requires us to sum the series



$$
\sum_{i=0}^{\infty} \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \frac{1}{3^i 3^j 3^k}

$$



given the condition that $i \neq j \neq k$. Upon generalizing the problem, I get this:



$$
\sum_{k_1=0}^{\infty}\sum_{k_2=0}^{\infty}\cdots\sum_{k_n=0}^{\infty} \frac{1}{a^{k_1+\cdots+k_n}} = \frac{n! \times a^n}{\prod_{i=1}^{n} (a^i - 1)}
$$



for $a>1$ and $k_1 \neq \cdots \neq k_n$, i.e. all indices are distinct at all times. Now the closed form expression (on the right hand side) for the infinite series has been obtained purely by guessing. However, I've verified that the equation works, through a computer program. The only task now left to do is to prove the formula, which I'm unable to do.


Answer




Consider the sum
$$\sum_{i=0}^{\infty}\sum_{j=i+1}^\infty\sum_{k=j+1}^\infty\frac{1}{3^i3^j3^k} = \sum_{i=0}^{\infty}\left(\frac{1}{3^i}\sum_{j=i+1}^\infty\left(\frac{1}{3^j}\left(\sum_{k=j+1}^\infty\frac{1}{3^k}\right)\right)\right) = $$$$\frac{1}{2}\sum_{i=0}^{\infty}\left(\frac{1}{3^i}\sum_{j=i+1}^\infty\left(\frac{1}{3^j}\cdot 3^{-j}\right)\right) = \frac{1}{2}\sum_{i=0}^{\infty}\left(\frac{1}{3^i}\sum_{j=i+1}^\infty\left(9^{-j}\right)\right) = $$
$$\frac{1}{2}\sum_{i=0}^{\infty}\left(\frac{1}{3^i}\cdot3^{-2i-2}\cdot\frac{9}{8}\right)= \frac{1}{16}\sum_{i=0}^\infty 3^{-3i} = \frac{1}{16}\cdot\frac{27}{26} = \frac{27}{416}$$
This is summing over all $i<j<k$. Since the summand is symmetric in the indices, the full sum over pairwise distinct $i,j,k$ is $3!=6$ times this, i.e. $6\cdot\frac{27}{416}=\frac{81}{208}$, which matches the conjectured formula for $n=3$, $a=3$.

To be clear, each of the intermediate steps was simply an application of the formula for infinite geometric series.



Now let's go for the general case. Suppose your general formula works for some $n$. Similarly, consider the sum



$$\sum_{k_1=0}^\infty \sum_{k_2=k_1+1}^\infty \sum_{k_3=k_2+1}^\infty... \sum_{k_n=k_{n-1}+1}^\infty \frac{1}{a^{k_1}a^{k_2}...a^{k_n}} = $$

(replacing $k_2$ with $k_2' = k_2-k_1-1$)
$$\sum_{k_1=0}^\infty \sum_{k_2'=0}^\infty \sum_{k_3=k_2'+k_1+2}^\infty... \sum_{k_n=k_{n-1}+1}^\infty \frac{1}{a^{k_1}a^{k_2'+k_1+1}...a^{k_n}} = $$
(now, replacing $k_3$ with $k_3' = k_3-k_1-1$)
$$\sum_{k_1=0}^\infty \sum_{k_2'=0}^\infty \sum_{k_3'=k_2'+1}^\infty\sum_{k_4 = k_3'+k_1+2}^\infty... \sum_{k_n=k_{n-1}+1}^\infty \frac{1}{a^{k_1}a^{k_2'+k_1+1}a^{k_3'+k_1+1}...a^{k_n}} = $$
(repeating this until we define $k_n'$)
$$\sum_{k_1=0}^\infty\sum_{k_2'=0}^\infty\sum_{k_3'=k_2'+1}^\infty...\sum_{k_n'=k_{n-1}'+1}^\infty \frac{1}{a^{nk_1+n-1}a^{k_2'}...a^{k_n'}} = \sum_{k_1=0}^\infty \frac{1}{a^{nk_1+n-1}}c =$$$$ \frac{ca^{1-n}}{1-a^{-n}} = \frac{ca}{a^n-1}$$
Where $c$ is simply your formula, with $n-1$ being the number of sums, without the factorial term. Just as we had to multiply by $6=3!$ above, here, we multiply by $n!$ to come to your formula.



That takes care of the inductive step. The base case should be straightforward. My apologies if any of this was unclear.
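A brute-force numerical check of the closed form (my own sketch; the sums are truncated at index $N$, so the agreement is approximate):

from itertools import product
from math import factorial, prod

def distinct_sum(n, a, N=25):
    # truncated sum over all n-tuples of pairwise distinct indices
    return sum(a ** -sum(ks)
               for ks in product(range(N), repeat=n)
               if len(set(ks)) == n)

def closed_form(n, a):
    return factorial(n) * a**n / prod(a**i - 1 for i in range(1, n + 1))

for n in (2, 3):
    print(distinct_sum(n, 3), closed_form(n, 3))  # pairs agree to many digits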


Friday 3 January 2020

probability - Showing $\int_0^{\infty}(1-F_X(x))dx=E(X)$ in both discrete and continuous cases



Ok, according to some notes I have, the following is true for a random variable $X$ that can only take on positive values, i.e. $P(X<0)=0$:



$\int_0^{\infty}(1-F_X(x))dx=\int_0^{\infty}P(X>x)dx$



$=\int_0^{\infty}\int_x^{\infty}f_X(y)dydx$




$=\int_0^{\infty}\int_0^{y}dxf_X(y)dy$



$=\int_0^{\infty}yf_X(y)dy=E(X)$



I'm not seeing the steps here clearly. The first line is obvious and the second makes sense to me, as we are using the fact that the probability of a random variable being greater than a given value is just the density evaluated from that value to infinity.



Where I'm lost is why
$$\int_0^{\infty}\int_x^{\infty}f_X(y)\,dy\,dx=\int_0^{\infty}\int_0^{y}dx\,f_X(y)\,dy.$$



Also, doesn't the last line equal $E(Y)$ and not $E(X)$?




How would we extend this to the discrete case, where the pmf is defined only for values of X in the non-negative integers?



Thank you


Answer



The region of integration for the double integral is $x,y \geq 0$ and $y \geq x$. If you express this integral by first integrating with respect to $y$, then the region of integration for $y$ is $[x, \infty)$. However if you exchange the order of integration and first integrate with respect to $x$, then the region of integration for $x$ is $[0,y]$. The reason why you get $E(X)$ and not something like $E(Y)$ is that $y$ is just a dummy variable of integration, whereas $X$ is the actual random variable that defines $f_X$.
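A numerical illustration for concrete distributions (my own sketch): $X\sim\mathrm{Exp}(1)$, where $1-F_X(x)=e^{-x}$ and $E(X)=1$, and the discrete analogue $E(X)=\sum_{n\ge0}P(X>n)$ for a geometric variable.

import math

# continuous: integrate 1 - F(x) = exp(-x) numerically on [0, 50]
dx = 1e-4
integral = sum(math.exp(-k * dx) * dx for k in range(int(50 / dx)))
print(integral)  # ~1.0 = E(X) for Exp(1)

# discrete analogue: X geometric on {1,2,...} with P(X = n) = (1/2)^n
mean = sum(n * 0.5**n for n in range(1, 200))
tail_sum = sum(0.5**n for n in range(0, 200))  # sum of P(X > n) over n >= 0
print(mean, tail_sum)  # both 2.0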


calculus - Why is $(1+\frac{1}{n})^n=e$ when $n$ goes to infinity?




Why is $\lim\limits_{n\to\infty}(1+\frac1n)^n=e$?



I think it involves $\sum\limits_{k=0}^\infty\frac1{k!}=e$ but I'm not sure how to get from one to the other.


Answer




Have you tried expanding by the binomial theorem? ;-)
$$(1+1/n)^n=\sum_{k=0}^n\binom{n}k\frac1{n^k}=\sum_{k=0}^n\frac{n!}{k!(n-k)!}\frac1{n^k}$$ then as $n\to\infty$ we find $n!/(n^k (n-k)!)\to1$ for each fixed $k$, hence (the interchange of limit and sum can be justified, e.g. by Tannery's theorem) we have: $$\lim_{n\to\infty}\sum_{k=0}^n\frac{n!}{k!(n-k)!\cdot n^k}=\sum_{k=0}^\infty\frac1{k!}\equiv e$$
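A quick numerical comparison of the three quantities (my own sketch):

import math

n = 10**6
print((1 + 1/n)**n)                                   # 2.7182804...
print(sum(1 / math.factorial(k) for k in range(30)))  # 2.718281828459045
print(math.e)                                         # 2.718281828459045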


linear algebra - Inverse Matrix and matrix multiplication




If I have the invertible matrix $A$, I can calculate the inverse matrix $A^{-1}$, so that $A \cdot A^{-1} = E$, where $E$ is the identity matrix.



Wikipedia says that not only $A\cdot A^{-1} = E$ must be fulfilled, but also $A^{-1} \cdot A = E$. Can someone explain to me why this is not a contradiction to the fact that matrix multiplication is not commutative? Is the inverse matrix really defined as a matrix which fulfills both?


Answer



The inverse of a matrix is defined as the matrix that satisfies both relationships.



For square matrices $A$ and $B$,
$$
B\mbox{ is the inverse of }A:=B\mbox{ such that } AB{}={}BA{}={}I\,.
$$




Incidentally, this also means that $A$ is the inverse of $B$.
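A small numerical illustration (my own sketch): products of matrices generally fail to commute, yet $AA^{-1}=A^{-1}A=E$ for any particular invertible $A$:

import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
print(np.allclose(A @ B, B @ A))  # False: multiplication is not commutative

Ainv = np.linalg.inv(A)
print(np.allclose(A @ Ainv, np.eye(3)), np.allclose(Ainv @ A, np.eye(3)))  # True True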


sequences and series - Does $\sum_{n=1}^\infty \frac{1}{\phi(n)^s}$ have an Euler product?



Does $$ \sum_{n=1}^\infty \frac{1}{\phi(n)^s}$$



have an Euler product and functional equation? $\phi(n)$ is the Euler phi function.



Since $\phi(n)$ is multiplicative, I think the series could have an Euler product and functional equation.




$$ \sum_{n=1}^\infty \frac{1}{\phi(n)^s}= \sum_{n=1}^\infty \frac{a(n)}{n^s}$$



where $a(n)$ is sequence https://oeis.org/A058277 in the OEIS.






I did some research and found many relations between the zeta function and other special functions such as:



$$ \sum_{n=1}^\infty \frac{\phi(n)}{n^s}=\frac{\zeta(s-1)}{\zeta(s)}. $$


Answer




Firstly, $$\phi(p^k)=p^k\left(1-\frac1p\right)\qquad{\text{$k\ge1$, prime $p$}}$$



Define $f(k)=\phi(k)^{-s}$.



For the moment, consider the set of integers $S_N=\{k\,|\,k=p_1^{a_1}p_2^{a_2}\cdots p_N^{a_N},a_{(\cdot)}\ge1\}$.



Then,
$$\begin{align}
\sum_{k\in S_N}f(k)
&=\sum^\infty_{a_N=1}\cdots\sum^\infty_{a_1=1}

\left[p_1^{a_1}\cdots p_N^{a_N}
\left(1-\frac1{p_1}\right)\cdots\left(1-\frac1{p_N}\right)\right]^{-s} \\
&=\left(\prod^N_{i=1}\frac1{1-1/p_i}\right)^s\cdot\prod^N_{j=1}\sum^\infty_{a_j=1}p_j^{-a_js} \\
&=\prod^N_{i=1}\frac{(1-1/p_i)^{-s}}{p_i^s-1}
\end{align}
$$



Define $$f_i:=\frac{(1-1/p_i)^{-s}}{p_i^s-1}$$



Now we want to find

$\displaystyle{\sum_{k\in S^*_N}f(k)}$ where $S^*_N=\{k\,|\,k=p_1^{a_1}\cdots p_N^{a_N},a_{(\cdot)}\color{red}{\ge0}\}$



Summing $f(k)$ over all the elements in $S_N^*$ with $a_\alpha=0$ and other indexes non-zero gives $\displaystyle{\frac1{f_\alpha}\prod^N_{i=1}f_i}$.



How about having two zero indexes $a_\alpha,a_\beta$? $\displaystyle{\frac1{f_\alpha f_\beta}\prod^N_{i=1}f_i}$.
Three?
$\displaystyle{\frac1{f_\alpha f_\beta f_\gamma}\prod^N_{i=1}f_i}$.



Summing all these and factorizing naturally give
$$\sum_{k\in S^*_N}f(k)=\left(1+\frac1{f_1}\right)\left(1+\frac1{f_2}\right)\cdots\left(1+\frac1{f_N}\right)\cdot\prod^N_{i=1}f_i=\prod^N_{i=1}(1+f_i)$$




Taking the limit $N\to\infty$, we find that
$$\sum^\infty_{n=1}\frac1{\phi(n)^s}=\prod_{\text{prime }p}\left(1+\frac{(1-1/p)^{-s}}{p^s-1}\right)$$
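A numerical check of the Euler product at $s=2$ (my own sketch; totients is a helper of mine implementing the standard totient sieve, and both sides are truncated, so the agreement is approximate):

def totients(N):
    # standard sieve for Euler's totient
    phi = list(range(N + 1))
    for p in range(2, N + 1):
        if phi[p] == p:              # p is prime
            for m in range(p, N + 1, p):
                phi[m] -= phi[m] // p
    return phi

N, s = 10**5, 2
phi = totients(N)

lhs = sum(1.0 / phi[n]**s for n in range(1, N + 1))
rhs = 1.0
for p in range(2, N + 1):
    if phi[p] == p - 1:              # p is prime
        rhs *= 1 + (1 - 1/p)**(-s) / (p**s - 1)
print(lhs, rhs)  # the two values agree closely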



I am still working on the functional equation. It is clear that the function is holomorphic on $\text{Re }s>1$, and is likely to have a pole at $s=1$, as plugging in $s=1$ gives $\prod_p\left(1+\frac1p\right)=\infty$.



A few more words on analytic continuation:



Obviously,
$$\begin{align}

F(s):=\sum^\infty_{n=1}\frac1{\phi(n)^s}
&=\prod_{\text{prime }p}\left(1+\frac{(1-1/p)^{-s}}{p^s-1}\right)\\
&=\prod_{p}\frac1{1-p^{-s}}\cdot\prod_p[1-p^{-s}+(p-1)^{-s}] \\
&=\zeta(s)\cdot \underbrace{\prod_p[1-p^{-s}+(p-1)^{-s}]}_{G(s)}
\end{align}
$$




  1. $G(s)$ converges for $\text{Re }s>0$, as $1-p^{-s}+(p-1)^{-s}=1+sp^{-s-1}+O(p^{-s-2})$.

  2. Therefore, $F(s)$ has a simple pole at $s=1$ due to zeta function. The residue there equals $$\operatorname*{Res}_{s=1}F(s)=\prod_p\left(1+\frac1{p(p-1)}\right)=\frac{315\zeta(3)}{2\pi^4}$$

    (See Landau’s totient constant.)

  3. $$\lim_{s\to0^+}G(s)=1$$ This can be seen by plugging in $s=0$ into the above expression.

  4. Let's look at $G'(0)$.
    $$\frac{G'(s)}{G(s)}=\sum_p\frac{p^{-s}\ln p-(p-1)^{-s}\ln(p-1)}{1-p^{-s}+(p-1)^{-s}}$$
    $$G'(0)=-G(0)\sum_p\ln\left(1-\frac1p\right)=\infty$$
    As $G(0)$ is finite and $G'(0)$ is not, this suggests that $0$ is a branch point of $F(s)$. Thus, meromorphic continuation is not possible.







Further analysis shows that $\text{Re }s=0$ is the natural boundary of $F(s)$.



Firstly, by means of successive series expansion, we obtain
$$\begin{align}
\ln(1-p^{-s}+(p-1)^{-s})
&=\sum^\infty_{n=1}\frac{\left[p^{-s}-(p-1)^{-s}\right]^n}{n} \\
&=\sum^\infty_{n=1}\frac{1}{n}\sum^n_{r=0}\binom nr p^{-s(n-r)}(-1)^r(p-1)^{-sr} \\
&=\sum^\infty_{n=1}\frac{p^{-ns}}{n}\sum^n_{r=0}\binom nr (-1)^r\left(1-\frac1p\right)^{-sr} \\
&=\sum^\infty_{n=1}\frac{p^{-ns}}{n}\sum^n_{r=0}\binom nr (-1)^r\sum^\infty_{k=0}\binom{-sr}{k}\frac{(-1)^k}{p^k} \\
&=\sum^\infty_{n=1}\sum^\infty_{k=0}\alpha_{n,k}(s)\frac1{p^{k+ns}}

\end{align}
$$



where $$\alpha_{n,k}(s)=\frac{(-1)^k}{n}\sum^n_{r=0}(-1)^r\binom nr \binom{-sr}{k}$$



Furthermore, notice that
$$\alpha_{n,0}(s)=\frac1n\sum^n_{r=0}(-1)^r\binom nr=0$$



Therefore,
$$\ln(1-p^{-s}+(p-1)^{-s})=\sum_{(n,k)\in\mathbb N^2}\alpha_{n,k}(s)\frac1{p^{k+ns}}$$

$$\begin{align}
\implies \ln G(s)
&=\sum_p \ln(1-p^{-s}+(p-1)^{-s}) \\
&=\sum_{(n,k)\in\mathbb N^2}\alpha_{n,k}(s)\zeta_{\mathbb P}(k+ns) \\
\end{align}
$$

where $\zeta_{\mathbb P}$ is the prime zeta function.



It is well known that $\zeta_{\mathbb P}$ has the natural boundary $\text{Re }s=0$, because $\mathcal S$ (the set of singularities of $\zeta_{\mathbb P}$) clusters on the imaginary axis. Hence, obviously $G(s)$ cannot be analytically continued across $\text{Re }s=0$. A functional equation does not exist.




Meanwhile, we obtained a representation of $F(s)$ in terms of well-known functions:




$$\sum^\infty_{n=1}\frac1{\phi(n)^s}
=\zeta(s)\exp\left[\sum_{(n,k)\in\mathbb N^2}\alpha_{n,k}(s)\zeta_{\mathbb P}(k+ns)\right]
$$

$$\alpha_{n,k}(s)=\frac{(-1)^k}{n}\sum^n_{r=0}(-1)^r\binom nr \binom{-sr}{k}$$



Thursday 2 January 2020

calculus - How can something be proved unsolvable?




My question specifically deals with certain real indefinite integrals such as $$\int e^{-x^2} {dx} \ \ \text{and} \ \ \int \sqrt{1+x^3} {dx}$$ Books and articles online have only ever said that these cannot be expressed in terms of elementary functions. I was wondering how this could be proved? I know this is a naive way of thinking, but it seems to me like these are just unsolved problems, not unsolvable ones.


Answer



The trick is to make precise the meaning of "elementary": essentially, these are functions which are expressible as finite combinations of polynomials, exponentials and logarithms. It is then possible to show (by algebraically tedious disposition of cases, though not necessarily invoking much of differential Galois theory - see e.g. Rosenthal's paper on the Liouville-Ostrowski theorem) that functions admitting elementary antiderivatives can always be written as the sum of a simple derivative and a linear combination of logarithmic derivatives. One consequence of this is the notable criterion that a (real or complex) function of the form $x\mapsto f(x)e^{g(x)}$, where $f, g$ are rational functions, admits an elementary antiderivative in the above sense if and only if the differential equation $y'+g'y=f$ admits a rational solution. The problem of showing that $e^{x^2}$ and the lot have no elementary indefinite integrals is then reduced to simple algebra. In any case, this isn't an unsolved problem and there is not much mystery to it once you've seen the material.
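As a concrete illustration (my own addition, not part of the original answer): a computer algebra system built around this circle of ideas can only express the antiderivative through a non-elementary special function.

import sympy as sp

x = sp.symbols('x')
# sympy's integrator can only express this antiderivative via the
# (non-elementary) error function:
print(sp.integrate(sp.exp(-x**2), x))   # sqrt(pi)*erf(x)/2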


complex analysis - Convergence of series and summation methods for divergent series

I would like to know what is the sum of this series:




$$\sum_{k=1}^\infty \frac{1}{1-(-1)^\frac{n}{k}}$$ with $$ n=1, 2, 3, ...$$



In case the previous series is not convergent, I would like to know what conditions would be required for it to converge. I can understand that there could be a set of values of $n$ for which the series is not convergent, but this does not directly prove that there are no values of $n$ for which, instead, it is.
In case the previous series is not convergent in the “classical” sense, I would like to know if a sum can be associated to it, employing those summation methods used to assign a value to a divergent series; like, for example, the Ramanujan summation method, which associates to the following well-known divergent series



$$\sum_{k=1}^\infty k $$



the value $ -\frac{1}{12} $.



https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF




https://en.wikipedia.org/wiki/Divergent_series#Examples



Note that, in general, the argument of the sum considered can assume complex values!

sequences and series - Convergence of the following sum



Does the following sum converge? $$\sum_{n=1}^{\infty}\frac{\sin^2(n)}{n}$$
I tried the ratio test and got that $\rho=0$ which means that the series converges absolutely. However, Mathematica and Wolfram Alpha do not give a result when trying to find its convergence. Am I wrong?


Answer



Yes, you are wrong. The ratio test is inconclusive, and the series diverges.



Note that there is some $\varepsilon > 0$ such that $\sin^2(n) + \sin^2(n+1) > \varepsilon$ for all $n$. This is because if $n$ is close to a multiple of $\pi$, $n+1$ will not be. Thus $$\frac{\sin^2(n)}{n} + \frac{\sin^2(n+1)}{n+1} \ge \frac{\varepsilon}{n+1}$$

and a comparison with the harmonic series shows that the series diverges.
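Numerically the divergence is visible (my own sketch): since $\sin^2 n=\tfrac12-\tfrac12\cos 2n$ and $\sum\cos(2n)/n$ converges, the partial sums grow like $\tfrac12\ln N$ plus a constant:

import math

S, N = 0.0, 0
for exp in range(2, 7):
    target = 10**exp
    while N < target:
        N += 1
        S += math.sin(N)**2 / N
    print(N, S, 0.5 * math.log(N))  # S - (1/2) ln N stabilizes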


set theory - For every infinite $S$, $|S|=|S\times S|$ implies the Axiom of choice

How to prove the following conclusion:



[For any infinite set $S$, there exists a bijection $f:S\to S \times S$] implies the Axiom of Choice.



Can you give a proof without the theory of ordinal numbers?

trigonometry - How to simplify $y = \sin(\frac{\arcsin(x)}{n})$, $n\ge1$?




$y = \sin\left(\frac{\arcsin(x)}{n}\right),\quad n\ge1$



I know that:



$\lim \limits_{x \to 0} \frac{x}{y} = n$



But I can't figure out what the curve of $x/y$ practically represents. Is there an obvious simple solution?


Answer



Let $\theta = \arcsin x$ and we have to assume that $|x| \lt \dfrac{1}{\sqrt 2}$.




Then $\tan \theta = \dfrac{x}{\sqrt{1-x^2}}$ and $|\tan \theta| \lt 1$.



\begin{align}
\cos \dfrac{\theta}{n} + i \sin \dfrac{\theta}{n}
&= (\cos \theta + i \sin \theta)^{\frac 1n} \\
&= (\cos \theta)^{\frac 1n}(1 + i \tan \theta)^{\frac 1n} \\
&= (1-x^2)^{\frac 1{2n}}\left(1 + i\dfrac{x}{\sqrt{1-x^2}}\right)^\frac 1n\\
&= (1-x^2)^{\frac 1{2n}}\sum_{k=0}^\infty
\binom{1/n}{k} i^k\left(\dfrac{x}{\sqrt{1-x^2}}\right)^k \\
\cos \dfrac{\theta}{n}
&= (1-x^2)^{\frac 1{2n}}\sum_{k=0}^\infty
(-1)^k\binom{1/n}{2k}\left(\dfrac{x^2}{1-x^2}\right)^k \\
\sin\dfrac{\theta}{n}
&= (1-x^2)^{\frac 1{2n}} \dfrac{x}{\sqrt{1-x^2}} \sum_{k=0}^\infty
(-1)^k\binom{1/n}{2k+1}\left(\dfrac{x^2}{1-x^2}\right)^k \\
\end{align}



NOTES







We define $\binom zn$ where $z \in \mathbb R$ and $0 \le n \in \mathbb Z$ as follows



$(z)_n =
\begin{cases}
1 & \text{If $n = 0$.}\\
z(z-1)(z-2)\cdots(z-n+1) &\text{If $n \ge 1$.}
\end{cases}$




then $\binom zn = \dfrac{(z)_n}{n!}$



It can be shown that, if $|x| < 1$, then $(1 + x)^z = \sum_{k=0}^\infty \binom zk x^k$
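A numerical check of the final series (my own sketch, using the prefactor $(1-x^2)^{1/(2n)}=(\cos\theta)^{1/n}$ since $\sin\theta=x$; gbinom is a helper of mine for the generalized binomial coefficient):

import math

def gbinom(z, k):
    # generalized binomial coefficient C(z, k), z real
    out = 1.0
    for i in range(k):
        out *= (z - i) / (i + 1)
    return out

def sin_arcsin_over_n(x, n, terms=80):
    t2 = x * x / (1 - x * x)             # tan(theta)^2; needs |x| < 1/sqrt(2)
    pref = (1 - x * x) ** (1 / (2 * n))  # (cos theta)^(1/n)
    s = sum((-1)**k * gbinom(1/n, 2*k + 1) * t2**k for k in range(terms))
    return pref * (x / math.sqrt(1 - x * x)) * s

x, n = 0.5, 3
print(sin_arcsin_over_n(x, n), math.sin(math.asin(x) / n))  # both ~0.17365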


Wednesday 1 January 2020

combinatorics - Number of $4$-digit numbers that can be formed with the digits $0,1,2,3,4,5$ with constraint on repetition


Problem Statement:-



Find the number of numbers of four digits that can be made from the digits $0,1,2,3,4,5$ if digits can be repeated in the same number. How many of these numbers have at least one digit repeated?




For those who wanna skip all the babble below this para on why I thought what I thought just jump to the Modified Problem Statement in the Edit.




Seems pretty easy, right? Until you are me, who misread the given constraint on the repetition of the digits as this - "the numbers can be repeated as many times as their magnitude". So, this would imply $0$ being repeated $0$ times, so it will occur only $1$ time. Similarly, $1$ will repeat $1$ time so it will occur only $2$ times, and so $2$ occurs $3$ times, $3$ occurs $4$ times, and so on. If we were to write the multiset of (digit, multiplicity) pairs from which we can choose, it would be
$$\{(0,1),(1,2),(2,3),(3,4),(4,5),(5,6)\}$$



And after spending a lot of time I thought why not try to find a solution here, because I was not able to get one.






My Attempt at a solution to my proposed problem:-




The first digit can be any one of the five non-zero digits. Treating all copies of the non-zero digits as distinct, we get $20$ non-zero digits in total, hence the number of ways of selecting the first digit (i.e. the digit at the thousands place) is $\binom{20}{1}$, and the number of ways of arranging the remaining $20$ digits in the $3$ remaining places is ${}^{20}P_3$. Since we had assumed all the digits to be distinct (even the repeated copies), we need to correct the overcount due to repeated arrangements, so we divide by $(4!)^3\cdot3!\cdot2!\cdot1!$, because $3,4,5$ can have $4$ copies each in the arrangement, the digit $2$ has $3$ copies and the digit $1$ has $2$ copies. Hence, the total number of ways can be given as
$$\dfrac{\dbinom{20}{1}\cdot{}^{20}P_3}{(4!)^3\cdot3!\cdot2!\cdot1!}.$$



On calculating this comes out to be a fraction. So, this is definitely not the correct approach.



Edit:- For all those who are solving the problem as given in the Problem Statement in the quotes: what I wanted was to solve the problem with the constraint replaced by my initial (mis)interpretation, which I have already written just below the quote, i.e. "the numbers can be repeated as many times as their magnitude".



So, let's state the problem that I am trying to solve if you wanna skip all the babble in the first part.





Modified Problem Statement:-



Find the number of numbers of four digits that can be made from the digits $0,1,2,3,4,5$ if digits can be repeated as many times as their magnitudes. How many of these numbers have at least one digit repeated?
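Since no closed-form answer is recorded here, a brute-force enumeration settles the modified count (my own sketch, under the stated reading that digit $d$ may appear at most $d+1$ times and the leading digit is nonzero):

from itertools import product
from collections import Counter

count = 0
for digits in product(range(6), repeat=4):
    if digits[0] == 0:
        continue                  # must be a genuine 4-digit number
    counts = Counter(digits)
    if all(c <= d + 1 for d, c in counts.items()):
        count += 1
print(count)  # 979 under this reading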

