Tuesday 22 October 2013

linear algebra - Why are these eigenvalue theorems true?




So I was reading up on the Wikipedia page on eigenvalues and eigenvectors (which I fondly call eigencrap ;) ) and I was confused by one paragraph in particular.



Let $A$ be an arbitrary $n$ by $n$ matrix of complex numbers with eigenvalues $\lambda_1,\lambda_2, \ldots, \lambda_n$. Each eigenvalue appears $\mu_A(\lambda_i)$ times in this list, where $\mu_A(\lambda_i)$ is the eigenvalue's algebraic multiplicity. The following are the properties of this matrix and its eigenvalues:




  • The trace of $A$, defined as the sum of its diagonal elements, is also the sum of all eigenvalues,
    $$
    \text{tr}(A)=\sum_{i=1}^{n}A_{i,i}=\sum_{i=1}^{n}\lambda_i=\lambda_1+\lambda_2+\ldots+\lambda_n.
    $$

  • The determinant of $A$ is the product of all its eigenvalues,

    $$
    \det(A)=\prod_{i=1}^{n}\lambda_i=\lambda_1\lambda_2\cdots\lambda_n.
    $$
    First off, $\mathbb{R}\subset\mathbb{C}$, correct? Therefore the properties listed should hold for any $n\times n$ matrix, real or complex -- namely that $\operatorname{tr}(A)$, the sum of the diagonal elements, equals the sum of all eigenvalues, and that $\operatorname{det}(A)$ equals their product.
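As a quick numerical sanity check of both claimed identities (a sketch assuming NumPy is available; the matrix size and seed are arbitrary), one can compare the trace and determinant of a random complex matrix against its eigenvalues:

```python
import numpy as np

# Random complex 4x4 matrix -- no symmetry or special structure assumed.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

eigs = np.linalg.eigvals(A)

# tr(A) should equal the sum of the eigenvalues,
# det(A) should equal their product (up to floating-point error).
trace_ok = np.isclose(np.trace(A), eigs.sum())
det_ok = np.isclose(np.linalg.det(A), eigs.prod())
```

Both checks pass for generic random matrices, symmetric or not, which already hints that no extra hypothesis is missing from the theorem.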



This does not strike me as intuitively true. Why should it be the case? Is there a condition missing from the theorem -- perhaps that the matrix must be orthogonally diagonalizable, or symmetric?



Just a thought, but is there a relation to Vieta's formulas? The sum/product identities seem to suggest one.




Any thoughts, intuitions, and explanations are appreciated, thank you!


Answer



Another well-known property of the trace operator is that it is invariant under commutation of the factors in a product, that is:
$$
Tr(AB)=Tr(BA).
$$
So, if $A$ is diagonalizable, it can be written as:
$$
A=V D V^{-1}
$$

with $D$ a diagonal matrix whose entries are the eigenvalues. Then:
$$
Tr(A) = Tr(V D V^{-1})=Tr(V V^{-1} D) = Tr(D)= \lambda_1+ \cdots +\lambda_n,
$$
where the invariance under commutation was used. (For a non-diagonalizable $A$, the same argument works with the Schur or Jordan form $A = V T V^{-1}$, where $T$ is upper triangular with the eigenvalues on its diagonal.)
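The commutation property is easy to verify numerically (a sketch assuming NumPy; the matrices are arbitrary random ones): $AB$ and $BA$ are different matrices in general, yet their traces agree.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# AB and BA are generically different matrices...
products_equal = np.allclose(A @ B, B @ A)

# ...but Tr(AB) = Tr(BA) always holds.
traces_agree = np.isclose(np.trace(A @ B), np.trace(B @ A))
```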



Regarding the second property, another well-known result is Binet's theorem (the multiplicativity of the determinant), which states:
$$
Det(AB)= Det(A)Det(B).
$$

Thus, upon diagonalization:
$$
Det(A)= Det(V D V^{-1}) = Det(V)Det(D)Det(V^{-1})
$$
And of course, since $V V^{-1}=I$, this also implies:
$$
Det(I)= Det(V V^{-1}) = Det(V)Det(V^{-1}) = 1.
$$
Thus:
$$
Det(A)= Det(D) = \lambda_1 \times \cdots \times \lambda_n.
$$
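Both ingredients of this determinant argument can be checked numerically (a sketch assuming NumPy; random real matrices, which are almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Binet's theorem: Det(AB) = Det(A) Det(B)
mult_ok = np.isclose(np.linalg.det(A @ B),
                     np.linalg.det(A) * np.linalg.det(B))

# Consequence used above: Det(V) Det(V^{-1}) = Det(I) = 1
V = A  # a random matrix is almost surely invertible
inv_ok = np.isclose(np.linalg.det(V) * np.linalg.det(np.linalg.inv(V)), 1.0)
```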



Also, yes, there is a relation to Viète's formulas (also called the Girard relations), which come from the characteristic polynomial:
$$
p(\lambda) = Det(A-\lambda I)
$$
Since, for instance, this implies:
$$
p(0)=Det(A),
$$
and the roots of $p(\lambda)$ are exactly the eigenvalues of $A$, Viète's formulas tie the coefficients of $p$ to the sums and products of the $\lambda_i$ -- which is precisely where the trace and determinant identities live.
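The Viète connection can also be seen numerically (a sketch assuming NumPy; note that `np.poly` uses the convention $\det(xI - A)$, which differs from $p(\lambda)=\det(A-\lambda I)$ by a factor $(-1)^n$):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))

# Coefficients of det(x I - A) = (x - l1)(x - l2)(x - l3), highest power first.
coeffs = np.poly(A)

# Viete: the x^{n-1} coefficient is -(l1 + l2 + l3) = -Tr(A).
vieta_trace = np.isclose(coeffs[1], -np.trace(A))

# Viete: the constant term is (-l1)(-l2)(-l3) = (-1)^3 det(A) for n = 3.
vieta_det = np.isclose(coeffs[-1], -np.linalg.det(A))
```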

