Saturday 14 May 2016

linear algebra - Singular matrix



Suppose I have a singular matrix given by



$$A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14}\\
a_{12} & a_{11} & a_{14} & a_{13}\\
a_{31} & a_{32} & a_{33} & a_{34}\\
a_{32} & a_{31} & a_{34} & a_{33}
\end{pmatrix}$$

This is the coefficient matrix of a homogeneous system in $(x_1, x_2, x_3, x_4)^T$. The coefficients contain a parameter $\Lambda$, and the matrix becomes singular when $\Lambda$ is chosen so that the determinant is zero.



Now I have the matrices:
$$B = \begin{pmatrix}
a_{11} + a_{12} & a_{13} + a_{14}\\
a_{31} + a_{32} & a_{33} + a_{34}
\end{pmatrix}$$



$$C = \begin{pmatrix}
a_{11} - a_{12} & a_{13} - a_{14}\\

a_{31} - a_{32} & a_{33} - a_{34}
\end{pmatrix}$$



which are the coefficient matrices of homogeneous systems in $(x_1 + x_2, x_3 + x_4)^T$ and $(x_1 - x_2, x_3 - x_4)^T$, respectively.



If I build the matrices $B$ and $C$ from $A$ and determine the values of $\Lambda$ for which their determinants are zero, can I say anything about the determinant of $A$ at those values of $\Lambda$?



The idea is that I would not have a fourth-degree polynomial to solve (which Maple and Mathematica do not seem able to do in this case), only two quadratics, and I could then recover the original solution by summing and subtracting.
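(For concreteness, here is a rough Python/SymPy sketch of that workflow. It assumes, purely for illustration, that $\Lambda$ enters on the diagonal as $A(\Lambda) = A_0 - \Lambda I$, which preserves the symmetry pattern above; the symbol names are likewise only illustrative. It relies on the identity $\det A = \det B \cdot \det C$ established in the answer below.)

import sympy as sp

lam = sp.symbols('Lambda')
a11, a12, a13, a14, a31, a32, a33, a34 = sp.symbols('a11 a12 a13 a14 a31 a32 a33 a34')

# Illustrative assumption: A(Lambda) = A0 - Lambda*I, so Lambda is subtracted
# from every diagonal entry and B, C inherit -Lambda on their diagonals.
B = sp.Matrix([[a11 + a12 - lam, a13 + a14],
               [a31 + a32,       a33 + a34 - lam]])
C = sp.Matrix([[a11 - a12 - lam, a13 - a14],
               [a31 - a32,       a33 - a34 - lam]])

# Two quadratics in Lambda instead of one quartic:
roots_B = sp.solve(B.det(), lam)
roots_C = sp.solve(C.det(), lam)

# Since det A = det B * det C (see the answer below), these four values
# are exactly the roots of det A(Lambda) = 0.
print(roots_B + roots_C)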


Answer



Yes. We have $\det A=\det B\cdot \det C$. There are several ways to see this; here is one:




Your matrix $A$ can be written as the block matrix $\left(\begin{array}{cc} X&Y\\ U&W\end{array}\right)$, where $X$, $Y$, $U$, $W$ are the following $2\times 2$ matrices:



$X=\left(\begin{array}{cc} a_{11}&a_{12}\\ a_{12}&a_{11}\end{array}\right)$;



$Y=\left(\begin{array}{cc} a_{13}&a_{14}\\ a_{14}&a_{13}\end{array}\right)$;



$U=\left(\begin{array}{cc} a_{31}&a_{32}\\ a_{32}&a_{31}\end{array}\right)$;



$W=\left(\begin{array}{cc} a_{33}&a_{34}\\ a_{34}&a_{33}\end{array}\right)$.




Now, these matrices $X$, $Y$, $U$, $W$ are circulant matrices, and thus can be diagonalized by the unitary discrete Fourier transform matrix



$F_2=\frac{1}{\sqrt 2}\left(\begin{array}{cc} 1&1\\ 1&-1\end{array}\right)$.



So we have



$X=F_2\mathrm{diag}\left(a_{11}+a_{12},a_{11}-a_{12}\right)F_2^{-1}$;



$Y=F_2\mathrm{diag}\left(a_{13}+a_{14},a_{13}-a_{14}\right)F_2^{-1}$;




$U=F_2\mathrm{diag}\left(a_{31}+a_{32},a_{31}-a_{32}\right)F_2^{-1}$;



$W=F_2\mathrm{diag}\left(a_{33}+a_{34},a_{33}-a_{34}\right)F_2^{-1}$.
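For example, since $F_2$ is real, symmetric and orthogonal (so $F_2^{-1}=F_2$), the first of these identities can be checked directly:

$F_2\,\mathrm{diag}\left(a_{11}+a_{12},a_{11}-a_{12}\right)F_2^{-1}=\frac{1}{2}\left(\begin{array}{cc} 1&1\\ 1&-1\end{array}\right)\left(\begin{array}{cc} a_{11}+a_{12}&0\\ 0&a_{11}-a_{12}\end{array}\right)\left(\begin{array}{cc} 1&1\\ 1&-1\end{array}\right)=\left(\begin{array}{cc} a_{11}&a_{12}\\ a_{12}&a_{11}\end{array}\right)=X$,

and the other three follow in the same way.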



As a consequence, the block matrix $A=\left(\begin{array}{cc} X&Y\\ U&W\end{array}\right)$ can be written as



$A=\left(\begin{array}{cc} F_2&0\\ 0&F_2\end{array}\right)\left(\begin{array}{cc} \mathrm{diag}\left(a_{11}+a_{12},a_{11}-a_{12}\right) & \mathrm{diag}\left(a_{13}+a_{14},a_{13}-a_{14}\right) \\ \mathrm{diag}\left(a_{31}+a_{32},a_{31}-a_{32}\right) & \mathrm{diag}\left(a_{33}+a_{34},a_{33}-a_{34}\right) \end{array}\right) \left(\begin{array}{cc} F_2&0\\ 0&F_2\end{array}\right)^{-1}$



(check this!), so that




$\det A = \det \left(\begin{array}{cc} \mathrm{diag}\left(a_{11}+a_{12},a_{11}-a_{12}\right) & \mathrm{diag}\left(a_{13}+a_{14},a_{13}-a_{14}\right) \\ \mathrm{diag}\left(a_{31}+a_{32},a_{31}-a_{32}\right) & \mathrm{diag}\left(a_{33}+a_{34},a_{33}-a_{34}\right) \end{array}\right) $.
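The factorization above can also be verified mechanically; here is a short SymPy sketch (the symbol names are chosen here for illustration only):

import sympy as sp

a11, a12, a13, a14, a31, a32, a33, a34 = sp.symbols('a11 a12 a13 a14 a31 a32 a33 a34')

# The original matrix A from the question.
A = sp.Matrix([[a11, a12, a13, a14],
               [a12, a11, a14, a13],
               [a31, a32, a33, a34],
               [a32, a31, a34, a33]])

F2 = sp.Matrix([[1, 1], [1, -1]]) / sp.sqrt(2)
F = sp.diag(F2, F2)                      # block-diagonal matrix with F2 in both blocks

# The middle block matrix of the factorization, written out entrywise.
M = sp.Matrix([[a11 + a12, 0,         a13 + a14, 0        ],
               [0,         a11 - a12, 0,         a13 - a14],
               [a31 + a32, 0,         a33 + a34, 0        ],
               [0,         a31 - a32, 0,         a33 - a34]])

# Should print the zero matrix, confirming A = F * M * F^{-1}.
print(sp.simplify(F * M * F.inv() - A))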



Now, the determinant on the right-hand side can be simplified further by swapping the second and third rows and the second and third columns (each swap changes the sign of the determinant, so the two swaps together leave it unchanged):



$\det A = \det \left(\begin{array}{cccc} a_{11}+a_{12} & a_{13}+a_{14} & 0 & 0 \\ a_{31}+a_{32} & a_{33}+a_{34} & 0 & 0 \\ 0 & 0 & a_{11}-a_{12} & a_{13}-a_{14} \\ 0 & 0 & a_{31}-a_{32} & a_{33}-a_{34} \end{array}\right)$.



The matrix on the right-hand side is just the block-diagonal matrix $\left(\begin{array}{cc} B&0\\ 0&C\end{array}\right)$, so its determinant is $\det B\cdot \det C$. In particular, $\det A(\Lambda)=0$ exactly when $\det B(\Lambda)=0$ or $\det C(\Lambda)=0$, so the quartic in $\Lambda$ splits into the two quadratics you wanted.

