Sunday 13 March 2016

matrices - Can you add a scalar to a matrix?

If I add a scalar to every element of a matrix, e.g. for a $2\times2$ matrix



$$ \begin{pmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{pmatrix} + b \overset{?}{=} \begin{pmatrix}a_{11}+b & a_{12}+b \\ a_{21}+b & a_{22}+b\end{pmatrix},$$



with $b$ a scalar, then what is the correct notation? Matrix addition and subtraction are defined only for matrices of the same size, yet it seems tedious to first multiply $b$ by a matrix of ones just to obtain two same-sized matrices to add:



$$ J_2 = \begin{pmatrix} 1 & 1 \\ 1 & 1\end{pmatrix}.$$




and then write:



$$ \begin{pmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{pmatrix} + bJ_2 = \begin{pmatrix}a_{11}+b & a_{12}+b \\ a_{21}+b & a_{22}+b\end{pmatrix}.$$
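The ones-matrix workaround above can be sketched in plain Python (the helper names `ones_matrix`, `scale`, and `mat_add` are my own, not from the original):

```python
# Sketch of the A + b*J_d workaround: build the all-ones matrix J_d,
# scale it by b, then use ordinary same-size matrix addition.
# Matrices are represented as nested lists; names are illustrative.

def ones_matrix(d):
    """Return the d-by-d all-ones matrix J_d."""
    return [[1] * d for _ in range(d)]

def scale(M, b):
    """Scalar multiplication b*M, defined elementwise."""
    return [[b * x for x in row] for row in M]

def mat_add(A, B):
    """Ordinary matrix addition, valid only for same-sized matrices."""
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
b = 3
print(mat_add(A, scale(ones_matrix(2), b)))  # [[4, 5], [6, 7]]
```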



Do you always write $A+bJ_d$ (with $d$ the dimension of $A$)? An alternative notation would be $A+\mathbf{b}$ (bold $b$), implying a matrix of the same size as $A$. However, this notation is also used for the product of $b$ with the identity matrix, $bI_d$, which is a different operation and therefore confusing.



Why is the addition of a scalar to a matrix not simply defined like scalar multiplication, i.e. as an operation on every matrix element? One language where this is permitted is MATLAB, where you can add a scalar to a matrix $A$ simply by writing A+3. I feel this is a logical choice. Addition of a scalar to a matrix could be defined as $A+b = A+bJ_d$, with $d$ the dimension of $A$. This operation is commutative and associative, just like regular matrix addition. Then $A+\mathbf{b}$ would denote the addition of $A$ and $bI_d$, and $A+B$ would remain matrix addition as we know it, valid only for matrices of the same dimensions. Why aren't these the definitions?
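The proposed definition $A+b := A+bJ_d$ can be sketched directly as an elementwise operation, which is what MATLAB's A+3 effectively does (illustrative Python; the helper name `scalar_add` is my own):

```python
# Sketch of the proposed definition A + b := A + b*J_d,
# implemented by adding the scalar b to every element directly.
# Matrices are nested lists; the helper name is illustrative.

def scalar_add(A, b):
    """Elementwise scalar addition: (A + b)_{ij} = a_{ij} + b."""
    return [[x + b for x in row] for row in A]

A = [[1, 2], [3, 4]]
print(scalar_add(A, 3))  # [[4, 5], [6, 7]]

# Associativity of repeated scalar addition, as claimed above:
# (A + 2) + 3 equals A + 5.
left = scalar_add(scalar_add(A, 2), 3)
right = scalar_add(A, 5)
print(left == right)  # True
```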
