Wednesday, 7 August 2013

tensors - Proving vector identity by Einstein summation




Given that $N_{ij}=\delta_{ij}-\epsilon_{ijk}n_k+n_in_j$
and $M_{ij}=\delta_{ij}+\epsilon_{ijk}n_k$,
show that $N_{ij}M_{jk}=2\delta_{ik}$ (where $n$ is a unit vector in $\mathbb{R}^3$).



What I have tried:
$M_{jk}=\delta_{jk}+\epsilon_{jkk}n_k=\delta_{jk}$
So $N_{ij}M_{jk}=(\delta_{ij}-\epsilon_{ijk}n_k+n_in_j)\delta_{jk}=\delta_{ik}-0+n_in_k$



I'm not entirely sure what I did is legal under the summation rules, but the terms I ended up with don't match the provided answer. Any hint is appreciated.




And what are the usual strategies when dealing with proving vector identities using suffix notation?


Answer



As far as computations go, spaceisdarkgreen's answer above is fine by me. But I have some qualms with this exercise in general. Your mistake is one of the reasons why I prefer to write vector components with upper indices, with Einstein's convention being: if the same index appears twice, once up and once down, sum over it. If $n$ is a unit vector in $\mathbb{R}^3$, we'd write its components as $n^1$, $n^2$ and $n^3$. This calls for the index balance $$N^{ij}=\delta^{ij}-\epsilon^{ij}_{\;\;k}n^k+n^in^j \quad\text{and}\quad M^{ij}=\delta^{ij}+\epsilon^{ij}_{\;\;k}n^k.$$ Then, as it stands, the expression $N^{ij}M^{jk}$ makes no sense, as the index $j$ appears twice up.



As it is written, $N^{ij}$ and $M^{ij}$ are components of tensors $M,N\colon(\mathbb{R}^3)^*\times(\mathbb{R}^3)^*\to\mathbb{R}$, in the standard basis of $\mathbb{R}^3$ (and its dual basis).



In order to make sense of the expression $N^{ij}M^{jk}$, we should work with the tensor $\widetilde{M}\colon\mathbb{R}^3\times(\mathbb{R}^3)^*\to\mathbb{R}$, which is equivalent to $M$ under index lowering. Since (if?) these components are with respect to an orthonormal basis, and the scalar product is positive-definite, $\widetilde{M}$'s components are just $$M_{j}^{\;i}=\delta_{j}^{\;i}+\epsilon_{j\;k}^{\;i}n^k,$$ and so the expression $N^{ij}M_{j}^{\;k}$ makes sense. So Einstein's convention comes with a built-in error detector. Notice that $i$ and $k$ are fixed indices, so they're off limits when we want to relabel mute indices. We then have: \begin{align}N^{ij}M_{j}^{\;k} &= (\delta^{ij}-\epsilon^{ij}_{\;\;r}n^r+n^in^j)(\delta_{j}^{\;k}+\epsilon_{j\;s}^{\;k}n^s)\\ &= \delta^{ij}\delta_{j}^{\;k}+\delta^{ij}\epsilon_{j\;s}^{\;k}n^s-\epsilon^{ij}_{\;\;r}n^r\delta_{j}^{\;k}-\epsilon^{ij}_{\;\;r}n^r\epsilon_{j\;s}^{\;k}n^s+n^in^j\delta_{j}^{\;k}+\epsilon_{j\;s}^{\;k}n^in^jn^s\\ &= \delta^{ik}+\delta^{ij}\epsilon_{j\;s}^{\;k}n^s-\epsilon^{ik}_{\;\;r}n^r-\color{blue}{(\epsilon^{ij}_{\;\;r}\epsilon_{j\;s}^{\;k})n^rn^s}+n^in^k+\color{red}{(\epsilon_{j\;s}^{\;k}n^jn^s)}n^i.\end{align}



To simplify the piece in blue, we use some well-known permutation identities: $$\epsilon^{ij}_{\;\;r}\epsilon_{j\;s}^{\;k}=-\epsilon^{ji}_{\;\;r}\epsilon_{j\;s}^{\;k}=-(\delta^{ik}\delta_{rs}-\delta^i_{\;s}\delta^k_{\;r})=\delta^i_{\;s}\delta^k_{\;r}-\delta^{ik}\delta_{rs}.$$
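If you want to convince yourself of this contraction identity numerically, here is a minimal NumPy sketch (lowered indices throughout, since we are in an orthonormal basis; the index order `irks` in the `einsum` strings is my own labeling):

```python
import numpy as np

# Levi-Civita symbol in 3D: +1 on even permutations of (0,1,2),
# -1 on odd permutations, 0 otherwise.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

delta = np.eye(3)

# Left side: sum over j of eps_{ijr} eps_{jks}, as an (i, r, k, s) array.
lhs = np.einsum('ijr,jks->irks', eps, eps)
# Right side: delta_{is} delta_{kr} - delta_{ik} delta_{rs}, same index order.
rhs = (np.einsum('is,kr->irks', delta, delta)
       - np.einsum('ik,rs->irks', delta, delta))
assert np.allclose(lhs, rhs)
```

All $3^4$ components agree, which is exactly the statement of the identity above.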




Modulo sign, the piece in red is the $k$-th component of the cross product $n\times n$, which is zero.
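This vanishing is again easy to check numerically: an antisymmetric symbol contracted against the symmetric product $n^jn^s$ gives zero. A short sketch (the unit vector `n` is a sample of my choosing):

```python
import numpy as np

# Levi-Civita symbol in 3D.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

n = np.array([1.0, 2.0, 2.0]) / 3.0  # sample unit vector

# eps_{jks} n^j n^s: antisymmetric in (j, s), contracted with the
# symmetric product n^j n^s, so every component is zero.
red = np.einsum('jks,j,s->k', eps, n, n)
assert np.allclose(red, 0.0)
assert np.allclose(np.cross(n, n), 0.0)  # the same statement as n x n = 0
```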
\require{cancel}
So: \begin{align}N^{ij}M_{j}^{\;k} &= \delta^{ik} + \delta^{ij}\epsilon_{j\;s}^{\;k}n^s - \epsilon^{ik}_{\;\;r}n^r - \delta^i_{\;s}\delta^k_{\;r}n^rn^s+\delta^{ik}\delta_{rs}n^rn^s+n^in^k \\ &=\delta^{ik} + \delta^{ij}\epsilon_{j\;s}^{\;k}n^s - \epsilon^{ik}_{\;\;r}n^r - \cancel{n^kn^i}+\delta^{ik}\color{green}{\delta_{rs}n^rn^s}+\cancel{n^in^k} \\ &=2\delta^{ik} + \delta^{ij}\epsilon_{j\;s}^{\;k}n^s - \epsilon^{ik}_{\;\;r}n^r , \end{align}since the piece in green is nothing more than n \cdot n = 1.



Further simplification would violate Einstein's convention as I have stated it. Since we're in a good situation (as explained earlier), you can lower all indices and use Einstein's convention the way you were using it to simplify what is left, if you want to.






For readers who know Portuguese, I happen to have written some material on tensors; it might be helpful.







I just realized that we actually can simplify further, using that r in the last term is mute. We'll have \begin{align}N^{ij}M_{j}^{\;k} &= 2\delta^{ik} + \delta^{ij}\epsilon_{j\;s}^{\;k}n^s - \epsilon^{ik}_{\;\;r}n^r \\ &= 2\delta^{ik} + \cancel{\epsilon_{\;\;s}^{ik}n^s} - \cancel{\epsilon^{ik}_{\;\;s}n^s} \\ &= 2\delta^{ik}.\end{align}
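As a final sanity check, the whole identity can be verified numerically with everything lowered to the orthonormal-basis form $N_{ij}M_{jk}=2\delta_{ik}$ of the original question (the unit vector is again a sample choice; any unit vector works):

```python
import numpy as np

# Levi-Civita symbol in 3D.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

n = np.array([1.0, 2.0, 2.0]) / 3.0      # sample unit vector
assert np.isclose(n @ n, 1.0)

E = np.einsum('ijk,k->ij', eps, n)       # E_ij = eps_ijk n_k
N = np.eye(3) - E + np.outer(n, n)       # N_ij = delta_ij - eps_ijk n_k + n_i n_j
M = np.eye(3) + E                        # M_ij = delta_ij + eps_ijk n_k
assert np.allclose(N @ M, 2 * np.eye(3)) # N_ij M_jk = 2 delta_ik
```

The matrix product implements the contraction over $j$, and the result is $2I$ exactly as derived above.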

