Eikonal Blog

2010.02.26

Moore-Penrose inverse for light-cone vectors

Filed under: mathematics — Tags: — sandokan65 @ 16:54

This is a continuation of the previous post https://eikonal.wordpress.com/2010/02/17/some-examples-of-the-moore-penrose-inverse/


Example (added 2010.02.25 Fri): A D=1+1 light-cone vector R_\mu (\mu=0,1; i.e. R_\mu R^\mu \equiv 0) can be cast via the D=1+1 Dirac matrices \left\{\gamma^0 = \begin{pmatrix}1 & 0 \\ 0 & -1 \end{pmatrix}, \gamma^1 = \begin{pmatrix}0 & 1  \\ -1 & 0 \end{pmatrix}\right\} into the nilpotent (i.e. \not{R}^2 \equiv 0) matrix \not{R}:\equiv R_\mu \gamma^\mu = R_0 \begin{pmatrix}1 & \epsilon \\ -\epsilon & -1 \end{pmatrix}, where R_1 = \epsilon R_0 (and \epsilon=\pm 1).
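This is easy to confirm numerically. A small sketch (numpy; the values R_0 = 3 and \epsilon = +1 are arbitrary sample choices):

```python
import numpy as np

g0 = np.array([[1., 0.], [0., -1.]])   # gamma^0
g1 = np.array([[0., 1.], [-1., 0.]])   # gamma^1

R0, eps = 3.0, 1.0
R = np.array([R0, eps * R0])   # light-cone: R_mu R^mu = R0^2 - R1^2 = 0

# slash(R) = R_mu gamma^mu = R_0 gamma^0 + R_1 gamma^1
Rslash = R[0] * g0 + R[1] * g1

assert np.allclose(Rslash, R0 * np.array([[1, eps], [-eps, -1]]))
assert np.allclose(Rslash @ Rslash, np.zeros((2, 2)))   # nilpotent
```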

To find the MPI of that matrix, start with the transposed matrix \not{R}^T  = \begin{pmatrix}R_0 & -R_1 \\ R_1 & -R_0 \end{pmatrix}. It is also nilpotent (i.e. (\not{R}^T)^2 = 0), but the mixed products of \not{R} with its transpose are non-zero:

  • \not{R}^T \cdot \not{R}= 2R_0^2 \begin{pmatrix}1 & \epsilon \\ \epsilon & 1 \end{pmatrix},
  • \not{R} \cdot \not{R}^T = 2R_0^2 \begin{pmatrix}1 & -\epsilon \\ -\epsilon & 1 \end{pmatrix}.

Now calculate \not{R}^+ via the following limiting process:

    \not{R}^+ = \lim_{\delta\rightarrow 0} (\not{R}^T \not{R} + \delta {\bf 1})^{-1} \not{R}^T
    = \frac1{2 R_0^2} \lim_{\delta\rightarrow 0}\begin{pmatrix}1+\delta & \epsilon \\ \epsilon & 1+\delta\end{pmatrix}^{-1}  \not{R}^T \quad \text{(after rescaling } \delta \rightarrow 2R_0^2\,\delta\text{, which does not affect the limit)}
    = \frac1{2 R_0^2} \lim_{\delta\rightarrow 0} \frac1{2\delta+\delta^2} \begin{pmatrix}1+\delta & -\epsilon \\ -\epsilon & 1+\delta\end{pmatrix}  \not{R}^T
    = \frac1{2 R_0} \lim_{\delta\rightarrow 0} \frac1{2\delta(1+\frac12\delta)} \begin{pmatrix}\delta & -\epsilon\delta \\ \epsilon\delta & -\delta\end{pmatrix}
    = \frac1{4 R_0}  \begin{pmatrix}1 & -\epsilon \\ \epsilon & -1\end{pmatrix}  = \frac1{4R_0^2} \not{R}^T.

A direct check verifies that all four defining properties of the MPI are satisfied.
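That check can also be run numerically. A sketch (numpy; R_0 = 3 and \epsilon = +1 are arbitrary sample values) comparing numpy's built-in pseudoinverse and the \delta-regularized expression against \not{R}^T/(4R_0^2):

```python
import numpy as np

R0, eps = 3.0, 1.0
Rslash = R0 * np.array([[1, eps], [-eps, -1]])

Rplus = np.linalg.pinv(Rslash)
assert np.allclose(Rplus, Rslash.T / (4 * R0**2))

# the limiting process, with a small but finite regulator delta
delta = 1e-9
approx = np.linalg.inv(Rslash.T @ Rslash + delta * np.eye(2)) @ Rslash.T
assert np.allclose(approx, Rplus, atol=1e-6)

# the four defining (Penrose) properties
A, P = Rslash, Rplus
assert np.allclose(A @ P @ A, A)        # (A)
assert np.allclose(P @ A @ P, P)        # (B)
assert np.allclose((A @ P).T, A @ P)    # (C)
assert np.allclose((P @ A).T, P @ A)    # (D)
```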

Note that \not{R}^+ corresponds to the D=1+1 vector (R^+)_\mu = \left\{\frac1{4R_0}, -\frac{\epsilon}{4R_0}\right\} = \frac1{4R_0^2} \left\{R_0, -R_1\right\} = \frac1{4R_0^2} \hat{P}R_\mu\hat{P}, where \hat{P} is the space-parity operator.

The two products \Pi_1:\equiv \not{R}\cdot \not{R}^+ = \frac12 \begin{pmatrix}1 & -\epsilon \\ -\epsilon & 1\end{pmatrix} and \Pi_2:\equiv \not{R}^+ \cdot \not{R}  = \frac12 \begin{pmatrix}1 & +\epsilon \\ +\epsilon & 1\end{pmatrix} are projectors: \Pi_a \Pi_b = \delta_{a,b} \Pi_b (a, b \in \{1,2\}), \Pi_1 + \Pi_2 = {\bf 1}. They project onto the two light-cone directions defined by the vectors R_\mu and R^+_\mu.
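The projector algebra is equally quick to verify numerically (again a sketch with arbitrary sample values, here R_0 = 2 and \epsilon = -1):

```python
import numpy as np

R0, eps = 2.0, -1.0
Rslash = R0 * np.array([[1, eps], [-eps, -1]])
Rplus = Rslash.T / (4 * R0**2)

P1 = Rslash @ Rplus
P2 = Rplus @ Rslash

assert np.allclose(P1, 0.5 * np.array([[1, -eps], [-eps, 1]]))
assert np.allclose(P2, 0.5 * np.array([[1, eps], [eps, 1]]))
assert np.allclose(P1 @ P1, P1) and np.allclose(P2 @ P2, P2)   # idempotent
assert np.allclose(P1 @ P2, np.zeros((2, 2)))                  # mutually orthogonal
assert np.allclose(P2 @ P1, np.zeros((2, 2)))
assert np.allclose(P1 + P2, np.eye(2))                         # complete
```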



2010.02.19

Weighted Moore-Penrose inverse

Filed under: mathematics — Tags: — sandokan65 @ 15:13

This is a generalization of the original concept of Moore-Penrose inverse (MPI). The weighted MPI A^{+(N,M)} of a matrix A \in {\Bbb F}^{n\times m} is defined by the following four properties:

  • (A): A \cdot A^{+(N,M)} \cdot A = A,
  • (B): A^{+(N,M)} \cdot A \cdot A^{+(N,M)} = A^{+(N,M)},
  • (C)_M: (M \cdot A \cdot A^{+(N,M)})^c  = M \cdot A \cdot A^{+(N,M)},
  • (D)_N: (A^{+(N,M)}\cdot A \cdot N)^c  = A^{+(N,M)}\cdot A \cdot N,

where the weighting matrices M and N are of orders n\times n and m\times m, respectively.

When the weighting matrices are equal to the corresponding identity matrices, the above definition reduces to the ordinary MPI A^{+}.
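For real symmetric positive-definite weights there is a standard explicit construction (an assumption here, not taken from the sources above): A^{+(N,M)} = N^{-1/2}\,(M^{1/2} A N^{-1/2})^{+}\, M^{1/2}. The sketch below checks properties (A) and (B), the weighted symmetry conditions in the orderings (M A X)^T = M A X and (N X A)^T = N X A that this construction satisfies for real matrices, and the reduction to the ordinary MPI for identity weights:

```python
import numpy as np

def spd_sqrt(W):
    # square root of a symmetric positive-definite matrix via eigendecomposition
    w, V = np.linalg.eigh(W)
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, m))

# random symmetric positive-definite weights M (n x n) and N (m x m)
G = rng.standard_normal((n, n)); M = G @ G.T + n * np.eye(n)
H = rng.standard_normal((m, m)); N = H @ H.T + m * np.eye(m)

Ms, Ns = spd_sqrt(M), spd_sqrt(N)
X = np.linalg.inv(Ns) @ np.linalg.pinv(Ms @ A @ np.linalg.inv(Ns)) @ Ms

assert np.allclose(A @ X @ A, A)               # (A)
assert np.allclose(X @ A @ X, X)               # (B)
assert np.allclose((M @ A @ X).T, M @ A @ X)   # M-weighted symmetry
assert np.allclose((N @ X @ A).T, N @ X @ A)   # N-weighted symmetry

# with identity weights the same construction collapses to the ordinary MPI
I_n, I_m = np.eye(n), np.eye(m)
X0 = np.linalg.inv(spd_sqrt(I_m)) @ np.linalg.pinv(spd_sqrt(I_n) @ A @ np.linalg.inv(spd_sqrt(I_m))) @ spd_sqrt(I_n)
assert np.allclose(X0, np.linalg.pinv(A))
```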

Source: R. B. Bapat, S. K. Jain and S. Pati, “Weighted Moore-Penrose Inverse of a Boolean Matrix”, Linear Algebra and Its Applications 225:267-279 (1997), North-Holland; http://www.math.ohiou.edu/~jain/077.pdf.



2010.02.17

Some examples of the Moore-Penrose inverse

Filed under: mathematics — Tags: — sandokan65 @ 16:49

Source: T2009.02.12

Definition: For a rectangular matrix A \in {\Bbb F}^{n\times m} there exists a unique matrix A^+ \in {\Bbb F}^{m\times n} (called the Moore-Penrose inverse [MPI]) such that:

  • (A): A A^+ A = A,
  • (B): A^+ A A^+ = A^+,
  • (C): (A A^+)^c = A A^+,
  • (D): (A^+ A)^c = A^+ A.

where M^c is the appropriate conjugation defined on the field {\Bbb F}, i.e. (M^c)^c = M (\forall M).

In particular, for {\Bbb F} = {\Bbb R}:

  • (A): A A^+ A = A,
  • (B): A^+ A A^+ = A^+,
  • (C): (A A^+)^T = A A^+,
  • (D): (A^+ A)^T = A^+ A,

where A^T is the transpose of the matrix A,

and for {\Bbb F} = {\Bbb C}:

  • (A): A A^+ A = A,
  • (B): A^+ A A^+ = A^+,
  • (C): (A A^+)^\dagger = A A^+,
  • (D): (A^+ A)^\dagger = A^+ A,

where A^\dagger :\equiv (A^*)^T is the Hermitian conjugation of a matrix A.
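As a sanity check, the complex-case conditions can be verified numerically for a random matrix (a sketch; numpy's pinv implements the MPI over {\Bbb C}):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
P = np.linalg.pinv(A)                  # numpy's pinv handles complex matrices

dag = lambda M: M.conj().T             # Hermitian conjugation
assert np.allclose(A @ P @ A, A)       # (A)
assert np.allclose(P @ A @ P, P)       # (B)
assert np.allclose(dag(A @ P), A @ P)  # (C): A A^+ is Hermitian
assert np.allclose(dag(P @ A), P @ A)  # (D): A^+ A is Hermitian
```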


Properties

  • The matrix equation A \cdot \underline{y} = \underline{x} has the solutions
    \underline{y} = A^{+} \underline{x} + ({\bf 1}_m - A^{+} A) \underline{q},
    parametrized by an arbitrary vector \underline{q} (infinitely many unless A^{+} A = {\bf 1}_m), provided that the consistency condition A A^{+} \underline{x} = \underline{x} is satisfied.
  • If B is a square positive semidefinite matrix, then for each \underline{z} the following inequality holds:
    (A\underline{z}-\underline{x})^T \cdot B \cdot (A\underline{z}-\underline{x}) \ge  \underline{x}^T C \underline{x}.
    Here C:\equiv B - B A (A^T B A)^{+} A^T B^T.
    That inequality becomes an equality for \underline{z}=(A^T B A)^{+} A^T B \underline{x} + [{\bf 1}_m - (A^T B A)^{+}(A^T B A)]\underline{q}.
  • \Pi_1:\equiv A^+ A and \Pi_2:\equiv A A^{+} are projectors: \Pi_{1,2}^2 = \Pi_{1,2}.
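A small numerical sketch of the first and third properties (using Example 1's matrix below as a convenient full-row-rank instance; the vector \underline{x} and the random \underline{q}'s are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[1., 1., 0.],
              [0., 1., 1.]])
P = np.linalg.pinv(A)

x = np.array([1., 2.])
assert np.allclose(A @ P @ x, x)   # consistency condition A A^+ x = x holds

# every y = A^+ x + (1 - A^+ A) q solves A y = x, whatever q is
for _ in range(5):
    q = rng.standard_normal(3)
    y = P @ x + (np.eye(3) - P @ A) @ q
    assert np.allclose(A @ y, x)

# A^+ A and A A^+ are projectors (idempotent)
assert np.allclose((P @ A) @ (P @ A), P @ A)
assert np.allclose((A @ P) @ (A @ P), A @ P)
```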

The LU method

  • 1) Factor A into an LU product A=L_0 \cdot U_0.
  • 2) Then trim U_0 by dropping its zero rows (getting U), and trim L_0 by dropping the corresponding columns (getting L). Note that A=L \cdot U still holds.
  • 3) Finally calculate \Phi:\equiv U^T (U U^T)^{-1} and \Psi:\equiv (L^T L)^{-1} L^T, to get A^{+} = \Phi \cdot\Psi.
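A sketch of this recipe for a small rank-1 matrix whose no-pivoting LU factorization can be written down by hand (the matrix itself is an arbitrary choice):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])

# Step 1: A = L0 @ U0 (Gaussian elimination, done by hand for this A)
L0 = np.array([[1., 0., 0.],
               [2., 1., 0.],
               [3., 0., 1.]])
U0 = np.array([[1., 2.],
               [0., 0.],
               [0., 0.]])
assert np.allclose(L0 @ U0, A)

# Step 2: drop the zero rows of U0 and the corresponding columns of L0
keep = np.any(U0 != 0, axis=1)
U = U0[keep]          # 1 x 2
L = L0[:, keep]       # 3 x 1
assert np.allclose(L @ U, A)   # still a valid factorization

# Step 3: Phi = U^T (U U^T)^{-1}, Psi = (L^T L)^{-1} L^T, A^+ = Phi @ Psi
Phi = U.T @ np.linalg.inv(U @ U.T)
Psi = np.linalg.inv(L.T @ L) @ L.T
Aplus = Phi @ Psi

assert np.allclose(Aplus, np.linalg.pinv(A))
```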

Examples

Example 1:

  • A=\begin{pmatrix} 1&1&0\\  0&1&1 \end{pmatrix},
  • A^{+}=\begin{pmatrix} \frac23&-\frac13\\  \frac13&\frac13 \\  -\frac13&\frac23 \end{pmatrix},
  • A\cdot A^{+} = {\bf 1}_2,
  • A^{+} \cdot A = \begin{pmatrix} \frac23&\frac13&-\frac13 \\  \frac13&\frac23&\frac13 \\  -\frac13&\frac13&\frac23 \end{pmatrix}.

Example 2:

  • A=\begin{pmatrix} a\\  b \end{pmatrix},
  • A^{+}=\begin{pmatrix} \frac{a}{a^2+b^2}& \frac{b}{a^2+b^2} \end{pmatrix} (for (a,b) \ne (0,0)),
  • A^{+}\cdot A = 1,
  • A\cdot A^{+} =\begin{pmatrix} \frac{a^2}{a^2+b^2}& \frac{ab}{a^2+b^2 } \\ \frac{ab}{a^2+b^2}& \frac{b^2}{a^2+b^2} \end{pmatrix}.

Example 3:

  • A=\begin{pmatrix} \underline{a} \end{pmatrix},
  • A^{+}=\begin{pmatrix} \frac1{\underline{a}^T\cdot\underline{a}}\underline{a}^T \end{pmatrix} = \frac1{\underline{a}^T\cdot\underline{a}} A^T (for \underline{a} \ne 0),
  • A^{+}\cdot A = 1,
  • A\cdot A^{+} = \frac1{\underline{a}^T\cdot\underline{a}}\underline{a}\, \underline{a}^T.

Example 4:

  • A = \left(U | \underline{v}\right),
  • A^{+} = \begin{pmatrix}  U^+ - \frac1{\underline{r}^T\cdot\underline{r}} U^{+}\underline{v}\underline{v}^T (1-U^{+T}U^T) \\ \frac1{\underline{r}^T\cdot\underline{r}}\underline{v}^T  (1-U^{+T}U^T) \end{pmatrix}, where \underline{r} :\equiv (1- U U^{+}) \underline{v} (assumed to be non-zero),
  • A^{+}\cdot A = \begin{pmatrix} U^{+} U & 0 \\  0 & 1 \end{pmatrix},
  • A\cdot A^{+} =  U U^{+} + \frac1{\underline{r}^T\cdot\underline{r}}\underline{r}\underline{r}^T.
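Example 4's bordering formula can be checked numerically. The sketch below uses a random U and \underline{v} (arbitrary choices; with probability 1 they give \underline{r} \neq 0, which the formula requires):

```python
import numpy as np

rng = np.random.default_rng(3)
U = rng.standard_normal((4, 2))
v = rng.standard_normal(4)

A = np.column_stack([U, v])       # A = (U | v)
Up = np.linalg.pinv(U)

r = (np.eye(4) - U @ Up) @ v      # r = (1 - U U^+) v
rr = r @ r
S = np.eye(4) - Up.T @ U.T        # the factor (1 - U^{+T} U^T)

top = Up - (1 / rr) * (Up @ np.outer(v, v) @ S)
bottom = (1 / rr) * (v @ S)
Aplus = np.vstack([top, bottom])

assert np.allclose(Aplus, np.linalg.pinv(A))

# the stated products
assert np.allclose(Aplus @ A, np.block([[Up @ U, np.zeros((2, 1))],
                                        [np.zeros((1, 2)), np.ones((1, 1))]]))
assert np.allclose(A @ Aplus, U @ Up + np.outer(r, r) / rr)
```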
