Eikonal Blog

2010.02.15

Lutzky’s expansion

Filed under: mathematics — Tags: — sandokan65 @ 22:16

\frac{d e^{z}}{dt} = e^z \left\{\frac{dz}{dt},\frac{e^z-1}{z}\right\}

e^{z(t)} :\equiv e^{x}e^{t y}

  • z(t) = \sum_{l=0}^\infty z_l t^l
  • z_0 = x
  • z_1 = \left\{ y , \frac{x}{e^x-1}\right\} = y - \frac12 [y,x] + \frac1{12} \{y,x^2\} - \frac1{720} \{y,x^4\} + \cdots

Here the expansion \frac{x}{e^x-1} = \sum_{k=0}^\infty \frac{B_k}{k!} x^k was used.
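
As a quick sanity check of the z_1 formula, here is a minimal numerical sketch (mine, not from Lutzky's paper): it compares the truncated Bernoulli series for z_1 with a finite-difference derivative of \ln(e^{x}e^{ty}) at t=0, for small random matrices x and y. The matrix size, the scaling, and the truncation order are arbitrary choices.

    # Check z_1 = {y, x/(e^x - 1)} = sum_k (B_k/k!) (-ad x)^k y  numerically.
    import numpy as np
    from math import factorial
    from scipy.linalg import expm, logm

    rng = np.random.default_rng(0)
    x = 0.15 * rng.standard_normal((4, 4))   # small, so every series converges fast
    y = 0.15 * rng.standard_normal((4, 4))

    ad = lambda a, b: a @ b - b @ a          # ad(a) b = [a, b]

    # truncated Bernoulli series (odd B_k vanish for k >= 3)
    B = {0: 1.0, 1: -0.5, 2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30, 10: 5/66, 12: -691/2730}
    term, z1_series = y, np.zeros_like(y)
    for k in range(13):
        if k in B:
            z1_series = z1_series + (B[k] / factorial(k)) * term
        term = -ad(x, term)                  # next power: (-ad x)^{k+1} y

    # z_1 is, by definition, d/dt log(e^x e^{t y}) at t = 0 (central difference)
    h = 1e-4
    zp = logm(expm(x) @ expm(+h * y))
    zm = logm(expm(x) @ expm(-h * y))
    z1_numeric = np.real(zp - zm) / (2 * h)

    print(np.max(np.abs(z1_series - z1_numeric)))   # should be ~1e-8 or smaller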

Calculating z_2

Definition:

  • \frac{x e^{\xi x}}{e^x-1} = \sum_{k=0}^\infty \varphi_k(\xi) x^k
  • \frac{e^{\xi x} -1}{e^x-1} = \sum_{k=0}^\infty \Psi_k(\xi) x^k
  • \varphi_k(\xi) = \frac1{k!} \sum_{m=0}^{k} \binom{k}{m} B_m \xi^{k-m}
  • \partial_\xi \varphi_{k+1}(\xi) = \varphi_k(\xi)
  • \varphi_k(0)= \frac1{k!}B_k,
  • \varphi_k(1)= \varphi_k(0) (for k\ne 1),
  • \varphi_0(\xi) \equiv 1
  • \int_0^1 \varphi_k(\xi) d\xi = \delta_{k,0},
  • \Psi_{k}(\xi) = \varphi_{k+1}(\xi) - \varphi_{k+1}(0),
  • \int_0^1 \varphi_k(\xi) \Psi_{l}(\xi)  d\xi = \delta_{k,0}\varphi_{l+1}(0) \theta(l) + \delta_{l,0}\varphi_{k+1}(0) \theta(k),
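
The listed properties are easy to confirm symbolically. Below is a small SymPy sketch (my own check, not from the paper): it reads \varphi_k and \Psi_k off the two generating functions above and verifies the derivative, integral, boundary-value and \Psi_k relations for the first few k.

    import sympy as sp

    x, xi = sp.symbols('x xi')
    N = 6
    gen_phi = sp.series(x * sp.exp(xi * x) / (sp.exp(x) - 1), x, 0, N).removeO()
    gen_Psi = sp.series((sp.exp(xi * x) - 1) / (sp.exp(x) - 1), x, 0, N).removeO()
    phi = [sp.expand(gen_phi).coeff(x, k) for k in range(N)]
    Psi = [sp.expand(gen_Psi).coeff(x, k) for k in range(N)]

    assert phi[0] == 1                                              # phi_0(xi) = 1
    for k in range(N - 1):
        assert sp.simplify(sp.diff(phi[k + 1], xi) - phi[k]) == 0   # d/dxi phi_{k+1} = phi_k
        assert sp.simplify(Psi[k] - (phi[k + 1] - phi[k + 1].subs(xi, 0))) == 0  # Psi_k relation
    for k in range(N):
        assert sp.integrate(phi[k], (xi, 0, 1)) == (1 if k == 0 else 0)          # = delta_{k,0}
        if k != 1:
            assert sp.simplify(phi[k].subs(xi, 1) - phi[k].subs(xi, 0)) == 0     # phi_k(1) = phi_k(0)

    print(phi[1], '|', phi[2])    # xi - 1/2 | xi**2/2 - xi/2 + 1/12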

Now:
z_2 = - \sum_{l=1}^\infty \sum_{k=0}^\infty \{y,x^l,y,x^k\} \frac{B_{l+1}B_k}{(l+1)!k!} -\frac12 \sum_{k=1}^\infty \sum_{l=1}^\infty \sum_{j=0}^\infty I_{kl} \{\Delta_{kl}, x^j\}  \frac{B_j}{j!}
where:

  • \{y,x^k,y\} :\equiv [\{y,x^k\},y],
  • \{y,x^l,y,x^k\} :\equiv \{\{y,x^l,y\},x^k\},
  • \Delta_{kl} :\equiv [\{y,x^l\},\{y,x^k\}] = - \Delta_{lk},
  • I_{kl} :\equiv \int_0^1 \varphi_k(\xi) \varphi_{l+1}(\xi) d\xi, (I_{l+2,l}=0)

The more compact expression is:
z_2 = -\frac12 \left\{ J, \frac{x}{e^x-1}\right\}
where
J :\equiv \int_0^1 d\xi \left[\{z_1,e^{x\xi}\}, \left\{z_1,\frac{e^{x\xi}-1}{x}\right\}\right] = \int_0^1 d\xi \left[\{y,\frac{xe^{x\xi}}{e^x-1}\}, \left\{y,\frac{e^{x\xi}-1}{e^{x}-1}\right\}\right].

The Hausdorff recursion formula:
z_n = \frac1{n} \left(\left\{y, \frac{x}{e^x-1}\right\} D_x \right) z_{n-1}.


Source: T1265 = M. Lutzky “Parameter Differentiation of Exponential Operators and the Baker-Campbell-Hausdorff Formula”; Journal of Mathematical Physics, Vol. 9, No. 7, July 1968.

2010.01.28

BCH (Baker–Campbell–Hausdorff) formula

Filed under: Uncategorized — Tags: , — sandokan65 @ 17:12

In this posting the matrix C(t) is defined by e^{C(t)}:\equiv e^{tA}e^{tB}, where A and B are constant matrices.

The Baker–Campbell–Hausdorff theorem states that, at t=1,

C(1) = B + \int_0^1 ds\, g(e^{s a} e^b ) A,

where g(z):\equiv \frac{\ln(z)}{z-1} = \sum_{m=0}^\infty \frac{(1-z)^m}{m+1}, and the lower-case letters a and b represent the adjoint actions of the corresponding matrices A and B (e.g. a X:\equiv ad(A) X :\equiv [A,X]).
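
Below is a minimal numerical sketch (mine, not from Richmond's paper) of this integral formula: the adjoint operators a and b are represented as n^2 x n^2 matrices acting on vectorized matrices, g is evaluated through its series, and the integral is done by Gauss–Legendre quadrature. The matrix size, the scaling of A and B, and all truncation orders are arbitrary choices.

    import numpy as np
    from scipy.linalg import expm, logm

    rng = np.random.default_rng(1)
    n = 3
    A = 0.03 * rng.standard_normal((n, n))   # small, so the series for g converges
    B = 0.03 * rng.standard_normal((n, n))

    I_n, I_ad = np.eye(n), np.eye(n * n)
    ad = lambda M: np.kron(M, I_n) - np.kron(I_n, M.T)   # ad(M) on row-major vec(X)
    a, b = ad(A), ad(B)

    def g(Z, terms=60):
        """g(Z) = ln(Z)/(Z - 1) evaluated via the series sum_m (1 - Z)^m/(m + 1)."""
        W, P, S = I_ad - Z, I_ad, np.zeros_like(Z)
        for m in range(terms):
            S = S + P / (m + 1)
            P = P @ W
        return S

    # C(1) = B + integral over [0,1] of g(e^{s a} e^{b}) A, by Gauss-Legendre quadrature
    ss, ws = np.polynomial.legendre.leggauss(20)
    ss, ws = 0.5 * (ss + 1.0), 0.5 * ws                  # map nodes from [-1, 1] to [0, 1]
    vecA = A.reshape(-1)
    integral = sum(w * (g(expm(s * a) @ expm(b)) @ vecA) for s, w in zip(ss, ws))
    C_formula = B + integral.reshape(n, n)

    C_direct = np.real(logm(expm(A) @ expm(B)))          # ln(e^A e^B) computed directly
    print(np.max(np.abs(C_formula - C_direct)))          # should be ~1e-12 or smaller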

One frequently encounters the following series expansion:

C(t) = t (A+B) + \frac{t^2}2 [A,B] + \frac{t^3}{12} ([[A,B],B]-[[A,B],A]) + \cdots = t (A+B) + \frac{t^2}2 a B + \frac{t^3}{12} (b^2 A + a^2 B) + \cdots


Source: T1269 = A.N.Richmond “Expansion for the exponential of a sum of matrices”; Int. Journal of Control (issue and year unknown).

The Trotter Formula

Filed under: mathematics — Tags: — sandokan65 @ 17:02

e^{t(A+B)} = \lim_{m\rightarrow \infty} \left(e^{\frac{t}{m}A}e^{\frac{t}{m}B}\right)^m.
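
A minimal numerical illustration (mine, not from the source) of the Trotter limit; the matrix size and the values of m are arbitrary choices.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(2)
    A = 0.5 * rng.standard_normal((4, 4))
    B = 0.5 * rng.standard_normal((4, 4))
    t = 1.0

    exact = expm(t * (A + B))
    for m in (1, 10, 100, 1000):
        step = expm(t / m * A) @ expm(t / m * B)
        approx = np.linalg.matrix_power(step, m)
        print(m, np.max(np.abs(approx - exact)))   # error decreases roughly like 1/m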


Source: T1269 = A.N.Richmond “Expansion for the exponential of a sum of matrices”; Int. Journal of Control (issue and year unknown).

The Zassenhaus formula

Filed under: mathematics — Tags: , — sandokan65 @ 16:49

e^{A+B} = e^{A}e^{B}e^{C_2}e^{C_3}\cdots
where

  • C_2 = -\frac12 [A,B] = -\frac12 a B,
  • C_3 = -\frac13 [[A,B],B] - \frac16 [[A,B],A] = -\frac13 b^2 A + \frac16 a^2 B, etc.

Here, as elsewhere on this site, I am using notation a:\equiv ad(A) etc.
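
A minimal numerical sketch (mine, not from the sources) of the first few Zassenhaus factors: for small random A and B, appending e^{C_2} and then e^{C_3} to e^{A}e^{B} should reduce the error against e^{A+B} step by step. The matrix size and scaling are arbitrary.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    A = 0.1 * rng.standard_normal((4, 4))
    B = 0.1 * rng.standard_normal((4, 4))
    comm = lambda X, Y: X @ Y - Y @ X

    C2 = -0.5 * comm(A, B)
    C3 = -comm(comm(A, B), B) / 3 - comm(comm(A, B), A) / 6

    target = expm(A + B)
    err = lambda M: np.max(np.abs(M - target))
    print(err(expm(A) @ expm(B)))                        # error without correction factors
    print(err(expm(A) @ expm(B) @ expm(C2)))             # smaller once e^{C_2} is included
    print(err(expm(A) @ expm(B) @ expm(C2) @ expm(C3)))  # smaller still with e^{C_3}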


Sources:

  • T1264 = W. Magnus “On the exponential solution of differential equations for a linear operator”; Communications on Pure and Applied Mathematics, Vol. VII, 649–673 (1954).
  • T1269 = A.N.Richmond “Expansion for the exponential of a sum of matrices”; Int. Journal of Control (issue and year unknown).

2010.01.26

Magnus on matrix exponentials

Definition: The Magnus’ bracket is defined as \{y,x^n\}_M :\equiv (- ad(x))^n y.

Properties:

  • e^{-ad(x)} y = \sum_{n=0}^\infty \frac1{n!} \{y, x^n\}_M
  • for a power series P(x)=\sum_{n=0}^\infty p_n x^n one has \{y, P(x)\}_M = \sum_{n=0}^\infty p_n \{y, x^n\}_M
  • Hausdorff:
    • e^{-x} (y D_x) e^x = \left\{y,\frac{e^x -1}{x}\right\}_M,
    • \left((y D_x) e^{x}\right) e^{-x} = \left\{y,\frac{1-e^{-x}}{x}\right\}_M,

    where the Hausdorff polarization operator (y D_x) is defined as following:

    (y D_x) x^n :\equiv \sum_{k=0}^{n-1} x^k y x^{n-k-1}.

  • Lemma: For two power series P(x) and Q(x) such that P(x) Q(x) = 1 one has the following equivalence:
      \{y,P(x)\}_M=u \Leftrightarrow y = \{u, Q(x)\}_M.
  • \{\{y,x^n\}_M,x^m\}_M = \{y,x^{n+m}\}_M.
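
As a numerical cross-check of the Magnus bracket and of the first Hausdorff identity above (a sketch of mine, not from Magnus’ paper): the polarization (y D_x) e^x is exactly the Fréchet derivative of the matrix exponential at x in the direction y, which SciPy provides as expm_frechet. Sizes, scaling and the truncation order are arbitrary.

    import numpy as np
    from math import factorial
    from scipy.linalg import expm, expm_frechet

    rng = np.random.default_rng(4)
    x = 0.4 * rng.standard_normal((4, 4))
    y = 0.4 * rng.standard_normal((4, 4))

    def magnus_bracket(y, x, n):
        """{y, x^n}_M = (-ad(x))^n y."""
        out = y
        for _ in range(n):
            out = -(x @ out - out @ x)
        return out

    # right-hand side: {y, (e^x - 1)/x}_M = sum_n {y, x^n}_M / (n + 1)!  (truncated)
    rhs = sum(magnus_bracket(y, x, n) / factorial(n + 1) for n in range(30))

    # left-hand side: e^{-x} (y D_x) e^x, where (y D_x) e^x = d/ds e^{x + s y}|_{s=0}
    _, dexp = expm_frechet(x, y)
    lhs = expm(-x) @ dexp

    print(np.max(np.abs(lhs - rhs)))   # should be near machine precision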

For e^{z} :\equiv e^{x}e^{y} one has:

  • z=\ln(1+u) = u - \frac12 u^2 + \frac13 u^3 + \cdots

    where
    u:\equiv e^x e^y - 1 = x + y + \frac12 x^2 + xy + \frac12 y^2 + \cdots
  • z = x + y + \frac12 [x,y] + \frac1{12} \{x,y^2\}_M + \frac1{12} \{y,x^2\}_M -\frac1{24} [\{x,y^2\}_M,x] - \frac1{720} \{x,y^4\}_M - \frac1{720} \{y,x^4\}_M + \frac1{180} [[\Delta_1,x],y] - \frac1{180}\{\Delta_1,y^2\}_M - \frac1{120} [\Delta_1,\Delta] - \frac1{360} [\Delta_2,\Delta] + \cdots

    where

    \Delta=[x,y],
    \Delta_1=[\Delta,x],
    \Delta_2=[\Delta,y].
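
A quick numerical sketch (mine) of the z=\ln(1+u) representation: for x and y small enough that ||u||<1, the scalar logarithm series applied to the matrix u reproduces \ln(e^x e^y). The matrix size, scaling and truncation order are arbitrary.

    import numpy as np
    from scipy.linalg import expm, logm

    rng = np.random.default_rng(5)
    x = 0.05 * rng.standard_normal((4, 4))
    y = 0.05 * rng.standard_normal((4, 4))

    u = expm(x) @ expm(y) - np.eye(4)
    z_series, power = np.zeros((4, 4)), np.eye(4)
    for m in range(1, 60):                        # z = sum_m (-1)^{m+1} u^m / m
        power = power @ u
        z_series = z_series + (-1) ** (m + 1) * power / m

    z_direct = np.real(logm(expm(x) @ expm(y)))
    print(np.max(np.abs(z_series - z_direct)))    # should be near machine precision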

The Magnus’ formula for ordered exponentials

And this is a really beautiful result: for the ordered exponential U(t|A):\equiv \left(e^{\int_0^t dt_1 A(t_1)}\right)_+ one can consider its logarithm (the Magnus function of A) \Omega(t|A) :\equiv \ln U(t|A) (i.e. e^{\Omega}=U) and obtain the following nonlinear ODE for it:


\frac{d\Omega(t)}{dt} = \{A, \frac{\Omega}{1-e^{-\Omega}}\}_M = \sum_{n=0}^\infty \beta_n \{A,\Omega^n\}_M = A + \frac12 [A,\Omega] + \frac1{12}\{A,\Omega^2\}_M + \cdots

where \beta_0=1, \beta_1=\frac12, and for n\ge 1: \beta_{2n+1}=0, \beta_{2n} = \frac{B_{2n}}{(2n)!} = (-)^{n-1}\frac{|B_{2n}|}{(2n)!}; here the B_n‘s are the ordinary Bernoulli numbers.
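
Here is a minimal numerical sketch (mine, not from Magnus’ paper) of this ODE: for a hypothetical time-dependent generator A(t)=A_0+tA_1 it integrates the series truncated at the \Omega^4 term with a Runge–Kutta stepper and compares \Omega against the logarithm of the time-ordered product built from small steps. Sizes, scaling, step counts and the truncation order are arbitrary choices.

    import numpy as np
    from scipy.linalg import expm, logm

    rng = np.random.default_rng(6)
    A0 = 0.2 * rng.standard_normal((3, 3))
    A1 = 0.2 * rng.standard_normal((3, 3))
    A = lambda t: A0 + t * A1                      # hypothetical time-dependent generator

    comm = lambda X, Y: X @ Y - Y @ X

    def omega_dot(t, Om):
        ad2 = comm(Om, comm(Om, A(t)))             # {A, Omega^2}_M = ad(Omega)^2 A
        ad4 = comm(Om, comm(Om, ad2))              # ad(Omega)^4 A
        return A(t) + 0.5 * comm(A(t), Om) + ad2 / 12 - ad4 / 720

    T, N = 1.0, 2000
    dt = T / N
    U, Om = np.eye(3), np.zeros((3, 3))
    for k in range(N):
        t = k * dt
        U = expm(A(t + dt / 2) * dt) @ U           # time-ordered product (U' = A(t) U)
        k1 = omega_dot(t, Om)                      # classical RK4 step for Omega
        k2 = omega_dot(t + dt / 2, Om + dt / 2 * k1)
        k3 = omega_dot(t + dt / 2, Om + dt / 2 * k2)
        k4 = omega_dot(t + dt, Om + dt * k3)
        Om = Om + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    print(np.max(np.abs(Om - np.real(logm(U)))))   # small; limited by dt and the truncation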


Source: T1264 = W. Magnus “On the exponential solution of differential equations for a linear operator”; Communications on Pure and Applied Mathematics, Vol. VII, 649–673 (1954). Note: This paper is probably the mother of this whole area of matrix exponentials.


Related here: Magnus’ Bracket Operator – https://eikonal.wordpress.com/2010/02/15/magnus-bracket-operator/

Expansions of the exponentials of the sums of matrices

Richmond’s formula #1: e^{t(A+B)} = e^{tA}e^{tB} + \sum_{r=0}^\infty E_r t^r, where:

  • E_{r+1} = \frac1{r+1}\left((A+B)E_r+[B,F_r]\right) \ with E_0=E_1=0,
  • F_{r+1} = \frac1{r+1}(A F_r + F_r B) \ with F_0=1.
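
A minimal numerical sketch (mine, not from Richmond's paper) of formula #1: build E_r and F_r from the recursions above and compare the truncated series with e^{t(A+B)}. The matrix size, the value of t and the truncation order R are arbitrary.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(7)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    t, R = 0.3, 25                                  # evaluation point and truncation order

    comm = lambda X, Y: X @ Y - Y @ X
    E = [np.zeros((3, 3))]                          # E_0 = 0 (the recursion then gives E_1 = 0)
    F = [np.eye(3)]                                 # F_0 = identity
    for r in range(R):
        E.append(((A + B) @ E[r] + comm(B, F[r])) / (r + 1))   # E_{r+1}
        F.append((A @ F[r] + F[r] @ B) / (r + 1))              # F_{r+1}

    series = expm(t * A) @ expm(t * B) + sum(E[r] * t**r for r in range(R + 1))
    print(np.max(np.abs(series - expm(t * (A + B)))))          # should be ~1e-12 or smaller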

Richmond’s formula #2: e^{t(A+B)} = \frac12(e^{tA}e^{tB}+e^{tB}e^{tA}) + \sum_{r=0}^\infty E'_r t^r, where:

  • E'_{r+1} = \frac1{r+1}((A+B)E'_r+\frac12[B,F_r]+\frac12[A,F^{*}_r]) \ with E'_0=E'_1=E'_2=0, and where the F_r‘s are the same as in Richmond’s formula #1.
  • The following bound holds: ||E'_r|| < \frac1{(r-1)!} (||A||+||B||)^r.

Note that the exponential generating function {\cal F}(z):\equiv \sum_{r=0}^\infty \frac{z^r}{r!} F_r satisfies the second order ODE: \partial_z (z\partial_z {\cal F}(z)) = A{\cal F}(z)+{\cal F}(z)B.

Source: T1269 = A.N.Richmond “Expansion for the exponential of a sum of matrices”; Int. Journal of Control (issue and year unknown).
