
Cent. Eur. J. Math., 10(1), 2012, 137-149
DOI: 10.2478/s11533-011-0074-3

Central European Journal of Mathematics

Numerical solution of the Maxwell equations
in time-varying media using Magnus expansion
Research Article

István Faragó 1, Ágnes Havasi 2, Róbert Horváth 3

1 Department of Applied Analysis and Computational Mathematics, Eötvös Loránd University, Pázmány P. sétány 1/C, Budapest, 1117, Hungary

2 Department of Meteorology, Eötvös Loránd University, Pázmány P. sétány 1/A, Budapest, 1117, Hungary

3 Institute of Mathematics, Department of Analysis, Budapest University of Technology and Economics, Egry J. u. 1, Budapest, 1111, Hungary

Received 15 May 2011; accepted 9 July 2011

Abstract: For the Maxwell equations in time-dependent media only finite difference schemes with time-dependent conductivity are known. In this paper we present a numerical scheme based on the Magnus expansion and operator splitting that can handle time-dependent permeability and permittivity too. We demonstrate our results with numerical tests.

MSC: 35Q61, 65L05, 65M06

Keywords: Maxwell equations • Numerical solution • Magnus expansion • Operator splitting


Versita Sp. z o.o.

1. Introduction
The Maxwell equations are the governing equations of electromagnetic phenomena and can be formulated as a system
of coupled partial differential equations

$$\frac{\partial(\varepsilon E)}{\partial t} = \nabla\times H - \sigma E - J_e, \qquad \frac{\partial(\mu H)}{\partial t} = -\nabla\times E, \qquad \nabla\cdot(\varepsilon E) = 0, \qquad \nabla\cdot(\mu H) = 0, \tag{1}$$

E-mail: faragois@cs.elte.hu

E-mail: hagi@nimbus.elte.hu

E-mail: rhorvath@math.bme.hu


where we have supposed that there are no free charges present in the space. When the material parameters (the permittivity ε, the permeability μ and the conductivity σ) and the electric source density Je, together with the auxiliary initial and boundary conditions, are given, then the unknown electric and magnetic field strengths (denoted, respectively, by E and H) can be determined.
The first efficient numerical solution method of the Maxwell equations was published in 1966 [22]. This is the so-called
Finite Difference Time Domain (FDTD) method [18]. The method applies two spatially staggered grids in order to
get a semi-discrete system of ordinary differential equations and then uses the so-called leap-frog scheme in the time
integration. Because the FDTD method is based on simple finite difference approximations, it is easy to understand and
easy to implement on computers. The method is in use up to the present day.
Media that have time-dependent material parameters such as permeability, permittivity or conductivity are called time-
varying or non-stationary media. In many cases such as sudden ionization of a gas, plasma or semiconductor crystal;
transience induced by laser pulse excitation; ball lightning explosion; electric arc discharge, etc. the Maxwell equations
must be solved with time-varying material parameters [7-9, 20]. An abrupt temporal change in the material parameters behaves like a boundary between two regions with different material properties: it splits the incident wave into direct and inverse waves. When the material parameters are changing, exact solutions of the equations can be given
only for very special geometrical settings. In [20], the system is transformed into a Volterra integral equation. In one-
dimensional cases, taking the Laplace transform of the equations could lead to the exact solution. In two dimensions, the
method of variable separation was successfully applied in [23]. Naturally, an exact analytical solution is impossible for
complicated geometries. In these cases numerical solvers are used. The original FDTD method was extended to time-
varying media e.g. in [13, 19, 21]. In these papers, however, only the conductivity is considered to be time-dependent.
The permittivity and permeability are supposed to be constants.
In this paper, we are going to show how the combination of operator splitting techniques with Magnus expansion results
in efficient numerical schemes for the time-varying Maxwell equations. Because the classical FDTD method can be
formulated also using the operator splitting technique for the system of ordinary differential equations obtained after
spatial semi-discretization, our method can be considered as a generalization of this method. Magnus expansion is used
in the computation of the exact solution of the split subproblems. Most of the properties of the new method are the same as those of the classical method: it is an explicit, time-domain method, and its convergence properties are also similar.
The paper is organized as follows. In Sections 2 and 3 we familiarize the reader with the Magnus expansion technique
and the operator splitting method, respectively. Then, in Section 4, the staggered grid spatial discretization of the
Maxwell equations is described. In Section 5, the new integration scheme is defined and analyzed. In the final section,
some numerical tests are given that support the theory introduced earlier.

2. Magnus expansion
The Magnus expansion method is used to write the solution of a non-autonomous Cauchy problem in an exponential
form. Let us suppose that we are given a Cauchy problem in the form

$$\frac{dy}{dt} = A(t)\,y + g(t), \quad t \in (0, T], \qquad y(0) = y_0, \tag{2}$$
where A(t) is a time-dependent n × n matrix, g is a time-dependent column vector with n elements and y0 is a given
initial vector. The fundamental solution Y(t) of (2) can be written in the form

$$Y(t) = \exp \Omega_A(t),$$

where the exponent Ω_A(t) is an infinite sum of nested integrals and commutators, Ω_A(t) = Ω_{A,1}(t) + Ω_{A,2}(t) + ..., with

$$\Omega_{A,1}(t) = \int_0^t A(s)\,ds, \qquad \Omega_{A,2}(t) = \frac{1}{2}\int_0^t\!\!\int_0^{s_1} [A(s_1), A(s_2)]\,ds_2\,ds_1, \qquad \ldots$$

This form of Ω_A(t) is the so-called Magnus series [14].


Remark 2.1.
It is worth noticing that if A(t) has the property that

$$[A(s_1), A(s_2)] = 0$$

for all values s_1, s_2 ∈ (0, T], then Ω_A(t) = Ω_{A,1}(t) = ∫_0^t A(s) ds. In this way, Y(t) = exp(∫_0^t A(s) ds).

It is known that if t is sufficiently small, namely if t satisfies the inequality

$$\int_0^t \|A(s)\|\,ds < \pi,$$

then the Magnus series is absolutely convergent provided that A(t) is bounded [12, 16]. Thus, for sufficiently small values of t, the solution of (2) can be written in the form

$$y(t) = Y(t)\left(\int_0^t Y^{-1}(s)\,g(s)\,ds + y_0\right).$$

If the initial condition is given at t = t_0 instead of t = 0, as will be the case in the numerical methods to be presented, then the lower limit of the integrals must be changed from 0 to t_0. In this case we will write Ω_A(t; t_0) instead of Ω_A(t). Similar notations are used for the terms of the Magnus series. Thus, the fundamental solution has the form Y(t) = exp Ω_A(t; t_0) and the solution is

$$y(t) = Y(t)\left(\int_{t_0}^t Y^{-1}(s)\,g(s)\,ds + y(t_0)\right).$$

A second order approximation for the solution can be given as

$$y(t) = \exp\bigl(\Omega_{A,1}(t; t_0)\bigr)\left(\int_{t_0}^t \exp\bigl(-\Omega_{A,1}(s; t_0)\bigr)\,g(s)\,ds + y(t_0)\right) + O\bigl((t - t_0)^3\bigr),$$

because the first truncated sum of the Magnus series gives the approximation Ω_A(t; t_0) = Ω_{A,1}(t; t_0) + O((t - t_0)^3). The integral in the expression of Ω_{A,1}(t; t_0) can be approximated numerically with the midpoint or the trapezoidal rule. Both methods result in a third order local error:

$$\Omega_{A,1}(t; t_0) = (t - t_0)\,A\!\left(\frac{t + t_0}{2}\right) + O\bigl((t - t_0)^3\bigr) = (t - t_0)\,\frac{A(t) + A(t_0)}{2} + O\bigl((t - t_0)^3\bigr).$$
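As an illustration of the midpoint rule above (a sketch that is not part of the original text: the 2 × 2 matrix A(t), the interval and the step counts are all assumed for demonstration), the second order "exponential midpoint" step y_{k+1} = exp((t_{k+1} − t_k) A((t_k + t_{k+1})/2)) y_k can be tested numerically:

```python
import numpy as np

def expm_taylor(M, terms=25):
    # Truncated Taylor series for exp(M); adequate here because the
    # argument dt * A(midpoint) has small norm.
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def magnus2(A, y0, T, n):
    # Midpoint rule for Omega_{A,1}: one matrix exponential per step.
    dt, y = T / n, np.array(y0, dtype=float)
    for k in range(n):
        y = expm_taylor(dt * A((k + 0.5) * dt)) @ y
    return y

A = lambda t: np.array([[0.0, 1.0], [-(1.0 + t) ** 2, 0.0]])  # assumed test matrix
ref = magnus2(A, [1.0, 0.0], 1.0, 4096)   # fine-step reference solution
errs = [np.linalg.norm(magnus2(A, [1.0, 0.0], 1.0, n) - ref) for n in (16, 32, 64)]
orders = [np.log2(errs[k] / errs[k + 1]) for k in range(2)]
print(orders)
```

Halving the step size should divide the error by about four, so both printed ratios are close to 2, in accordance with the local error estimate above.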

3. Operator splitting methods for linear systems of ODEs


Splitting methods are very useful techniques for the numerical solution of complicated time-dependent real-life problems.
The method was successfully applied e.g. to air-pollution models and to the numerical treatment of the Maxwell equations
with time-independent material parameters; e.g. [2, 10, 11, 24]. The basic idea of the method is to split the original
problem (2) into two subproblems as

$$\frac{dy^{(B)}}{dt} = B(t)\,y^{(B)} + g^{(B)}(t), \tag{B}$$

$$\frac{dy^{(C)}}{dt} = C(t)\,y^{(C)} + g^{(C)}(t), \tag{C}$$


t ∈ (0, τ], where A(t) = B(t) + C(t), g(t) = g^{(B)}(t) + g^{(C)}(t), and then to solve these systems cyclically on successive intervals, all of length 0 < τ ≤ T, using the solution of one system as the initial condition of the next one. The starting vector is y_0. The step size τ is called the splitting time-step. The most frequently applied splitting methods are the sequential splitting [1], the Strang-Marchuk (SM) splitting [15, 17] and the symmetrically weighted sequential (SWS) splitting [2, 5].
The sequential splitting method iterates as follows. First we solve system (B) on the interval [(n−1)τ, nτ], then we solve system (C) on the same interval. The solution of this system will be the approximate solution at the time instant nτ, n = 1, 2, ..., ⌊T/τ⌋.
The Strang-Marchuk splitting executes the following iteration. We solve system (B) on the interval [(n−1)τ, (n−1/2)τ], then system (C) on the whole interval [(n−1)τ, nτ], and then system (B) again on the interval [(n−1/2)τ, nτ]. The solution of this system will be the approximate solution at the time instant nτ, n = 1, 2, ..., ⌊T/τ⌋.
The symmetrically weighted sequential splitting comes from the symmetrization of the sequential splitting. In the sequential splitting the subproblems in (B)-(C) are solved in the order (B), (C). In the SWS splitting, the subproblems are solved also in the reverse order (C), (B), and then the results are averaged in order to obtain the initial vector of the next cycle.
The splitting procedure introduces a so-called splitting error. The local splitting error is defined as E_spl(τ) = y(τ) − y_spl(τ), where y_spl is the approximation of the exact solution at the point τ obtained by the splitting procedure. A splitting method is said to have accuracy of order p if E_spl = O(τ^{p+1}). This order p usually defines the convergence order of the corresponding splitting method. It is well known that the sequential, SM and SWS splitting methods have, respectively, first, second and second order accuracy when they are applied to autonomous homogeneous systems (A(t) ≡ A ∈ R^{d×d}, g(t) ≡ 0). It was proven in [3] that the methods preserve their order of accuracy also for autonomous inhomogeneous systems (A(t) ≡ A ∈ R^{d×d}) even when the inhomogeneous term depends on the independent variable t. Recently it has been shown that these orders are preserved also for non-autonomous systems [6].
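These splitting orders can be observed on a small assumed example (not from the paper): B and C below are nilpotent and do not commute, so exp(τB) = I + τB solves each subproblem exactly, and the sequential and Strang-Marchuk compositions exhibit first and second order, respectively.

```python
import numpy as np

# Non-commuting split A = B + C with nilpotent parts, so each subproblem
# is solved exactly by exp(tau*B) = I + tau*B (illustrative choice).
B = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[0.0, 0.0], [1.0, 0.0]])
I2 = np.eye(2)

def run(step, n):
    y, tau = np.array([1.0, 0.0]), 1.0 / n
    for _ in range(n):
        y = step(tau) @ y
    return y

sequential = lambda tau: (I2 + tau * C) @ (I2 + tau * B)
strang = lambda tau: (I2 + 0.5 * tau * B) @ (I2 + tau * C) @ (I2 + 0.5 * tau * B)

# Exact solution at T = 1: exp(B + C) y0 with B + C = [[0,1],[1,0]].
exact = np.array([np.cosh(1.0), np.sinh(1.0)])

orders = []
for step in (sequential, strang):
    e1 = np.linalg.norm(run(step, 64) - exact)
    e2 = np.linalg.norm(run(step, 128) - exact)
    orders.append(np.log2(e1 / e2))
print(orders)  # approximately [1, 2]
```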

Remark 3.1.
We note that the split subsystems are generally solved numerically, thus the splitting methods are combined with
numerical methods. The order of the combined method is the minimum of the order of the splitting method and the order
of the numerical schemes applied to the subsystems [4].

4. Semi-discretized Maxwell equations


We will apply the Magnus expansion technique and the operator splitting method to the semi-discretized system of the
Maxwell equations. We perform the semi-discretization exactly in the same way as in the FDTD method on the
staggered grids depicted in Figure 1. Consider the Maxwell equations (1) with time-dependent material parameters.
Thus ε = ε(t), μ = μ(t) and σ = σ(t), and we suppose that these parameters do not depend on the spatial coordinates.

Figure 1. A Yee cell with the positions of the unknowns on a non-colocated staggered grid.



We introduce the notations Ẽ = √ε E, H̃ = √μ H, J̃_e = J_e/√ε, and rewrite (1) in the form

$$\frac{\partial}{\partial t}\begin{bmatrix} \tilde{E} \\ \tilde{H} \end{bmatrix} = A \begin{bmatrix} \tilde{E} \\ \tilde{H} \end{bmatrix} - \begin{bmatrix} \tilde{J}_e \\ 0 \end{bmatrix}, \tag{3}$$

where

$$A = \begin{bmatrix} -\left(\dfrac{\sigma}{\varepsilon} + \dfrac{\varepsilon'}{2\varepsilon}\right) & \dfrac{1}{\sqrt{\varepsilon\mu}}\,\nabla\times \\[2mm] -\dfrac{1}{\sqrt{\varepsilon\mu}}\,\nabla\times & -\dfrac{\mu'}{2\mu} \end{bmatrix}.$$

The symbols ε' and μ' denote the time derivatives of the material parameters.
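For completeness, the electric row of (3) follows from (1) by differentiating Ẽ = √ε E and using the fact that ε and μ do not depend on the spatial coordinates (the magnetic row is obtained analogously):

```latex
\begin{aligned}
\frac{\partial \tilde{E}}{\partial t}
  &= \frac{\varepsilon'}{2\sqrt{\varepsilon}}\,E + \sqrt{\varepsilon}\,\frac{\partial E}{\partial t}
   = \frac{\varepsilon'}{2\varepsilon}\,\tilde{E}
     + \frac{1}{\sqrt{\varepsilon}}\bigl(\nabla\times H - \sigma E - J_e - \varepsilon' E\bigr) \\
  &= \frac{1}{\sqrt{\varepsilon\mu}}\,\nabla\times \tilde{H}
     - \Bigl(\frac{\sigma}{\varepsilon} + \frac{\varepsilon'}{2\varepsilon}\Bigr)\tilde{E}
     - \tilde{J}_e .
\end{aligned}
```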
Let us define the function E_x|_{i,j,k} : R → R as

$$E_x|_{i,j,k}(t) = \tilde{E}_x\!\left(i\,\frac{\Delta x}{2},\; j\,\frac{\Delta y}{2},\; k\,\frac{\Delta z}{2},\; t\right)$$

for all odd integers i and even integers j, k such that the point (i Δx/2, j Δy/2, k Δz/2) falls inside the computational domain Ω. The functions E_y|_{i,j,k}, E_z|_{i,j,k}, H_α|_{i,j,k} and (J̃_e)_α|_{i,j,k} (α = x, y or z) can be defined similarly, taking the positions of the field components into consideration (Figure 1). The above defined functions approximate the field components in the middle points of the faces and edges of the Yee cells.
Changing the spatial derivatives in (3) to central differences, we obtain the following differential equations for Ex |i,j,k
and Hz |i,j+1,k (and, similarly, for the other unknown functions):

$$\begin{aligned}
E_x|_{i,j,k}'(t) ={}& \frac{1}{\Delta y\sqrt{\mu(t)\varepsilon(t)}}\,H_z|_{i,j+1,k}(t) - \frac{1}{\Delta y\sqrt{\mu(t)\varepsilon(t)}}\,H_z|_{i,j-1,k}(t) + \frac{1}{\Delta z\sqrt{\mu(t)\varepsilon(t)}}\,H_y|_{i,j,k-1}(t) \\
&- \frac{1}{\Delta z\sqrt{\mu(t)\varepsilon(t)}}\,H_y|_{i,j,k+1}(t) - \left(\frac{\sigma(t)}{\varepsilon(t)} + \frac{\varepsilon'(t)}{2\varepsilon(t)}\right) E_x|_{i,j,k}(t) - (\tilde{J}_e)_x|_{i,j,k}(t), \\[2mm]
H_z|_{i,j+1,k}'(t) ={}& \frac{1}{\Delta y\sqrt{\mu(t)\varepsilon(t)}}\,E_x|_{i,j+2,k}(t) - \frac{1}{\Delta y\sqrt{\mu(t)\varepsilon(t)}}\,E_x|_{i,j,k}(t) + \frac{1}{\Delta x\sqrt{\mu(t)\varepsilon(t)}}\,E_y|_{i-1,j+1,k}(t) \\
&- \frac{1}{\Delta x\sqrt{\mu(t)\varepsilon(t)}}\,E_y|_{i+1,j+1,k}(t) - \frac{\mu'(t)}{2\mu(t)}\,H_z|_{i,j+1,k}(t)
\end{aligned} \tag{4}$$

(i is odd, j and k are even numbers). Listing all the equations for all discretization points, we arrive at the Cauchy problem

$$w'(t) = Z(t)\,w(t) + f(t), \qquad w(0) \text{ is given}, \tag{5}$$

where the vector-valued function w : R → R^{6N} consists of an arbitrary ordering of the functions E_x|_{i,j,k}, E_y|_{i,j,k}, E_z|_{i,j,k}, H_x|_{i,j,k}, H_y|_{i,j,k} and H_z|_{i,j,k} for all possible indices i, j, k. For every time instant t the matrix Z(t) ∈ R^{6N×6N} is a sparse matrix (at most five nonzero elements in each row), where N denotes the number of Yee cells in the computational domain. The components of the vector-valued function f(t) are defined by the sixth term on the right-hand side of (4); the components that belong to the magnetic field rows are zero. It will be useful in the sequel to write the matrix function Z(t) in the form Z(t) = M(t) + D(t), where the skew-symmetric matrix function M(t) comes from the discretization of the curl operator (the first four terms on the right-hand side of (4)), and the diagonal matrix function D(t) comes from the loss terms (the fifth term on the right-hand side of (4)).


5. Time-integration combining operator splitting and Magnus expansion


Let us split the matrix function M(t) into the form M(t) = M 1 (t) + M 2 (t), where M 1 (t) is obtained from M(t) by zeroing
the rows that belong to the magnetic field components and M 2 (t) is obtained similarly by zeroing the rows of the electric
field components. It is clear that indeed M(t) = M 1 (t) + M 2 (t). Similar splitting is applied for the matrix function D(t)
and for the vector function f (t). Thus D(t) and f (t) are split into the form D(t) = D 1 (t) + D 2 (t) and f (t) = f 1 (t) + f 2 (t),
respectively, where f 2 (t) 0. This splitting, that is splitting according to the electric and magnetic fields, is applied
in the classical FDTD scheme, too [10]. We formulate some propositions regarding the matrix functions M i (t), i = 1, 2.
Let X_1(t) and X_2(t) be two matrix functions of the same size defined on the interval (0, T]. We say that the matrix function X_2(t) covers the matrix function X_1(t) if, whenever (X_2)_{ij}(t) = 0 for all t ∈ (0, T] for some indices i and j, we also have (X_1)_{ij}(t) = 0 for all t ∈ (0, T].

Proposition 5.1.
Let X_1(t), X_2(t) be two matrix functions defined on (0, T]. Let us suppose that the matrix function M_i(t), i = 1, 2, covers both X_1(t) and X_2(t). Then we have X_1(s_1) X_2(s_2) = 0 for all values s_1, s_2 ∈ (0, T].

Proof. For the sake of simplicity let i = 1; the case i = 2 can be proven similarly. It follows from the zero-nonzero element pattern of M_1(t) that non-identically-zero elements can occur only at the intersections of the electric field rows and the magnetic field columns of the matrix functions X_1(t) and X_2(t). This fact, together with the row-column computation of the matrix product, implies that X_1(s_1) X_2(s_2) = 0 irrespective of the choice of s_1 and s_2.

Proposition 5.2.
For arbitrary parameters t_1, t_2 ∈ (0, T], the relation

$$\exp\left(\int_{t_1}^{t_2} M_i(s)\,ds\right) = I + \int_{t_1}^{t_2} M_i(s)\,ds, \qquad i = 1, 2,$$

is fulfilled.

Proof. Let T_{M_i}, i = 1, 2, denote the integral ∫_{t_1}^{t_2} M_i(s) ds. Because the matrix function M_i(t) covers the constant matrix T_{M_i}, we have (T_{M_i})^2 = 0 by Proposition 5.1. Thus the exponential of T_{M_i} can be calculated with the series of the exponential function as

$$\exp T_{M_i} = \sum_{k=0}^{\infty} \frac{T_{M_i}^k}{k!} = I + T_{M_i} = I + \int_{t_1}^{t_2} M_i(s)\,ds.$$

This completes the proof.

Remark 5.3.
If we do not want to or cannot compute the integral ∫_{t_1}^{t_2} M_i(s) ds exactly, then we can use the midpoint or the trapezoidal quadrature rule. If T̃_{M_i} is an approximation of T_{M_i} obtained with one of the above mentioned methods, then exp T̃_{M_i} = I + T̃_{M_i}. This follows from the fact that M_i(t) covers the matrix obtained by the quadrature rule for the integral ∫_{t_1}^{t_2} M_i(s) ds.
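The covering argument can be visualized on a toy block structure (an assumed miniature of the staggered system, not the paper's full matrix): with the electric unknowns ordered first, M_1 has nonzero entries only in the electric-row/magnetic-column block, so it is nilpotent and its exponential truncates after the linear term.

```python
import numpy as np

# Toy staggered structure (assumed for illustration): 2 electric and
# 2 magnetic unknowns; M1 acts only in the electric-rows/magnetic-columns block.
K = np.array([[2.0, -1.0], [0.5, 3.0]])
Z = np.zeros((2, 2))
M1 = np.block([[Z, K], [Z, Z]])   # M1 covers itself, hence M1 @ M1 = 0

def expm_taylor(M, terms=20):
    # Taylor series of the matrix exponential (all terms beyond the
    # linear one vanish here because M1 is nilpotent of index 2)
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

print(np.allclose(M1 @ M1, 0))                       # True
print(np.allclose(expm_taylor(M1), np.eye(4) + M1))  # True: exp(M1) = I + M1
```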

The next two propositions show that Cauchy problems with the coefficient matrix functions M i (t) and D i (t) can be solved
exactly if the elements of M i (t) and D i (t) can be integrated exactly, i = 1, 2.

Proposition 5.4.
The solution of Cauchy problems of the form

$$y'(t) = M_i(t)\,y(t), \quad t \in (0, T], \qquad y(0) \text{ is given}, \quad i = 1, 2,$$


can be written as

$$y(t) = \left(I + \int_0^t M_i(s)\,ds\right) y(0), \qquad t \in (0, T].$$

Proof. Because M i (t) covers itself, i = 1, 2, we obtain that

$$M_i(s_1)\,M_i(s_2) = 0, \qquad i = 1, 2,$$

for any s_1, s_2 ∈ (0, T]. This implies that [M_i(s_1), M_i(s_2)] = 0, i = 1, 2. Thus, applying the Magnus expansion and
Remark 2.1, we find that the solution of the Cauchy problem can be written in the form

$$y(t) = \exp\left(\int_0^t M_i(s)\,ds\right) y(0).$$

Then the statement of the previous proposition with the choices t1 = 0 and t2 = t results in the desired formula.

Proposition 5.5.
The solution of Cauchy problems in the form

$$y'(t) = D_i(t)\,y(t), \quad t \in (0, T], \qquad y(0) \text{ is given}, \quad i = 1, 2,$$

can be written as

$$y(t) = E_{D_i}(t)\,y(0), \qquad t \in (0, T],$$

where E_{D_i}(t) is a diagonal matrix function with the elements

$$\bigl(E_{D_i}(t)\bigr)_{jj} = \exp\left(\int_0^t \bigl(D_i(s)\bigr)_{jj}\,ds\right), \qquad j = 1, \ldots, 6N.$$

Proof. Because D_i(t) is diagonal, the commutator [D_i(s_1), D_i(s_2)] is zero for arbitrary parameters s_1, s_2 ∈ (0, T]. This implies that all terms of the Magnus series except the first one disappear (Remark 2.1), and the solution can be written in the form

$$y(t) = \exp\left(\int_0^t D_i(s)\,ds\right) y(0).$$

Then the statement follows from the fact that the exponential of a diagonal matrix is a diagonal matrix with the
exponentials of the elements of the original matrix.
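Proposition 5.5 amounts to integrating each diagonal entry separately. A minimal sketch, with an assumed loss function d(t) = −1/(1+t) that is not taken from the paper:

```python
# Diagonal subproblem y' = D(t) y with D(t) = diag(-1/(1+t), 0) (assumed example).
# By Proposition 5.5: y1(t) = exp(-ln(1+t)) y1(0) = y1(0)/(1+t), y2 stays constant.
def exact(y0, t):
    return [y0[0] / (1.0 + t), y0[1]]

def euler(y0, t, n):
    # brute-force explicit Euler cross-check of the closed form
    y, h = list(y0), t / n
    for k in range(n):
        y[0] += h * (-1.0 / (1.0 + k * h)) * y[0]
    return y

print(exact([2.0, 3.0], 1.0))          # [1.0, 3.0]
print(euler([2.0, 3.0], 1.0, 100000))  # approximately the same
```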

Proposition 5.6.
For the split matrix functions M_i(t) and D_i(t), i = 1, 2, the relation M_i(s_1) D_i(s_2) = 0, i = 1, 2, s_1, s_2 ∈ (0, T], is true.

Proof. Let i = 1; the proof of the case i = 2 is similar. In D_1(t) only the diagonal elements of the rows that belong to the electric field components may be nonzero. However, in M_1(t) the columns belonging to the electric field components are identically zero. Thus the product M_1(s_1) D_1(s_2) = 0 irrespective of the choice of s_1, s_2 ∈ (0, T].

Remark 5.7.
Let us notice that the product of the matrices M i (s1 ) and D i (s2 ) is zero only in the order M i (s1 )D i (s2 ). Thus M i (s1 ) and
D i (s2 ) generally do not commute.


While Cauchy problems with the coefficient matrices M_i(t) and D_i(t) can be solved exactly (provided that the integrals of the matrix elements can be computed exactly), systems with the sum M_i(t) + D_i(t) must be handled numerically. The reason for this is that D_i(s_1) M_i(s_2) ≠ 0 in general, thus the Magnus series has infinitely many nonzero terms. Let us consider the Cauchy problem

$$y'(t) = \bigl(M_i(t) + D_i(t)\bigr)\,y(t), \quad t \in (0, T], \qquad y(0) \text{ is given}, \quad i = 1, 2. \tag{6}$$

Let Δt be a sufficiently small positive constant, the so-called time-step, and let y_n ≈ y(nΔt) denote the approximation of the exact solution at the time instant nΔt. The next theorem suggests a second order numerical scheme for the solution of (6). We will suppose that the matrix functions M_i(t) and D_i(t) are at least twice continuously differentiable on the interval [0, T]. This can be guaranteed if ε(t) and μ(t) are at least three times continuously differentiable functions, σ(t) is an at least twice continuously differentiable function, and there exists a positive constant a_0 such that ε(t), μ(t) ≥ a_0, t ∈ [0, T].

Theorem 5.8.
Let us suppose that the matrix functions M_i(t) and D_i(t) are at least twice continuously differentiable on the interval [0, T]. Then the iteration

$$y_{n+1} = \left(I - \frac{\Delta t}{2}\,D_i\bigl((n+1)\Delta t\bigr)\right)^{-1} \left(I + \frac{\Delta t}{2}\,D_i(n\Delta t) + \Delta t\,M_i\!\left(\Bigl(n + \frac{1}{2}\Bigr)\Delta t\right)\right) y_n,$$

n = 0, 1, ..., y_0 = y(0), provides a second order numerical scheme for the Cauchy problem (6).

Proof. Let us introduce the notation D|_β for D_i(βΔt) and M|_β for M_i(βΔt), and let

$$F^{(\alpha)}|_\beta = I - \frac{\alpha}{2}\,D|_\beta, \qquad G^{(\alpha)}|_\beta = I + \frac{\alpha}{2}\,D|_{\beta - \alpha/\Delta t} + \alpha\,M|_{\beta - \alpha/(2\Delta t)},$$

where α and β are arbitrary positive real numbers. In these notations, and in the remainder of this proof, we omit the subscript i for the sake of simplicity. With these notations we have

$$y_n = \bigl(F^{(\Delta t)}|_n\bigr)^{-1} G^{(\Delta t)}|_n\,\bigl(F^{(\Delta t)}|_{n-1}\bigr)^{-1} G^{(\Delta t)}|_{n-1} \cdots \bigl(F^{(\Delta t)}|_1\bigr)^{-1} G^{(\Delta t)}|_1\, y_0. \tag{7}$$

Let us show first the stability of the scheme. We have to show that

$$\bigl\| \bigl(F^{(\Delta t)}|_n\bigr)^{-1} G^{(\Delta t)}|_n \cdots \bigl(F^{(\Delta t)}|_1\bigr)^{-1} G^{(\Delta t)}|_1 \bigr\|_2 \le C(T)$$

for all n ∈ N for which nΔt ≤ T. Here C(T) is a positive constant that depends only on the end point T of the considered time interval and ‖.‖_2 is the usual Euclidean norm. The Euclidean norm of the inverse of the diagonal matrix F^{(Δt)}|_k, k = 1, ..., n, is its spectral radius. Thus, because all eigenvalues of D|_k are non-positive and the matrix also has zero eigenvalues (half of all eigenvalues), we have

$$\bigl\| \bigl(F^{(\Delta t)}|_k\bigr)^{-1} \bigr\|_2 = \frac{1}{1 + (\Delta t/2)\,\min_j \bigl|\lambda_j(D|_k)\bigr|} = 1,$$

where λ_j(D|_k) denotes the jth eigenvalue of D|_k. Now we estimate the norm ‖G^{(Δt)}|_k‖_2, k = 1, ..., n:


$$\bigl\| G^{(\Delta t)}|_k \bigr\|_2 = \Bigl\| I + \frac{\Delta t}{2}\,D|_{k-1} + \Delta t\,M|_{k-1/2} \Bigr\|_2 \le 1 + \frac{\Delta t}{2}\,\bigl\|D|_{k-1}\bigr\|_2 + \Delta t\,\bigl\|M|_{k-1/2}\bigr\|_2 \le 1 + \frac{\Delta t}{2}\,d + 8\Delta t\,\frac{c_{\max}}{\delta}.$$

Here c_max = max_{t∈[0,T]} 1/√(ε(t)μ(t)) is the maximal speed of the electromagnetic waves in the computational domain and δ = min{Δx, Δy, Δz}. The value d is an upper bound for the norms ‖D|_{k-1}‖_2, k = 1, ..., n. This upper bound does exist because D|_{k-1} is diagonal and the functions in the diagonal are bounded from above in view of the twice continuous differentiability and the estimation ε(t), μ(t) ≥ a_0. The third term was obtained using the formula ‖M|_{k-1/2}‖_2^2 = ρ((M|_{k-1/2})^T M|_{k-1/2}). Because in one row or column of M|_{k-1/2} there are at most four nonzero elements, each of which can be estimated from above by c_max/δ, in one row of (M|_{k-1/2})^T M|_{k-1/2} there are at most 16 nonzero elements, each of which can be estimated from above by 4(c_max/δ)^2. Then the estimation ‖M|_{k-1/2}‖_2 ≤ 8 c_max/δ can be obtained with Gershgorin's theorem. Let us notice that all estimations are independent of the iteration step k.
At last, using (7), we have the following estimation:

$$\begin{aligned}
\bigl\| \bigl(F^{(\Delta t)}|_n\bigr)^{-1} G^{(\Delta t)}|_n \cdots \bigl(F^{(\Delta t)}|_1\bigr)^{-1} G^{(\Delta t)}|_1 \bigr\|_2 &\le \bigl\|G^{(\Delta t)}|_n\bigr\|_2\,\bigl\|G^{(\Delta t)}|_{n-1}\bigr\|_2 \cdots \bigl\|G^{(\Delta t)}|_1\bigr\|_2 \le \left(1 + \frac{\Delta t}{2}\,d + 8\Delta t\,\frac{c_{\max}}{\delta}\right)^{n} \\
&\le \exp\left(n\Delta t\left(\frac{d}{2} + 8\,\frac{c_{\max}}{\delta}\right)\right) \le \exp\left(T\left(\frac{d}{2} + 8\,\frac{c_{\max}}{\delta}\right)\right),
\end{aligned}$$

thus C(T) = exp(T(d/2 + 8c_max/δ)) is an appropriate choice.
As a second step we show that the method is consistent by proving that its local approximation error is O(Δt^3). This will ensure the second order convergence. The local error at the nth time level, denoted by ρ_{n+1}, can be calculated as follows:

$$\begin{aligned}
\rho_{n+1} &= y\bigl((n+1)\Delta t\bigr) - \bigl(F^{(\Delta t)}|_{n+1}\bigr)^{-1} G^{(\Delta t)}|_{n+1}\, y(n\Delta t) \\
&= \exp\bigl(\Omega_{M+D}((n+1)\Delta t;\, n\Delta t)\bigr)\, y(n\Delta t) - \bigl(F^{(\Delta t)}|_{n+1}\bigr)^{-1} G^{(\Delta t)}|_{n+1}\, y(n\Delta t) \\
&= \Bigl( \exp\bigl(\Omega_{M+D,1}((n+1)\Delta t;\, n\Delta t)\bigr) - \bigl(F^{(\Delta t)}|_{n+1}\bigr)^{-1} G^{(\Delta t)}|_{n+1} \Bigr)\, y(n\Delta t) + O(\Delta t^3).
\end{aligned}$$

The exponential function is approximated with its third partial sum:

$$\exp\bigl(\Omega_{M_i+D_i,1}((n+1)\Delta t;\, n\Delta t)\bigr) = I + \int_{n\Delta t}^{(n+1)\Delta t} \bigl(M_i(s) + D_i(s)\bigr)\,ds + \frac{1}{2}\left(\int_{n\Delta t}^{(n+1)\Delta t} \bigl(M_i(s) + D_i(s)\bigr)\,ds\right)^2 + O(\Delta t^3).$$

Then the integrals with M_i(t) are approximated by the midpoint rule, while the integrals with D_i(t) by the trapezoidal rule. Thus we have

$$\exp\bigl(\Omega_{M_i+D_i,1}((n+1)\Delta t;\, n\Delta t)\bigr) = I + \Delta t\left(M|_{n+1/2} + \frac{D|_n + D|_{n+1}}{2}\right) + \frac{\Delta t^2}{2}\left(M|_{n+1/2} + \frac{D|_n + D|_{n+1}}{2}\right)^{2} + O(\Delta t^3).$$

Substituting this into the formula of the local error ρ_{n+1}, expanding (F^{(Δt)}|_{n+1})^{-1} into a Neumann series and using the fact that D|_{n+1} = D|_n + O(Δt) (the derivative of D_i is bounded), we get that all terms with Δt^2 and those with lower powers cancel each other. Thus, based on the boundedness of the solution, the local error is of order O(Δt^3). This is what we wanted to prove.

Remark 5.9.
The scheme in Theorem 5.8 is essentially explicit because only the inverse of a diagonal matrix must be computed, that is, no solution of systems of linear algebraic equations is needed.
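The scheme of Theorem 5.8 can be exercised on a small synthetic system with the required block structure (an assumed 2 × 2 example, not the Maxwell matrix itself: M(t) is strictly upper triangular, so M(s_1)M(s_2) = 0 and M(s_1)D(s_2) = 0 as in Propositions 5.1 and 5.6, and D(t) is diagonal with a nonzero entry only in the "electric" row):

```python
import numpy as np

# Assumed toy data with the structure required by the theorem.
M = lambda t: np.array([[0.0, 1.0 + np.sin(t)], [0.0, 0.0]])
D = lambda t: np.diag([-1.0 / (1.0 + t), 0.0])
I2 = np.eye(2)

def solve(T, n):
    # y_{k+1} = (I - dt/2 D((k+1)dt))^{-1} (I + dt/2 D(k dt) + dt M((k+1/2)dt)) y_k
    dt, y = T / n, np.array([1.0, 1.0])
    for k in range(n):
        G = I2 + dt / 2 * D(k * dt) + dt * M((k + 0.5) * dt)
        y = np.linalg.solve(I2 - dt / 2 * D((k + 1) * dt), G @ y)
    return y

ref = solve(1.0, 4096)                       # fine-step reference solution
e1 = np.linalg.norm(solve(1.0, 32) - ref)
e2 = np.linalg.norm(solve(1.0, 64) - ref)
order = np.log2(e1 / e2)
print(order)  # close to 2
```

Note that `np.linalg.solve` is used only for compactness; for the diagonal F of the actual scheme the inversion is an elementwise division, which is why the method remains essentially explicit.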

Now the new second order accurate time integration scheme for the Maxwell equations with time-dependent material
parameters can be defined by combining the second order numerical scheme of Theorem 5.8 with the second order
Strang-Marchuk splitting. For the sake of simplicity we formulate the scheme first for homogeneous equations, that is, for the case when f(t) ≡ 0 in (5) (there are no free currents present in the computational space).


Thus we split system (5) as

$$w_1'(t) = \bigl(M_1(t) + D_1(t)\bigr)\,w_1(t), \qquad w_2'(t) = \bigl(M_2(t) + D_2(t)\bigr)\,w_2(t), \tag{8}$$

w_1(0) = w(0) is given, and then we apply the Strang-Marchuk splitting scheme. The split subproblems are solved with the numerical scheme of Theorem 5.8. The time step Δt of the numerical scheme is chosen to be the splitting time step τ. This results in the scheme

$$y_{n+1} = \bigl(F_1^{(\Delta t/2)}|_{n+1}\bigr)^{-1} G_1^{(\Delta t/2)}|_{n+1}\;\bigl(F_2^{(\Delta t)}|_{n+1}\bigr)^{-1} G_2^{(\Delta t)}|_{n+1}\;\bigl(F_1^{(\Delta t/2)}|_{n+1/2}\bigr)^{-1} G_1^{(\Delta t/2)}|_{n+1/2}\; y_n,$$

where y_0 is given through the initial condition. For the inhomogeneous case, that is when f(t) is not zero, system (8) must be extended with the system w_3'(t) = f(t), which is solved numerically with some second order quadrature rule such as the midpoint or trapezoidal rule.

Remark 5.10.
If the material parameters do not depend on time, then the scheme simplifies to the classical FDTD scheme. This is why the new scheme, similarly to the FDTD scheme, is conditionally stable.

6. Numerical tests
In order to justify the theoretical results obtained in the previous sections, we investigate first the time integration of the lossless and source-free one-dimensional system

$$\frac{\partial(\varepsilon E)}{\partial t} = \frac{\partial H}{\partial x}, \qquad \frac{\partial(\mu H)}{\partial t} = \frac{\partial E}{\partial x}, \tag{9}$$

where, for the sake of simplicity, the material parameters are chosen as μ(t) = 1 and ε(t) = (1 + t)^2. We solve the problem on the spatial interval [0, π]. Let us suppose that the component E is zero at the boundary points (perfect conductor boundaries) and that the initial conditions are defined as

$$E(0, x) = \sin x, \qquad H(0, x) = -\frac{\cos x}{2}.$$

It can be checked by differentiation that the exact solution of the system is

$$E(t, x) = \frac{1}{\sqrt{(1+t)^3}}\,\cos\frac{\sqrt{3}\,\ln(1+t)}{2}\,\sin x,$$

$$H(t, x) = \frac{1}{2\sqrt{1+t}}\left(-\cos\frac{\sqrt{3}\,\ln(1+t)}{2} + \sqrt{3}\,\sin\frac{\sqrt{3}\,\ln(1+t)}{2}\right)\cos x.$$

In the spatial discretization, the interval [0, π] is divided into N subintervals (Yee cells), which results in the mesh size Δx = π/N. The electric field components are discretized at the points x = iΔx, i = 1, ..., N−1, while the magnetic components at the points x = (i − 1/2)Δx, i = 1, ..., N. The global error of the electric field is measured at the time instant T = 5 and listed in Table 1 for different time steps and mesh sizes. The test results show well the second order accuracy of the new scheme. To be more precise, we note that we measure here the error of the spatial and the temporal discretization together. This can be done because the exact solution of the system is known. Because the staggered grid spatial discretization has second order accuracy, the numerical tests show that the order of the temporal accuracy is at least two.
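A compact sketch of this first test in code (our own 1D realization of the split scheme, not the authors' implementation; the field ordering, the shorter horizon T = 1 and the step coupling Δt = 1/(2N) are assumptions made for a quick run, and the exact solution used for the error is the one quoted above):

```python
import numpy as np

# 1D test problem: d(eps E)/dt = dH/dx, dH/dt = dE/dx on [0, pi],
# mu = 1, eps(t) = (1+t)^2, with E = 0 at the boundary. The scheme works
# on the scaled fields Etil = sqrt(eps) E (nodes) and H (midpoints).
eps = lambda t: (1.0 + t) ** 2
c = lambda t: 1.0 / np.sqrt(eps(t))    # 1 / sqrt(eps * mu)
d = lambda t: -1.0 / (1.0 + t)         # -eps'(t) / (2 eps(t)), the loss entry

def solve(N, T, steps):
    dx, dt = np.pi / N, T / steps
    xE = dx * np.arange(1, N)                  # interior E nodes
    xH = dx * (np.arange(1, N + 1) - 0.5)      # H midpoints
    Et = np.sin(xE)                            # Etil(0) = sqrt(eps(0)) sin x
    H = -0.5 * np.cos(xH)
    def sub1(t, delta):                        # electric substep (Theorem 5.8)
        nonlocal Et
        Hx = (H[1:] - H[:-1]) / dx             # dH/dx at the E nodes
        Et = ((1 + delta / 2 * d(t)) * Et
              + delta * c(t + delta / 2) * Hx) / (1 - delta / 2 * d(t + delta))
    def sub2(t, delta):                        # magnetic substep (D2 = 0, mu' = 0)
        nonlocal H
        Ep = np.concatenate(([0.0], Et, [0.0]))     # boundary values E = 0
        H += delta * c(t + delta / 2) * (Ep[1:] - Ep[:-1]) / dx
    for n in range(steps):                     # Strang-Marchuk composition
        t = n * dt
        sub1(t, dt / 2); sub2(t, dt); sub1(t + dt / 2, dt / 2)
    return Et / np.sqrt(eps(T)), xE            # recover E = Etil / sqrt(eps)

def error(N):
    E, xE = solve(N, 1.0, 2 * N)
    th = np.sqrt(3) / 2 * np.log(2.0)          # sqrt(3) ln(1+T) / 2 at T = 1
    exact = (1 + 1.0) ** -1.5 * np.cos(th) * np.sin(xE)
    return np.sqrt(np.pi / N) * np.linalg.norm(E - exact)  # discrete l2 norm

e1, e2 = error(32), error(64)
order = np.log2(e1 / e2)
print(e2, order)  # small error, observed order close to 2
```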


  N      Δt        Error       Order
 10     5/20     5.5324e-4
 20     5/40     1.4173e-4     1.9648
 40     5/80     3.5649e-5     1.9912
 80     5/160    8.9258e-6     1.9978
160     5/320    2.2323e-6     1.9995
320     5/640    5.5813e-7     1.9999
640     5/1280   1.3953e-7     2.0000

Table 1. The error, measured in l2 norm, of the electric field component at time level T = 5 for problem (9).

For the investigation of the inhomogeneous case let us consider the system

$$\frac{\partial(\varepsilon E)}{\partial t} = \frac{\partial H}{\partial x} - J, \qquad \frac{\partial(\mu H)}{\partial t} = \frac{\partial E}{\partial x}, \tag{10}$$

where the material parameters and the electric source density are defined as follows: μ = 1, ε(t) = e^{−t} and

$$J(t, x) = \bigl(e^{-t}(\sin t + \cos t) - \sin t\bigr)\,\sin x.$$

The initial conditions are set to be

$$E(0, x) = \sin x, \qquad H(0, x) = 0.$$

The exact solution of the system is

$$E(t, x) = \cos t\,\sin x, \qquad H(t, x) = \sin t\,\cos x.$$

The spatial discretization of the problem and the error computation are performed as in the previous example. The results are listed in Table 2 and show again the second order accuracy of the method.

  N      Δt        Error       Order
 10     5/20     1.3811e-3
 20     5/40     3.4118e-4     2.0172
 40     5/80     8.5018e-5     2.0047
 80     5/160    2.1238e-5     2.0011
160     5/320    5.3086e-6     2.0002
320     5/640    1.3271e-6     2.0001
640     5/1280   3.3176e-7     2.0001

Table 2. The error, measured in l2 norm, of the electric field component at time level T = 5 for problem (10).

7. Conclusion
In this paper we defined a new finite difference scheme for the numerical solution of the Maxwell equations with
time-dependent material parameters. The scheme is explicit like the classical FDTD scheme and in the case of time-
independent parameters it simplifies to the classical scheme. The new method is more complicated compared to the
stationary case in the sense that the iteration matrices must be recomputed at every time step. The new scheme was
constructed by applying the operator splitting method and the Magnus series expansion. The subsystems were then
solved numerically. The scheme is second order accurate both in spatial and temporal coordinates.


Acknowledgements
The paper is supported by the Hungarian Scientific Research Fund OTKA grant No. K67819. The European Union and the European Social Fund have provided financial support to the project under the grant agreement no. TÁMOP 4.2.1./B-09/1/KMR-2010-0003. This work is connected to the scientific program of the "Development of quality-oriented and harmonized R+D+I strategy and functional model at BME" project. This project is supported by the New Hungary Development Plan (Project ID: TÁMOP-4.2.1/B-09/1/KMR-2010-0002).

References

[1] Bagrinovski K.A., Godunov S.K., Difference schemes for multidimensional problems, Dokl. Akad. Nauk SSSR, 1957, 115, 431-433 (in Russian)
[2] Botchev M., Faragó I., Havasi Á., Testing weighted splitting schemes on a one-column transport-chemistry model, International Journal of Environment and Pollution, 2004, 22(1-2), 3-16
[3] Botchev M.A., Faragó I., Horváth R., Application of operator splitting to the Maxwell equations including a source term, Appl. Numer. Math., 2009, 59(3-4), 522-541
[4] Csomós P., Faragó I., Error analysis of the numerical solution of split differential equations, Math. Comput. Modelling, 2008, 48(7-8), 1090-1106
[5] Csomós P., Faragó I., Havasi Á., Weighted sequential splittings and their analysis, Comput. Math. Appl., 2005, 50(7), 1017-1031
[6] Faragó I., Havasi Á., Horváth R., On the order of operator splitting methods for non-autonomous systems (submitted)
[7] Fante R., Transmission of electromagnetic waves into time-varying media, IEEE Trans. Antennas and Propagation, 1971, 19(3), 417-424
[8] Felsen L., Whitman G., Wave propagation in time-varying media, IEEE Trans. Antennas and Propagation, 1970, 18(2), 242-253
[9] Harfoush F.A., Taflove A., Scattering of electromagnetic waves by a material half-space with a time-varying conductivity, IEEE Trans. Antennas and Propagation, 1991, 39(7), 898-906
[10] Horváth R., Uniform treatment of numerical time-integrations of the Maxwell equations, In: Proceedings of Scientific Computing in Electrical Engineering, Eindhoven, June 23-28, 2002, Math. Ind., 4, Springer, Berlin, 2003, 231-239
[11] Hundsdorfer W., Verwer J., Numerical Solution of Time-Dependent Advection-Diffusion-Reaction Equations, Springer Ser. Comput. Math., 33, Springer, Berlin, 2003
[12] Klarsfeld S., Oteo J.A., Recursive generation of higher-order terms in the Magnus expansion, Phys. Rev. A, 1989, 39(7), 3270-3273
[13] Lee J.H., Kalluri D.K., Three-dimensional FDTD simulation of electromagnetic wave transformation in a dynamic inhomogeneous magnetized plasma, IEEE Trans. Antennas and Propagation, 1999, 47(7), 1146-1151
[14] Magnus W., On the exponential solution of differential equations for a linear operator, Comm. Pure Appl. Math., 1954, 7(4), 649-673
[15] Marchuk G.I., Splitting Methods, Nauka, Moscow, 1988 (in Russian)
[16] Moan P.C., Oteo J.A., Ros J., On the existence of the exponential solution of linear differential systems, J. Phys. A, 1999, 32(27), 5133-5139
[17] Strang G., On the construction and comparison of difference schemes, SIAM J. Numer. Anal., 1968, 5(3), 506-517
[18] Taflove A., Hagness S.C., Computational Electrodynamics: The Finite-Difference Time-Domain Method, 3rd ed., Artech House, Boston, 2005
[19] Taylor C.D., Lam D.-H., Shumpert T.H., Electromagnetic scattering in time varying, inhomogeneous media, Interaction Notes, 41, Mississippi State University, State College, Mississippi, 1968
[20] Vorgul I., On Maxwell's equations in non-stationary media, Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 2008, 366(1871), 1781-1788
[21] Wu R., Gao B.-Q., The analysis of 3 dB microstrip directional coupler in time-varying media by FDTD method, In: 2nd International Conference on Microwave and Millimeter Wave Technology (ICMMT 2000), Beijing, 2000, 375-378


[22] Yee K.S., Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media, IEEE Trans. Antennas and Propagation, 1966, 14(3), 302-307
[23] Zhang Y., Gao B.-Q., Propagation of cylindrical waves in media of time-dependent permittivity, Chinese Phys. Lett., 2005, 22(2), 446-449
[24] Zlatev Z., Dimov I., Computational and Numerical Challenges in Environmental Modelling, Stud. Comput. Math., 13, Elsevier, Amsterdam, 2006
