where K is ℚ, ℝ, or ℂ, the DAE
  Eẋ = Ax + f,
and the Wong sequences
  V_0 := K^n,  V_{i+1} := A^{-1}(EV_i) ⊆ K^n,
  W_0 := {0},  W_{i+1} := E^{-1}(AW_i) ⊆ K^n.
Received by the editors March 1, 2011; accepted for publication (in revised form) by C.-H. Guo February 7, 2012; published electronically May 3, 2012.
http://www.siam.org/journals/simax/33-2/82627.html
Institut für Mathematik, Technische Universität Ilmenau, Weimarer Straße 25, 98693 Ilmenau, Germany (thomas.berger@tu-ilmenau.de). This author's work was supported by DFG grant Il25/9.
AG Technomathematik, Technische Universität Kaiserslautern, Erwin-Schrödinger-Str. Geb. 48, 67663 Kaiserslautern, Germany (trenn@mathematik.uni-kl.de). This author's work was supported by DFG grant Wi1458/10 while at the University of Würzburg.
We will show (see Theorem 3.2 and Remark 3.3) that the Wong sequences are sufficient to completely characterize the solution behavior of the DAE (1.1), including the characterization of consistent initial values as well as constraints on the inhomogeneity f.
The Wong sequences can be traced back to Dieudonné [14]; however, his focus is only on the first of the two Wong sequences. Bernhard [10] and Armentano [3] used the Wong sequences to carry out a geometric analysis of matrix pencils. In [27] the first Wong sequence is introduced as a fundamental geometric tool in the characterization of the subspace of consistent initial conditions of a regular DAE. Both Wong sequences are introduced in [26], where the authors obtain a quasi-Kronecker staircase form; however, they did not consider both Wong sequences in combination, so that the important role of the spaces V* ∩ W*, V* + W*, EV* ∩ AW*, EV* + AW* (see Definition 2.1, Figure 2.1, and Theorem 2.3) is not highlighted. They also appear in [1, 2, 20, 35]. In control theory, modified versions of the Wong sequences (where im B is added to EV_i and AW_i, resp.) have been studied extensively for not necessarily regular DAEs (see, e.g., [1, 4, 5, 6, 15, 22, 24, 28, 29]); they have been found to be the appropriate tool to construct invariant subspaces and the reachability space, and to provide a Kalman decomposition, just to name a few features. However, it seems that their relevance for a complete solution theory of DAEs (1.1) associated to a singular matrix pencil has been overlooked. We therefore believe that our solvability characterizations solely in terms of the Wong sequences are new.
The Wong sequences directly lead to a quasi-Kronecker triangular form (QKTF), i.e.,

            ⎡ sE_P − A_P      *           *      ⎤
  sE − A ≅ ⎢     0       sE_R − A_R      *      ⎥ ,
            ⎣     0           0       sE_Q − A_Q ⎦
where sE_R − A_R is a regular matrix pencil, sE_P − A_P is the underdetermined pencil, and sE_Q − A_Q is the overdetermined pencil (Theorem 2.3). With only a little more effort (invoking solvability of generalized Sylvester equations) we can get rid of the off-diagonal blocks and obtain a quasi-Kronecker form (QKF) (Theorem 2.6). From the latter it is easy to obtain the Kronecker canonical form (KCF) (Corollary 2.8), and hence another contribution of our work is a new proof for the KCF. We have to admit that our proof does not reach the elegance of the proof of Gantmacher [16]; however, Gantmacher does not provide any geometrical insight. At the other end of the spectrum, Armentano [3] uses the Wong sequences to obtain a result similar to ours (the QKTF but with more structured diagonal block entries); however, his approach is purely geometrical, so that it is not directly possible to deduce the transformation matrices which are necessary to obtain the QKTF or QKF. Our result overcomes this disadvantage because it presents geometrical insights and, at the same time, is constructive. Furthermore, our work is self-contained in the sense that only results on regular matrix pencils are assumed to be well known.
Different from other authors, we do not primarily aim to decouple the regular part of the matrix pencil, because (1) the decoupling into three parts which have the solution properties "existence, but nonuniqueness" (underdetermined part), "existence and uniqueness" (regular part), and "uniqueness, but possible nonexistence" (overdetermined part) seems very natural, and (2) the regular part can be further decoupled if necessary, again with the help of the Wong sequences, as we showed in [9]. Furthermore, we are also not aiming for a staircase form as is common in the numerical literature; here the matrix pencils sE_P − A_P, sE_R − A_R, and sE_Q − A_Q have no further structure, which has the advantage that the transformation matrices can be chosen to be simple (in any specified way).
Another advantage of our approach is that we respect the domain of the entries in the matrix pencil; e.g., if our matrices are real-valued, then all transformations remain real-valued. This is not true for results about the KCF, because, due to possible complex eigenvalues and eigenvectors, even in the case of real-valued matrices it is necessary to allow for complex transformations and complex canonical forms. This is often undesirable, because if one starts with a real-valued matrix pencil, one would like to get real-valued results. Therefore, we formulated our results in such a way that they are valid for K = ℚ, K = ℝ, and K = ℂ. Especially for K = ℚ it was also necessary to recheck known results as to whether or not their proofs remain valid in ℚ. We believe that the case K = ℚ is of special importance because it allows the implementation of our approach in exact arithmetic, which might be feasible if the matrices are sparse and not too big. In fact, we believe that the construction of the QK(T)F is also possible if the matrix pencil sE − A contains symbolic entries, as is common in the analysis of electrical circuits, where one might just put the symbol R into the matrix instead of a specific value of the corresponding resistor. However, we have not formalized this, but our running example will show that it is no problem to keep symbolic entries in the matrix. This is a major difference of our approach compared to those available in the literature, which often aim for unitary transformation matrices (due to numerical stability) and are therefore not suitable for symbolic calculations.
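As a minimal illustration of keeping symbolic entries exact (our own sketch, not taken from the paper; the 2×2 pencil below is hypothetical and unrelated to the circuit of Figure 1.1), one can work with a pencil containing a symbolic resistor R in sympy:

```python
import sympy as sp

# A toy 2x2 pencil sE - A with a symbolic resistor R, kept exact
# throughout the computation instead of substituting a numeric value.
s, R = sp.symbols('s R', positive=True)
E = sp.Matrix([[1, 0], [0, 0]])
A = sp.Matrix([[-R, 1], [0, 1]])

# det(sE - A) = -(s + R) is a nonzero polynomial for every value of the
# symbol R, so this toy pencil is regular regardless of the resistor value.
d = sp.factor((s * E - A).det())
```

Since the determinant stays a symbolic polynomial in s and R, regularity can be decided once for all resistor values.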
Fig. 1.1. An electrical circuit with sources and an open terminal, used as the origin of the DAE (1.2) (used as a running example). [Circuit labels in the figure: currents i_L, i_C, i_F, i_G, i_V, i_T, i_R, i_o, i_+; resistors R_F, R_G, R; node potentials p_+, p_o, p_T; sources I and V.]
and i_+ = 0 = i_−.
Collecting all defining equations of the circuit, we obtain 13 equations for 14 state variables:
(1.2)  Eẋ = Ax + f,

where E, A ∈ K^{13×14} are the sparse coefficient matrices assembled from the 13 circuit equations (their nonzero entries are ±1, ±C, R, R_F, and R_G) and the inhomogeneity f collects the source terms I and V. [The explicit 13×14 matrices are omitted here.]
The coefficient matrices are not square; hence the corresponding matrix pencil sE − A cannot be regular, and standard tools cannot be used to analyze this description of the circuit.
The paper is organized as follows. In section 2 we present our main results, in particular how the Wong sequences directly yield the QKTF (Theorem 2.3) and the minimal indices associated with the pencil (Theorem 2.9). Afterward, in section 3, we show how the QKF can be used to fully characterize the solution behavior of the corresponding DAE. The proofs of the main results are carried out in section 5; preliminary results are presented and proved in section 4. We close the introduction with the nomenclature used in this paper.
ℕ                the set of natural numbers
ℚ, ℝ, ℂ          the fields of rational, real, and complex numbers
K                either ℚ, ℝ, or ℂ
Gl_n(K)          the group of invertible n×n matrices over K
K[s]             the ring of polynomials with coefficients in K
K^{m×n}          the set of m×n matrices with entries in K
I or I_n         the identity matrix of size n×n
AS               := { Ax | x ∈ S }, the image of the subspace S under A
A^{-1}S          := { x | Ax ∈ S }, the preimage of S under A
A^{-*}S          := (A*)^{-1}S, the preimage of S under the conjugate transpose A*
S^⊥              := { x ∈ K^n | ∀ s ∈ S: x*s = 0 }, the orthogonal complement of S ⊆ K^n
rank_λ(λE − A)   the rank of λE − A for λ ∈ ℂ (for λ = ∞, the rank of E)
D_pwC∞           the space of piecewise-smooth distributions
V_0 := K^n,  V_{i+1} := A^{-1}(EV_i) ⊆ K^n,
W_0 := {0},  W_{i+1} := E^{-1}(AW_i) ⊆ K^n.
Let V* := ∩_{i∈ℕ} V_i and W* := ∪_{i∈ℕ} W_i be the limits of the Wong sequences. It is easy to see that the Wong sequences are nested, terminate, and satisfy

(2.1)  ∃ k* ∈ ℕ ∀ j ∈ ℕ:  V_0 ⊇ V_1 ⊇ ⋯ ⊇ V_{k*} = V_{k*+j} = V* = A^{-1}(EV*) ⊇ ker A,
        ∃ ℓ* ∈ ℕ ∀ j ∈ ℕ:  W_0 ⊆ ker E = W_1 ⊆ ⋯ ⊆ W_{ℓ*} = W_{ℓ*+j} = W* = E^{-1}(AW*),

as well as

(2.2)  AV* ⊆ EV*  and  EW* ⊆ AW*.
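The iterations (2.1) can also be carried out numerically. The following sketch (our own illustration, assuming numpy; not part of the original paper) computes orthonormal bases of V* and W* by realizing the preimage A^{-1}(im V) as the nullspace of a projected matrix:

```python
import numpy as np

def preimage(A, V, tol=1e-10):
    # Orthonormal basis of A^{-1}(im V) = {x : A x in im V}:
    # x lies in the preimage iff the component of A x orthogonal
    # to im V vanishes, i.e. iff (I - V V^+) A x = 0.
    P = np.eye(A.shape[0]) - V @ np.linalg.pinv(V)   # projector onto (im V)^perp
    M = P @ A
    _, svals, Vt = np.linalg.svd(M)
    rank = int(np.sum(svals > tol))
    return Vt[rank:].T                               # nullspace basis of M

def wong_limits(E, A):
    # V_{i+1} = A^{-1}(E V_i) from V_0 = K^n and W_{i+1} = E^{-1}(A W_i)
    # from W_0 = {0}; by (2.1) the dimensions stagnate after finitely many steps.
    n = E.shape[1]
    V = np.eye(n)
    while True:
        Vn = preimage(A, E @ V)
        if Vn.shape[1] == V.shape[1]:
            break
        V = Vn
    W = np.zeros((n, 0))
    while True:
        Wn = preimage(E, A @ W)
        if Wn.shape[1] == W.shape[1]:
            break
        W = Wn
    return V, W
```

Floating-point rank decisions make this only a numerical sketch; for exact results over ℚ the symbolic variant below (Listing 1) is preferable.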
For the example (1.2) the first Wong sequence terminates after one step: V_1 = V* is the image of a 14×3 basis matrix whose entries are 0, ±1, and R. [The explicit matrix is omitted here.]
The second Wong sequence terminates after two steps: W_1 ⊊ W_2 = W*, where W_1 and W_2 are the images of 14×12 and 14×13 basis matrices, respectively, with entries in {0, 1}. [The explicit matrices are omitted here.]
We carried out the calculation with MATLAB and its Symbolic Math Toolbox and the following short function for calculating the preimage:
Listing 1
MATLAB function for calculating a basis of the preimage A^{-1}(im S) for some matrices A and S.
function V = getPreImage(A, S)
  [m1, n1] = size(A); [m2, n2] = size(S);
  if m1 == m2
    % x with A*x in im(S)  <=>  [A, S]*[x; y] = 0 for some y
    H = null([A, S]);
    V = colspace(H(1:n1, :));
  else
    error('Both matrices must have the same number of rows');
  end
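For readers without the Symbolic Math Toolbox, the same computation can be sketched in Python with sympy (our own analogue of Listing 1, not part of the paper; the function name mirrors the MATLAB listing). Exact rational arithmetic matches the discussion of the case K = ℚ above:

```python
import sympy as sp

def get_pre_image(A, S):
    # Basis of the preimage A^{-1}(im S) = {x : A x in im S},
    # mirroring the MATLAB function of Listing 1.
    if A.rows != S.rows:
        raise ValueError("Both matrices must have the same number of rows")
    # [A, S] [x; y] = 0  <=>  A x = -S y lies in im S, so the x-part of a
    # nullspace basis of [A, S] spans the preimage.
    null = A.row_join(S).nullspace()
    cols = [v[:A.cols, :] for v in null]
    if not cols:
        return []
    return sp.Matrix.hstack(*cols).columnspace()
```

For example, with A = diag(1, 0) and im S = span(e_2), the preimage is span(e_2), since Ax = (x_1, 0)ᵀ must have vanishing first component.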
Before stating our main result, we repeat the result concerning the Wong sequences and regular matrix pencils.
Theorem 2.2 (the regular case, see [9]). Consider a regular matrix pencil sE − A ∈ K^{n×n}[s], i.e., m = n and det(sE − A) ∈ K[s] \ {0}. Let V* and W* be the limits of the corresponding Wong sequences. Choose any full rank matrices V and W such that im V = V* and im W = W*. Then T = [V, W] and S = [EV, AW]^{-1} are invertible and put the matrix pencil sE − A into quasi-Weierstraß form,

  (SET, SAT) = ( [ I 0 ; 0 N ], [ J 0 ; 0 I ] ),
In particular, in the regular case,
  V* ∩ W* = {0},   V* + W* = K^n,   EV* ∩ AW* = {0},   EV* + AW* = K^n.
These properties do not hold anymore for a general matrix pencil sE − A; see Figure 2.1 for an illustration of the situation.
Fig. 2.1. The relationship of the limits V* and W* of the Wong sequences of the matrix pencil sE − A ∈ K^{m×n}[s] in the general case; the numbers n_P, n_R, n_Q, m_P, m_R, m_Q ∈ ℕ denote the (differences of the) dimensions of the corresponding spaces. [The figure depicts the chains V* ∩ W* ⊆ V* + W* ⊆ K^n and EV* ∩ AW* ⊆ EV* + AW* ⊆ K^m.]
We are now ready to present our first main result, which states that the knowledge of the spaces V* and W* is sufficient to obtain the QKTF, which already captures most structural properties of the matrix pencil sE − A. With the help of the Wong sequences, Armentano [3] already obtained a similar result; however, his aim was to obtain a triangular form where the diagonal blocks are in canonical form. Therefore, his result is more general than ours; however, the price is a more complicated proof, and it is also not clear how to obtain the transformation matrices explicitly.
Theorem 2.3 (quasi-Kronecker triangular form (QKTF)). Let sE − A ∈ K^{m×n}[s] and consider the corresponding limits V* and W* of the Wong sequences as in Definition 2.1. Choose any full rank matrices P_1 ∈ K^{n×n_P}, P_2 ∈ K^{m×m_P}, R_1 ∈ K^{n×n_R}, R_2 ∈ K^{m×m_R}, Q_1 ∈ K^{n×n_Q}, Q_2 ∈ K^{m×m_Q} such that

  im P_1 = V* ∩ W*,       (V* ∩ W*) ⊕ im R_1 = V* + W*,       (V* + W*) ⊕ im Q_1 = K^n,
  im P_2 = EV* ∩ AW*,     (EV* ∩ AW*) ⊕ im R_2 = EV* + AW*,   (EV* + AW*) ⊕ im Q_2 = K^m.

Then T_trian = [P_1, R_1, Q_1] ∈ Gl_n(K) and S_trian = [P_2, R_2, Q_2]^{-1} ∈ Gl_m(K) transform sE − A into QKTF:

(2.3)  (S_trian E T_trian, S_trian A T_trian)
        = ( ⎡ E_P  E_PR  E_PQ ⎤   ⎡ A_P  A_PR  A_PQ ⎤
            ⎢  0    E_R   E_RQ ⎥ , ⎢  0    A_R   A_RQ ⎥ ),
            ⎣  0     0    E_Q  ⎦   ⎣  0     0    A_Q  ⎦
where
(i) E_P, A_P ∈ K^{m_P×n_P}, m_P < n_P, are such that rank_λ(λE_P − A_P) = m_P for all λ ∈ ℂ ∪ {∞},
(ii) E_R, A_R ∈ K^{m_R×n_R}, m_R = n_R, with sE_R − A_R regular, i.e., det(sE_R − A_R) ≢ 0,
(iii) E_Q, A_Q ∈ K^{m_Q×n_Q}, m_Q > n_Q, are such that rank_λ(λE_Q − A_Q) = n_Q for all λ ∈ ℂ ∪ {∞}.
The proof is carried out in section 5.
Remark 2.4. The sizes of the blocks in (2.3) are uniquely given by the matrix pencil sE − A because they depend only on the subspaces constructed by the Wong sequences and not on the choice of bases thereof. It is also possible that m_P = 0 (or n_Q = 0), which means that there are matrices with no rows (or no columns). On the other hand, if n_P = 0, n_R = 0, or m_Q = 0, then the P-blocks, R-blocks, or Q-blocks are not present at all. Furthermore, it is easily seen that if sE − A fulfills (i), (ii), or (iii) itself, then sE − A is already in QKTF with T_trian = P_1 = I, T_trian = R_1 = I, or T_trian = Q_1 = I, and S_trian = P_2^{-1} = I, S_trian = R_2^{-1} = I, or S_trian = Q_2^{-1} = I.
Remark 2.5. From Lemma 4.4 we know that E(V* ∩ W*) = EV* ∩ AW* = A(V* ∩ W*); hence
  EV* ∩ AW* = E(V* ∩ W*) + A(V* ∩ W*).
Furthermore, due to (2.2),
  EV* + AW* = E(V* + W*) + A(V* + W*).
Hence the subspace pairs (V* ∩ W*, EV* ∩ AW*) and (V* + W*, EV* + AW*) are reducing subspaces of the matrix pencil sE − A in the sense of [37] and are in fact the minimal and maximal reducing subspaces.¹
In our example (1.2) we have
  V* ∩ W* = V*,   V* + W* = W*,
and, with K := (R_G + R_F)/R_G, the spaces EV* ∩ AW* = EV* and EV* + AW* = AW* are given as images of explicit basis matrices whose entries involve only 0, ±1, K, R, and R_F. From these bases one reads off transformation matrices [P_1, R_1, Q_1] ∈ Gl_14(K) and [P_2, R_2, Q_2] ∈ Gl_13(K) as in Theorem 2.3. [The explicit matrices are omitted here.]
With this choice we obtain the QKTF (2.3) for our example: the pair (E, A) is transformed into block upper triangular form whose entries involve 0, ±1, C, CR, L, LK, R, R_F, and R_G. [The explicit transformed matrices are omitted here.]
The QKTF is already useful for the analysis of the matrix pencil sE − A and the associated DAE Eẋ = Ax + f. However, a complete decoupling of the different parts, i.e., a block diagonal form, is more satisfying from a theoretical viewpoint and is also a necessary step toward obtaining the KCF as a corollary. In the next result we show that we can transform any matrix pencil sE − A into a block diagonal form, which we call quasi-Kronecker form (QKF) because all the important features of the KCF are captured. In fact, it turns out that the diagonal blocks of the QKTF (2.3) are already the diagonal blocks of the QKF.
Theorem 2.6 (QKF). Using the notation from Theorem 2.3, the following equations are solvable for matrices F_1, F_2, G_1, G_2, H_1, H_2 of appropriate size:

(2.4a)  0 = E_RQ + E_R F_1 + F_2 E_Q,               0 = A_RQ + A_R F_1 + F_2 A_Q,
(2.4b)  0 = E_PR + E_P G_1 + G_2 E_R,               0 = A_PR + A_P G_1 + G_2 A_R,
(2.4c)  0 = (E_PQ + E_PR F_1) + E_P H_1 + H_2 E_Q,  0 = (A_PQ + A_PR F_1) + A_P H_1 + H_2 A_Q,

and for any such matrices,

  S := [ I G_2 H_2 ; 0 I F_2 ; 0 0 I ] S_trian = [P_2, R_2 − P_2 G_2, Q_2 − P_2 H_2 − R_2 F_2]^{-1}

and

  T := T_trian [ I G_1 H_1 ; 0 I F_1 ; 0 0 I ] = [P_1, R_1 + P_1 G_1, Q_1 + P_1 H_1 + R_1 F_1]

put sE − A into QKF:

(2.5)  (SET, SAT) = ( diag(E_P, E_R, E_Q), diag(A_P, A_R, A_Q) ),

where the block diagonal entries are the same as for the QKTF (2.3). In particular, the QKF (without the transformation matrices S and T) can be obtained with only the Wong sequences (i.e., without solving (2.4)). Furthermore, the QKF (2.5) is unique in the following sense:

(2.6)  (E, A) ≅ (E′, A′)  ⟺  (E_P, A_P) ≅ (E′_P, A′_P), (E_R, A_R) ≅ (E′_R, A′_R), (E_Q, A_Q) ≅ (E′_Q, A′_Q),

where E′_P, A′_P, E′_R, A′_R, E′_Q, A′_Q are the corresponding blocks of the QKF of the matrix pencil sE′ − A′.
The proof is carried out in section 5. In order to actually find solutions of (2.4), the following remark might be helpful.
Remark 2.7. Matrix equations of the form
  0 = M + PX + YQ,    0 = R + SX + YT
for given matrices M, P, Q, R, S, T of appropriate size can be written equivalently as a standard linear system

  ⎡ I ⊗ P   Qᵀ ⊗ I ⎤ ⎡ vec(X) ⎤      ⎡ vec(M) ⎤
  ⎣ I ⊗ S   Tᵀ ⊗ I ⎦ ⎣ vec(Y) ⎦  = − ⎣ vec(R) ⎦ ,

where ⊗ denotes the Kronecker product of matrices and vec(H) denotes the vectorization of the matrix H obtained by stacking all columns of H over each other.
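This vectorization can be set up directly in code. The following sketch (our own illustration, assuming numpy; the function name is hypothetical) builds exactly the Kronecker system of Remark 2.7 and solves it by least squares, which returns an exact solution whenever the coupled equations are solvable:

```python
import numpy as np

def solve_coupled_sylvester(M, P, Q, R, S, T):
    # Solve 0 = M + P X + Y Q and 0 = R + S X + Y T via vectorization,
    # using vec(P X) = (I kron P) vec(X) and vec(Y Q) = (Q^T kron I) vec(Y).
    m, q = M.shape
    n = P.shape[1]                      # X is n-by-q
    p = Q.shape[0]                      # Y is m-by-p
    A = np.block([[np.kron(np.eye(q), P), np.kron(Q.T, np.eye(m))],
                  [np.kron(np.eye(q), S), np.kron(T.T, np.eye(m))]])
    b = -np.concatenate([M.flatten(order='F'), R.flatten(order='F')])
    z = np.linalg.lstsq(A, b, rcond=None)[0]
    X = z[:n * q].reshape((n, q), order='F')
    Y = z[n * q:].reshape((m, p), order='F')
    return X, Y
```

For the equations (2.4) one would substitute, e.g., M = E_RQ, P = E_R, Q = E_Q, R = A_RQ, S = A_R, T = A_Q to obtain F_1 = X and F_2 = Y.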
For our example (1.2) we already know the QKF because, as stated in Theorem 2.6, the diagonal blocks are the same as for the QKTF. However, we do not yet know the final transformation matrices which yield the QKF. Therefore, we have to find solutions of (2.4); one choice consists of sparse matrices F_1, F_2, G_1, G_2, H_1 with entries in {0, ±1, 1/R} and H_2 = [0 0]. [The explicit matrices are omitted here.]
The transformation matrices S and T which put our example into QKF are then obtained as in Theorem 2.6; their entries are rational expressions in R, R_F, R_G, and K. [The explicit 13×13 matrix S and 14×14 matrix T are omitted here.]
where
  P_ε(s) = s[0, I_ε] − [I_ε, 0] ∈ K^{ε×(ε+1)}[s],  ε ∈ ℕ,
  J_ρ(s) = sI_ρ − J_ρ(λ) ∈ ℂ^{ρ×ρ}[s],  ρ ∈ ℕ, λ ∈ ℂ,
          where J_ρ(λ) is the upper triangular Jordan block of size ρ with eigenvalue λ,
  N_σ(s) = sN_σ − I_σ ∈ K^{σ×σ}[s],  σ ∈ ℕ,
          where N_σ is the nilpotent upper triangular Jordan block of size σ,
  Q_η(s) = s[0; I_η] − [I_η; 0] = P_η(s)ᵀ ∈ K^{(η+1)×η}[s],  η ∈ ℕ.
The numbers ε_i, ρ_i, σ_i, and η_i in Corollary 2.8 are usually called (degrees of) elementary divisors and minimal indices and play an important role in the analysis of matrix pencils; see, e.g., [16, 23, 24, 25]. More precisely, ρ_1, …, ρ_b are the degrees of the finite elementary divisors, σ_1, …, σ_c are the degrees of the infinite elementary divisors, ε_1, …, ε_a are the column minimal indices, and η_1, …, η_d are the row minimal indices. The minimal indices completely determine (under the standing assumption that 0 ≤ ε_1 ≤ ⋯ ≤ ε_a and 0 ≤ η_1 ≤ ⋯ ≤ η_d) the singular part of the KCF. The following result shows that the minimal indices can actually be determined via the Wong sequences. Another method for determining the minimal indices via the control theoretic version of the Wong sequences can be found in [24, Prop. 2.2]; however, the minimal indices there correspond to a feedback canonical form.
Theorem 2.9 (minimal indices). Consider the Wong sequences (2.1) and the notation of Corollary 2.8. Let K* := V* ∩ W*, Ŵ_i := (EV_{i−1})^⊥ for i = 1, 2, …, and K̂* := (AW*)^⊥ ∩ (EV*)^⊥. Then
  a = dim(W_1 ∩ K*),    d = dim(Ŵ_1 ∩ K̂*),
and with
  Δ_i := dim(W_{i+2} ∩ K*) − dim(W_{i+1} ∩ K*),    Δ̂_i := dim(Ŵ_{i+2} ∩ K̂*) − dim(Ŵ_{i+1} ∩ K̂*),
it holds that
  ε_1 = ⋯ = ε_{a−Δ_0} = 0,    ε_{a−Δ_{i−1}+1} = ⋯ = ε_{a−Δ_i} = i,  i = 1, 2, …,
and
  η_1 = ⋯ = η_{d−Δ̂_0} = 0,    η_{d−Δ̂_{i−1}+1} = ⋯ = η_{d−Δ̂_i} = i,  i = 1, 2, ….
Furthermore, the minimal indices are invariant under matrix pencil equivalence. In
particular, the KCF is unique.
The proof is carried out in section 5. This proof uses the KCF and, therefore, in particular it uses Lemma 4.12 and Corollary 4.13, which provide an explicit method to obtain the KCF for the full rank matrix pencils sE_P − A_P and sE_Q − A_Q in the QKF (2.5). However, if one is only interested in the singular part of the KCF (without the necessary transformation), the above result shows that knowledge of the Wong sequences is already sufficient; there is no need to actually carry out the tedious calculations of Lemma 4.12 and Corollary 4.13.
Note that Δ_{i−1} = Δ_i or Δ̂_{i−1} = Δ̂_i is possible for some i. In that case there are no minimal indices of value i because the corresponding index range is empty. Furthermore, once Δ_i = 0 or Δ̂_i = 0 for some index i, then Δ_{i+j} = 0 and Δ̂_{i+j} = 0 for all j ≥ 0.
U(s) ∈ K^{n×n}[s] is called unimodular if and only if it is invertible within the ring K^{n×n}[s], i.e., there exists V(s) ∈ K^{n×n}[s] such that U(s)V(s) = I.
Lemma 3.1 (existence of unimodular inverse). Consider a matrix pencil sE − A ∈ K^{m×n}[s], m ≠ n, such that rank_λ(λE − A) = min{m, n} for all λ ∈ ℂ. Then there exist polynomial matrices M(s) ∈ K^{n×m}[s] and K(s) ∈ K^{n′×m′}[s], n′, m′ ∈ ℕ, such that, if m < n, [M(s), K(s)] is unimodular and
  (sE − A)[M(s), K(s)] = [I_m, 0],
or, if m > n, the stacked matrix [M(s); K(s)] is unimodular and
  [M(s); K(s)](sE − A) = [I_n; 0].
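Unimodularity is easy to test in practice: a polynomial matrix is invertible over K^{n×n}[s] exactly when its determinant is a nonzero constant (then the adjugate formula yields a polynomial inverse). A small sympy sketch of this check (our own illustration, not from the paper):

```python
import sympy as sp

s = sp.symbols('s')

def is_unimodular(U):
    # U(s) over K[s] is unimodular iff det U(s) is a nonzero constant:
    # then U(s)^{-1} = adj(U(s)) / det U(s) is again a polynomial matrix.
    d = sp.expand(U.det())
    return d != 0 and sp.degree(d, gen=s) == 0
```

For instance, [[1, s], [0, 1]] is unimodular (determinant 1), while [[s, 0], [0, 1]] is not (determinant s vanishes at s = 0).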
In the following, let

(3.1)  [ I 0 0 ; 0 S_R 0 ; 0 0 I ] S (sE − A) T [ I 0 0 ; 0 T_R 0 ; 0 0 I ]
        = diag( sE_P − A_P, sI − J, sN − I, sE_Q − A_Q ),

and let (f_P, f_J, f_N, f_Q) := [ I 0 0 ; 0 S_R 0 ; 0 0 I ] S f, where the splitting corresponds to the block sizes in (3.1). According to Lemma 3.1, choose unimodular matrices [M_P(s), K_P(s)] ∈ ℝ^{n_P×(m_P+(n_P−m_P))}[s] and [M_Q(s); K_Q(s)] ∈ ℝ^{(n_Q+(m_Q−n_Q))×m_Q}[s] such that
  (sE_P − A_P)[M_P(s), K_P(s)] = [I, 0]  and  [M_Q(s); K_Q(s)](sE_Q − A_Q) = [I; 0].
If this is the case, then an initial value x⁰ = T [ I 0 0 ; 0 T_R 0 ; 0 0 I ] (x⁰_P, x⁰_J, x⁰_N, x⁰_Q) is consistent at t_0 ∈ ℝ, i.e., there exists a solution of the initial value problem

(3.2)  Eẋ = Ax + f,  x(t_0) = x⁰,

if and only if

(3.3)  x⁰_Q = ( M_Q(d/dt)(f_Q) )(t_0)  and  x⁰_N = −( ∑_{k=0}^{n_N−1} N^k (d/dt)^k (f_N) )(t_0).
In that case, x = T [ I 0 0 ; 0 T_R 0 ; 0 0 I ] (x_P, x_J, x_N, x_Q) solves the initial value problem (3.2), where
  x_P = M_P(d/dt)(f_P) + K_P(d/dt)(u_{x⁰_P}),
  x_J = e^{J(·−t_0)} x⁰_J + e^{J·} ∫_{t_0} e^{−J·} f_J,
  x_N = −∑_{k=0}^{n_N−1} N^k (d/dt)^k (f_N),
  x_Q = M_Q(d/dt)(f_Q),
and where u_{x⁰_P} ∈ S^{n_P−m_P} is such that the initial condition at t_0 for x_P is satisfied (which is always possible due to Lemma 4.17), but apart from that, arbitrary.
The proof is carried out in section 5. Note that the antiderivative operator ∫_{t_0}: S → S, f ↦ F, as used in Theorem 3.2, is uniquely defined by the two properties (d/dt)F = f and F(t_0) = 0 (for S = C^∞ this is well known, and for S = D_pwC∞ this is shown in [34, Prop. 3]; see also [33, Prop. 2.3.6]).
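For smooth inhomogeneities, this antiderivative operator is just definite integration from t_0, as the following sympy sketch shows (our own illustration covering only S = C^∞; the distributional case D_pwC∞ is outside its scope):

```python
import sympy as sp

t = sp.symbols('t')

def antiderivative(f, t0):
    # The operator f -> F with dF/dt = f and F(t0) = 0 used in Theorem 3.2,
    # realized for smooth symbolic f as the definite integral from t0 to t.
    return sp.integrate(f, (t, t0, t))
```

Both defining properties can be verified directly, e.g. for f = cos(t) with t_0 = 0 one obtains F = sin(t), which satisfies F′ = cos(t) and F(0) = 0.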
We want to use Theorem 3.2 to characterize the solutions of our example DAE (1.2). We first observe that the regular part can be brought into quasi-Weierstraß form sN − I by premultiplying with S_R = A_R^{-1}. In particular, the J-part is nonexistent, which means that the circuit contains no classical dynamics. We choose unimodular matrices [M_P(s), K_P(s)] and [M_Q(s); K_Q(s)] as in Lemma 3.1; their entries are polynomials of degree at most one (involving the terms CRs and LKs). Furthermore, f_P = V/(R_G + R_F), the J-part of the inhomogeneity is absent, and f_N and f_Q are determined by the sources I and V, with T_R = I; hence the DAE (1.2) is solvable if and only if
  0 = K_Q(d/dt)(f_Q) = LK (d/dt)I + V,
or, equivalently,
  V = −LK (d/dt)I,
i.e., the voltage source must be proportional to the change of the current provided by the current source. In that case, the initial value must fulfill
  x(0) = T [⋯, f_N(0)ᵀ, (M_Q(d/dt)(f_Q))(0)ᵀ]ᵀ
(the remaining components being free), which yields, in particular, i_F(0) = V(0)/(R_F + R_G); the other componentwise conditions are analogous. [Explicit matrices and conditions are omitted here.]
and the remaining variables are determined, in terms of free functions u_1, u_2, by
  p_T(u_1) = V + R u_1,    i_V(u_1, u_2) = u_2 − V/(R_F + R_G),    i_R(u_1) = u_1 + V/R,
  i_o(u_2) = u_2,    i_T(u_1) = u_1 − V/R − CR u̇_1.
Splitting the regular part into its J- and N-components yields

  [ I 0 0 ; 0 S_R 0 ; 0 0 I ] S_trian (sE − A) T_trian [ I 0 0 ; 0 T_R 0 ; 0 0 I ]
   = ⎡ sE_P − A_P   sE_PJ − A_PJ   sE_PN − A_PN   sE_PQ − A_PQ ⎤
     ⎢     0           sI − J           0         sE_JQ − A_JQ ⎥
     ⎢     0             0           sN − I       sE_NQ − A_NQ ⎥ .
     ⎣     0             0              0         sE_Q − A_Q   ⎦
The corresponding conditions for the Q-part remain the same; in the condition for the N-part the inhomogeneity f_N is replaced by f_N − (E_NQ (d/dt) − A_NQ)(x_Q), in the J-part the inhomogeneity f_J is replaced by f_J − (E_JQ (d/dt) − A_JQ)(x_Q), and in the P-part the inhomogeneity f_P is replaced by f_P − (E_PJ (d/dt) − A_PJ)(x_J) − (E_PN (d/dt) − A_PN)(x_N) − (E_PQ (d/dt) − A_PQ)(x_Q).
4. Useful lemmas. In this section we collect several lemmas which are needed to prove the main results. Since we use results from different areas, we group the lemmas accordingly into subsections.
4.1. Standard results from linear algebra.
Lemma 4.1 (orthogonal complements and (pre-)images). For any matrix M ∈ K^{p×q} we have:
(i) for all subspaces S ⊆ K^p it holds that (M^{-1}S)^⊥ = M*(S^⊥);
(ii) for all subspaces S ⊆ K^q it holds that (MS)^⊥ = M^{-*}(S^⊥).
Proof. Property (i) is shown, e.g., in [7, Property 3.1.3]. Property (ii) follows from considering (i) for M*, S^⊥ instead of M, S and then taking orthogonal complements.
Lemma 4.2 (rank of matrices). Let A, B ∈ K^{m×n} with im B ⊆ im A. Then for almost all c ∈ K,
  rank A = rank(A + cB)  or, equivalently,  im A = im(A + cB).
In fact, rank(A + cB) < rank A can hold for at most r = rank A many values of c.
Proof. Consider the Smith form [32] of A,
  UAV = [ Σ_r 0 ; 0 0 ]
with invertible matrices U ∈ K^{m×m} and V ∈ K^{n×n} and Σ_r = diag(σ_1, σ_2, …, σ_r), σ_i ∈ K \ {0}, r = rank A. Write
  UBV = [ B_11 B_12 ; B_21 B_22 ],
where B_11 ∈ K^{r×r}. Since im B ⊆ im A, it follows that B_21 = 0 and B_22 = 0. Hence we obtain the following implications:
  rank(A + cB) < rank A  ⟹  rank[Σ_r + cB_11, cB_12] < rank[Σ_r, 0] = r
                         ⟹  rank(Σ_r + cB_11) < r  ⟹  det(Σ_r + cB_11) = 0.
Since det(Σ_r + cB_11) is a polynomial in c of degree at most r but not the zero polynomial (since det(Σ_r) ≠ 0), it can have at most r zeros. This proves the claim.
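Lemma 4.2 is easy to observe numerically (our own illustration, assuming numpy): if B = AC for some C, then A + cB = A(I + cC) has the same rank as A for every c that avoids the finitely many exceptional values.

```python
import numpy as np

rng = np.random.default_rng(1)
# Construct B with im B contained in im A by taking B = A @ C.
A = rng.standard_normal((5, 4)) @ rng.standard_normal((4, 6))   # rank 4
B = A @ rng.standard_normal((6, 6))

r = np.linalg.matrix_rank(A)
# Generic values of c preserve the rank (Lemma 4.2); only at most
# rank A many values of c could lower it.
ranks = [np.linalg.matrix_rank(A + c * B) for c in (0.5, 1.0, 2.0, 3.0)]
```

With the random data above, all four perturbed ranks coincide with rank A = 4.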
Lemma 4.3 (dimension formulae). Let S ⊆ K^n be any linear subspace of K^n and M ∈ K^{m×n}. Then
  dim MS = dim S − dim(ker M ∩ S).
Furthermore, for any two linear subspaces S, T of K^n we have
  dim(S + T) = dim S + dim T − dim(S ∩ T).
Proof. See any textbook on linear algebra.
4.2. The Wong sequences. The next lemma highlights an important property of the intersection of the limits of the Wong sequences.
Lemma 4.4 (property of V* ∩ W*). Let sE − A ∈ K^{m×n}[s] and let V*, W* be the limits of the corresponding Wong sequences. Then
  E(V* ∩ W*) = EV* ∩ AW* = A(V* ∩ W*).
Proof. Clearly, invoking (2.2),
  E(V* ∩ W*) ⊆ EV* ∩ EW* ⊆ EV* ∩ AW*  and  A(V* ∩ W*) ⊆ AV* ∩ AW* ⊆ EV* ∩ AW*;
hence it remains to show the converse subspace relationship. To this end we choose x ∈ EV* ∩ AW*, which implies the existence of v ∈ V* and w ∈ W* such that
  Ev = x = Aw;
hence
  v ∈ E^{-1}{Aw} ⊆ E^{-1}(AW*) = W*  and  w ∈ A^{-1}{Ev} ⊆ A^{-1}(EV*) = V*.
The Wong sequences V̂_i and Ŵ_i of the (conjugate transposed) pencil sE* − A* are linked to those of sE − A via orthogonal complements:
  Ŵ_{i+1} = (EV_i)^⊥  and  V̂_{i+1} = (AW_i)^⊥;
indeed, by Lemma 4.1,
  ( E A^{-1}(EV_i) )^⊥ = E^{-*}( (A^{-1}(EV_i))^⊥ ) = E^{-*}( A*(EV_i)^⊥ ) = E^{-*}( A* Ŵ_{i+1} ) = Ŵ_{i+2},
and in the limit V̂* = (AW*)^⊥ and Ŵ* = (EV*)^⊥.
x_{ℓ+1} = ∑_{i=0}^{ℓ} α_i x_i for some α_0, …, α_ℓ ∈ K. Consider the linear combination of shifted singular chains
  α_0 (0, 0, …, 0, x_0, x_1, …, x_ℓ, x_{ℓ+1}, …, x_{k−1}, x_k),
  α_1 (0, 0, …, x_0, x_1, …, x_ℓ, x_{ℓ+1}, …, x_{k−1}, x_k, 0),
  α_2 (0, 0, …, x_0, x_1, …, x_{k−1}, x_k, 0, 0),
    ⋮
  α_ℓ (x_0, x_1, …, x_k, 0, …, 0, 0),
and denote its sum by (z_0, z_1, …, z_{k+ℓ}). Note that by construction z_ℓ = ∑_{i=0}^{ℓ} α_i x_i = x_{ℓ+1}. Now consider the singular chain (v_0, v_1, …, v_{k+ℓ+1}) := (x_0, x_1, …, x_k, 0, …, 0) − (0, z_0, z_1, …, z_{ℓ+k}), which has the property that v_{ℓ+1} = x_{ℓ+1} − z_ℓ = 0. In particular, (v_0, v_1, …, v_ℓ) and (v_{ℓ+2}, v_{ℓ+3}, …, v_{k+ℓ+1}) are both singular chains. Furthermore (abbreviating α_i I with α_i), the vectors v_0, v_1, …, v_{k+ℓ+1} arise from x_0, x_1, …, x_k via multiplication by an invertible lower block triangular matrix with identity diagonal blocks and blocks −α_j below the diagonal; [the explicit block matrix is omitted here;]
hence span{v_0, v_1, …, v_{k+ℓ+1}} = span{x_0, x_1, …, x_k} = span{v_0, v_1, …, v_k}. In particular,
  span{v_{k+1}, v_{k+2}, …, v_{k+ℓ+1}} ⊆ span{v_0, v_1, …, v_k};
hence, by applying Lemma 4.2, there exists c ≠ 0 such that (note that ℓ < k)

(4.2)  im([v_0, v_1, …, v_k] + c[v_{k+1}, v_{k+2}, …, v_{k+ℓ+1}, 0, …, 0]) = im[v_0, v_1, …, v_k],

and therefore, using v_{ℓ+1} = 0,
  span{x_0, x_1, …, x_k} = span{v_0, v_1, …, v_k}
    = span{cv_{k+1} + v_0, cv_{k+2} + v_1, …, cv_{k+ℓ+1} + v_ℓ} + span{v_{ℓ+2}, v_{ℓ+3}, …, v_k}.
Altogether, we have obtained a shorter singular chain which spans the same subspace
as the original singular chain. Repeating this procedure until one obtains a linearly
independent singular chain proves the claim.
Corollary 4.8 (basis of the singular chain manifold). Consider a matrix pencil sE − A ∈ K^{m×n}[s] and let the singular chain manifold be given by
  K := { x ∈ K^n | ∃ k, i ∈ ℕ ∃ singular chain (x_0, …, x_{i−1}, x = x_i, x_{i+1}, …, x_k) ∈ (K^n)^{k+1} },
i.e., K is the set of all vectors x appearing somewhere in some singular chain of sE − A. Then there exists a linearly independent singular chain (x_0, x_1, …, x_k) of sE − A such that
  K = span{x_0, …, x_k}.
Proof. First note that K is indeed a linear subspace of K^n, since the scalar multiple of every chain is also a chain and the sum of two chains (extending the chains appropriately with zero vectors) is again a chain. Let y^0, y^1, …, y^ℓ be a basis of K. By the definition of K, for each i = 0, 1, …, ℓ there exists a chain (y_0^i, y_1^i, …, y_{k_i}^i) which contains y^i. Let (v_0, v_1, …, v_k), with k = k_0 + k_1 + ⋯ + k_ℓ, be the chain which results from concatenating the chains (y_0^i, y_1^i, …, y_{k_i}^i). Clearly, span{v_0, …, v_k} = K; hence Lemma 4.7 yields the claim.
The following result can, in substance, be found in [3]. However, the proof therein is difficult to follow, involving quotient spaces and additional sequences of subspaces. Our presentation is much more straightforward and simpler.
Lemma 4.9 (singular chain manifold and the Wong sequences). Consider a matrix pencil sE − A ∈ K^{m×n}[s] with the limits V* and W* of the Wong sequences, and let the singular chain manifold K be given as in Corollary 4.8. Then
  V* ∩ W* = K.
Proof. Step 1. We show K ⊆ V* ∩ W*.
Let (x_0, …, x_k) be a singular chain. Clearly, x_0 ∈ A^{-1}(E{0}) = ker A ⊆ V* and, inductively, x_{i+1} ∈ A^{-1}(E{x_i}) ⊆ A^{-1}(EV*) = V*. Analogously, x_k ∈ E^{-1}(A{0}) = ker E ⊆ W* and, inductively, x_i ∈ E^{-1}(A{x_{i+1}}) ⊆ E^{-1}(AW*) = W*. Hence x_i ∈ V* ∩ W* for all i.
[Display omitted: a unimodular factorization combining U(s), V(s), Σ_n(s), Σ_m(s) with the blocks P(s) and Q(s); the surrounding derivation could not be recovered from the source.]
4.5. KCF for full rank pencils. In order to derive the KCF as a corollary of the QKF, and also for the proof of the solvability of (2.4), we need the following lemma, which shows how to obtain the KCF for the special case of full rank pencils.
Lemma 4.12 (KCF of full rank rectangular pencil, m < n case). Let sE − A ∈ K^{m×n}[s] be such that m < n, and let l := n − m. Then rank_λ(λE − A) = m for all λ ∈ ℂ ∪ {∞} if and only if there exist numbers ε_1, …, ε_l ∈ ℕ and matrices S ∈ Gl_m(K), T ∈ Gl_n(K) such that
  S(sE − A)T = diag(P_{ε_1}(s), …, P_{ε_l}(s)),
where P_ε(s), ε ∈ ℕ, is as in Corollary 2.8.
Proof. Sufficiency is clear; hence it remains to show necessity.
If m = 0 and n > 0, then there is nothing to show since sE − A is already in the diagonal form with ε_1 = ε_2 = ⋯ = ε_l = 0. Hence assume m > 0 in the following. The main idea is to reduce the problem to a smaller pencil sE′ − A′ ∈ K^{m′×n′}[s] with rank_λ(λE′ − A′) = m′ < n′ < n for all λ ∈ ℂ ∪ {∞}. Then we can inductively use the transformation to the desired block diagonal structure for the smaller pencil to obtain the block diagonal structure for the original pencil.
By assumption, E does not have full column rank; hence there exists a column operation T_1 ∈ Gl_n(K) such that the first column of ET_1 is zero.
There are two cases now: either the first column of AT_1 is zero or it is not. We consider the two cases separately.
Case 1. The first column of AT_1 is zero.
Let ET_1 =: [0, E′] and AT_1 =: [0, A′]. Then, clearly, rank_λ(λE − A) = rank_λ(λE′ − A′) = m′ := m for all λ ∈ ℂ ∪ {∞}. Furthermore, with n′ := n − 1, it follows that n′ ≥ m′. Seeking a contradiction, assume n′ = m′. Then the full rank matrix E′ is square and hence invertible. Let λ ∈ ℂ be any eigenvalue of the matrix E′^{-1}A′. Thus 0 = det(λI − E′^{-1}A′) = det(E′^{-1}) det(λE′ − A′), and hence rank_λ(λE′ − A′) < m′, a contradiction. Altogether, this shows that sE′ − A′ ∈ K^{m′×n′}[s] is a smaller pencil which satisfies the assumptions of the lemma; hence we can inductively use the result of the lemma for sE′ − A′ with transformation matrices S′ and T′. Let S := S′ and T := T_1 [ 1 0 ; 0 T′ ]; then S(sE − A)T has the desired block diagonal structure, which coincides with the block structure of sE′ − A′ apart from one additional P_0 block.
Case 2. The first column of AT_1 is not zero.
Then there exists a row operation S_1 ∈ Gl_m(K) such that the first column of S_1(AT_1) is the first unit vector e_1. Since E has full row rank, the first row of S_1ET_1 cannot be the zero row; hence there exists a second column operation T_2 ∈ Gl_n(K) which does not change the first
356
0
0
1 0
..
.
.. .
.
Now let T3 Gln (K) be a column operation which adds multiples of the rst column
to the remaining columns such that
1 0 0 0
0
0 1 0 0
1 0 0 0
0
0
and S1 AT1 T2 T3 =: .
S1 ET1 T2 T3 =: .
,
.
..
E
A
.
0
0
\[
  AXB - CXD = E.
\]
\[
  0 = F + CY + ZB. \tag{4.4}
\]
$i = 1, \ldots, n_1$, $j = 1, \ldots, n_2$, where $X_0^{ij}$ and $E_0^{ij}$ are the corresponding subblocks of $X_0$ and $E_0$, respectively. Note that (using the notation of Corollary 2.8) by assumption each pencil $sC_0^i - A_0^i$ is a $\mathcal{P}_\varepsilon(s)$, $\mathcal{J}_\rho(s)$, or $\mathcal{N}_\sigma(s)$ block, and all pencils $sB_0^j - D_0^j$ are $\mathcal{Q}_\eta(s)$ blocks. If $sC_0^i - A_0^i$ is a $\mathcal{P}_\varepsilon(s)$ block, we consider a reduced equation by deleting the first column of $sC_0^i - A_0^i$, which results in a regular $\mathcal{J}_\rho(s)$ block with spectrum $\{0\}$, and by deleting the last row of $sB_0^j - D_0^j$, which results in a regular $\mathcal{N}_\sigma(s)$ block with spectrum $\{\infty\}$. Hence, we use Lemma 4.15 to obtain solvability of the reduced problem. Filling the reduced solution for $X_0^{ij}$ by a zero row and a zero column results in a solution for the original problem. If $sC_0^i - A_0^i$ is a $\mathcal{J}_\rho(s)$ block, we can apply the same trick (this time we delete only the last row of $sB_0^j - D_0^j$) to arrive at the same conclusion, as $sC_0^i - A_0^i$ has only finite spectrum. Finally, if $sC_0^i - A_0^i$ is an $\mathcal{N}_\sigma(s)$ block with spectrum $\{\infty\}$, we have to reduce $sB_0^j - D_0^j$ by deleting the first row, which results in a $\mathcal{J}_\rho(s)$ block with spectrum $\{0\}$. This concludes the proof.
4.7. Solutions of DAEs. In order to prove Theorem 3.2 we need the following
lemmas, which characterize the solutions of DAEs in the case of full rank pencils. As
in section 3 we restrict ourselves to the case $\mathbb{K} = \mathbb{R}$.
Lemma 4.17 (full row rank pencils). Let $sE-A \in \mathbb{R}^{m\times n}[s]$ such that $m < n$ and $\operatorname{rank}_{\mathbb{C}}(\lambda E - A) = m$ for all $\lambda \in \mathbb{C}\cup\{\infty\}$. According to Lemma 3.1 choose $M(s) \in \mathbb{R}^{n\times m}[s]$ and $K(s) \in \mathbb{R}^{n\times(n-m)}[s]$ such that $(sE-A)[M(s), K(s)] = [I, 0]$ and $[M(s), K(s)]$ is unimodular. Consider the DAE $E\dot{x} = Ax + f$ and the associated solution space $\mathcal{S} = \mathcal{C}^\infty$ or $\mathcal{S} = \mathcal{D}_{\mathrm{pw}\mathcal{C}^\infty}$. Then, for all inhomogeneities $f \in \mathcal{S}^m$, $x \in \mathcal{S}^n$ is a solution if and only if there exists $u \in \mathcal{S}^{n-m}$ such that
\[
  x = M(\tfrac{d}{dt})(f) + K(\tfrac{d}{dt})(u).
\]
Furthermore, all initial value problems have a solution, i.e., for all $x_0 \in \mathbb{R}^n$, $t_0 \in \mathbb{R}$, and all $f \in \mathcal{S}^m$ there exists a solution $x \in \mathcal{S}^n$ such that
\[
  x(t_0) = x_0.
\]
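As a concrete instance of the lemma (the pencil and the unimodular completion below are our own illustrative choice, not taken from the paper), consider the smallest underdetermined block $\mathcal{P}_1(s) = [s, -1]$, i.e. $E = [1\ 0]$, $A = [0\ 1]$, giving the scalar DAE $\dot{x}_1 = x_2 + f$. With $M(s) = (0, -1)^\top$ and $K(s) = (1, s)^\top$ one checks $[s, -1]M(s) = 1$, $[s, -1]K(s) = 0$, and $\det[M(s), K(s)] = 1$, so every solution is $x = M(\tfrac{d}{dt})(f) + K(\tfrac{d}{dt})(u) = (u, \dot{u} - f)$ with a free $u$. A numerical sanity check:

```python
import math

# Solution formula x = M(d/dt)(f) + K(d/dt)(u) = (u, u' - f) for the
# illustrative pencil [s, -1]; u is the free "input", f the inhomogeneity.
def solution(u, du, f):
    return lambda t: (u(t), du(t) - f(t))

u, du, f = math.sin, math.cos, math.exp   # arbitrary smooth choices
x = solution(u, du, f)
h = 1e-6
for t in (0.0, 0.5, 1.3):
    dx1 = (x(t + h)[0] - x(t - h)[0]) / (2 * h)   # numerical x1'(t)
    assert abs(dx1 - (x(t)[1] + f(t))) < 1e-6      # the DAE x1' = x2 + f holds
```

The free parameter $u$ reflects the underdeterminedness: one equation, two unknowns.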
Proof. Step 1. We show that $x = M(\tfrac{d}{dt})(f) + K(\tfrac{d}{dt})(u)$ solves $E\dot{x} = Ax + f$ for any $u \in \mathcal{S}^{n-m}$.
This is clear since
\[
  (E\tfrac{d}{dt} - A)\bigl(M(\tfrac{d}{dt})(f) + K(\tfrac{d}{dt})(u)\bigr) = f + 0 = f.
\]
Step 2. We show that any solution x of the DAE can be represented as above.
To this end let $u := [0, I]\,[M(\tfrac{d}{dt}), K(\tfrac{d}{dt})]^{-1}x \in \mathcal{S}^{n-m}$, which is well defined due to the unimodularity of $[M(s), K(s)]$. Then
\[
  f = (E\tfrac{d}{dt} - A)x
    = (E\tfrac{d}{dt} - A)[M(\tfrac{d}{dt}), K(\tfrac{d}{dt})]
      \begin{pmatrix} [I, 0][M(\tfrac{d}{dt}), K(\tfrac{d}{dt})]^{-1}x \\[2pt] [0, I][M(\tfrac{d}{dt}), K(\tfrac{d}{dt})]^{-1}x \end{pmatrix}
    = [I, 0][M(\tfrac{d}{dt}), K(\tfrac{d}{dt})]^{-1}x,
\]
hence
\[
  M(\tfrac{d}{dt})(f) + K(\tfrac{d}{dt})(u)
    = [M(\tfrac{d}{dt}), K(\tfrac{d}{dt})]
      \begin{pmatrix} [I, 0][M(\tfrac{d}{dt}), K(\tfrac{d}{dt})]^{-1}x \\[2pt] [0, I][M(\tfrac{d}{dt}), K(\tfrac{d}{dt})]^{-1}x \end{pmatrix}
    = x.
\]
Step 3. We show that all initial value problems have a solution. Write $K(s) = K_0 + K_1s + \cdots + K_ps^p$ with $K_0, \ldots, K_p \in \mathbb{R}^{n\times(n-m)}$ and choose $u_0, u_1, \ldots, u_p \in \mathbb{R}^{n-m}$ such that
\[
  K_0u_0 + K_1u_1 + \cdots + K_pu_p = x_0 - M(\tfrac{d}{dt})(f)(t_0).
\]
Let
\[
  u(t) := u_0 + (t-t_0)u_1 + \frac{(t-t_0)^2}{2}u_2 + \cdots + \frac{(t-t_0)^p}{p!}u_p, \qquad t \in \mathbb{R}.
\]
Then
\[
  K(\tfrac{d}{dt})(u)(t_0) = K_0u_0 + K_1u_1 + \cdots + K_pu_p = x_0 - M(\tfrac{d}{dt})(f)(t_0), \tag{4.5}
\]
which implies that the solution $x = M(\tfrac{d}{dt})(f) + K(\tfrac{d}{dt})(u)$ satisfies
\[
  x(t_0) = M(\tfrac{d}{dt})(f)(t_0) + K(\tfrac{d}{dt})(u)(t_0) = x_0. \tag{4.6}
\]
Remark 4.18. A careful analysis of the proof of Lemma 4.17 reveals that for the solution formula the full rank of $\lambda E - A$ for $\lambda = \infty$ is not necessary. The latter is only necessary to show that all initial value problems have a solution.
Lemma 4.19 (full column rank pencils). Let $sE-A \in \mathbb{R}^{m\times n}[s]$ such that $m > n$ and $\operatorname{rank}_{\mathbb{C}}(\lambda E - A) = n$ for all $\lambda \in \mathbb{C}\cup\{\infty\}$. According to Lemma 3.1 choose $M(s) \in \mathbb{R}^{n\times m}[s]$ and $K(s) \in \mathbb{R}^{(m-n)\times m}[s]$ such that $\bigl[\begin{smallmatrix} M(s) \\ K(s) \end{smallmatrix}\bigr](sE-A) = \bigl[\begin{smallmatrix} I \\ 0 \end{smallmatrix}\bigr]$ and $\bigl[\begin{smallmatrix} M(s) \\ K(s) \end{smallmatrix}\bigr]$ is unimodular. Then, for $f \in \mathcal{S}^m$, $x \in \mathcal{S}^n$ is a solution of $E\dot{x} = Ax + f$ if and only if
\[
  x = M(\tfrac{d}{dt})(f) \quad\text{and}\quad K(\tfrac{d}{dt})(f) = 0.
\]
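As a concrete instance (again our own illustrative choice, not from the paper), consider the overdetermined pencil $(s, -1)^\top$, i.e. $E = (1, 0)^\top$, $A = (0, 1)^\top$, giving the two scalar equations $\dot{x} = f_1$ and $0 = x + f_2$. With $M(s) = [0, -1]$ and $K(s) = [1, s]$ one checks $\bigl[\begin{smallmatrix} M(s) \\ K(s) \end{smallmatrix}\bigr](sE-A) = \bigl[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\bigr]$ and $\det\bigl[\begin{smallmatrix} M(s) \\ K(s) \end{smallmatrix}\bigr] = 1$, so $x = M(\tfrac{d}{dt})(f) = -f_2$ and the consistency condition is $K(\tfrac{d}{dt})(f) = f_1 + \dot{f}_2 = 0$:

```python
import math

# Overdetermined illustrative DAE: x' = f1 and 0 = x + f2.
# Pick a consistent inhomogeneity: f2 = sin, and f1 = -f2' so that
# the constraint K(d/dt)f = f1 + f2' = 0 holds.
f2, df2 = math.sin, math.cos
f1 = lambda t: -df2(t)
x = lambda t: -f2(t)          # the unique solution x = M(d/dt)(f) = -f2
h = 1e-6
for t in (0.0, 0.7, 2.0):
    dx = (x(t + h) - x(t - h)) / (2 * h)
    assert abs(dx - f1(t)) < 1e-6       # first equation:  x' = f1
    assert abs(x(t) + f2(t)) < 1e-12    # second equation: 0 = x + f2
```

Here there is no free parameter at all; instead the inhomogeneity must satisfy a differential constraint, dual to the situation in Lemma 4.17.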
\[
  AP_1 = P_2A_P, \qquad AR_1 = P_2A_{PR} + R_2A_R, \qquad A\mathbb{K}^n \subseteq \mathbb{K}^m,
\]
and
\[
  \operatorname{im} P_2A_P = \operatorname{im} AP_1 = \operatorname{im} P_2;
\]
\[
  w^\top(\lambda E - A)x_i = 0. \tag{5.1}
\]
\[
  \ker A \cap \mathcal{K} = \ker A \cap \mathcal{W}^*,
\]
\[
  z_{i+1} = A_P^{+}\bigl(E_Pz_i + E_{PR}y_i - A_{PR}y_{i+1}\bigr),
\]
\[
  Ax_i = A[P_1, R_1]\begin{pmatrix} z_i \\ y_i \end{pmatrix}
       = AT_{\mathrm{trian}}\begin{pmatrix} z_i \\ y_i \\ 0 \end{pmatrix}
       = S_{\mathrm{trian}}^{-1}\begin{bmatrix} A_P & A_{PR} & A_{PQ} \\ 0 & A_R & A_{RQ} \\ 0 & 0 & A_Q \end{bmatrix}\begin{pmatrix} z_i \\ y_i \\ 0 \end{pmatrix}
       = S_{\mathrm{trian}}^{-1}\begin{pmatrix} A_Pz_i + A_{PR}y_i \\ A_Ry_i \\ 0 \end{pmatrix}, \qquad i = 0, \ldots, k,
\]
and, analogously,
\[
  Ex_i = S_{\mathrm{trian}}^{-1}\begin{pmatrix} E_Pz_i + E_{PR}y_i \\ E_Ry_i \\ 0 \end{pmatrix};
\]
hence $Ax_0 = 0$ and $Ex_i = Ax_{i+1}$ for $i = 0, \ldots, k$. Note that, if we set $x_{k+1} = P_1z_{k+1}$, then $x_{k+1} \in \mathcal{V}^* \cap \mathcal{W}^* \subseteq \mathcal{W}^*$, and identically as shown in the first part of Step 2 of the proof of Lemma 4.9 there exist $x_{k+2}, \ldots, x_{k'}$, $k' > k$, such that $Ex_{k+1} = Ax_{k+2}, \ldots, Ex_{k'-1} = Ax_{k'}$, $Ex_{k'} = 0$ and, therefore, $(x_0, x_1, \ldots, x_{k'})$ is a singular chain.
then
\[
  \widehat{Q}_i^\top Q_i = I \quad\text{and}\quad \operatorname{im}\widehat{Q}_i = (\operatorname{im}[P_i, R_i])^\perp \qquad\text{for } i = 1, 2;
\]
in particular,
\[
  \widehat{Q}_1^\top[P_1, R_1, Q_1] = [0, 0, I_{n_Q}]
\]
for $i = 1$ and analogously for $i = 2$. We will show in the following that
\[
  E^\top\widehat{Q}_2 = \widehat{Q}_1E_Q^\top, \qquad
  A^\top\widehat{Q}_2 = \widehat{Q}_1A_Q^\top, \qquad
  \operatorname{im}\widehat{Q}_2 = \widehat{\mathcal{V}}^* \cap \widehat{\mathcal{W}}^*, \qquad
  \operatorname{im}\widehat{Q}_1 = E^\top\widehat{\mathcal{V}}^* \cap A^\top\widehat{\mathcal{W}}^*;
\]
then the arguments from Step 2 can be applied to $sE_Q^\top - A_Q^\top$ and the claim is shown.
Step 4a. We show $E^\top\widehat{Q}_2 = \widehat{Q}_1E_Q^\top$ and $A^\top\widehat{Q}_2 = \widehat{Q}_1A_Q^\top$.
Using (2.3) we obtain
\[
  \widehat{Q}_2^\top E
  = \widehat{Q}_2^\top[P_2, R_2, Q_2]\begin{bmatrix} E_P & E_{PR} & E_{PQ} \\ 0 & E_R & E_{RQ} \\ 0 & 0 & E_Q \end{bmatrix}[P_1, R_1, Q_1]^{-1}
  = [0,\ 0,\ I]\begin{bmatrix} E_P & E_{PR} & E_{PQ} \\ 0 & E_R & E_{RQ} \\ 0 & 0 & E_Q \end{bmatrix}[P_1, R_1, Q_1]^{-1}
  = [0,\ 0,\ E_Q][P_1, R_1, Q_1]^{-1}
  = E_Q\widehat{Q}_1^\top;
\]
hence $E^\top\widehat{Q}_2 = \widehat{Q}_1E_Q^\top$. Analogous arguments show that $A^\top\widehat{Q}_2 = \widehat{Q}_1A_Q^\top$.
Step 4b. We show $\operatorname{im}\widehat{Q}_2 = \widehat{\mathcal{V}}^* \cap \widehat{\mathcal{W}}^*$.
By construction and Lemma 4.5,
\[
  \operatorname{im}\widehat{Q}_2 = (\operatorname{im}[P_2, R_2])^\perp = (E\mathcal{V}^* + A\mathcal{W}^*)^\perp = (E\mathcal{V}^*)^\perp \cap (A\mathcal{W}^*)^\perp = \widehat{\mathcal{V}}^* \cap \widehat{\mathcal{W}}^*.
\]
Step 4c. We show $\operatorname{im}\widehat{Q}_1 = E^\top\widehat{\mathcal{V}}^* \cap A^\top\widehat{\mathcal{W}}^*$.
Lemma 4.5 applied to $(E^\top, A^\top)$ gives
\[
  (E^\top\widehat{\mathcal{V}}^*)^\perp = \mathcal{W}^* \quad\text{and}\quad (A^\top\widehat{\mathcal{W}}^*)^\perp = \mathcal{V}^*,
\]
or, equivalently,
\[
  E^\top\widehat{\mathcal{V}}^* = (\mathcal{W}^*)^\perp \quad\text{and}\quad A^\top\widehat{\mathcal{W}}^* = (\mathcal{V}^*)^\perp.
\]
Hence
\[
  \operatorname{im}\widehat{Q}_1 = (\operatorname{im}[P_1, R_1])^\perp = (\mathcal{V}^* + \mathcal{W}^*)^\perp = (\mathcal{V}^*)^\perp \cap (\mathcal{W}^*)^\perp = A^\top\widehat{\mathcal{W}}^* \cap E^\top\widehat{\mathcal{V}}^*.
\]
This concludes the proof of our first main result.
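The Wong-sequence limits $\mathcal{V}^*$ and $\mathcal{W}^*$ used throughout this proof can also be computed numerically. The following sketch is our own illustration (not code from the paper): subspaces are represented by orthonormal column bases, and the iterations $\mathcal{V}_{i+1} = A^{-1}(E\mathcal{V}_i)$ (with $\mathcal{V}_0 = \mathbb{K}^n$) and $\mathcal{W}_{i+1} = E^{-1}(A\mathcal{W}_i)$ (with $\mathcal{W}_0 = \{0\}$) are run until the dimensions become stationary.

```python
import numpy as np

def _image(M, tol=1e-10):
    """Orthonormal basis of im M (via SVD)."""
    if M.shape[1] == 0:
        return M
    U, s, _ = np.linalg.svd(M)
    return U[:, : int((s > tol).sum())]

def _kernel(M, tol=1e-10):
    """Orthonormal basis of ker M."""
    _, s, Vt = np.linalg.svd(M)
    return Vt[int((s > tol).sum()):, :].T

def _preimage(A, S):
    """Basis of A^{-1}(im S) = { x : A x = S y for some y }."""
    n = A.shape[1]
    N = _kernel(np.hstack([A, -S]))
    return _image(N[:n, :])

def wong_limits(E, A):
    """Bases of V* and W* for the pencil sE - A."""
    n = E.shape[1]
    V = np.eye(n)                        # V_0 = K^n
    while True:
        Vn = _preimage(A, _image(E @ V))
        stop = Vn.shape[1] == V.shape[1]
        V = Vn
        if stop:
            break
    W = np.zeros((n, 0))                 # W_0 = {0}
    while True:
        Wn = _preimage(E, _image(A @ W))
        stop = Wn.shape[1] == W.shape[1]
        W = Wn
        if stop:
            break
    return V, W

# For the underdetermined block P_1(s) = [s, -1] both limits fill K^2:
V, W = wong_limits(np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]))
assert V.shape[1] == 2 and W.shape[1] == 2
```

Since the sequences are nested, comparing dimensions suffices as a stationarity test; in exact arithmetic this reproduces the subspaces entering the QKF construction.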
5.2. Proof of Theorem 2.6: The QKF. By the properties of the pencils $sE_P - A_P$, $sE_R - A_R$, and $sE_Q - A_Q$ there exist $\lambda \in \mathbb{K}$ and full rank matrices $N_P$, $N_R$, $M_R$, and $M_Q$ such that $(\lambda E_P - A_P)N_P = I$, $(\lambda E_R - A_R)N_R = I$, $M_R(\lambda E_R - A_R) = I$, and $M_Q(\lambda E_Q - A_Q) = I$. Hence Lemma 4.14 shows that it suffices to consider solvability of the following generalized Sylvester equations:
\begin{align}
  E_RX_1A_Q - A_RX_1E_Q &= E_{RQ} - (\lambda E_{RQ} - A_{RQ})M_QE_Q, \tag{5.2a}\\
  E_PX_2A_R - A_PX_2E_R &= E_{PR} - (\lambda E_{PR} - A_{PR})M_RE_R, \tag{5.2b}\\
  E_PX_3A_Q - A_PX_3E_Q &= (E_{PQ} + E_{PR}F_1) - \bigl(\lambda(E_{PQ} + E_{PR}F_1) - (A_{PQ} + A_{PR}F_1)\bigr)M_QE_Q, \tag{5.2c}
\end{align}
where $F_1$ is any solution of (2.4a), whose existence will follow from solvability of (5.2a). Furthermore, the properties of $sE_P - A_P$, $sE_Q - A_Q$, and $sE_R - A_R$ imply that Lemma 4.16 is applicable to (5.2) (where (5.2b) must be considered in the (conjugate) transposed form) and ensures existence of solutions. Now, a simple calculation shows that for any solution of (2.4) the second part of the statement of Theorem 2.6 holds.
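Equations of the form $AXB - CXD = E$, as in (5.2), can be solved numerically by a standard vectorization with Kronecker products, $\operatorname{vec}(AXB - CXD) = (B^\top \otimes A - D^\top \otimes C)\operatorname{vec}(X)$. This is a generic sketch of our own (not the paper's method, and without the structural solvability analysis of Lemma 4.16):

```python
import numpy as np

def solve_gen_sylvester(A, B, C, D, Erhs):
    """Solve A X B - C X D = Erhs via column-major vectorization,
    assuming the Kronecker coefficient matrix is invertible."""
    K = np.kron(B.T, A) - np.kron(D.T, C)
    x = np.linalg.solve(K, Erhs.flatten(order="F"))
    return x.reshape((A.shape[1], B.shape[0]), order="F")

# build a solvable random instance and recover a solution
rng = np.random.default_rng(1)
A, C = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
B, D = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
X0 = rng.normal(size=(3, 2))
Erhs = A @ X0 @ B - C @ X0 @ D
X = solve_gen_sylvester(A, B, C, D, Erhs)
assert np.allclose(A @ X @ B - C @ X @ D, Erhs)   # residual is zero
```

For large problems one would instead use structured algorithms such as those in [11] or [17]; the Kronecker approach costs $O((np)^3)$ for an $n\times p$ unknown.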
Finally, to show uniqueness in the sense of (2.6) assume first that $(E, A) \cong (E', A')$. Then there exist invertible matrices S and T such that $(E', A') = (SET, SAT)$. It is easily seen that the Wong sequences $\mathcal{V}_i'$, $\mathcal{W}_i'$, $i \in \mathbb{N}$, of the pencil $sE' - A'$ fulfill
\[
  \mathcal{V}_i' = T^{-1}\mathcal{V}_i, \qquad \mathcal{W}_i' = T^{-1}\mathcal{W}_i, \qquad E'\mathcal{V}_i' = SE\mathcal{V}_i, \qquad A'\mathcal{W}_i' = SA\mathcal{W}_i. \tag{5.3}
\]
Hence, using the notation of Theorem 2.3, there exist invertible matrices $M_P$, $M_R$, $M_Q$, $N_P$, $N_R$, $N_Q$ such that
\[
  [P_1', R_1', Q_1'] = T^{-1}[P_1M_P^{-1}, R_1M_R^{-1}, Q_1M_Q^{-1}], \qquad
  [P_2', R_2', Q_2'] = S[P_2N_P^{-1}, R_2N_R^{-1}, Q_2N_Q^{-1}],
\]
and thus
\begin{align*}
  s\begin{bmatrix} E_P' & 0 & 0 \\ 0 & E_R' & 0 \\ 0 & 0 & E_Q' \end{bmatrix} - \begin{bmatrix} A_P' & 0 & 0 \\ 0 & A_R' & 0 \\ 0 & 0 & A_Q' \end{bmatrix}
  &= [P_2', R_2', Q_2']^{-1}(sE' - A')[P_1', R_1', Q_1'] \\
  &= \begin{bmatrix} N_P & 0 & 0 \\ 0 & N_R & 0 \\ 0 & 0 & N_Q \end{bmatrix}[P_2, R_2, Q_2]^{-1}\,S^{-1}(sE' - A')T^{-1}\,[P_1, R_1, Q_1]\begin{bmatrix} M_P^{-1} & 0 & 0 \\ 0 & M_R^{-1} & 0 \\ 0 & 0 & M_Q^{-1} \end{bmatrix} \\
  &= \begin{bmatrix} N_P & 0 & 0 \\ 0 & N_R & 0 \\ 0 & 0 & N_Q \end{bmatrix}[P_2, R_2, Q_2]^{-1}(sE - A)[P_1, R_1, Q_1]\begin{bmatrix} M_P^{-1} & 0 & 0 \\ 0 & M_R^{-1} & 0 \\ 0 & 0 & M_Q^{-1} \end{bmatrix} \\
  &= s\begin{bmatrix} N_PE_PM_P^{-1} & 0 & 0 \\ 0 & N_RE_RM_R^{-1} & 0 \\ 0 & 0 & N_QE_QM_Q^{-1} \end{bmatrix} - \begin{bmatrix} N_PA_PM_P^{-1} & 0 & 0 \\ 0 & N_RA_RM_R^{-1} & 0 \\ 0 & 0 & N_QA_QM_Q^{-1} \end{bmatrix}.
\end{align*}
Hence the necessity part of (2.6) is shown. Sufficiency follows from the simple observation that equivalence of the QKFs implies equivalence of the original matrix pencils.
5.3. Proof of Theorem 2.9: The minimal indices. First consider two equivalent matrix pencils $sE - A$ and $sE' - A'$ and the relationship of the corresponding Wong sequences (5.3). From this it follows that the subspace dimensions used for the calculations of $a$, $d$, $\varepsilon_i$, and $\eta_i$, $i = 0, 1, 2, \ldots$, are invariant under equivalence transformations. Hence it suffices to show the claim when the matrix pair is already in KCF as in Corollary 2.8. In particular, uniqueness of the KCF then also follows from this invariance.
Note that the Wong sequences of a matrix pencil in QKF (2.5) fulfill
\[
  \mathcal{V}_i = \begin{bmatrix} I \\ 0 \\ 0 \end{bmatrix}\mathcal{V}_i^P \oplus \begin{bmatrix} 0 \\ I \\ 0 \end{bmatrix}\mathcal{V}_i^R \oplus \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix}\mathcal{V}_i^Q, \qquad
  \mathcal{W}_i = \begin{bmatrix} I \\ 0 \\ 0 \end{bmatrix}\mathcal{W}_i^P \oplus \begin{bmatrix} 0 \\ I \\ 0 \end{bmatrix}\mathcal{W}_i^R \oplus \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix}\mathcal{W}_i^Q,
\]
where $\mathcal{V}_i^P$, $\mathcal{W}_i^P$, $\mathcal{V}_i^R$, $\mathcal{W}_i^R$, $\mathcal{V}_i^Q$, $\mathcal{W}_i^Q$, $i \in \mathbb{N}$, are the Wong sequences corresponding to the matrix pencils $sE_P - A_P$, $sE_R - A_R$, and $sE_Q - A_Q$. Furthermore, due to Theorem 2.6 and the uniqueness result therein, it follows that
\[
  \mathcal{V}^* \cap \mathcal{W}^* = \operatorname{im}\begin{bmatrix} I & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.
\]
Altogether it is, therefore, sufficient to show the claim for the singular block $sE_P - A_P$ with its Wong sequences $\mathcal{V}_i^P$ and $\mathcal{W}_i^P$. The result for the singular block $sE_Q - A_Q$ follows by considering the (conjugate) transpose and invoking Lemma 4.5.
[Display omitted: an explicit example pencil $sE_P - A_P$ with entries $0$ and $1$, together with its Wong sequences $\mathcal{V}_i^P$, $\mathcal{W}_i^P$ for $i = 2, 3, \ldots$; the sequences become stationary, e.g., $\mathcal{W}_6^P = \mathcal{W}_5^P, \ldots$.]
[11] K.-W. E. Chu, The solution of the matrix equations AXB − CXD = E and (YA − DZ, YC − BZ) = (E, F), Linear Algebra Appl., 93 (1987), pp. 93–105.
[12] J. W. Demmel and B. Kågström, The generalized Schur decomposition of an arbitrary pencil A − λB: Robust software with error bounds and applications. Part I: Theory and algorithms, ACM Trans. Math. Software, 19 (1993), pp. 160–174.
[13] J. W. Demmel and B. Kågström, The generalized Schur decomposition of an arbitrary pencil A − λB: Robust software with error bounds and applications. Part II: Software and applications, ACM Trans. Math. Software, 19 (1993), pp. 175–201.
[14] J. Dieudonné, Sur la réduction canonique des couples des matrices, Bull. Soc. Math. France, 74 (1946), pp. 130–146.
[15] H. Eliopoulou and N. Karcanias, Properties of reachability and almost reachability subspaces of implicit systems: The extension problem, Kybernetika, 31 (1995), pp. 530–540.
[16] F. R. Gantmacher, The Theory of Matrices (Vols. I & II), Chelsea, New York, 1959.
[17] V. Hernández and M. Gassó, Explicit solution of the matrix equation AXB − CXD = E, Linear Algebra Appl., 121 (1989), pp. 333–344.
[18] S. Iwata and M. Takamatsu, On the Kronecker canonical form of mixed matrix pencils, SIAM J. Matrix Anal. Appl., 32 (2011), pp. 44–71.
[19] L. Kronecker, Algebraische Reduction der Schaaren bilinearer Formen, Sitzungsber. Akad. Berlin, (1890), pp. 1225–1237.
[20] M. Kuijper, First-Order Representations of Linear Systems, Birkhäuser Boston, Boston, 1994.
[21] P. Kunkel and V. Mehrmann, Differential-Algebraic Equations. Analysis and Numerical Solution, EMS Publishing House, Zürich, Switzerland, 2006.
[22] F. L. Lewis, A tutorial on the geometric analysis of linear time-invariant implicit systems, Automatica J. IFAC, 28 (1992), pp. 119–137.
[23] J. J. Loiseau, Some geometric considerations about the Kronecker normal form, Internat. J. Control, 42 (1985), pp. 1411–1431.
[24] J. J. Loiseau, K. Özçaldıran, M. Malabre, and N. Karcanias, Feedback canonical forms of singular systems, Kybernetika, 27 (1991), pp. 289–305.
[25] M. Malabre, Generalized linear systems: Geometric and structural approaches, Linear Algebra Appl., 122/123/124 (1989), pp. 591–621.
[26] C. Oară and P. M. Van Dooren, An improved algorithm for the computation of structural invariants of a system pencil and related geometric aspects, Systems Control Lett., 30 (1997), pp. 39–48.
[27] D. H. Owens and D. L. Debeljkovic, Consistency and Liapunov stability of linear descriptor systems: A geometric analysis, IMA J. Math. Control & Information, 2 (1985), pp. 139–151.
[28] K. Özçaldıran and L. Haliloğlu, Structural properties of singular systems, Kybernetika, 29 (1993), pp. 518–546.
[29] K. M. Przyluski and A. M. Sosnowski, Remarks on the theory of implicit linear continuous-time systems, Kybernetika, 30 (1994), pp. 507–515.
[30] A. Sandovici, H. de Snoo, and H. Winkler, The structure of linear relations in Euclidean spaces, Linear Algebra Appl., 397 (2005), pp. 141–169.
[31] L. Schwartz, Théorie des Distributions à valeurs vectorielles, I and II, Hermann, Paris, 1957, 1958.
[32] H. J. S. Smith, On systems of linear indeterminate equations and congruences, Philos. Trans. Roy. Soc. London, 151 (1861), pp. 293–326.
[33] S. Trenn, Distributional Differential Algebraic Equations, Ph.D. thesis, Institut für Mathematik, Technische Universität Ilmenau, Universitätsverlag Ilmenau, Ilmenau, Germany, 2009.
[34] S. Trenn, Regularity of distributional differential algebraic equations, Math. Control Signals Systems, 21 (2009), pp. 229–264.
[35] A. J. van der Schaft and J. M. Schumacher, The complementary-slackness class of hybrid systems, Math. Control Signals Systems, 9 (1996), pp. 266–301.
[36] P. Van Dooren, The computation of Kronecker's canonical form of a singular pencil, Linear Algebra Appl., 27 (1979), pp. 103–140.
[37] P. M. Van Dooren, Reducing subspaces: Definitions, properties, and algorithms, in Matrix Pencils, Lecture Notes in Math. 973, B. Kågström and A. Ruhe, eds., Springer-Verlag, Berlin, 1983, pp. 58–73.
[38] A. Varga, Computation of Kronecker-like forms of a system pencil: applications, algorithms and software, in Proceedings of the IEEE International Symposium on Computer-Aided