
5346: D00222032, Assignment 2 (2013-09-23)

Contents: Exercise 1 and solution; Exercise 2 and solution; Exercise 3 and solution; Exercise 4 and solution; Exercise 5 (Bonus) and solution; References.

(1) (Symplectic linear algebra, from [1], Homework 1) Let $(V^{2n}, \omega)$ be a symplectic vector space, and let $U \subset V$ be a subspace. The symplectic orthogonal of $U$ is defined to be $U^\omega = \{v \in V \mid \omega(v, u) = 0 \text{ for any } u \in U\}$. Prove the following statements:

(a) $\dim U + \dim U^\omega = \dim V$.
(b) $(U^\omega)^\omega = U$.
(c) For any two subspaces $U_1$ and $U_2$, $U_1 \subset U_2$ if and only if $U_2^\omega \subset U_1^\omega$.
(d) $U$ is symplectic${}^1$ if and only if $V = U \oplus U^\omega$.

A subspace $U$ is called isotropic if $U \subset U^\omega$; that is to say, $\omega$ vanishes on $U \times U$. A subspace $U$ is called coisotropic if $U^\omega \subset U$.

(e) $U$ is Lagrangian${}^2$ if and only if $U$ is isotropic and $\dim U = \frac{1}{2} \dim V$.

${}^1$ The definition introduced in class required that $\omega|_{U \times U}$ be non-degenerate.
${}^2$ The definition introduced in class required that $U = U^\omega$.

Solution. (a) We begin by noting that this result looks very much like the conclusion of the rank-nullity theorem for linear maps on vector spaces, which is typically given in terms of the kernel and the image of a linear map. So we will follow this hint and look for a map $\Phi$ whose kernel and image satisfy $\operatorname{Im} \Phi \cong U$ and $\ker \Phi = U^\omega$. Fortunately, such a map is fairly easy to find. Firstly, given a symplectic form $\omega$ on $V$, there is an induced isomorphism between $V$ and $V^*$: we have a map $\tilde\omega : V \to V^*$ defined by $\tilde\omega : v \mapsto \omega(v, \cdot)$, and the fact that it is an isomorphism follows immediately from the non-degeneracy condition for $\omega$. Moreover, by restricting the resulting functionals to $U$ we obtain a map $\Phi : V \to U^*$, $\Phi(v) = \omega(v, \cdot)|_U$, which is certainly surjective since $\tilde\omega$ is (and since the restriction map $V^* \to U^*$ is surjective). This tells us that $\operatorname{Im} \Phi = U^*$, and since $V$ (and hence $U$) is finite-dimensional we have that $\dim \operatorname{Im} \Phi = \dim U^* = \dim U$. Now we simply need to check its kernel. We have
$$\ker \Phi = \{v \in V \mid \Phi(v) = 0\} = \{v \in V \mid \omega(v, u) = 0 \text{ for all } u \in U\} = U^\omega,$$
as desired. Thus, by the rank-nullity theorem,
$$\dim V = \dim \ker \Phi + \dim \operatorname{Im} \Phi = \dim U^\omega + \dim U.$$

(b) To begin with, consider $u \in U$. Given any $v \in U^\omega$, by definition we have that $\omega(u, v) = 0$; hence $u \in (U^\omega)^\omega$, i.e. $U \subset (U^\omega)^\omega$. Now suppose that there exists some $u \in (U^\omega)^\omega \setminus U$. Then $\dim (U^\omega)^\omega > \dim U$, and so applying part (a) twice, to the subspaces $U$ and $U^\omega$ respectively, we find that
$$\dim V = \dim U^\omega + \dim (U^\omega)^\omega > \dim U^\omega + \dim U = \dim V,$$
a contradiction. Thus, $(U^\omega)^\omega \setminus U = \emptyset$, i.e. $(U^\omega)^\omega = U$.
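(A quick numerical sanity check of (a) and (b), separate from the proof: the sketch below assumes the standard symplectic form $\omega_0(v, u) = v^T J u$ on $\mathbb{R}^4$, with $J$ the matrix $J_2$ of Exercise 2 below; the helper name `omega_orthogonal` and the particular subspace `U` are hypothetical choices made only for this illustration.)

```python
import numpy as np

# Matrix of the standard form omega_0 on R^4: omega_0(v, u) = v^T J u,
# where J = [[0, I_2], [-I_2, 0]] (the matrix J_2 of Exercise 2 below).
J = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2), np.zeros((2, 2))]])

def omega_orthogonal(U):
    """Return a basis (as columns) of U^omega = ker(U^T J), computed via the SVD."""
    A = U.T @ J                       # row i of A is the functional omega_0(u_i, .)
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-12))
    return Vt[rank:].T                # orthonormal basis of the null space of A

# An arbitrary 2-dimensional subspace U, spanned by the columns below.
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 2.0],
              [0.0, 0.0]])

Uw = omega_orthogonal(U)              # basis of U^omega
Uww = omega_orthogonal(Uw)            # basis of (U^omega)^omega

# (a): dim U + dim U^omega = dim V = 4.
print(np.linalg.matrix_rank(U) + Uw.shape[1])                                   # 4
# (b): (U^omega)^omega = U, compared via column spans.
print(np.linalg.matrix_rank(np.hstack([U, Uww])) == np.linalg.matrix_rank(U))   # True
```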
(c) Suppose that $U_1 \subset U_2$ and $u \in U_2^\omega$. The latter condition means that $\omega(u, x) = 0$ for any $x \in U_2$, but this means that for any $v \in U_1 \subset U_2$ we have that $\omega(u, v) = 0$, i.e. $u \in U_1^\omega$; thus $U_2^\omega \subset U_1^\omega$. Conversely, assume that $U_2^\omega \subset U_1^\omega$ and let $u \in U_1$. Then $\omega(u, v) = 0$ for any $v \in U_1^\omega$, in particular for any $v \in U_2^\omega$. Employing (b), this means that $u \in (U_2^\omega)^\omega = U_2$, and so we have that $U_1 \subset U_2$.

(d) First, suppose that $U$ is symplectic, i.e. $\omega|_{U \times U}$ is non-degenerate. Given any $u \in U \cap U^\omega$, we have that $\omega(u, v) = 0$ for all $v \in U$ (since $u \in U^\omega$), but then the non-degeneracy of $\omega|_{U \times U}$ implies that $u = 0$; hence $U \cap U^\omega = \{0\}$. Combining this with the result of (a) and standard linear algebra theorems, we then have that $V = U \oplus U^\omega$. Conversely, suppose that $V = U \oplus U^\omega$. Given a nonzero $u \in U$, we can utilize the non-degeneracy of $\omega$ on $V$ in order to find a $v_u \in V$ so that $\omega(v_u, u) \neq 0$. Using the fact that $v_u \in V = U \oplus U^\omega$, we can decompose it as $v_u = x_u + y_u$, where $x_u \in U$ and $y_u \in U^\omega$. Then, by the linearity of $\omega$ we have that
$$0 \neq \omega(v_u, u) = \omega(x_u, u) + \underbrace{\omega(y_u, u)}_{=\,0} = \omega(x_u, u),$$
where the cancellation follows from the fact that $y_u \in U^\omega$. By applying this argument to each nonzero $u \in U$, we have established that $\omega|_{U \times U}$ is non-degenerate.

(e) ($\Rightarrow$) Suppose $U$ is Lagrangian. The fact that it is isotropic is trivial: $U = U^\omega$ certainly gives $U \subset U^\omega$. Furthermore, from (a) we have that $\dim V = \dim U + \dim U^\omega = 2 \dim U$, and so $\dim U = \frac{1}{2} \dim V$, as desired.

($\Leftarrow$) Suppose that $U \subset U^\omega$ and that $\dim U = \frac{1}{2} \dim V$. Using (a) once again, and the obvious fact that $U \subset U^\omega$ implies $\dim U \leq \dim U^\omega$, we find that
$$\dim V = 2 \dim U \leq \dim U + \dim U^\omega = \dim V;$$
therefore we have equality throughout, i.e. $\dim U = \dim U^\omega$. Since $U$ is assumed to be a subspace of $U^\omega$, this, of course, implies that $U = U^\omega$, i.e. $U$ is Lagrangian.
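(The characterizations in (d) and (e) are also easy to test in coordinates, again purely as a sanity check: with the same standard form $\omega_0$ on $\mathbb{R}^4$, a subspace spanned by the columns of a matrix $M$ is symplectic exactly when the Gram matrix $M^T J M$ of $\omega_0|_{U \times U}$ is non-singular, and isotropic exactly when that Gram matrix vanishes. The two example subspaces below are hand-picked for illustration.)

```python
import numpy as np

# Matrix of the standard form omega_0 on R^4: omega_0(v, u) = v^T J u.
J = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2), np.zeros((2, 2))]])
e1, e2, e3, e4 = np.eye(4)

# Part (d): U = span{e1, e3} is symplectic, since the Gram matrix of
# omega_0 restricted to U is non-singular.
U = np.column_stack([e1, e3])
print(np.linalg.det(U.T @ J @ U))                     # 1.0, nonzero

# Part (e): L = span{e1, e2} is isotropic (its Gram matrix vanishes) and
# has dimension 2 = (1/2) * 4, so it is Lagrangian.
L = np.column_stack([e1, e2])
print(np.allclose(L.T @ J @ L, 0), L.shape[1] == 2)   # True True
```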

(2) (Symplectic group, from [2], Lemma 2.20) Denote by $I_n$ the $n \times n$ identity matrix. Let $J_n$ be the following $2n \times 2n$ matrix:
$$J_n = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}.$$
The symplectic group is defined to be $\mathrm{Sp}(n) = \{B \in \mathrm{GL}(2n, \mathbb{R}) \mid B^T J_n B = J_n\}$. Prove the following statements (note that the use of multiplicity below refers to the algebraic multiplicity, not the geometric multiplicity):

(a) For any $B \in \mathrm{Sp}(n)$, $\lambda$ is an eigenvalue if and only if $\lambda^{-1}$ is as well. Moreover, they have the same multiplicities. [Hint: Is $B^T$ related to $B^{-1}$ in some way?]
(b) If $1$ is an eigenvalue of $B \in \mathrm{Sp}(n)$, then it must occur with even multiplicity. [Hint: Start with $-1$.]
(c) Find an element of $\mathrm{SL}(4, \mathbb{R})$ that does not belong to $\mathrm{Sp}(2)$.

Solution. (a) To begin with, note that for $B \in \mathrm{Sp}(n)$ and $\lambda \in \sigma(B)$ (with, say, eigenvector $v$), we have that $Bv = \lambda v \iff \lambda^{-1} v = B^{-1} v$, so $\lambda \in \sigma(B) \iff \lambda^{-1} \in \sigma(B^{-1})$. Next, recall that it was shown in the previous assignment (see my solution to Exercise 1(a)) that a matrix has the same eigenvalues as its transpose. Furthermore, we have that
$$B^T J_n B = J_n \iff B^T = J_n B^{-1} J_n^{-1};$$
in other words, $B^T$ and $B^{-1}$ are similar matrices, which implies (by standard linear algebra) that they have the same eigenvalues. Indeed, given $\lambda \in \sigma(B^{-1})$ with corresponding eigenvector $v$, we have that
$$B^T (J_n v) = J_n B^{-1} J_n^{-1} (J_n v) = J_n (B^{-1} v) = \lambda (J_n v),$$

and given $\lambda \in \sigma(B^T)$ with corresponding eigenvector $v$, we have that
$$B^{-1} (J_n^{-1} v) = J_n^{-1} B^T J_n (J_n^{-1} v) = J_n^{-1} (B^T v) = \lambda (J_n^{-1} v).$$

Putting all of this together, we have shown that
$$\lambda \in \sigma(B) \iff \lambda^{-1} \in \sigma(B^{-1}) \iff \lambda^{-1} \in \sigma(B^T) = \sigma(B).$$
If $\lambda \in \sigma(B)$ has (algebraic) multiplicity $k$, then simply apply this argument to each copy of $\lambda$ to get $k$ copies of $\lambda^{-1}$; hence their multiplicities are equal.

(b) Let $\lambda_1, \dots, \lambda_k$ denote the (not necessarily distinct) eigenvalues of $B \in \mathrm{Sp}(n)$ which are not equal to $1$ or $-1$, and let $1$ and $-1$ have algebraic multiplicity $r$ and $m$, respectively (note that either of these, or both, could be zero). In class we saw that $\mathrm{Sp}(n) \subset \mathrm{SL}(2n, \mathbb{R})$ (see also Exercise 5 below), and so
$$1 = \det(B) = \lambda_1 \cdots \lambda_k \cdot 1^r \cdot (-1)^m = (-1)^m,$$
where the last equality follows from 2(a), which says that each eigenvalue can be paired with its inverse, so the product $\lambda_1 \cdots \lambda_k$ reduces to $1$. This, of course, implies
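(The pairing in (a) can be seen concretely. The sketch below uses a hand-picked matrix of the block form $\operatorname{diag}(A, (A^T)^{-1})$, which satisfies $B^T J_2 B = J_2$ for any invertible $A$; this particular construction is my own choice of example, not something the exercise prescribes.)

```python
import numpy as np

J2 = np.block([[np.zeros((2, 2)), np.eye(2)],
               [-np.eye(2), np.zeros((2, 2))]])

# diag(A, (A^T)^{-1}) is symplectic for any invertible A.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                    # eigenvalues 2 and 3
B = np.block([[A, np.zeros((2, 2))],
              [np.zeros((2, 2)), np.linalg.inv(A.T)]])

print(np.allclose(B.T @ J2 @ B, J2))          # True: B lies in Sp(2)

eigs = np.linalg.eigvals(B).real
print(np.sort(eigs))                          # [1/3, 1/2, 2, 3]
print(np.allclose(np.sort(eigs), np.sort(1.0 / eigs)))   # True: spectrum closed under inversion
```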

that $m$ is even, i.e. $-1$ has even multiplicity (say, $m = 2q$). Lastly, since the $\lambda_i$'s are all paired, we must have that $k$ is even (say, $k = 2p$), so
$$\text{total number of eigenvalues} = 2n = k + r + m = 2p + r + 2q \implies r = 2(n - p - q),$$
i.e. $1$ also has even multiplicity.

(c) Let
$$B = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 1/3 \end{pmatrix}.$$
Then clearly we have $\det(B) = 2 \cdot \tfrac{1}{2} \cdot 3 \cdot \tfrac{1}{3} = 1$, so $B \in \mathrm{SL}(4, \mathbb{R})$; however,
$$B^T J_2 B = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 1/3 \end{pmatrix}
\begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 1/3 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 6 & 0 \\ 0 & 0 & 0 & 1/6 \\ -6 & 0 & 0 & 0 \\ 0 & -1/6 & 0 & 0 \end{pmatrix} \neq J_2,$$
so $B \notin \mathrm{Sp}(2)$.
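(The counterexample in (c) takes only a few lines to double-check numerically; this is a sanity check, not part of the argument.)

```python
import numpy as np

J2 = np.block([[np.zeros((2, 2)), np.eye(2)],
               [-np.eye(2), np.zeros((2, 2))]])
B = np.diag([2.0, 0.5, 3.0, 1.0 / 3.0])

print(np.isclose(np.linalg.det(B), 1.0))      # True: B is in SL(4, R)
print(np.allclose(B.T @ J2 @ B, J2))          # False: B is not in Sp(2)
print(B.T @ J2 @ B)                           # entries +-6 and +-1/6 where J_2 has +-1
```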

(3) True or false. No justifications are needed.


(a) T / F: There exists a one-dimensional subspace of $(\mathbb{R}^2, \omega_0)$ that is not Lagrangian.
(b) T / F: Any hyperplane (i.e. a codimension one subspace) of $(\mathbb{R}^{2n}, \omega_0)$ is coisotropic.

(c) T / F: Any element of $\mathrm{Sp}(n)$ is diagonalizable.

Solution. (Justification is not needed... but here it is anyway.)

(a) False: in fact, every one-dimensional subspace of any two-dimensional symplectic vector space $(V, \omega)$ is Lagrangian. To see why, consider such a subspace $U \subset V$, and choose a basis $\{e_1\}$ for $U$. We proved in Exercise 1(e) above that $U$ is Lagrangian if and only if it is isotropic and has half the dimension of the whole space. The latter is certainly true, so we need to show that $U$ must be isotropic. If not, i.e. $U \not\subset U^\omega$, then there exists some $u = \lambda e_1 \in U$ so that $u \notin U^\omega$. This, however, means that there must exist at least one element $u' = \mu e_1 \in U$ such that $\omega(u, u') \neq 0$. If this is the case, then we have
$$0 \neq \omega(u, u') = \omega(\lambda e_1, \mu e_1) = \lambda \mu \, \omega(e_1, e_1) = 0,$$
a contradiction (the last equality follows from the fact that $\omega$ is skew-symmetric). Thus, we must have $u \in U^\omega$, so $U \subset U^\omega$; hence $U$ is Lagrangian by 1(e).

(b) True: in fact, this is true in general. Indeed, given any codimension one subspace $U$ of a symplectic vector space $(V, \omega)$, we have by 1(a) that
$$\dim V = \dim U + \dim U^\omega = \dim V - 1 + \dim U^\omega,$$
and so we must have that $\dim U^\omega = 1$. Now suppose $U^\omega \not\subset U$; then, since $U^\omega$ is one-dimensional, $U \cap U^\omega = \{0\}$, and together with 1(a) this gives $V = U \oplus U^\omega = U^\omega \oplus (U^\omega)^\omega$ (using 1(b)). But then from our proof of part (d) of Exercise 1 above, applied to the subspace $U^\omega$, we have that $U^\omega$ is a symplectic subspace. This means that $\omega|_{U^\omega \times U^\omega}$ is non-degenerate, which is a contradiction: $U^\omega$ is a one-dimensional subspace and $\omega$ is antisymmetric, so $\omega$ must vanish on $U^\omega \times U^\omega$. Thus, we must have that $U^\omega \subset U$, i.e. $U$ is coisotropic.

(c) False: for example, consider
$$B = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.$$
Then we have that

$$B^T J_1 B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = J_1,$$
so $B \in \mathrm{Sp}(1)$. However, $B$ is not diagonalizable. Indeed,
$$\det(B - \lambda I) = \det \begin{pmatrix} 1 - \lambda & 1 \\ 0 & 1 - \lambda \end{pmatrix} = (1 - \lambda)^2,$$
and so $1$ is the only eigenvalue of $B$. If $B$ were diagonalizable, it would then be similar to the identity matrix, and hence equal to the identity matrix (the only matrix similar to $I$ is $I$ itself), which it is not. Therefore $B$ cannot be diagonalizable.
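(A numerical restatement of (c), purely as a sanity check: $B$ satisfies the symplectic condition, its only eigenvalue is $1$, and the corresponding eigenspace is one-dimensional, which is another way of seeing that $B$ is not diagonalizable.)

```python
import numpy as np

J1 = np.array([[0.0, 1.0],
               [-1.0, 0.0]])
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

print(np.allclose(B.T @ J1 @ B, J1))             # True: B is in Sp(1)
print(np.linalg.eigvals(B))                      # [1. 1.]: eigenvalue 1 with algebraic multiplicity 2

# Geometric multiplicity of 1 is dim ker(B - I) = 2 - rank(B - I) = 1 < 2,
# so there is no basis of eigenvectors and B is not diagonalizable.
print(2 - np.linalg.matrix_rank(B - np.eye(2)))  # 1
```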

(4) Suppose that $W$ is a finite-dimensional vector space, and let $W^*$ be its dual space. For any subspace $U$ of $W$, set $A(U) = \{f \in W^* \mid f(u) = 0 \text{ for any } u \in U\}$, the annihilator of $U$. Show that $U \oplus A(U)$ is a Lagrangian subspace of $(W \oplus W^*, \Omega_{\mathrm{can}})$. You can find the definition of $(W \oplus W^*, \Omega_{\mathrm{can}})$ in [1], Problem 9 of Homework 1.

Proof. First, consider $(u, \alpha) \in U \oplus A(U)$. Then for any $(u', \alpha') \in U \oplus A(U)$ we have that
$$\Omega_{\mathrm{can}}\bigl((u, \alpha), (u', \alpha')\bigr) = \alpha(u') - \alpha'(u) = 0 - 0 = 0,$$
since both $\alpha$ and $\alpha'$ annihilate $U$. This means that $(u, \alpha) \in \bigl[U \oplus A(U)\bigr]^{\Omega_{\mathrm{can}}}$, and so $U \oplus A(U)$ is an isotropic subspace. With this in mind, Exercise 1(e) above implies that showing $\dim\bigl(U \oplus A(U)\bigr) = \frac{1}{2} \dim(W \oplus W^*)$ will prove that $U \oplus A(U)$ is Lagrangian. In order to show this, we first prove the following lemma:

Lemma. Let $U \subset W$ be a subspace of an $n$-dimensional vector space $W$. Denote by $\{e_1, \dots, e_k\}$ a basis of $U$ and extend it to a basis of $W$, denoted by $\{e_1, \dots, e_k, f_1, \dots, f_{n-k}\}$. Furthermore, let $\{e_1^*, \dots, e_k^*, f_1^*, \dots, f_{n-k}^*\}$ be the dual basis. Then $A(U) = \operatorname{span}\{f_i^*\}_{i=1}^{n-k}$.

Proof. Consider an arbitrary element $x = \sum_i a_i e_i^* + \sum_j b_j f_j^* \in A(U)$. Then for $1 \leq m \leq k$ we have that $e_m \in U$, and so
$$0 = x(e_m) = \sum_i a_i e_i^*(e_m) + \sum_j b_j f_j^*(e_m) = a_m;$$
hence we have $A(U) \subset \operatorname{span}\{f_i^*\}_{i=1}^{n-k}$. On the other hand, given $y = \sum_i a_i f_i^* \in \operatorname{span}\{f_i^*\}_{i=1}^{n-k}$ and an arbitrary $u = \sum_j b_j e_j \in U$, we have that
$$y(u) = \sum_i \sum_j a_i b_j f_i^*(e_j) = 0;$$
thus we have the other containment, and so $A(U) = \operatorname{span}\{f_i^*\}_{i=1}^{n-k}$.

Applying this to our situation, the lemma implies, in particular, that $\dim(A(U)) = \dim W - \dim U$. Finally, using the fact that dimension is additive over the Cartesian product of vector spaces, we have that
$$\dim\bigl(U \oplus A(U)\bigr) = \dim U + \dim(A(U)) = \dim U + \dim W - \dim U = \dim W = \tfrac{1}{2}(\dim W + \dim W) = \tfrac{1}{2}(\dim W + \dim W^*) = \tfrac{1}{2} \dim(W \oplus W^*),$$
as desired. As mentioned above, this fact, combined with the fact that $U \oplus A(U)$ is isotropic, together with Exercise 1(e), implies that $U \oplus A(U)$ is Lagrangian.
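(A coordinate sanity check of the statement, with hypothetical choices made only for illustration: $W = \mathbb{R}^3$, a one-dimensional $U$, and the sign convention $\Omega_{\mathrm{can}}((u, \alpha), (u', \alpha')) = \alpha(u') - \alpha'(u)$, whose matrix in the basis $e_1, e_2, e_3, e_1^*, e_2^*, e_3^*$ is the block matrix built below.)

```python
import numpy as np

n = 3
# Matrix of Omega_can on W + W* = R^3 + (R^3)* in the basis (e_i; e_i^*),
# with the convention Omega_can((u, a), (u', a')) = a(u') - a'(u).
Omega = np.block([[np.zeros((n, n)), -np.eye(n)],
                  [np.eye(n), np.zeros((n, n))]])

# A sample subspace U of W = R^3, spanned by the columns of U_mat.
U_mat = np.array([[1.0],
                  [2.0],
                  [0.0]])

# A(U) = { a in W* : a(u) = 0 for all u in U } = ker(U_mat^T), via the SVD.
_, s, Vt = np.linalg.svd(U_mat.T)
ann = Vt[int(np.sum(s > 1e-12)):].T            # columns form a basis of A(U)

# Basis of L = U + A(U) inside W + W*: the vectors (u, 0) and (0, a).
L = np.block([[U_mat, np.zeros((n, ann.shape[1]))],
              [np.zeros((n, U_mat.shape[1])), ann]])

print(np.allclose(L.T @ Omega @ L, 0))         # True: L is isotropic
print(np.linalg.matrix_rank(L) == n)           # True: dim L = 3 = (1/2) dim(W + W*), so L is Lagrangian
```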

(5) (Bonus) Prove that $\mathrm{Sp}(n) \subset \mathrm{SL}(2n, \mathbb{R})$ using block matrices. You can find some of the properties of block matrices at http://en.wikipedia.org/wiki/Determinant#Block_matrices.

Solution. We can begin by simply taking the determinant of both sides of the defining equation for $B \in \mathrm{Sp}(n)$:
$$\det(J_n) = \det(B^T J_n B) = \det(B^T) \det(J_n) \det(B).$$
In case it wasn't already painfully clear, using the fifth property of block matrices from the above link we have that
$$\det(J_n) = \det \begin{pmatrix} 0_n & I_n \\ -I_n & 0_n \end{pmatrix} = \det\bigl(0_n \cdot 0_n - I_n \cdot (-I_n)\bigr) = \det(I_n) = 1,$$
where $0_n$ denotes the $n \times n$ zero matrix (of course). Substituting this above and using the previously proven fact that $\det(B^T) = \det(B)$, we have $1 = \det(B)^2$, i.e. $\det(B) = \pm 1$. Now we need to rule out the case $\det(B) = -1$. To do so, first notice that we now know $B$ is invertible, so we can rewrite the defining equation for $\mathrm{Sp}(n)$ as $B^T J_n = J_n B^{-1}$. This means that
$$(B^{-1})^T J_n B^{-1} = (B^{-1})^T B^T J_n = (B B^{-1})^T J_n = J_n,$$
and so $B^{-1} \in \mathrm{Sp}(n)$ as well. Moreover, this yields
$$(B^{-1})^T J_n B^{-1} = J_n \iff (B^{-1})^{-1} J_n^{-1} \bigl((B^{-1})^T\bigr)^{-1} = J_n^{-1} \iff B(-J_n)B^T = -J_n \iff B J_n B^T = J_n$$
(taking inverses of both sides and using $J_n^{-1} = -J_n$),

and so $B \in \mathrm{Sp}(n)$ also satisfies the latter equation. From here, let us write $B$ in terms of $n \times n$ block matrices:
$$B = \begin{pmatrix} B_1 & B_2 \\ B_3 & B_4 \end{pmatrix}.$$
Then, assuming $\det(B_1) \neq 0$, the third property in the above link, coupled with other previously proven properties of the determinant, yields
$$\det(B) = \det(B_1) \det(B_4 - B_3 B_1^{-1} B_2) = \det(B_1) \det\bigl((B_4 - B_3 B_1^{-1} B_2)^T\bigr) = \det\bigl(B_1 (B_4 - B_3 B_1^{-1} B_2)^T\bigr) = \det\bigl(B_1 B_4^T - B_1 B_2^T (B_1^T)^{-1} B_3^T\bigr).$$

In order to make sense of this, let us first look at what the new form of the defining equation for $B$ derived above tells us in terms of these $n \times n$ blocks:
$$B J_n B^T = J_n \iff \begin{pmatrix} B_1 & B_2 \\ B_3 & B_4 \end{pmatrix} \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix} \begin{pmatrix} B_1^T & B_3^T \\ B_2^T & B_4^T \end{pmatrix} = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix} \iff \begin{pmatrix} B_1 B_2^T - B_2 B_1^T & B_1 B_4^T - B_2 B_3^T \\ B_3 B_2^T - B_4 B_1^T & B_3 B_4^T - B_4 B_3^T \end{pmatrix} = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix},$$
which gives
$$B_1 B_2^T = B_2 B_1^T, \qquad B_3 B_4^T = B_4 B_3^T, \qquad B_1 B_4^T - B_2 B_3^T = I_n = B_4 B_1^T - B_3 B_2^T.$$

Substituting these above yields


$$\det(B) = \det\bigl(B_1 B_4^T - B_1 B_2^T (B_1^T)^{-1} B_3^T\bigr) = \det\bigl(B_1 B_4^T - B_2 B_1^T (B_1^T)^{-1} B_3^T\bigr) = \det\bigl(B_1 B_4^T - B_2 B_3^T\bigr) = \det(I_n) = 1,$$
where the second equality uses $B_1 B_2^T = B_2 B_1^T$, the third cancels $B_1^T (B_1^T)^{-1}$, and the fourth uses $B_1 B_4^T - B_2 B_3^T = I_n$.
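(As a numerical illustration of the statement, and certainly not a proof: symplectic matrices assembled from two standard kinds of building blocks, $\operatorname{diag}(A, (A^T)^{-1})$ for invertible $A$ and the shear $\bigl(\begin{smallmatrix} I & S \\ 0 & I \end{smallmatrix}\bigr)$ for symmetric $S$, all have determinant $1$. Both block forms are easily checked against the defining equation; the random choices below are my own.)

```python
import numpy as np

n = 2
I, Z = np.eye(n), np.zeros((n, n))
Jn = np.block([[Z, I], [-I, Z]])

rng = np.random.default_rng(0)
A = rng.normal(size=(n, n)) + 2 * I            # generically invertible
S = rng.normal(size=(n, n))
S = S + S.T                                    # symmetric

B1 = np.block([[A, Z], [Z, np.linalg.inv(A.T)]])   # diag(A, (A^T)^{-1})
B2 = np.block([[I, S], [Z, I]])                    # shear with symmetric S
B = B1 @ B2                                        # a product of symplectic matrices is symplectic

for M in (B1, B2, B):
    print(np.allclose(M.T @ Jn @ M, Jn), np.isclose(np.linalg.det(M), 1.0))
# each line prints: True True
```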

References

[1] A. Cannas da Silva. Lectures on Symplectic Geometry. Lecture Notes in Mathematics, No. 1764. Springer, 2008.
[2] D. McDuff and D. Salamon. Introduction to Symplectic Topology. Oxford Mathematical Monographs. Oxford University Press, 2nd edition, 1999.
