See discussions, stats, and author profiles for this publication at:
https://www.researchgate.net/publication/267982491

Introduction to Bilinear Forms

Article · November 2014

Author: Prasanth G. Narasimha-Shenoi, Government College Chittur

All content following this page was uploaded by Prasanth G. Narasimha-Shenoi on 09 November 2014.


Introduction to Bilinear Forms

Project submitted to the Department of Science & Technology
as part of the Summer Vacation Research Work in Mathematics

of

Sruthy Murali

DST Inspire Scholar

171/2009

Under the supervision of

Dr. G N Prasanth
Department of Mathematics
Government College Chittur
Palakkad - 678104
2014
DECLARATION

I, Sruthy Murali, do hereby declare that the project entitled “Introduction to
Bilinear Forms” is a bonafide record of project work done by me.

Sruthy Murali
Dr. G N Prasanth¹
Assistant Professor
Department of Mathematics
Government College, Chittur
Palakkad - 678104, Kerala, India

CERTIFICATE

This is to certify that the project entitled “Introduction to Bilinear Forms”
submitted to the Department of Science and Technology as part of the Summer
Vacation Research Work is a bonafide record of project work carried out by Sruthy
Murali under my supervision. This project will also be submitted to the University
of Calicut in partial fulfilment of the requirements for the award of the Master’s
Degree of Science in Mathematics.

Dr. G N Prasanth

¹ Mob: 919447565939, E-mail: prasanthgns@gmail.com
ACKNOWLEDGEMENT

First and foremost, I express my profound gratitude to my guide and mentor
Dr. G N Prasanth, whose able guidance and constant encouragement helped me
study many results and successfully complete this work.
I thank Prof. K K Chidambaran, Head, Department of Mathematics, Govern-
ment College Chittur, for the support rendered to me throughout this project work.
I am grateful to Dr. Reji T, Department of Mathematics, Government College
Chittur, for the fruitful and encouraging discussions that helped me finalize this
project work.
I express my sincere thanks to the faculty of the Department of Mathematics,
Government College Chittur, for their valuable suggestions, encouragement and
advice.
I thank the Principal, Government College Chittur, for giving me permission
to do this project as part of the Summer Vacation Research Work under the
Department of Science and Technology’s INSPIRE Scholarship.
I am very much thankful to the Department of Science & Technology, Govt. of
India, for selecting me under the INSPIRE Program.

Sruthy Murali
Contents

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2. Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

3. Bilinear Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

4. Types of Bilinear Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

5. Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

6. Bilinear Forms on Finite Dimensional Vector Spaces . . . . . . . . . . . . . 28


6.1 Matrices of Bilinear Forms in Different Ordered Bases . . . . . . . . . 32
6.2 Rank of a Bilinear Form . . . . . . . . . . . . . . . . . . . . . . . . . 34
6.3 Symmetric Bilinear Forms . . . . . . . . . . . . . . . . . . . . . . . . 37
6.3.1 Diagonalization of Bilinear Forms . . . . . . . . . . . . . . . . 38

7. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
1. Introduction
Linear algebra is the branch of mathematics which treats the common properties
of algebraic systems consisting of a set together with a reasonable notion of a
‘linear combination’ of elements of the set. This project is a brief survey of bilinear
forms, which generalize the so-called inner products on real or complex spaces to
vector spaces over an arbitrary field K. Inner products are maps which are not
completely linear, in the sense that they are linear in the first argument and
conjugate linear in the second. So a natural question arises: is there a similar
class of maps which includes inner products and can be considered a generalization
of them? Bilinear forms include certain types of inner products and are linear in
both of their arguments. This motivated me to read and study further on bilinear
forms.
The second chapter includes the basic definitions, notations and examples which
are needed to support the study of bilinear forms. The third chapter consists of
a study of bilinear forms on arbitrary vector spaces, with some examples. In the
fourth chapter, various types of bilinear forms are discussed. The fifth chapter deals
with the orthogonality of bilinear forms. In the sixth chapter the main study is of
bilinear forms on finite dimensional vector spaces. The matrix of a bilinear form
corresponding to an ordered basis is defined, and a study is made of how this
matrix changes when the basis is changed. The rank of a bilinear form and its
relation to its matrix are discussed. Then the study is restricted to symmetric
bilinear forms, and a characterization of diagonalizable bilinear forms is given.


The main aim of this project is to provide an introduction to bilinear forms
and some of their basic properties and characterizations. We referred to the books
[1, 2, 3, 6], used some of their theorems and proofs as needed in the sequel, and
elaborated some of the proofs.


2. Preliminaries
The main purpose of this chapter is to give a brief account of useful concepts and
facts which are required. Most of the definitions are taken from [2, 3, 5].

Definition 2.1 (Field): A field is a non empty set K along with functions
+ : K × K → K and . : K × K → K such that

1. (K, +) is an abelian group, that is

(a) k1 + k2 = k2 + k1 , ∀ k1 , k2 ∈ K,

(b) k1 + (k2 + k3 ) = (k1 + k2 ) + k3 , ∀ k1 , k2 and k3 ∈ K,

(c) there exists a unique element 0 called the zero element of K such that

k + 0 = k = 0 + k, ∀ k ∈ K, and

(d) to every k ∈ K, there corresponds a unique element −k ∈ K such that


k + (−k) = 0 = (−k) + k.

2. (K − {0}, .) is an abelian group, that is

(a) k1 .k2 = k2 .k1 , ∀ k1 , k2 ∈ K,

(b) k1 .(k2 .k3 ) = (k1 .k2 ).k3 , ∀ k1 , k2 and k3 ∈ K,

(c) there exists a unique element 1 ∈ K called the unit element such that

k.1 = k for every scalar k ∈ K and



(d) to every nonzero k ∈ K, there corresponds a unique element k^{-1} (or 1/k)
called the inverse of k such that k.k^{-1} = 1 = k^{-1}.k.

3. ‘.’ is distributive with respect to ‘+’, that is, k1 .(k2 + k3 ) = k1 .k2 + k1 .k3 ,
∀ k1 , k2 and k3 ∈ K.

Example 2.2: The set Q of all rational numbers with the usual addition and mul-

tiplication is a field, and the same is true of the set R of all real numbers and the
set C of all complex numbers.

Throughout this project the elements of a field K are referred to as scalars.

Definition 2.3 (Characteristic of a field): The characteristic of a field K de-


noted by charK is the least positive integer n such that nk = 0, ∀ k ∈ K. If no

such integer exists, we say that K has characteristic zero.

Definition 2.4 (Vector space): Let K be a field. A vector space over K is a non
empty set V of elements called vectors, along with a function + : V × V → V , called
addition, and a function . : K × V → V , called the scalar multiplication, such that
for every x, y and z ∈ V and k1 , k2 ∈ K,

1. x + y = y + x,

2. x + (y + z) = (x + y) + z,

3. there exists a unique vector 0 in V such that x + 0 = x,

4. there exists a unique vector −x ∈ V such that x + (−x) = 0,

5. k.(x + y) = k.x + k.y,

6. (k1 + k2 ).x = k1 .x + k2 .x,

7. 1.x = x , and

8. (k1 k2 ).x = k1 .(k2 .x).

Note: We use kx in place of k.x.

Example 2.5: Let P be the set of all polynomials, with complex coefficients, in
a single variable t. To make P into a complex vector space, we interpret vector

addition and scalar multiplication as the ordinary addition of two polynomials and
the multiplication of a polynomial by a complex number; the zero vector in P is the
polynomial with all coefficients zero.

Example 2.6: Let Cn , n = 1, 2, . . ., be the set of all n−tuples of complex numbers.


If x = (x1 , . . . , xn ) and y = (y1 , . . . , yn ) are elements of Cn , we write, by definition

x + y = (x1 + y1 , . . . , xn + yn ),

kx = (kx1 , . . . , kxn )

0 = (0, . . . , 0)

−x = (−x1 , . . . , −xn )

Then Cn is a complex vector space over C.

Example 2.7 (The space of m × n matrices, K^{m×n}): Let K be any field and
let m and n be two positive integers. Let K^{m×n} be the set of all m × n matrices
over the field K. The sum of two vectors A and B in K^{m×n} is defined by
(A + B)ij = Aij + Bij , and the scalar multiplication is defined by (cA)ij = cAij . With
these operations K^{m×n} is a vector space over K.

Definition 2.8 (Subspace of a vector space): Let V be a vector space over a


field K. A subspace of V is a subset W of V which is itself a vector space over K
with the operations of vector addition and scalar multiplication on V .

Example 2.9: If V is any vector space, V is a subspace of V ; the subset consisting


of the zero vector alone is a subspace of V , called the zero subspace of V .

Example 2.10: An n × n matrix A = [aij ] over a field K is symmetric(skew sym-

metric) if aij = aji (aij = −aji ) for each i and j. The symmetric (skew symmetric)
matrices form a subspace of the space of all n × n matrices over K, see example 2.7.

Definition 2.11 (Linear combination): Let V be a vector space over a field K.


A vector y in V is said to be a linear combination of the vectors x1 , . . . , xn in V
provided that there exist scalars k1 , . . . , kn in K such that,

y = k1 x1 + . . . + kn xn = Σ_{i=1}^{n} ki xi .

Definition 2.12 (Span of a set): Let V be a vector space over a field K. Let
S ⊆ V . The subspace spanned by S is defined to be the set of all linear combinations

of vectors in S.

Definition 2.13 (Linear dependence of vectors): Let V be a vector space over


a field K. A finite set {xi } of vectors is linearly dependent if there exists a set {ki } of
scalars, not all zero, such that Σ_i ki xi = 0. Otherwise they are linearly independent.

Definition 2.14 (Basis): A basis in a vector space V over a field K is a set B of

linearly independent vectors such that every vector in V is a linear combination of


elements of B. The dimension of a vector space is the number of vectors in any of
its bases. A vector space V is said to be finite dimensional if it has a finite basis.

Example 2.15: A basis in Cn is the set of vectors xi , i = 1, . . . , n defined by the


condition that the j−th coordinate of xi is δij . That is Cn over C is finite dimensional
and has dimension n.

Example 2.16: Now in P, the space of polynomials, with complex coefficients,


in a variable t, the set {xn }, where xn (t) = t^n , n = 0, 1, 2, . . ., is a basis; every
polynomial is, by definition, a linear combination of a finite number of the xn . Moreover


P has no finite basis, for, given any set of polynomials, we can find a polynomial
of higher degree than any of them; this latter polynomial is obviously not a linear
combination of the former ones.

Definition 2.17 (Ordered basis): If V is a finite dimensional vector space, an

ordered basis for V is a finite sequence of vectors which are linearly independent
and any vector in V can be written as a linear combination of these vectors.

Remark 2.18: Now suppose V is a finite dimensional vector space over a field K
and that B = {x1 , . . . , xn } is an ordered basis for V . Given a vector x ∈ V , there is
a unique n-tuple (k1 , . . . , kn ) of scalars such that x = Σ_{i=1}^{n} ki xi . The n-tuple
is unique, because if we also have x = Σ_{i=1}^{n} ℓi xi , then Σ_{i=1}^{n} (ki − ℓi )xi = 0
and the linear independence of the xi ’s tells us that ki − ℓi = 0 for each i and hence
ki = ℓi . This ki is called the ith coordinate of x relative to the ordered basis
B = {x1 , . . . , xn }. We shall call the matrix X = [k1 , . . . , kn ]^T the coordinate
matrix of x relative to the ordered basis B, denoted by [x]B .
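The coordinates of Remark 2.18 can be computed by solving a linear system. A minimal numeric sketch in R^2 (the basis and vector below are assumed examples, not from the text): with the basis vectors stored as the columns of a matrix P, the coordinate matrix [x]_B solves P [x]_B = x.

```python
import numpy as np

# Ordered basis B = {x1, x2} of R^2, stored as the columns of P.
# This particular basis is an assumed example.
P = np.array([[1., 1.],
              [0., 1.]])   # x1 = (1, 0), x2 = (1, 1)
x = np.array([3., 2.])

# The coordinate matrix [x]_B solves P @ coords = x.
coords = np.linalg.solve(P, x)

assert np.allclose(coords, [1., 2.])   # x = 1*x1 + 2*x2
assert np.allclose(P @ coords, x)      # reconstruction of x from its coordinates
```

Uniqueness of the coordinates corresponds to P being invertible, which is exactly the linear independence of the basis vectors.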

Definition 2.19 (Sum of two vector spaces): Let V and W be vector spaces
over a field K. Then the sum V + W is the space of all sums x + y such that x ∈ V
and y ∈ W . That is, V + W = {x + y : x ∈ V and y ∈ W }.

Theorem 2.20: If W1 and W2 are finite dimensional subspaces of a vector space


V , then W1 + W2 is finite dimensional and

dim (W1 + W2 ) = dim W1 + dim W2 − dim (W1 ∩ W2 ).

Definition 2.21 (Linear independence of subspaces): Let W1 , . . . , Wk be sub-


spaces of the vector space V . We say that W1 , . . . , Wk are independent if

x1 + . . . + xk = 0, xi ∈ Wi implies that each xi = 0.

Definition 2.22 (Direct sum): Let V be a vector space over a field K and W1
and W2 be two subspaces of V . Then V is said to be the direct sum of W1 and W2
denoted by V = W1 ⊕ W2 , if V = W1 + W2 and W1 ∩ W2 = {0}.

Example 2.23: Let n be a positive integer and let K be a subfield of the field

of complex numbers C and let V be the space of all n × n matrices over K. Let
W1 be the subspace of all symmetric matrices and let W2 be the space of all skew
symmetric matrices, then V = W1 ⊕ W2 .
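The decomposition in Example 2.23 is constructive: every matrix A splits as A = (A + A^T)/2 + (A − A^T)/2, the first summand symmetric and the second skew symmetric. A numeric sketch (the matrix below is an arbitrary assumed example; note the division by 2 uses that the characteristic is not 2):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [4., 5., 6.],
              [7., 8., 9.]])   # arbitrary example matrix

S = (A + A.T) / 2   # symmetric part, lies in W1
N = (A - A.T) / 2   # skew symmetric part, lies in W2

assert np.allclose(S, S.T)     # S is symmetric
assert np.allclose(N, -N.T)    # N is skew symmetric
assert np.allclose(S + N, A)   # A = S + N, so V = W1 + W2
# W1 ∩ W2 = {0}: a matrix that is both symmetric and skew satisfies M = -M, so M = 0.
```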

Definition 2.24 (Linear Transformations): Let V and W be vector spaces over


a field K. A linear transformation from V into W is a function T from V into W
such that, T (kx + y) = kT (x) + T (y), ∀x, y ∈ V and k ∈ K.

Note : We use T x instead of T (x).

Example 2.25: If V is any vector space, the identity map I, defined by Ix = x,


is a linear transformation from V into V . The zero map 0, defined by 0x =0, is a
linear transformation from V into V .

Note : A linear transformation T from a vector space V into itself is called a linear
operator on V .

Example 2.26 (Differentiation operator): Let K be a field and let V be the


space of polynomial functions f from K into K, given by f (x) = c0 + c1 x + . . . + ck x^k .
Let (Df )(x) = c1 + 2c2 x + . . . + kck x^{k−1} . Then D is a linear operator on V .

Remark 2.27: The collection of all linear transformations from V into W forms a
vector space naturally by the addition and scalar multiplication defined by
(T +U)(x) = T x+Ux and (kT )(x) = kT x, where T and U are linear transformations

from V into W, x ∈ V, k ∈ K. We denote this space by L(V, W ).

Remark 2.28: If V is a vector space over a field K, a linear transformation f from

V into the scalar field K is called a linear functional on V . The collection of all
linear functionals on V forms a vector space by the addition and scalar multipli-
cation defined above. We denote this space by V * and call it the dual space of V,
V * = L(V, K). If V is finite dimensional so is V * . Let B ={x1 , . . . , xn } be a basis

for V . For each i consider the linear functional fi defined by, fi (xj ) = δij . Then

B* ={f1 , . . . , fn } is a basis for V * . For x ∈ V , we have x = k1 x1 + . . . + kn xn for


some scalars ki , i = 1, . . . , n.
Then for each i, fi (x) = fi (k1 x1 + . . . + kn xn ) = k1 fi (x1 ) + . . . + kn fi (xn ) = ki . That
is, fi (x) is nothing but the ith coordinate of x in the ordered basis B.

Definition 2.29: Let V and W be vector spaces over the field K and let T be a
linear transformation from V into W . The null space of T is the set of all vectors x
in V such that T x = 0.

If V is finite dimensional, the rank of T is the dimension of the range of T and the
nullity of T is the dimension of the null space of T .

Theorem 2.30 (Rank - Nullity theorem): Let V and W be vector spaces over
a field K and let T be a linear transformation from V into W . Suppose that V is

finite dimensional. Then, rank(T )+ nullity (T ) = dim V .
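The Rank-Nullity theorem can be checked numerically for a linear map given by a matrix. A sketch (the matrix below is an assumed example):

```python
import numpy as np

# T : R^4 -> R^3 represented by a 3x4 matrix (assumed example).
T = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],    # = 2 * first row, so dependent
              [0., 1., 0., 1.]])

rank = np.linalg.matrix_rank(T)   # dim of the range of T
nullity = T.shape[1] - rank       # dim of the null space of T

assert rank == 2
assert rank + nullity == 4        # rank(T) + nullity(T) = dim V
```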

Definition 2.31 (Inner product): Let V be a vector space over a field K (R or C).
An inner product on V is a function which assigns to each ordered pair of vectors
x, y in V a scalar ⟨x, y⟩ in K such that the following conditions are satisfied:

1. ⟨x, y⟩ is the complex conjugate of ⟨y, x⟩;

2. ⟨k1 x1 + k2 x2 , y⟩ = k1 ⟨x1 , y⟩ + k2 ⟨x2 , y⟩;

3. ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0,

where x, x1 , x2 , y ∈ V and k1 , k2 ∈ K.

An inner product space is a vector space with an inner product defined on it.

Example 2.32: On K^n (K = R or C) there is an inner product which we call the
standard inner product. It is defined on x = (x1 , . . . , xn ) and y = (y1 , . . . , yn ) by
⟨x, y⟩ = Σ_j xj ȳj , where ȳj denotes the complex conjugate of yj . When K = R,
this may be written as ⟨x, y⟩ = Σ_j xj yj , and this ⟨ , ⟩
is called the Euclidean inner product.
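The defining properties of the standard inner product on C^n can be verified numerically. A sketch with random vectors (assumed example data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# Standard inner product on C^n: <u, v> = sum_j u_j * conj(v_j)
ip = lambda u, v: np.sum(u * np.conj(v))

assert np.isclose(ip(x, y), np.conj(ip(y, x)))   # <x, y> = conjugate of <y, x>
assert np.isclose(ip(x, x).imag, 0.0)            # <x, x> is real
assert ip(x, x).real > 0                         # and positive for x != 0
```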
3. Bilinear Forms
An inner product on a vector space V over K (R or C) is a mapping from V × V to
K which is linear in the first coordinate and conjugate linear in the second; such a
mapping is called a sesquilinear form. In this chapter, we extend the notion of inner
products to vector spaces over an arbitrary field K. Major results of this chapter
are from [1, 3, 4].

Definition 3.1 (Bilinear forms): A bilinear form on a pair of vector spaces V


and W over a field K is a function B : V × W → K satisfying the following
conditions:

B(x1 + x2 , y) = B(x1 , y) + B(x2 , y)

B(x, y1 + y2 ) = B(x, y1 ) + B(x, y2 )


B(kx, y) = kB(x, y) = B(x, ky)

for x, x1 , x2 ∈ V, y, y1 , y2 ∈ W, k ∈ K. If V = W, then B is called a bilinear form


on V .

Example 3.2: Let A be an m × n matrix and let B : Rm × Rn → R be defined


by B(x, y) = xT Ay for x ∈ Rm , y ∈ Rn . Then B is clearly a bilinear form. In
particular, if m = n, A = In , the identity matrix, then it shows that the Euclidean
inner product on R^n is a bilinear form. Generally, for any inner product space V
over the set of real numbers R, the function B : V × V → R defined by B(x, y) = ⟨x, y⟩
is a bilinear form on V .
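The bilinearity of B(x, y) = x^T Ay from Example 3.2 can be checked numerically; a sketch with a random matrix (assumed example data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))    # B : R^2 x R^3 -> R
B = lambda x, y: x @ A @ y         # B(x, y) = x^T A y

x1, x2 = rng.standard_normal(2), rng.standard_normal(2)
y1, y2 = rng.standard_normal(3), rng.standard_normal(3)
k = 1.7

assert np.isclose(B(x1 + x2, y1), B(x1, y1) + B(x2, y1))  # additive in the first slot
assert np.isclose(B(x1, y1 + y2), B(x1, y1) + B(x1, y2))  # additive in the second slot
assert np.isclose(B(k * x1, y1), k * B(x1, y1))           # homogeneous in the first slot
assert np.isclose(B(x1, k * y1), k * B(x1, y1))           # homogeneous in the second slot
```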

Example 3.3: Let V and W be vector spaces over K with finite bases {x1 , . . . , xm }
and {y1 , . . . , yn }, respectively. Let A = (aij ) be a fixed m × n matrix with coefficients
in K. Let x = k1 x1 + . . . + km xm ∈ V, y = ℓ1 y1 + . . . + ℓn yn ∈ W . Define
B(x, y) = Σ_{i=1}^{m} Σ_{j=1}^{n} aij ki ℓj . Then B is a bilinear form on the pair (V, W ).

Example 3.4: Let V be a vector space over the field K and let f1 and f2 be two
linear functionals on V . Define B : V × V → K as B(x, y) = f1 (x)f2 (y). Then B is
a bilinear form on V .

Example 3.5: Let V be a vector space over K, and V * the dual vector space.

Let B(x, f ) = f (x), f ∈ V * , x ∈ V . Then B is a bilinear form on the pair of vector


spaces (V, V * ).

Example 3.6: Let m and n be two positive integers and K be a field. Let V be
the vector space of all m × n matrices over K. Let A be a fixed m × m matrix over
K. Define BA (X, Y ) = tr(X^T AY ), where tr denotes the trace of the matrix X^T AY ,
which is the sum of its diagonal entries. Then BA is a bilinear form on V .
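The trace form of Example 3.6 can likewise be verified numerically (random data, assumed example):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 2
A = rng.standard_normal((m, m))            # fixed m x m matrix
BA = lambda X, Y: np.trace(X.T @ A @ Y)    # B_A(X, Y) = tr(X^T A Y)

X1, X2, Y = (rng.standard_normal((m, n)) for _ in range(3))
k = 2.5

# linearity in each argument
assert np.isclose(BA(k * X1 + X2, Y), k * BA(X1, Y) + BA(X2, Y))
assert np.isclose(BA(X1, k * Y), k * BA(X1, Y))
```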

Proposition 3.7: Let V and W be vector spaces over a field K. Then the set of
all bilinear forms on the pair (V, W ) is a subspace of the space of all functions from
V × W into K.

Proof. The set of all bilinear forms on the pair (V, W ) is a subset of the space of

all functions from V × W into K. Now in order to prove that it forms a subspace,
it is enough to prove that if B1 and B2 are any two bilinear forms on (V, W ), then
kB1 + B2 for any k ∈ K is a bilinear form. Let B1 and B2 be two bilinear forms on

(V, W ). Clearly kB1 + B2 is a function from V × W into K. Now for any ℓ ∈ K,

(kB1 + B2 )(ℓx1 + x2 , y) = kB1 (ℓx1 + x2 , y) + B2 (ℓx1 + x2 , y)

= k[ℓB1 (x1 , y) + B1 (x2 , y)] + [ℓB2 (x1 , y) + B2 (x2 , y)]

= kℓB1 (x1 , y) + kB1 (x2 , y) + ℓB2 (x1 , y) + B2 (x2 , y)

= ℓ[kB1 (x1 , y) + B2 (x1 , y)] + kB1 (x2 , y) + B2 (x2 , y)

= ℓ(kB1 + B2 )(x1 , y) + (kB1 + B2 )(x2 , y)

for x1 , x2 ∈ V , y ∈ W . In a similar way we can prove that

(kB1 + B2 )(x, ℓy1 + y2 ) = ℓ(kB1 + B2 )(x, y1 ) + (kB1 + B2 )(x, y2 )

for x ∈ V , y1 , y2 ∈ W . Hence kB1 + B2 is a bilinear form, and so the collection of all
bilinear forms on (V, W ) forms a subspace of the space of all functions from V × W
into K. We denote this subspace by L(V, W, K). ✷

Definition 3.8 (Non degenerate bilinear form): A bilinear form B : (V, W ) →


K is said to be non degenerate provided that: B(x, y) = 0 for all y ∈ W implies
x = 0, and B(x, y) = 0 for all x ∈ V implies y = 0.

Remark 3.9: The usual arguments using the vector space axioms, applied to bi-
linear forms, show that B(0, y) = B(x, 0) = 0 for all x, y in V and W , respectively.
The non degeneracy condition asserts that the equation B(x, y) = 0 for all y holds
only in the case x = 0, and similarly in the other argument.

Definition 3.10 (Duality): A pair of vector spaces V and W are said to be dual
with respect to a bilinear form B : V × W → K provided that B is non degenerate.


4. Types of Bilinear Forms
In this chapter the main discussion is of bilinear forms on V . Here we will address
such questions as: does B(x, y) = 0 imply B(y, x) = 0? Does B(x, y) = B(y, x) hold
for all pairs x, y ∈ V ? We arrange bilinear forms into different types according to
these properties. Major results of this chapter are from [6]. Let B be a bilinear
form on a vector space V over a field K.

Definition 4.1: A vector x ∈ V is said to be orthogonal to y ∈ V with respect to


B if B(x, y) = 0.

Note that in general B(x, y) = 0 need not imply that B(y, x) = 0; that is, the
orthogonality relation with respect to B may not be symmetric. Consider the
following example.
example.

Example 4.2: Let A = (aij ) ∈ Rn×n such that a12 = 1 and a21 = 0. Consider
the bilinear form BA : Rn × Rn → R given by BA (x, y) = xT Ay. Consider the
standard basis of Rn , {ǫi }, ǫi = (0, . . . , 1, . . . , 0), 1 occurring in the ith position.

Then BA (ǫ2 , ǫ1 ) = a21 = 0 and BA (ǫ1 , ǫ2 ) = a12 = 1. That is ǫ2 is orthogonal to ǫ1


but ǫ1 is not orthogonal to ǫ2 .
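Example 4.2 with n = 2 can be verified directly; a minimal sketch:

```python
import numpy as np

A = np.zeros((2, 2))
A[0, 1] = 1.0                  # a12 = 1, a21 = 0
BA = lambda x, y: x @ A @ y    # B_A(x, y) = x^T A y

e1, e2 = np.eye(2)             # standard basis of R^2

assert BA(e2, e1) == 0.0       # e2 is orthogonal to e1 ...
assert BA(e1, e2) == 1.0       # ... but e1 is not orthogonal to e2
```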

Definition 4.3: The bilinear form B is said to be reflexive if the orthogonality


relation is symmetric (with respect to B), that is for x, y ∈ V , B(x, y) = 0 implies
B(y, x) = 0.

Definition 4.4: The bilinear form B is said to be symmetric if B(x, y) = B(y, x)

for all x, y ∈ V . B is said to be skew symmetric if B(x, y) = −B(y, x), for all x,
y ∈ V.

Definition 4.5: B is said to be alternating if B(x, x) = 0 for all x ∈ V .

Remark 4.6: From definitions, a symmetric or a skew symmetric bilinear form is


reflexive. An alternating bilinear form is skew symmetric and hence reflexive. Be-

cause we have the relation, B(x + y, x + y) = B(x, x) + B(x, y) + B(y, x) + B(y, y)


for all x, y ∈ V , B is alternating implies B(x + y, x + y) = 0, B(x, x) = 0 and
B(y, y) = 0. From this it follows that B(x, y) = −B(y, x). Hence B is skew sym-
metric and hence reflexive.

Proposition 4.7: A bilinear form is reflexive if and only if it is either symmetric

or alternating.

Proof. Let B be a bilinear form on a vector space V over a field K. First assume
that B is reflexive. That is B(x, y) = 0 implies B(y, x) = 0 for all x, y ∈ V . We
have for x, y, z ∈ V ,

B(x, B(x, y)z − B(x, z)y) = B(x, z)B(x, y) − B(x, z)B(x, y) = 0 (4.1)

and so by reflexivity B(B(x, y)z − B(x, z)y, x) = 0, that is

B(x, y)B(z, x) − B(x, z)B(y, x) = 0 (4.2)

In particular, for z = x, we have

B(x, x)(B(x, y) − B(y, x)) = 0 (4.3)

Now assume that B is not symmetric. We will prove that B is alternating. If for
x ∈ V , there exists y ∈ V such that B(x, y) ≠ B(y, x), then from equation 4.3 it

follows that B(x, x) = 0. Let x ∈ V such that B(x, y) = B(y, x) for all y ∈ V .

Choose v, w ∈ V such that B(v, w) ≠ B(w, v). Then from equation 4.2 replacing x
by v, y by w and z by x,

B(v, w)B(x, v) − B(v, x)B(w, v) = 0, (4.4)

that is, B(x, v)(B(v, w) − B(w, v)) = 0, and so B(x, v) = B(v, x) = 0. In a similar


way B(x, w) = 0 = B(w, x). Now B(v, x + w) = B(v, w) ≠ B(w, v) = B(x + w, v),

and so again from 4.3, we have B(x + w, x + w) = 0 from which it follows that
B(x, x) = 0, and so B is alternating. Converse part follows from the remark 4.6. ✷

Proposition 4.8: A bilinear form is reflexive if and only if it is either symmetric


or skew symmetric.

Proof. Let B be a bilinear form on a vector space V over K. Assume B is


reflexive. Then by the proposition 4.7 and remark 4.6 together imply that B is

symmetric or skew symmetric. Conversely assume B is either symmetric or skew


symmetric. Clearly it follows that B(x, y) = 0 implies B(y, x) = 0 in either case for
all x, y ∈ V . Hence B is reflexive. ✷

Proposition 4.9: Let B be a bilinear form on a vector space V over a field K.
Then B alternating implies B is skew symmetric. The converse holds if K is a field
of characteristic not equal to 2.

Proof. That B alternating implies B skew symmetric follows from Remark 4.6.
Now assume B is skew symmetric. Then for all x ∈ V , B(x, x) = −B(x, x), that is,
2B(x, x) = 0 and hence B(x, x) = 0 since charK ≠ 2, so B is alternating. ✷
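In characteristic 2 the converse of Proposition 4.9 fails. A sketch over GF(2), modeled with integer arithmetic mod 2 (an assumed toy example): the form B(x, y) = x1 y1 on GF(2)^2 is skew symmetric (since −1 = 1 in GF(2)) but not alternating.

```python
p = 2                                 # model GF(2) by integers mod 2
B = lambda x, y: (x[0] * y[0]) % p    # B(x, y) = x1 * y1 on GF(2)^2

vectors = [(a, b) for a in range(p) for b in range(p)]

# Skew symmetric: B(x, y) == -B(y, x) in GF(2) for all x, y.
for x in vectors:
    for y in vectors:
        assert B(x, y) == (-B(y, x)) % p

# Not alternating: B(e1, e1) = 1 != 0.
assert B((1, 0), (1, 0)) == 1
```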
5. Orthogonality
Recollecting the orthogonality relation: if V is a vector space over a field K, given
two vectors x, y, we say that x is orthogonal to y with respect to a bilinear form B
on V if B(x, y) = 0. We would like to describe the vectors in a vector space that
are orthogonal to everything else in that vector space. This set of vectors is referred
to as a radical. Since the orthogonality relation need not be symmetric, we must
be more specific. Given a vector space V and a bilinear form B on V we define the
left and right radicals as follows:

Definition 5.1: Let V be a vector space over a field K and let B be a bilinear form
on V . We define the left radical of V denoted by radL (V ) as:

radL (V ) = {x ∈ V : B(x, y) = 0, ∀y ∈ V }.

Similarly the right radical of V denoted by radR (V ) as:

radR (V ) = {x ∈ V : B(y, x) = 0, ∀y ∈ V }.
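For B(x, y) = x^T Ay on K^n, the left radical is the null space of A^T (since B(x, y) = 0 for all y exactly when x^T A = 0) and the right radical is the null space of A. A numeric sketch with an assumed example matrix, computing null spaces via the SVD:

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Columns form a basis of the null space of M (computed via SVD)."""
    _, s, vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vt[rank:].T

A = np.array([[1., 1.],
              [0., 0.]])      # assumed example; B(x, y) = x^T A y on R^2

rad_left = null_space(A.T)    # x with x^T A = 0, i.e. A^T x = 0
rad_right = null_space(A)     # y with A y = 0

assert np.allclose(A.T @ rad_left, 0)   # left-radical vectors pair to 0 with every y
assert np.allclose(A @ rad_right, 0)    # right-radical vectors pair to 0 with every x
assert rad_left.shape[1] == rad_right.shape[1] == 1   # both radicals are lines here
```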

Remark 5.2: While in the definition of a radical we include elements of V that


are orthogonal to all other elements of V , we may be more specific and seek only
elements of V that are orthogonal to all elements of some subset S of V . Let B be

a bilinear form on V .

⊥L (S) = {x ∈ V : B(x, y) = 0, ∀y ∈ S}

and
⊥R (S) = {x ∈ V : B(y, x) = 0, ∀y ∈ S}.

It would be nice if we did not have to distinguish between left and right
orthogonality. Assume B is a reflexive bilinear form, that is, B(x, y) = 0 implies
B(y, x) = 0, ∀ x, y ∈ V . Then given a reflexive bilinear form B and a subset S of V ,
we can write,

⊥L (S) = ⊥R (S) = S ⊥ = {x ∈ V : B(x, y) = 0, ∀y ∈ S}.

If W is a subspace of V , then we call W ⊥ the orthogonal complement of W .

Theorem 5.3: Let V be a finite dimensional vector space over a field K and let B
be a symmetric bilinear form on V . For each subspace W of V , we have

1. W ⊥ is a subspace.

2. V = {0}⊥ .

3. V ⊥ = {0} if and only if B is non degenerate.

4. If dim V = n and dim W = m, then dim W ⊥ ≥ n − m.

5. The restriction of B to W is non degenerate if and only if W ∩ W ⊥ = {0}.

6. V = W ⊕ W ⊥ if and only if the restriction of B to W is non degenerate.

7. If B is non degenerate on V , then V = W ⊕ W ⊥


and so dim V = dim W + dim W ⊥ .

Proof.

1. We have W ⊥ = {x ∈ V : B(x, y) = 0, ∀y ∈ W }. Let x1 , x2 ∈ W ⊥ . Then
B(x1 , y) = 0 and B(x2 , y) = 0, ∀y ∈ W . Now by the linearity of B we have
B(kx1 + x2 , y) = 0, ∀y ∈ W, k ∈ K, so that kx1 + x2 ∈ W ⊥ . Hence W ⊥ is a
subspace.

2. We have {0}⊥ = {x ∈ V : B(x, 0) = 0} = V , since B(x, 0) = 0, ∀x ∈ V by


remark 3.9.

3. First assume that V ⊥ = {0}. To prove B is non degenerate. We have,


V ⊥ = {x ∈ V : B(x, y) = 0, ∀y ∈ V } = {0}. That is

B(x, y) = 0, ∀y ∈ V ⇒ x = 0. (5.1)

Since B is symmetric B(x, y) = B(y, x), ∀x, y ∈ V , hence

B(y, x) = 0, ∀x ∈ V ⇒ y = 0. (5.2)

From equations 5.1 and 5.2 we have B is non degenerate. Converse follows
immediately from the definition of non degeneracy.

4. Let {y1 , · · · , ym } be a basis for W and consider the mapping g : V → K^m
defined by g(x) = (B(x, y1 ), · · · , B(x, ym )). So,

g(x) = 0 ⇒ (B(x, y1 ), . . . , B(x, ym)) = (0, . . . , 0)

⇒ B(x, yi ) = 0, i = 1, . . . , m.

⇒ B(x, y) = 0, ∀y ∈ W.

⇒ x ∈ W ⊥.

That is null space of g is a subspace of W ⊥ . That is

nullity of g ≤ dim W ⊥ . (5.3)

Since g is a linear transformation and V is finite dimensional by Rank - Nullity


theorem, we have dim V = dim R(g) + dim N(g) and so,

dim N(g) = dimV − dim R(g)

≥ dim V − m ( R(g) is a subspace of K m )

= n − m. (5.4)

Then from equations 5.3 and 5.4 we have n − m ≤ dim W ⊥ .

5. Assume first that the restriction of B to W is non degenerate. Let z ∈ W ∩W ⊥ .


Then z ∈ W ⊥ . Then B(z, y) = 0, ∀y ∈ W ⇒ z = 0 (B is non degenerate on
W ). So W ∩ W ⊥ = {0}. Converse follows immediately.

6. Assume V = W ⊕ W ⊥ . Then W ∩ W ⊥ = {0} and so by the result (5) we have


B|W is non degenerate.

Conversely assume B|W is non degenerate. To prove V = W ⊕ W ⊥ . By


result (5), we have W ∩ W ⊥ = {0}. It is enough to prove V = W + W ⊥ .
That is to prove V = span {W ∪ W ⊥ } ⇔ dim W + dim W ⊥ = dim V , since
W ∩ W ⊥ = {0} (by theorem 2.20). By result (4) we have if dim V = n and

dim W = m, then dim W ⊥ ≥ n−m. So dim W + dim W ⊥ ≥ m+n−m = n =


dim V . That is dim W + dim W ⊥ = dim V and hence V = W + W ⊥ .

7. It follows from (6). ✷
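Case 7 of Theorem 5.3 can be illustrated numerically with the dot product on R^3, which is non degenerate (the subspace W below is an assumed example):

```python
import numpy as np

# W = span{(1, 0, 1)} in R^3; B is the (non degenerate) dot product.
w = np.array([1., 0., 1.])

# W-perp = {x : w . x = 0} = null space of the 1x3 matrix [w], via SVD.
_, s, vt = np.linalg.svd(w[None, :])
Wperp = vt[1:].T                 # columns: a basis of W-perp

assert np.allclose(w @ Wperp, 0) # every column is orthogonal to w
# dim W + dim W-perp = 1 + 2 = 3 = dim V:
full = np.column_stack([w, Wperp])
assert np.linalg.matrix_rank(full) == 3   # so V = W + W-perp, and the sum is direct
```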



Proposition 5.4: Let B be a reflexive bilinear form on a vector space V over a

field K. Let U, U1 and U2 be subspaces of V . Then,

1. If U1 ⊆ U2 , then U2⊥ ⊆ U1⊥ .

2. (U1 + U2 )⊥ = U1⊥ ∩ U2⊥

3. (U1 ∩ U2 )⊥ = U1⊥ + U2⊥ , if V is finite dimensional, B is non degenerate.

4. U ⊆ U ⊥⊥ , and the inclusion can be strict. If V is finite dimensional and B is
non degenerate, then U ⊥⊥ = U.

Proof.

1. Let U1 and U2 be subspaces of V such that U1 ⊆ U2 . By definition,

U1⊥ = {x ∈ V : B(x, y) = 0, ∀y ∈ U1 } & U2⊥ = {x ∈ V : B(x, y) = 0, ∀y ∈ U2 }.


Let x ∈ U2⊥ . Therefore B(x, y) = 0 ∀y ∈ U2 . But since U1 ⊆ U2 , we have
B(x, y) = 0 ∀y ∈ U1 . This implies x ∈ U1⊥ . Thus U2⊥ ⊆ U1⊥ .

2. We have U1 ⊆ (U1 + U2 ) and U2 ⊆ (U1 + U2 ), then by the result (1) we have,


(U1 + U2 )⊥ ⊆ U1⊥ and (U1 + U2 )⊥ ⊆ U2⊥ and so,

(U1 + U2 )⊥ ⊆ U1⊥ ∩ U2⊥ . (5.5)

Now let x ∈ U1⊥ ∩ U2⊥ . Then x ∈ U1⊥ and x ∈ U2⊥ . Now x ∈ U1⊥ implies
B(x, y) = 0 ∀y ∈ U1 and x ∈ U2⊥ implies B(x, y) = 0 ∀y ∈ U2 . Let

z ∈ U1 + U2 . Then z = z1 + z2 , z1 ∈ U1 and z2 ∈ U2 .
So B(x, z) = B(x, z1 + z2 ) = B(x, z1 ) + B(x, z2 ) = 0 + 0 = 0. Since z was
arbitrary B(x, z) = 0 ∀z ∈ U1 + U2 . Hence x ∈ (U1 + U2 )⊥ , and so,

U1⊥ ∩ U2⊥ ⊆ (U1 + U2 )⊥ (5.6)



From the equations 5.5 and 5.6 the result follows.

3. Since (U1 ∩ U2 ) ⊆ U1 and (U1 ∩ U2 ) ⊆ U2 , we have by result (1),
U1⊥ ⊆ (U1 ∩ U2 )⊥ and U2⊥ ⊆ (U1 ∩ U2 )⊥ . Since (U1 ∩ U2 )⊥ is a subspace we
have,

U1⊥ + U2⊥ ⊆ (U1 ∩ U2 )⊥ . (5.7)

Since V is finite dimensional, we have,

dim (U1⊥ + U2⊥ ) = dim U1⊥ + dim U2⊥ − dim (U1⊥ ∩ U2⊥ ) (by theorem 2.20)

= dim U1⊥ + dim U2⊥ − dim (U1 + U2 )⊥

(by result (2) above)

= (dim V − dim U1 ) + (dim V − dim U2 )

− (dim V − dim (U1 + U2 ))

(by case 7 of the theorem 5.3.)

= dim V − (dim U1 + dim U2 − dim (U1 + U2 ))

= dim V − dim (U1 ∩ U2 )

= dim (U1 ∩ U2 )⊥ (5.8)

From equations 5.7 and 5.8 the result follows.

4. Let x ∈ U. Now for every y ∈ U ⊥ , we have B(x, y) = 0 and so x ∈ U ⊥⊥ . So


U ⊆ U ⊥⊥ . We have from case 5 of the theorem 5.3, U ⊥ ∩ U ⊥⊥ = {0} since B
is non degenerate and so dim U ⊥ + dim U ⊥⊥ = dim V .

So dim U ⊥⊥ = dim V − dim U ⊥ = dim V −(dim V − dim U) = dim U. Hence


U = U ⊥⊥ . ✷

In general, the equality in case 4 of Proposition 5.4 need not hold. Consider the
following examples.

Example 5.5: Let V be the space of all continuous functions on [0, 1]. Define a
bilinear form B on V by B(f, g) = ∫_0^1 f (t)g(t) dt, where f , g ∈ V . Let W be the
subspace of all functions f such that f (0) = 0. Then W ⊥ = {0} and so W ⊥⊥ = V .
That is W ≠ W ⊥⊥ ; W is a proper subspace of W ⊥⊥ . That is, the equality need not
hold always when V is infinite dimensional.

Example 5.6: Consider the inner product space ℓ2 of all square summable real
sequences with inner product defined, for x = (x1 , . . . , xn , . . .) and
y = (y1 , . . . , yn , . . .) in ℓ2 , by ⟨x, y⟩ = Σi xi yi . Let B be the
bilinear form on V = ℓ2 given by B(x, y) = ⟨x, y⟩. Let Ek be the sequence whose
k-th entry is 1 and all other entries are zero, and let M = {Ek : k = 1, 2, . . .}. Let
W = span M ; that is, W is the space of all those sequences with only
finitely many nonzero entries. Then W ⊥ = {0} and so, as in the previous

example, W ≠ W ⊥⊥ .
6. Bilinear Forms on Finite Dimensional Vector Spaces

In this chapter we treat bilinear forms on finite dimensional vector spaces. The

matrix of a bilinear form in an ordered basis is introduced, and the isomorphism


between the space of bilinear forms and the space of n × n matrices is established.
The rank of a bilinear form is defined, and non-degenerate bilinear forms are in-
troduced. Also, in the succeeding subsections, a discussion is made of symmetric
bilinear forms and their diagonalization. The book [3] is the main reference for this

chapter. Throughout this chapter the dimension of a vector space is denoted by n


and a basis of an n dimensional vector space by {α1 , . . . , αn }.
In the example 3.6 when n = 1, the matrix X^T AY is 1 × 1, that is a scalar, and the
bilinear form is simply BA (X, Y ) = X^T AY = Σi Σj xi aij yj . We will presently show
that every bilinear form on an n-dimensional vector space is of this type, that is, BA
for some n × n matrix A.

Example 6.1: Let V be a finite dimensional vector space over a field K and let
B ={α1 , . . . , αn } be an ordered basis for V . Suppose B is a bilinear form on V .

If x = x1 α1 + . . . + xn αn and y = y1 α1 + . . . + yn αn are vectors in V , then

B(x, y) = B(Σi xi αi , y)
        = Σi xi B(αi , y)
        = Σi xi B(αi , Σj yj αj )
        = Σi Σj xi yj B(αi , αj ).

If we let aij = B(αi , αj ), then B(x, y) = Σi Σj xi aij yj = X^T AY , where X and Y are
the coordinate matrices of x and y in the ordered basis B. Thus every bilinear form
on a finite dimensional vector space V over a field K is of this type

B(x, y) = [x]B^T A[y]B (6.1)

for some n × n matrix A over K. Conversely given any n × n matrix A, the

equation 6.1 defines a bilinear form on V , such that aij = B(αi , αj ).
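The formula B(x, y) = X^T AY is easy to mirror in code. The sketch below uses a hypothetical 2 × 2 matrix A and arbitrary coordinate vectors, and evaluates the form both as the double sum Σi Σj xi aij yj and as the matrix product; the two agree.

```python
# Bilinear form on K^2 from a 2x2 matrix A: B(x, y) = X^T A Y.
A = [[1, 2],
     [3, 4]]          # a_ij = B(alpha_i, alpha_j), chosen arbitrarily
x = [1, -1]           # coordinates of x in the ordered basis
y = [2, 5]            # coordinates of y

# Double-sum form: sum_i sum_j x_i a_ij y_j
double_sum = sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

# Matrix form: X^T A Y, computed in two steps
Ay = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]   # A Y
xtAy = sum(x[i] * Ay[i] for i in range(2))                       # X^T (A Y)

print(double_sum, xtAy)   # → -14 -14
```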

Definition 6.2 (Matrix of a bilinear form): Let V be a finite dimensional vec-


tor space, and let B = {α1 , . . . , αn } be an ordered basis for V . If B is a bilinear
form on V , the matrix of B in the ordered basis B is the n × n matrix A with entries
aij = B(αi , αj ). We shall denote this matrix by [B]B .

Theorem 6.3: Let V be a finite dimensional vector space over a field K. For each
ordered basis B of V , the function which associates with each bilinear form on V its
matrix in the ordered basis B is an isomorphism of the space L(V, V, K) onto the
space of n × n matrices over K.

Proof. Let B = {α1 , . . . , αn } be an ordered basis for V . Let f map each bilinear
form B on V to its matrix [B]B . That is, f : L(V, V, K) → K n×n . We prove that

f is an isomorphism.

Claim: f is one-one.

Assume [B1 ]B = [B2 ]B . This implies B1 (αi , αj ) = B2 (αi , αj ) ∀ i, j. Now for any
bilinear form B on V , the values of B on V × V are determined by the values of B
on B × B. Hence B1 (αi , αj ) = B2 (αi , αj ) ∀ i, j ⇒ B1 ≡ B2 . Thus f is one-one.
The map f is onto, since given any n × n matrix A, the function

B defined as in equation 6.1 is a bilinear form on V whose matrix in B is A.


Claim: The map f is linear.
That is to prove [kB1 + B2 ]B = k[B1 ]B + [B2 ]B , ∀ k ∈ K, where B1 , B2 are bilinear
forms on V . Now we have, (kB1 + B2 )(αi , αj ) = kB1 (αi , αj ) + B2(αi , αj ), ∀ i, j, since
kB1 + B2 is a bilinear form on V (by proposition 3.7).

So [kB1 + B2 ]B = k[B1 ]B + [B2 ]B , ∀ k ∈ K and for any bilinear forms B1 and B2


on V . Thus the map f is linear. Hence the map f is an isomorphism between the
spaces L(V, V, K) and the space of all n × n matrices. ✷

Proposition 6.4: If B = {α1 , . . . , αn } is an ordered basis for V and B∗ = {f1 , . . . , fn }

is the corresponding dual basis for the dual space V ∗ , then the n² bilinear forms

defined by

Bij (x, y) = fi (x)fj (y), (6.2)

1 ≤ i, j ≤ n, form a basis for the space L(V, V, K). In particular the dimension of
L(V, V, K) is n² .

Proof. Given B = {α1 , . . . , αn } is an ordered basis for V and B∗ = {f1 , . . . , fn } is

the corresponding dual basis (by remark 2.28). The functions defined in equation 6.2
are bilinear forms on V (see example 3.4). Let x, y ∈ V , x = Σi ki αi and y = Σj ℓj αj .
Then Bij (x, y) = fi (x)fj (y) = ki ℓj . Let B be a bilinear form on V and let
A = (aij )n×n , where aij = B(αi , αj ), be the matrix of B in the ordered basis B.

Then,

B(x, y) = B(Σi ki αi , y)
        = Σi ki B(αi , y)
        = Σi ki B(αi , Σj ℓj αj )
        = Σi,j ki ℓj B(αi , αj )
        = Σi,j aij ki ℓj
        = Σi,j aij Bij (x, y)
        = (Σi,j aij Bij )(x, y).

That is, B ≡ Σi,j aij Bij . So {Bij : i, j = 1, . . . , n} spans the space L(V, V, K). Now to
prove that {Bij : i, j = 1, . . . , n} is linearly independent, suppose B ≡ Σi,j bij Bij . Then
for each pair of indices (k, ℓ),

B(αk , αℓ ) = Σi,j bij Bij (αk , αℓ )
           = Σi,j bij fi (αk )fj (αℓ )
           = bkℓ .

In particular, if B is the zero bilinear form, that is B(αk , αℓ ) = 0 ∀ k, ℓ, then all
the scalars bij = 0. Thus Σi,j bij Bij ≡ 0 implies that all the scalars are zero, so the
forms Bij are linearly independent. Hence {Bij : i, j = 1, . . . , n} is a basis for
L(V, V, K) and so the dimension of L(V, V, K) is n² . ✷

In other words, the bilinear form Bij has as its matrix in the ordered basis B the
matrix ‘unit’ E ij whose only nonzero entry is a 1 in row i and column j. Since these

matrix units comprise a basis for the space of n × n matrices, the forms Bij comprise
a basis for the space of bilinear forms.
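For n = 2 with the standard basis of K² (so the dual basis consists of the coordinate functionals fi (x) = xi ), the expansion B ≡ Σi,j aij Bij can be checked numerically; the matrix A below is an arbitrary illustrative choice.

```python
# V = K^2 with the standard basis; dual basis: f_i(x) = x_i.
A = [[1, 2],
     [3, 4]]                      # a_ij = B(alpha_i, alpha_j), arbitrary

def B(x, y):                      # the form with matrix A
    return sum(A[i][j] * x[i] * y[j] for i in range(2) for j in range(2))

def Bij(i, j, x, y):              # B_ij(x, y) = f_i(x) f_j(y)
    return x[i] * y[j]

e = [[1, 0], [0, 1]]              # alpha_1, alpha_2
# Evaluating B on basis pairs recovers the matrix: B(alpha_i, alpha_j) = a_ij.
recovered = [[B(e[i], e[j]) for j in range(2)] for i in range(2)]

# And B agrees with the expansion sum_ij a_ij B_ij at any pair of vectors.
x, y = [5, -2], [1, 7]
expansion = sum(A[i][j] * Bij(i, j, x, y) for i in range(2) for j in range(2))
print(recovered == A, expansion == B(x, y))   # → True True
```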

6.1 Matrices of Bilinear Forms in Different Ordered Bases

The map in the theorem 6.3 depends on the choice of a basis. Here in this section, we

will establish the relation connecting matrices of bilinear forms in different ordered
bases.

Theorem 6.5: Let V be a finite dimensional vector space over a field K and let
B = {α1 , . . . , αn } and B′ = {α′1 , . . . , α′n } be ordered bases for V . Suppose B is a bilinear
form on V . Then there exists an invertible matrix P such that [B]B′ = P^T [B]B P .


Proof. Given two ordered bases B and B and a bilinear form B on V . Consider
Pn
the unique scalars Pij (by remark 2.18) such that αj′ = Pij αi , j = 1, . . . , n. Let
i=1
 T  T

x ∈ V , X = [x]B = x1 , . . . , xn and X = [x]B′ = x1 , . . . , xn
′ ′
be the

coordinate matrices of x in the ordered basis B and B respectively. Then,

n
X n
X n
X
′ ′ ′
x= xj αj = xj ( Pij αi )
j=1 j=1 i=1
n
XX n

= (Pij xj )αi
j=1 i=1
Xn X n

= ( Pij xj )αi
i=1 j=1


That is X = P X , where P = [Pij ]n×n .
Claim: P is invertible.

Since B and B are linearly independent sets

X X ′
X=0 ⇔ xi αi = x = 0 = xi αi ,

⇔ xi = 0 ∀i,

⇔ X = 0.


That is P X = 0 has only the trivial solution. So P is invertible. That is we have
obtained an n × n invertible matrix P such that [x]B = P [x]B′ , ∀x ∈ V . Now for
6. Bilinear Forms on Finite Dimensional Vector Spaces 33

any x,y ∈ V ,

B(x, y) = [x]T
B [B]B [y]B

= (P [x]B′ )T [B]B (P [y]B′ )

= [x]T
B′
P T [B]B P [y]B′

= [x]T
B′
(P T [B]B P )[y]B′

By the definition and uniqueness of the matrix representing B in the ordered basis
B we have, [B]B′ = P T [B]B P . ✷

Definition 6.6 (Congruence of matrices): Two n × n matrices A and B are said

to be congruent if there exists an n × n invertible matrix P such that B = P^T AP .

Note: Since P and P^T are invertible, congruent matrices have the same rank.
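Theorem 6.5 and definition 6.6 can be verified numerically. In the sketch below (our own example on R², with the new basis vectors (1, 1) and (1, −1) stored as the columns of P) the matrix of B in the new basis is computed both entrywise, as B(α′i , α′j ), and as the congruence P^T AP; the two computations agree.

```python
# Change of basis for a bilinear form: [B]_{B'} = P^T [B]_B P.
A = [[1, 2],
     [0, 1]]                     # [B]_B in the standard basis, arbitrary
P = [[1, 1],
     [1, -1]]                    # columns are the new basis vectors alpha'_j

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def B(x, y):                     # B(x, y) = x^T A y
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

cols = [[P[0][j], P[1][j]] for j in range(2)]          # alpha'_1, alpha'_2
entrywise = [[B(cols[i], cols[j]) for j in range(2)] for i in range(2)]
congruence = matmul(transpose(P), matmul(A, P))
print(entrywise == congruence)   # → True
```

Note that A here is not symmetric, so neither is the new matrix; congruence preserves rank but not any particular entries.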

Proposition 6.7: Let V be a finite dimensional vector space over a field K and let
B be a bilinear form on V . Then the following are equivalent.

1. B is non degenerate, that is, the matrix of B in any ordered basis is invertible.

2. radR (V ) = {0}.

3. radL (V ) = {0}.

Proof. 1 ⇔ 2.
Let B = {β1 , . . . , βn } be an ordered basis for V and A = [B]B = [aij ]n×n , where
aij = B(βi , βj ), be the matrix of B with respect to B. Assume A is invertible and
radR (V ) ≠ {0}; that is, there is a nonzero x ∈ radR (V ). Let X = [x]B =
(x1 , . . . , xn )^T . Now AX ≠ 0, since AX = 0 ⇒ X = 0 (as A is invertible) ⇒
x = 0, which is a contradiction. Then there exists y ∈ V with y ≠ 0 such that
[y]B^T A[x]B ≠ 0 (since if [y]B^T A[x]B = 0 ∀y ∈ V , then AX = A[x]B = 0).
Hence B(y, x) ≠ 0, so x ∉ radR (V ), which is a contradiction.

Conversely, assume radR (V ) = {0} and assume A is not invertible. Then AX = 0

has a non trivial solution, say X0 = (x10 , . . . , xn0 )^T . Let x0 = x10 β1 +
· · · + xn0 βn , so that X0 = [x0 ]B . We have,

AX0 = 0

⇒ [y]B^T AX0 = 0, ∀ y ∈ V

⇒ B(y, x0 ) = 0, ∀ y ∈ V

⇒ x0 ∈ radR (V ) with x0 ≠ 0, which is a contradiction.

In a similar manner we can prove that 1 ⇔ 3. ✷

6.2 Rank of a Bilinear Form

We have obtained that the matrices of a bilinear form corresponding to different

ordered bases are congruent and so they have the same rank. So we can define the
rank of a bilinear form.

Let V be a vector space over a field K and B a bilinear form on V , that

is, B : V × V → K. If we fix the first argument of B at x ∈ V , then the

map y → B(x, y) is a linear functional on V . That is, for each
fixed x ∈ V we have a linear functional on V , denoted by LB (x) and defined by
(LB (x))(y) = B(x, y), ∀y ∈ V . Consider the map x → LB (x) from V to V ∗ . Then

LB : V → V ∗ is a linear transformation. For if x1 , x2 , y ∈ V , k ∈ K,

LB (kx1 + x2 )(y) = B(kx1 + x2 , y)

= kB(x1 , y) + B(x2 , y)

= (kLB (x1 ))(y) + (LB (x2 ))(y)

= (kLB (x1 ) + LB (x2 ))(y).



That is, LB (kx1 + x2 ) ≡ kLB (x1 ) + LB (x2 ), ∀x1 , x2 ∈ V and k ∈ K. So LB : V → V ∗

is a linear transformation. In a similar way, if we fix the second argument of B, we

have for each y ∈ V a linear functional RB (y), and RB is likewise a linear
transformation from V to V ∗ .

Theorem 6.8: Let B be a bilinear form on a finite dimensional vector space V .


Let RB and LB be the linear transformations from V to V ∗ defined by,

(LB (x))(y) = B(x, y) = (RB (y))(x).

Then rank(LB )=rank(RB ).

Proof. To prove rank(LB ) = rank(RB ) we will prove that nullity of LB = nullity

of RB . Let B be an ordered basis for V and let A = [B]B . If x and y are vectors
in V with coordinate matrices X and Y respectively in the ordered basis B, then

B(x, y) = X^T AY . We first find the dimension of the null space of RB , that is, of
the space {y ∈ V : RB (y) ≡ 0}.
Now RB (y) ≡ 0 means (RB (y))(x) = B(x, y) = 0 ∀x ∈ V ; that is, X^T AY = 0 for
every n × 1 matrix X, which holds precisely when AY = 0. So the
dimension of the null space of RB is the dimension of the solution space of

the system AY = 0. A symmetric argument shows that the null space of LB corresponds
to the solution space of A^T X = 0. Now A and A^T have the same column rank. Therefore,

nullity of RB = dimension of the solution space of AY = 0

= dim V − column rank of A

= dim V − column rank of A^T

= nullity of LB .

So by the Rank-Nullity theorem, rank of RB = rank of LB . ✷



Definition 6.9 (Rank of a bilinear form): Let B be a bilinear form on a finite

dimensional vector space V . Then, rank of B is defined as the integer r such that
r = rank LB = rank RB .

Corollary 6.10: The rank of a bilinear form on a finite dimensional space V is


equal to the rank of the matrix of the form in any ordered basis.

Proof. We have from the definition that rank of a bilinear form B is equal to rank

of LB = rank of RB . Let B be an ordered basis for V and A = [B]B .

rank of RB = dim V − nullity of RB

= dim V − dim of the solution space of AY = 0

(by proof of the theorem 6.8)

= dim V − (dim V − row rank of A)

= rank of A (row rank of A = column rank of A = rank of A)

That is, rank of B = rank of [B]B . Note also that since the matrices of a bilinear form in

different ordered bases are congruent, they all have the same rank. ✷
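Corollary 6.10 makes the rank of a form computable: it is the rank of its matrix in any ordered basis, which Gaussian elimination finds. The sketch below is our own, using exact rational arithmetic on an arbitrary sample matrix A.

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over the rationals; rank = number of pivots.
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# The form B_A(x, y) = x^T A y on R^3 has rank equal to rank A.
A = [[1, 2, 3],
     [2, 4, 6],      # row 2 = 2 * row 1, so A is singular
     [0, 1, 1]]
print(rank(A))       # → 2, so B_A has rank 2 < 3 and is degenerate
```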

Corollary 6.11: Let B be a bilinear form on a finite dimensional space V , then


the following are equivalent.

1. rank of B = n.

2. For each non zero x in V , there is a y in V such that B(x, y) ≠ 0.

3. For each non zero y in V , there is an x in V such that B(x, y) ≠ 0.

Proof. 1 ⇒ 2.
We have rank of B = n ⇒ rank of LB = n. Now LB is a linear transfor-
mation from V to V ∗ and dim V = dim V ∗ . Then by the Rank-Nullity theorem (see
theorem 2.30), rank of LB = n ⇒ nullity of LB = 0. That is, the null space of LB is

the zero space. So if x ≠ 0 in V , LB (x) ≢ 0. That is, there exists y ∈ V such that

(LB (x))(y) ≠ 0. That is, B(x, y) ≠ 0.
2 ⇒ 3.
Statement 2 says that the null space of LB is the zero space, so rank of LB = n.
Hence rank of RB = rank of LB = n (by theorem 6.8), so nullity of RB = 0, and the
result follows as in the above proof.

3 ⇒ 1.
The null space of RB is the zero space ⇔ nullity of RB = 0 ⇔ rank of RB = n ⇔
rank of B = n. ✷

Remark 6.12: A bilinear form on a vector space V is non degenerate (or non sin-
gular) if it satisfies conditions 2 and 3 of the corollary 6.11. In particular, if V is
finite dimensional, then a bilinear form B is non degenerate provided B satisfies any

one of the statements in the corollary 6.11. In particular, B is non degenerate iff its
matrix in some (every) ordered basis for V is a non singular matrix.

Example 6.13: Let V = Rn and let B be the bilinear form defined as in exam-
ple 3.2, the Euclidean inner product. Let B = {ǫ1 , . . . , ǫn }, ǫj = (0, . . . , 1, . . . , 0),
where 1 occurs in the jth position, be the standard basis of Rn . We have

B(ǫi , ǫj ) = δij = 1 if i = j and 0 if i ≠ j.

So [B]B = In , the identity matrix of order n. Since In is invertible, the

bilinear form B is non degenerate.

6.3 Symmetric Bilinear Forms

Let V be a vector space over a field K. A bilinear form B on V is said to be

symmetric if B(x, y) = B(y, x), ∀ x, y ∈ V . If the space is finite dimensional,
then B is symmetric iff for any ordered basis B of V , [B]B is symmetric, that is,
[B]B^T = [B]B .

Definition 6.14: Let B be a bilinear form defined on a vector space V over a field

K. Then the quadratic form associated with B is the function Q : V → K defined


by, Q(x) = B(x, x), ∀ x ∈ V .

One important class of symmetric bilinear forms consists of the inner products on
real vector spaces. If V is a real vector space, an inner product on V is a symmetric
bilinear form B on V which satisfies B(x, x) > 0 whenever x ≠ 0.

Definition 6.15: Let B be a bilinear form on a real vector space V . Then B is said

to be positive definite if B(x, x) > 0 whenever x ≠ 0.

Thus, an inner product on a real vector space is a positive definite, symmetric


bilinear form on that space.
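In characteristic not equal to 2, a symmetric bilinear form is determined by its quadratic form via the polarization identity B(x, y) = (Q(x + y) − Q(x) − Q(y))/2 (a standard fact, though not proved in the text). A quick numerical check with an arbitrarily chosen symmetric matrix:

```python
# Symmetric bilinear form on R^2 and its quadratic form Q(x) = B(x, x).
A = [[2, 1],
     [1, 3]]                              # symmetric matrix, arbitrary

def B(x, y):
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

def Q(x):                                 # quadratic form of definition 6.14
    return B(x, x)

x, y = [1, 4], [2, -1]
s = [a + b for a, b in zip(x, y)]         # x + y
# Polarization: B(x, y) = (Q(x + y) - Q(x) - Q(y)) / 2 when char != 2.
print(B(x, y) == (Q(s) - Q(x) - Q(y)) / 2)   # → True
```

The division by 2 is exactly where the characteristic-not-2 hypothesis enters; over Z2 this recovery is impossible, as example 6.19 below illustrates for diagonalization.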

6.3.1 Diagonalization of Bilinear Forms

In this section we will determine when a bilinear form on a finite dimensional vector
space is diagonalizable; that is, given a bilinear form B on a finite dimensional vector
space V , whether there is an ordered basis B for V in which B is represented by
a diagonal matrix. Here we will prove that this is possible if and only if B is a
symmetric bilinear form on a vector space over a field of characteristic not equal to

2. One direction is obvious: if B is diagonalizable, then the matrix of B in the

corresponding basis is diagonal, a diagonal matrix is always symmetric, and hence
B is symmetric.

Lemma 6.16: Let V be a vector space over a field of characteristic not equal to 2.
Let B be a non trivial symmetric bilinear form on V . Then there exists x ∈ V such
that B(x, x) ≠ 0.

Proof. Since B is non trivial, there exist x, y ∈ V such that B(x, y) ≠ 0. If

B(x, x) ≠ 0 or B(y, y) ≠ 0 then we are done. So assume both B(x, x) = 0 and

B(y, y) = 0. Let z = x + y. Then, by the symmetry of B,
B(z, z) = B(x, x) + 2B(x, y) + B(y, y) = 2B(x, y) ≠ 0, since B(x, y) ≠ 0 and the

field is of characteristic not equal to 2. ✷

Theorem 6.17: Let V be a finite dimensional vector space over a field of charac-
teristic not equal to 2, and let B be a symmetric bilinear form on V . Then there is
an ordered basis for V in which B is represented by a diagonal matrix.

Proof. We prove the result by induction on the dimension n of the space V .
Our aim is to find an ordered basis B = {α1 , . . . , αn } such that B(αi , αj ) = 0 for
i ≠ j. If B ≡ 0, then the matrix of B in any ordered basis is the zero matrix, which is
diagonal. Also if the dimension of the space is 1, then the matrix of B is 1 × 1, which

is also diagonal. Thus suppose B ≢ 0 and n > 1. Then by the lemma 6.16, there is
a vector x ≠ 0 in V such that B(x, x) ≠ 0. Let W be the one dimensional subspace
of V spanned by x, and let W ⊥ be the set of all y in V such that B(x, y) = 0. Since
B(x, x) ≠ 0, with respect to the basis {x} of W the matrix of B|W is invertible, so
B|W is non degenerate; then by case 6 of the theorem 5.3 we have V = W ⊕ W ⊥ .

Now the restriction of B to W ⊥ is a symmetric bilinear form on W ⊥ . Since W ⊥ has

dimension n − 1, we may assume by induction that W ⊥ has a basis {α2 , . . . , αn }
such that B(αi , αj ) = 0, i ≠ j (i ≥ 2, j ≥ 2). Putting α1 = x, we obtain a basis
{α1 , . . . , αn } for V such that B(αi , αj ) = 0 for i ≠ j. ✷

Corollary 6.18: Let K be a subfield of the field of complex numbers, and let A be

a symmetric n × n matrix over K. Then there is an invertible n × n matrix P over

K such that P^T AP is diagonal.

Proof. We are given a symmetric n × n matrix A = [aij ]n×n . Take the standard basis
of K n over K, B = {ǫ1 , . . . , ǫn }, ǫj = (0, . . . , 1, . . . , 0). Define a bilinear form B on
K n as in equation 6.1. Clearly [B]B = A. Since A is symmetric, B is also symmetric.

Then by the theorem 6.17, there is an ordered basis B′ in which [B]B′ is diagonal.

Now by the theorem 6.5 there is an invertible matrix P such that [B]B′ = P^T AP .

That is, P^T AP is diagonal. ✷
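The inductive proof of theorem 6.17 can be turned into an algorithm: symmetric Gaussian elimination, where every row operation is paired with the corresponding column operation so that each step is a congruence. The implementation below is our own sketch over the rationals (the names `congruent_diagonalize`, `add_row` and `add_col` are ours); the zero-pivot repair step mirrors lemma 6.16's choice of z = x + y.

```python
from fractions import Fraction

def congruent_diagonalize(A):
    """Return (P, D) with D = P^T A P diagonal, for symmetric A over Q."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

    def add_col(dst, src, f):           # col_dst += f * col_src, in A and P
        for r in range(n):
            A[r][dst] += f * A[r][src]
            P[r][dst] += f * P[r][src]

    def add_row(dst, src, f):           # row_dst += f * row_src, in A only
        for c in range(n):
            A[dst][c] += f * A[src][c]

    for k in range(n):
        if A[k][k] == 0:
            j = next((j for j in range(k + 1, n) if A[j][j] != 0), None)
            if j is not None:           # bring a nonzero diagonal entry to (k, k)
                A[k], A[j] = A[j], A[k]
                for r in range(n):
                    A[r][k], A[r][j] = A[r][j], A[r][k]
                    P[r][k], P[r][j] = P[r][j], P[r][k]
            else:                       # lemma 6.16: pass to z = x + y
                j = next((j for j in range(k + 1, n) if A[k][j] != 0), None)
                if j is None:
                    continue            # row/column k is already zero
                add_row(k, j, Fraction(1))
                add_col(k, j, Fraction(1))
        for i in range(k + 1, n):       # clear row/column k symmetrically
            f = -A[i][k] / A[k][k]
            add_row(i, k, f)
            add_col(i, k, f)
    return P, A

P, D = congruent_diagonalize([[0, 1], [1, 0]])
print(D[0][0], D[1][1])   # → 2 -1/2
```

For A = (0 1; 1 0) this returns D = diag(2, −1/2): the matrix is congruent-diagonalizable over Q, even though (as example 6.19 shows) the same matrix is not over Z2.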

We have characterized the diagonalization of bilinear forms on vector spaces over


fields of characteristic not equal to 2. In general theorem 6.17 need not hold if the
field is of characteristic two.

Example 6.19: Let K = Z2 and V = K 2 with the standard basis B. Let B be the
symmetric bilinear form represented with respect to B by the matrix A = ( 0 1 ; 1 0 )
(rows separated by a semicolon). We will assume that B is diagonalizable and arrive
at a contradiction. Suppose that B is diagonalizable. Then there exists a basis B′ for
which D = [B]B′ is diagonal; that is, there exists an invertible matrix P such that
D = P^T AP . Since P is invertible, rank D = rank A = 2, and the only invertible
diagonal matrix over Z2 is the identity, so D = ( 1 0 ; 0 1 ). Let P = ( a b ; c d ). Then,

( 1 0 ; 0 1 ) = P^T AP = ( ac + ac  ad + bc ; bc + ad  bd + bd ).

Here ac + ac = bd + bd = 0 in Z2 , so comparing the diagonal entries gives 1 = 0,
which is a contradiction.
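The contradiction in example 6.19 can also be checked exhaustively, since there are only 16 candidate matrices P over Z2. A brute-force sketch (our own, with helper names `mm` and `transpose`):

```python
from itertools import product

# Work over Z2: A is the symmetric form of example 6.19.
A = [[0, 1], [1, 0]]

def mm(X, Y):                      # 2x2 matrix product mod 2
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % 2
             for j in range(2)] for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

invertible = 0
diagonalizers = 0
for a, b, c, d in product([0, 1], repeat=4):
    P = [[a, b], [c, d]]
    if (a * d + b * c) % 2 == 0:   # det = ad - bc = ad + bc over Z2
        continue                   # P is not invertible
    invertible += 1
    D = mm(mm(transpose(P), A), P)
    if D[0][1] == 0 and D[1][0] == 0:
        diagonalizers += 1

print(invertible, diagonalizers)   # → 6 0
```

Of the 6 invertible matrices in GL(2, Z2), none makes P^T AP diagonal, confirming the example.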


7. Conclusion
This project is an introductory approach to the study of bilinear forms and some
of their properties and varieties. Starting with the definition of a bilinear form on a
vector space over an arbitrary field, examples and some results concerning bilinear
forms are discussed. Then some types of bilinear forms, including symmetric, skew

symmetric and alternating forms, are discussed and a brief study is made of their
characterisations. Next we discussed the orthogonality of bilinear forms and some of
its properties. The main discussion of bilinear forms in this project concerns finite
dimensional vector spaces. Here we have defined the matrix of a bilinear form with
respect to an ordered basis for the vector space and the rank of a bilinear form, and

shown how the matrices are related when the basis is changed. Finally, symmetric
bilinear forms and a characterisation of the diagonalization of bilinear forms over
fields of characteristic not equal to two are discussed. Though, as said in the
beginning, this is an introductory approach, we could explore some of the beauty of

bilinear forms to a great extent and see how they stand as a generalization of inner

products on vector spaces over the real or complex fields.
Bibliography

[1] Charles W. Curtis, Linear Algebra: An Introductory Approach, Springer, 1984.

[2] Paul R. Halmos, Finite Dimensional Vector Spaces, Princeton University
Press, 1947.

[3] Kenneth Hoffman and Ray Kunze, Linear Algebra, Prentice-Hall, 1971.

[4] Serge Lang, Linear Algebra, Addison-Wesley, Reading, 1968.

[5] B. V. Limaye, Functional Analysis, 1981.

[6] V. Sahai and V. Bist, Linear Algebra, Narosa Publishing House, 2002.

