Sruthy Murali
171/2009

Dr. G N Prasanth
Department of Mathematics
Government College Chittur
Palakkad - 678104

2014
DECLARATION

Sruthy Murali

CERTIFICATE

Dr. G N Prasanth
Assistant Professor, Department of Mathematics
Government College, Chittur
Palakkad - 678104, Kerala, India
Mob: 919447565939, E-mail: prasanthgns@gmail.com

ACKNOWLEDGEMENT

Sruthy Murali
Contents

1. Introduction
2. Preliminaries
3. Bilinear Forms
4. Types of Bilinear Forms
5. Orthogonality
6. Bilinear Forms on Finite Dimensional Vector Spaces
7. Conclusion
1. Introduction
Linear algebra is the branch of mathematics that treats the common properties of algebraic systems consisting of a set together with a reasonable notion of a 'linear combination' of elements of the set. This project is a brief survey of bilinear forms, which generalize the so called inner products on real or complex spaces to spaces over an arbitrary field K. Inner products are maps which are not completely linear, in the sense that they are linear in the first argument and conjugate linear in the second. So a question naturally arises: is there a similar class of maps which includes the inner products in its collection and can be considered a generalization of them? Bilinear forms include certain types of inner products in their collection, and they are linear in both arguments. This motivated me to read and study further on bilinear forms.
The second chapter includes the basic definitions, notations and examples needed to support the study of bilinear forms. The third chapter consists of a study of bilinear forms on arbitrary vector spaces, with some examples. In the fourth chapter, various types of bilinear forms are discussed. The fifth chapter deals with the orthogonality of bilinear forms. In the sixth chapter the main study is of bilinear forms on finite dimensional vector spaces. There the matrix of a bilinear form corresponding to an ordered basis is defined, and a study is made of how this matrix changes when the basis is changed. The rank of a bilinear form and its relation to its matrix are discussed. Then the main study is restricted to symmetric bilinear forms.
Definition 2.1 (Field): A field is a non empty set K along with functions + : K × K → K and . : K × K → K such that

1. '+' makes K an abelian group, that is,
(a) k1 + k2 = k2 + k1, ∀ k1, k2 ∈ K,
(b) k1 + (k2 + k3) = (k1 + k2) + k3, ∀ k1, k2, k3 ∈ K,
(c) there exists a unique element 0, called the zero element of K, such that k + 0 = k = 0 + k, ∀ k ∈ K, and
(d) for each k ∈ K there exists a unique −k ∈ K with k + (−k) = 0.

2. '.' makes K \ {0} an abelian group, that is,
(a) k1 . k2 = k2 . k1 and k1 . (k2 . k3) = (k1 . k2) . k3, ∀ k1, k2, k3 ∈ K,
(b) there exists a unique element 1 ∈ K, called the unit element, such that k . 1 = k = 1 . k, ∀ k ∈ K, and
(c) for each nonzero k ∈ K there exists a unique k⁻¹ ∈ K with k . k⁻¹ = 1.

3. '.' is distributive with respect to '+', that is, k1 . (k2 + k3) = k1 . k2 + k1 . k3, ∀ k1, k2, k3 ∈ K.
Example 2.2: The set Q of all rational numbers with the usual addition and multiplication is a field, and the same is true of the set R of all real numbers and the set C of all complex numbers.
Definition 2.4 (Vector space): Let K be a field. A vector space over K is a non empty set V of elements called vectors, along with a function + : V × V → V, called addition, and a function . : K × V → V, called scalar multiplication, such that for every x, y and z ∈ V and k1, k2 ∈ K,

1. x + y = y + x,
2. x + (y + z) = (x + y) + z,
3. there exists a unique vector 0 ∈ V such that x + 0 = x for every x ∈ V,
4. for each x ∈ V there exists a unique vector −x ∈ V such that x + (−x) = 0,
5. k1.(x + y) = k1.x + k1.y,
6. (k1 + k2).x = k1.x + k2.x and (k1 k2).x = k1.(k2.x),
7. 1.x = x.
Example 2.5: Let P be the set of all polynomials, with complex coefficients, in
a single variable t. To make P into a complex vector space, we interpret vector
addition and scalar multiplication as the ordinary addition of two polynomials and
the multiplication of a polynomial by a complex number; the zero vector in P is the
polynomial with all coefficients zero.
Example: Let K be a field and let Kⁿ be the set of all n-tuples x = (x1, . . . , xn) of elements of K. With the operations

x + y = (x1 + y1, . . . , xn + yn),
kx = (kx1, . . . , kxn),
0 = (0, . . . , 0),
−x = (−x1, . . . , −xn),

Kⁿ is a vector space over K.
Let m and n be positive integers. On the set K^{m×n} of all m × n matrices over K, addition is defined by (A + B)ij = Aij + Bij and scalar multiplication by (cA)ij = cAij. With these operations K^{m×n} is a vector space over K.

An n × n matrix A = (aij) over K is called symmetric (skew symmetric) if aij = aji (aij = −aji) for each i and j. The symmetric (skew symmetric) matrices form a subspace of the space of all n × n matrices over K, see example 2.7.
Definition 2.12 (Span of a set): Let V be a vector space over a field K. Let S ⊆ V. The subspace spanned by S is defined to be the set of all linear combinations of vectors in S.

Definition (Ordered basis): Let V be a vector space over a field K. An ordered basis for V is a finite sequence of vectors which are linearly independent and such that any vector in V can be written as a linear combination of these vectors.
Remark 2.18: Now suppose V is a finite dimensional vector space over a field K and that B = {x1, . . . , xn} is an ordered basis for V. Given a vector x ∈ V, there is a unique n-tuple (k1, . . . , kn) of scalars such that x = Σ_{i=1}^n ki xi. The n-tuple is unique, because if we also have x = Σ_{i=1}^n ℓi xi, then Σ_{i=1}^n (ki − ℓi) xi = 0, and the linear independence of the xi's tells us that ki − ℓi = 0 for each i, hence ki = ℓi. This ki is called the ith coordinate of x relative to the ordered basis B = {x1, . . . , xn}. We shall call the matrix X = (k1, . . . , kn)^T the coordinate matrix of x relative to the ordered basis B, denoted by [x]_B.
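The coordinate computation of Remark 2.18 can be sketched numerically. The helper below is my own illustration, not from the text, restricted to R² so that the coordinates come from Cramer's rule:

```python
# Hypothetical helper (not from the text): coordinates of x relative to
# an ordered basis {b1, b2} of R^2, found via Cramer's rule.
def coordinates_2d(basis, x):
    """Return (k1, k2) with x = k1*b1 + k2*b2."""
    (a, c), (b, d) = basis          # b1 = (a, c), b2 = (b, d)
    det = a * d - b * c             # nonzero iff b1, b2 are independent
    assert det != 0, "basis vectors must be linearly independent"
    k1 = (x[0] * d - b * x[1]) / det
    k2 = (a * x[1] - c * x[0]) / det
    return (k1, k2)

B = [(1, 1), (1, -1)]               # an ordered basis of R^2
k1, k2 = coordinates_2d(B, (3, 1))  # coordinate matrix [x]_B = (2, 1)^T
# uniqueness in practice: the coordinates reconstruct x exactly
x = (k1 * B[0][0] + k2 * B[1][0], k1 * B[0][1] + k2 * B[1][1])
assert x == (3, 1)
```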
Definition 2.19 (Sum of two vector spaces): Let V and W be vector spaces
over a field K. Then the sum V + W is the space of all sums x + y such that x ∈ V
and y ∈ W . That is, V + W = {x + y : x ∈ V and y ∈ W }.
Definition 2.22 (Direct sum): Let V be a vector space over a field K and W1
and W2 be two subspaces of V . Then V is said to be the direct sum of W1 and W2
denoted by V = W1 ⊕ W2 , if V = W1 + W2 and W1 ∩ W2 = {0}.
Example 2.23: Let n be a positive integer and let K be a subfield of the field
of complex numbers C and let V be the space of all n × n matrices over K. Let
W1 be the subspace of all symmetric matrices and let W2 be the space of all skew
symmetric matrices, then V = W1 ⊕ W2 .
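Example 2.23 is easy to verify numerically. A minimal sketch (helper names are mine), assuming a field of characteristic ≠ 2 so that division by 2 makes sense:

```python
# Every square matrix A over a field of characteristic != 2 splits
# uniquely as A = S + T with S = (A + A^T)/2 symmetric and
# T = (A - A^T)/2 skew symmetric, illustrating V = W1 (+) W2.

def transpose(A):
    return [list(row) for row in zip(*A)]

def split_symmetric_skew(A):
    n = len(A)
    S = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]
    T = [[(A[i][j] - A[j][i]) / 2 for j in range(n)] for i in range(n)]
    return S, T

A = [[1, 2], [4, 3]]
S, T = split_symmetric_skew(A)
assert S == transpose(S)                                   # symmetric
assert T == [[-v for v in row] for row in transpose(T)]    # skew symmetric
assert [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)] == A
```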
Note : A linear transformation T from a vector space V into itself is called a linear
operator on V .
Remark 2.27: The collection of all linear transformations from V into W forms a vector space naturally under the addition and scalar multiplication defined by (T + U)(x) = Tx + Ux and (kT)(x) = kTx, where T and U are linear transformations from V into W and k ∈ K. A linear transformation from V into the scalar field K is called a linear functional on V. The collection of all linear functionals on V forms a vector space under the addition and scalar multiplication defined above. We denote this space by V* and call it the dual space of V, V* = L(V, K). If V is finite dimensional so is V*. Let B = {x1, . . . , xn} be a basis for V. For each i consider the linear functional fi defined by fi(xj) = δij. Then {f1, . . . , fn} is a basis for V*, called the dual basis of B.
Definition 2.29: Let V and W be vector spaces over the field K and let T be a
linear transformation from V into W . The null space of T is the set of all vectors x
in V such that T x = 0.
If V is finite dimensional, the rank of T is the dimension of the range of T and the
nullity of T is the dimension of the null space of T .
Theorem 2.30 (Rank - Nullity theorem): Let V and W be vector spaces over a field K and let T be a linear transformation from V into W. Suppose that V is finite dimensional. Then rank T + nullity T = dim V.
Definition 2.31 (Inner product): Let V be a vector space over the field K (R or C). An inner product on V is a function which assigns to each ordered pair of vectors x, y ∈ V a scalar ⟨x, y⟩ ∈ K such that ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩, ⟨kx, y⟩ = k⟨x, y⟩, ⟨y, x⟩ is the conjugate of ⟨x, y⟩, and ⟨x, x⟩ > 0 if x ≠ 0. An inner product space is a vector space with an inner product defined on it. For further details on these preliminaries, see [1, 3, 4].
is a bilinear form on V .
Example 3.3: Let V and W be vector spaces over K with finite bases {x1, . . . , xm} and {y1, . . . , yn}, respectively. Let A = (aij) be a fixed m × n matrix with coefficients in K. Let x = k1 x1 + . . . + km xm ∈ V, y = ℓ1 y1 + . . . + ℓn yn ∈ W. Define B(x, y) = Σ_{i=1}^m Σ_{j=1}^n aij ki ℓj. Then B is a bilinear form on the pair (V, W).
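Example 3.3 can be sketched numerically. The matrix and coordinate vectors below are made-up values for illustration, not from the text:

```python
# B(x, y) = sum_{i,j} a_ij k_i l_j, where k and l are the coordinate
# vectors of x in V and y in W relative to the chosen bases.
def bilinear_form(A, k, l):
    return sum(A[i][j] * k[i] * l[j]
               for i in range(len(k)) for j in range(len(l)))

A = [[1, 0, 2],
     [0, 3, 0]]          # a fixed 2 x 3 matrix over Q (made up)
k = [1, 2]               # coordinates of x in the basis {x1, x2}
l = [1, 1, 1]            # coordinates of y in the basis {y1, y2, y3}

# linearity in the first argument: B(k + k', l) = B(k, l) + B(k', l)
kp = [5, -1]
lhs = bilinear_form(A, [k[0] + kp[0], k[1] + kp[1]], l)
rhs = bilinear_form(A, k, l) + bilinear_form(A, kp, l)
assert lhs == rhs
```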
Example 3.4: Let V be a vector space over the field K and let f1 and f2 be two linear functionals on V. Define B : V × V → K as B(x, y) = f1(x) f2(y). Then B is a bilinear form on V.
Example 3.5: Let V be a vector space over K, and V* the dual vector space. Define B : V × V* → K by B(x, f) = f(x). Then B is a bilinear form on the pair (V, V*).
Example 3.6: Let m and n be two positive integers and K be a field. Let V be the vector space of all m × n matrices over K. Let A be a fixed m × m matrix over K. Define B_A(X, Y) = tr(X^T A Y), where tr denotes the trace of the matrix X^T A Y, that is, the sum of its diagonal entries. Then B_A is a bilinear form on V.
Proposition 3.7: Let V and W be vector spaces over a field K. Then the set of all bilinear forms on the pair (V, W) is a subspace of the space of all functions from V × W into K.

Proof. The set of all bilinear forms on the pair (V, W) is a subset of the space of all functions from V × W into K. In order to prove that it forms a subspace, it is enough to prove that if B1 and B2 are any two bilinear forms on (V, W), then kB1 + B2 is a bilinear form for any k ∈ K. Let B1 and B2 be two bilinear forms on (V, W) and k ∈ K. For x1, x2 ∈ V, y ∈ W and c ∈ K,

(kB1 + B2)(cx1 + x2, y) = kB1(cx1 + x2, y) + B2(cx1 + x2, y)
= c(kB1(x1, y) + B2(x1, y)) + (kB1(x2, y) + B2(x2, y))
= c(kB1 + B2)(x1, y) + (kB1 + B2)(x2, y),

and linearity in the second argument follows similarly. Hence kB1 + B2 is a bilinear form, and so the set of all bilinear forms on (V, W) forms a subspace of the space of all functions from V × W into K. We denote this subspace by L(V, W, K). ✷
Remark 3.9: The usual arguments using the vector space axioms, applied to bilinear forms, show that B(0, y) = B(x, 0) = 0 for all x in V and y in W. We say B is non degenerate if B(x, y) = 0 for all y ∈ W holds only when x = 0, and B(x, y) = 0 for all x ∈ V holds only when y = 0.

Definition 3.10 (Duality): A pair of vector spaces V and W are said to be dual with respect to a bilinear form B on (V, W) if B is non degenerate.
4. Types of Bilinear Forms

Given two vectors x, y in a vector space V with a bilinear form B, we say that x is orthogonal to y with respect to B if B(x, y) = 0. Note that in general B(x, y) = 0 need not imply that B(y, x) = 0. That is, the orthogonality relation may not be symmetric with respect to B. Consider the following example.

Example 4.2: Let A = (aij) ∈ R^{n×n} be such that a12 = 1 and a21 = 0. Consider the bilinear form B_A : Rⁿ × Rⁿ → R given by B_A(x, y) = x^T A y, and the standard basis {εi} of Rⁿ, εi = (0, . . . , 1, . . . , 0), 1 occurring in the ith position. Then B_A(ε2, ε1) = a21 = 0 while B_A(ε1, ε2) = a12 = 1, so ε2 is orthogonal to ε1 but ε1 is not orthogonal to ε2.
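A quick numerical check of Example 4.2 with n = 2 (the helper name is my own):

```python
# For A with a12 = 1 and a21 = 0, B_A(x, y) = x^T A y is orthogonal in
# one order of arguments but not in the other.
def B_A(A, x, y):
    n = len(x)
    return sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(n))

A = [[0, 1],
     [0, 0]]             # a12 = 1, a21 = 0
e1, e2 = [1, 0], [0, 1]  # standard basis vectors of R^2

assert B_A(A, e2, e1) == 0   # e2 is orthogonal to e1 ...
assert B_A(A, e1, e2) == 1   # ... but e1 is not orthogonal to e2
```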
Definition: Let B be a bilinear form on a vector space V over a field K. B is said to be symmetric if B(x, y) = B(y, x) for all x, y ∈ V. B is said to be skew symmetric if B(x, y) = −B(y, x) for all x, y ∈ V, and alternating if B(x, x) = 0 for all x ∈ V. B is said to be reflexive if B(x, y) = 0 implies B(y, x) = 0 for all x, y ∈ V.

Theorem: A bilinear form B on a vector space V is reflexive if and only if B is either symmetric or alternating.
Proof. Let B be a bilinear form on a vector space V over a field K. First assume that B is reflexive, that is, B(x, y) = 0 implies B(y, x) = 0 for all x, y ∈ V. We have, for x, y, z ∈ V,

B(x, B(x, y)z − B(x, z)y) = B(x, y)B(x, z) − B(x, z)B(x, y) = 0. (4.1)

Now assume that B is not symmetric. We will prove that B is alternating. If for x ∈ V there exists y ∈ V such that B(x, y) ≠ B(y, x), then from equation 4.3 it follows that B(x, x) = 0. Let x ∈ V be such that B(x, y) = B(y, x) for all y ∈ V. Choose v, w ∈ V such that B(v, w) ≠ B(w, v). Then from equation 4.2, replacing x by v, y by w and z by x, and so again from 4.3, we have B(x + w, x + w) = 0, from which it follows that B(x, x) = 0, and so B is alternating. The converse part follows from remark 4.6. ✷
Proposition: Let V be a vector space over a field K with char K ≠ 2. Then a bilinear form B on V is alternating if and only if it is skew symmetric.

Proof. That B alternating implies B skew symmetric follows from remark 4.6. Now assume B is skew symmetric. Then for all x ∈ V, B(x, x) = −B(x, x), that is, 2B(x, x) = 0, and hence B(x, x) = 0 since char K ≠ 2, so B is alternating. ✷
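The restriction char K ≠ 2 is essential. A minimal sketch of my own (not from the text) over the two element field F₂, where −1 = 1:

```python
# Over F_2 = {0, 1} we have -1 = 1, so every symmetric form is also
# skew symmetric. B(x, y) = x*y (mod 2) is then skew symmetric but
# NOT alternating, since B(1, 1) = 1 != 0.
def B(x, y):
    return (x * y) % 2   # a bilinear form on the 1-dimensional space F_2

for x in (0, 1):
    for y in (0, 1):
        assert B(x, y) == (-B(y, x)) % 2   # skew symmetry holds in F_2

assert B(1, 1) == 1                        # fails to be alternating
```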
5. Orthogonality
Recollecting the orthogonality relation: if V is a vector space over a field K, given two vectors x, y we say that x is orthogonal to y with respect to a bilinear form B on V if B(x, y) = 0.
Definition 5.1: Let V be a vector space over a field K and let B be a bilinear form on V. We define the left radical of V, denoted radL(V), as

radL(V) = {x ∈ V : B(x, y) = 0, ∀y ∈ V},

and the right radical of V, denoted radR(V), as

radR(V) = {x ∈ V : B(y, x) = 0, ∀y ∈ V}.
a bilinear form on V .
⊥L (S) = {x ∈ V : B(x, y) = 0, ∀y ∈ S}
and
⊥R (S) = {x ∈ V : B(y, x) = 0, ∀y ∈ S}.
It would be nice if we didn't have to deal with distinguishing left and right orthogonal complements. When B is symmetric the two notions coincide, and for a subspace W we write W⊥ for the common value ⊥L(W) = ⊥R(W).
Theorem 5.3: Let V be a finite dimensional vector space over a field K and let B
be a symmetric bilinear form on V . For each subspace W of V , we have
1. W ⊥ is a subspace.
2. V = {0}⊥ .
Proof. Suppose the stated conditions hold, that is,

B(x, y) = 0, ∀y ∈ V ⇒ x = 0, (5.1)
B(y, x) = 0, ∀x ∈ V ⇒ y = 0. (5.2)

From equations 5.1 and 5.2 we have that B is non degenerate. The converse follows immediately from the definition of non degeneracy.

Next, let {y1, . . . , ym} be a basis for W. For x ∈ V,

B(x, yi) = 0, i = 1, . . . , m
⇒ B(x, y) = 0, ∀y ∈ W
⇒ x ∈ W⊥,

and when B is non degenerate a dimension count gives

dim W⊥ = n − m. (5.4)
Proof. Let x ∈ U1⊥ ∩ U2⊥. Then x ∈ U1⊥ and x ∈ U2⊥. Now x ∈ U1⊥ implies B(x, y) = 0 ∀y ∈ U1, and x ∈ U2⊥ implies B(x, y) = 0 ∀y ∈ U2. Let z ∈ U1 + U2. Then z = z1 + z2 with z1 ∈ U1 and z2 ∈ U2. So B(x, z) = B(x, z1 + z2) = B(x, z1) + B(x, z2) = 0 + 0 = 0. Since z was arbitrary, B(x, z) = 0 ∀z ∈ U1 + U2. Hence x ∈ (U1 + U2)⊥. Also,

dim(U1⊥ + U2⊥) = dim U1⊥ + dim U2⊥ − dim(U1⊥ ∩ U2⊥) (by theorem 2.20)
In general the equality in case 4 of proposition 5.4 (that W = W⊥⊥) need not hold. Consider the following examples.
Example 5.5: Let V be the space of all continuous functions on [0, 1]. Define a bilinear form B on V by B(f, g) = ∫₀¹ f g, where f, g ∈ V. Let W be the subspace of all functions f such that f(0) = 0. Then W⊥ = {0}, and so W⊥⊥ = V. That is, W ≠ W⊥⊥.
Example 5.6: Consider the inner product space ℓ² of all square summable real sequences, with inner product defined, for x = (x1, . . . , xn, . . .), y = (y1, . . . , yn, . . .) ∈ V, by ⟨x, y⟩ = Σ_{i=1}^∞ xi yi. Let B be the bilinear form on V given by B(x, y) = ⟨x, y⟩, x, y ∈ V. Let Ek be the sequence whose kth entry is 1 and all other entries are zero, and let M = {Ek : k = 1, 2, . . .}. Let W = span M; that is, W is the space of all sequences with only finitely many nonzero entries. Then W⊥ = {0}, and so, as in the previous example, W ≠ W⊥⊥.
6. Bilinear Forms on Finite Dimensional Vector Spaces

In this chapter we treat bilinear forms on finite dimensional vector spaces. The matrix of a bilinear form relative to an ordered basis plays a central role here.
Example 6.1: Let V be a finite dimensional vector space over a field K and let B = {α1, . . . , αn} be an ordered basis for V. Suppose B is a bilinear form on V. If x = Σ_i xi αi and y = Σ_j yj αj, then

B(x, y) = B(Σ_i xi αi, y)
= Σ_i xi B(αi, y)
= Σ_i xi B(αi, Σ_j yj αj)
= Σ_i Σ_j xi yj B(αi, αj).

So, writing A for the n × n matrix with entries Aij = B(αi, αj), called the matrix of B in the ordered basis B and denoted [B]_B, we have

B(x, y) = [x]_B^T A [y]_B. (6.1)
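Equation 6.1 can be checked numerically. The particular form below is an assumed example of my own, computed in the standard basis of R² so that coordinate matrices coincide with the vectors themselves:

```python
def B(x, y):
    # an assumed bilinear form on R^2, made up for this illustration
    return 2 * x[0] * y[0] + x[0] * y[1] - 3 * x[1] * y[1]

basis = [[1, 0], [0, 1]]                        # standard ordered basis
A = [[B(a, b) for b in basis] for a in basis]   # A_ij = B(alpha_i, alpha_j)

def quadratic(A, x, y):
    """[x]^T A [y] for coordinate column vectors x and y."""
    return sum(x[i] * A[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

x, y = [2, 5], [-1, 4]
assert B(x, y) == quadratic(A, x, y)   # equation 6.1 in coordinates
```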
Theorem 6.3: Let V be a finite dimensional vector space over a field K. For each
ordered basis B of V , the function which associates with each bilinear form on V its
matrix in the ordered basis B is an isomorphism of the space L(V, V, K) onto the
space of n × n matrices over K.
Proof. Let B = {α1, . . . , αn} be an ordered basis for V, and let f be the map which assigns to each bilinear form B on V its matrix [B]_B in the ordered basis B; f is linear. Assume [B1]_B = [B2]_B. This implies B1(αi, αj) = B2(αi, αj) ∀ i, j. Now for any bilinear form B on V, the values of B on V × V are determined by the values of B on B × B. Hence B1(αi, αj) = B2(αi, αj) ∀ i, j ⇒ B1 ≡ B2, so f is one one. The map f is also onto, since if we are given an n × n matrix A, then the function defined by B(x, y) = [x]_B^T A [y]_B, as in equation 6.1, is a bilinear form on V whose matrix in the ordered basis B is A. ✷
Corollary 6.4: Let V be an n dimensional vector space over a field K with ordered basis B = {α1, . . . , αn}, and let {f1, . . . , fn} be the dual basis. Then the bilinear forms

Bij(x, y) = fi(x) fj(y), (6.2)

1 ≤ i, j ≤ n, form a basis for the space L(V, V, K). In particular the dimension of L(V, V, K) is n².

Proof. Let B = {α1, . . . , αn} be an ordered basis for V and {f1, . . . , fn} the corresponding dual basis (by remark 2.28). The functions defined in equation 6.2 are bilinear forms on V, see example 3.4. Let x, y ∈ V, x = Σ_{i=1}^n ki αi and y = Σ_{j=1}^n ℓj αj. Then Bij(x, y) = fi(x) fj(y) = ki ℓj. Let B be a bilinear form on V and let A = (aij)_{n×n}, where aij = B(αi, αj), be the matrix of B in the ordered basis B.
Then,

B(x, y) = B(Σ_i ki αi, y)
= Σ_i ki B(αi, y)
= Σ_i ki B(αi, Σ_j ℓj αj)
= Σ_{i,j} ki ℓj B(αi, αj)
= Σ_{i,j} aij ki ℓj
= Σ_{i,j} aij Bij(x, y)
= (Σ_{i,j} aij Bij)(x, y).
That is, B ≡ Σ_{i,j} aij Bij. So {Bij : i, j = 1, . . . , n} spans the space L(V, V, K). Now to prove {Bij : i, j = 1, . . . , n} is linearly independent, suppose B ≡ Σ_{k,l} bkl Bkl. Then for each pair i, j,

B(αi, αj) = Σ_{k,l} bkl Bkl(αi, αj)
= Σ_{k,l} bkl fk(αi) fl(αj)
= bij.

In particular, if B is the zero bilinear form, that is B(αi, αj) = 0 ∀ i, j, then all the scalars bij = 0. Thus Σ_{i,j} bij Bij ≡ 0 implies that all the scalars are zero, and hence the Bij are linearly independent. Hence {Bij : i, j = 1, . . . , n} is a basis for L(V, V, K), and so the dimension of L(V, V, K) is n². ✷
In other words, each bilinear form Bij has as its matrix in the ordered basis B the matrix 'unit' E^{ij} whose only nonzero entry is a 1 in row i and column j. Since these matrix units comprise a basis for the space of n × n matrices, the forms Bij comprise a basis for the space of bilinear forms.
The map in theorem 6.3 depends on the choice of a basis. In this section we establish the relation connecting the matrices of a bilinear form in different ordered bases.
Theorem 6.5: Let V be a finite dimensional vector space over a field K and let B = {α1, . . . , αn} and B′ = {α1′, . . . , αn′} be ordered bases for V. Suppose B is a bilinear form on V. Then there exists an invertible matrix P such that [B]_{B′} = P^T [B]_B P.

Proof. We are given two ordered bases B and B′ and a bilinear form B on V. Consider the unique scalars Pij (by remark 2.18) such that αj′ = Σ_{i=1}^n Pij αi, j = 1, . . . , n. Let x ∈ V, and let X = [x]_B = (x1, . . . , xn)^T and X′ = [x]_{B′} = (x1′, . . . , xn′)^T be the coordinate matrices of x in the ordered bases B and B′ respectively. Then,
x = Σ_{j=1}^n xj′ αj′ = Σ_{j=1}^n xj′ (Σ_{i=1}^n Pij αi)
= Σ_j Σ_i (Pij xj′) αi
= Σ_{i=1}^n (Σ_{j=1}^n Pij xj′) αi.

That is, X = P X′, where P = [Pij]_{n×n}.
Claim: P is invertible.

Since B and B′ are linearly independent sets,

X = 0 ⇔ Σ_i xi αi = x = 0 = Σ_i xi′ αi′ ⇔ xi′ = 0 ∀i ⇔ X′ = 0.

That is, P X′ = 0 has only the trivial solution, so P is invertible. Thus we have obtained an n × n invertible matrix P such that [x]_B = P [x]_{B′}, ∀x ∈ V. Now for
any x, y ∈ V,

B(x, y) = [x]_B^T [B]_B [y]_B
= (P [x]_{B′})^T [B]_B (P [y]_{B′})
= [x]_{B′}^T (P^T [B]_B P) [y]_{B′}.

By the definition and uniqueness of the matrix representing B in an ordered basis, we have [B]_{B′} = P^T [B]_B P. ✷
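A numerical sanity check of Theorem 6.5, with matrices chosen arbitrarily for illustration: the columns of P hold the B-coordinates of the new basis vectors, and Pᵀ[B]_B P reproduces the entries B(αi′, αj′):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

M = [[2, 1],
     [0, -3]]            # [B]_B, matrix of the form in the old basis
P = [[1, 1],
     [1, -1]]            # columns: coordinates of alpha_1', alpha_2' in B

M_new = matmul(transpose(P), matmul(M, P))     # P^T [B]_B P

# entry (i, j) of [B]_{B'} must equal B(alpha_i', alpha_j'), computed
# directly from the old matrix as (P e_i)^T M (P e_j)
cols = transpose(P)
for i in range(2):
    for j in range(2):
        direct = sum(cols[i][a] * M[a][b] * cols[j][b]
                     for a in range(2) for b in range(2))
        assert M_new[i][j] == direct
```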
Definition 6.6 (Congruence of matrices): Two n × n matrices A and B are said to be congruent if there exists an invertible matrix P such that B = P^T A P.

Note: Since P and P^T are invertible, congruent matrices have the same rank.
Proposition 6.7: Let V be a finite dimensional vector space over a field K and let B be a bilinear form on V. Then the following are equivalent.

1. The matrix [B]_B of B in some (every) ordered basis B of V is invertible.
2. radR(V) = {0}.
3. radL(V) = {0}.

Proof. 1 ⇔ 2.
Let B = {β1, . . . , βn} be an ordered basis for V and A = [B]_B = [aij]_{n×n}, where aij = B(βi, βj), be the matrix of B with respect to B. Assume A is invertible and radR(V) ≠ {0}; that is, there is a nonzero x ∈ radR(V). Let X = [x]_B = (x1, . . . , xn)^T. Now AX ≠ 0, since AX = 0 ⇒ X = 0 (as A is invertible) ⇒ x = 0, which is a contradiction. Then there exists y ∈ V, y ≠ 0, such that [y]_B^T A [x]_B ≠ 0 (since if [y]_B^T A [x]_B = 0 for every y ∈ V, then AX = A[x]_B = 0). So B(y, x) ≠ 0, and hence x ∉ radR(V), which is a contradiction.
Conversely, assume radR(V) = {0} and suppose A is not invertible. Then there is a nonzero column X0 with AX0 = 0. Let x0 ∈ V be the vector with [x0]_B = X0; then x0 ≠ 0 and

AX0 = 0
⇒ [y]_B^T A X0 = 0, ∀ y ∈ V
⇒ B(y, x0) = 0, ∀ y ∈ V,

so x0 ∈ radR(V), contradicting radR(V) = {0}. Hence A is invertible. The equivalence 1 ⇔ 3 is proved similarly. ✷

The matrices of a bilinear form in different ordered bases are congruent, and so they have the same rank. So we can define the rank of a bilinear form.
For a bilinear form B on V, define LB : V → V* and RB : V → V* by (LB(x))(y) = B(x, y) and (RB(y))(x) = B(x, y). Each LB(x) and RB(y) is a linear functional, and LB and RB are themselves linear; for instance,

(LB(kx1 + x2))(y) = B(kx1 + x2, y) = kB(x1, y) + B(x2, y) = (kLB(x1) + LB(x2))(y).

Fix an ordered basis B of V with A = [B]_B, X = [x]_B and Y = [y]_B, so that B(x, y) = X^T A Y. Now to find the dimension of the null space of RB, that is, the dimension of the space {y ∈ V : RB(y) ≡ 0}: RB(y) ≡ 0 means (RB(y))(x) = 0, ∀x ∈ V, that is, X^T A Y = 0 for every n × 1 matrix X, which gives AY = 0. So the dimension of the null space of RB is the dimension of the solution space of the system AY = 0. A symmetric argument shows that the null space of LB corresponds to the solution space of A^T X = 0. Now A and A^T have the same rank, hence the same nullity. Therefore,

nullity of RB = nullity of LB.
Definition (Rank of a bilinear form): Let B be a bilinear form on a finite dimensional vector space V. Then the rank of B is defined as the integer r such that r = rank LB = rank RB.
Theorem: Let B be a bilinear form on a finite dimensional vector space V and let B be an ordered basis for V. Then the rank of B equals the rank of the matrix [B]_B.

Proof. We have from the definition that the rank of the bilinear form B equals rank LB, and by the discussion above the nullity of LB equals the nullity of A^T, where A = [B]_B; hence rank LB = rank A^T = rank A. That is, rank of B = rank of [B]_B. Since the matrices of a bilinear form in any two ordered bases are congruent, this rank does not depend on the choice of basis. ✷
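The rank computation can be sketched in code. The Gaussian elimination below is my own generic implementation over the rationals (exact `Fraction` arithmetic avoids floating point issues), not an algorithm from the text:

```python
# The rank of a bilinear form equals the rank of its matrix in any
# ordered basis; here rank is computed by Gaussian elimination.
from fractions import Fraction

def matrix_rank(A):
    M = [[Fraction(v) for v in row] for row in A]
    rows, cols = len(M), len(M[0])
    rank = 0
    for col in range(cols):
        # find a pivot at or below the current rank row
        pivot = next((r for r in range(rank, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rows):
            if r != rank and M[r][col] != 0:
                f = M[r][col] / M[rank][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# a degenerate symmetric form: the second row is a multiple of the first
A = [[1, 2],
     [2, 4]]
assert matrix_rank(A) == 1   # the form B_A has rank 1, not 2
```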
Corollary 6.11: Let B be a bilinear form on an n dimensional vector space V. The following are equivalent.

1. rank of B = n.
2. For each nonzero x ∈ V there is a y ∈ V with B(x, y) ≠ 0, that is, radL(V) = {0}.
3. For each nonzero y ∈ V there is an x ∈ V with B(x, y) ≠ 0, that is, radR(V) = {0}.
Proof. 1 ⇒ 2.
We have rank of B = n ⇒ rank of LB = n. LB is a linear transformation from V to V* and dim V = dim V*. Then by the Rank - Nullity theorem (theorem 2.30), nullity of LB = 0, that is, radL(V) = {0}. The remaining implications follow similarly. ✷
Remark 6.12: A bilinear form on a vector space V is non degenerate (or non singular) if it satisfies conditions 2 and 3 of corollary 6.11. If V is finite dimensional, then a bilinear form B is non degenerate provided B satisfies any one of the statements of corollary 6.11. In particular, B is non degenerate iff its matrix in some (every) ordered basis for V is a non singular matrix.
Example 6.13: Let V = Rⁿ and let B be the bilinear form defined as in example 3.2, the Euclidean inner product. Let B = {ε1, . . . , εn}, εj = (0, . . . , 1, . . . , 0), where 1 occurs in the jth position, be the standard basis of Rⁿ. We have

B(εi, εj) = δij = 1 if i = j, and 0 if i ≠ j,

so [B]_B is the identity matrix, which is symmetric.

If B is a bilinear form on a finite dimensional vector space V, then B is symmetric iff for any ordered basis B of V, [B]_B is symmetric, that is, [B]_B^T = [B]_B.
Definition 6.14: Let B be a bilinear form defined on a vector space V over a field K. B is called symmetric if B(x, y) = B(y, x) for all x, y ∈ V.

One important class of symmetric bilinear forms consists of the inner products on real vector spaces. If V is a real vector space, an inner product on V is a symmetric bilinear form B on V which satisfies B(x, x) > 0 if x ≠ 0.
Definition 6.15: Let B be a bilinear form on a real vector space V. Then B is said to be positive definite if B(x, x) > 0 for all nonzero x ∈ V.
In this section we will answer when a bilinear form on a finite dimensional vector space is diagonalizable; that is, if B is a bilinear form on a finite dimensional vector space V, whether there is an ordered basis B for V in which B is represented by a diagonal matrix. We will prove that this is possible if and only if B is a symmetric bilinear form on a vector space over a field of characteristic not equal to 2.
Lemma 6.16: Let V be a vector space over a field of characteristic not equal to 2. Let B be a non trivial symmetric bilinear form on V. Then there exists x ∈ V such that B(x, x) ≠ 0.
Theorem 6.17: Let V be a finite dimensional vector space over a field of characteristic not equal to 2, and let B be a symmetric bilinear form on V. Then there is an ordered basis for V in which B is represented by a diagonal matrix.

Proof. We prove the result by induction on the dimension n of the space V. Our aim is to find an ordered basis B = {α1, . . . , αn} such that B(αi, αj) = 0 for i ≠ j. If B ≡ 0, then the matrix of B in any ordered basis is the zero matrix, which is diagonal. Also, if the dimension of the space is 1, then the matrix of B is 1 × 1, which is diagonal. So suppose B ≢ 0 and n > 1. Then by lemma 6.16 there is a vector x ≠ 0 in V such that B(x, x) ≠ 0. Let W be the one dimensional subspace of V spanned by x, and let W⊥ be the set of all y in V such that B(x, y) = 0. Since B(x, x) ≠ 0, with respect to the basis B = {x} of W, [B|W]_B is invertible, and so B|W is non degenerate; then by case 6 of theorem 5.3 we have V = W ⊕ W⊥. The restriction of B to W⊥ is again symmetric, so by the induction hypothesis there is an ordered basis {α2, . . . , αn} of W⊥ with B(αi, αj) = 0 for i ≠ j, and since B(x, αi) = 0 for each i, the ordered basis {x, α2, . . . , αn} represents B by a diagonal matrix. ✷
Corollary 6.18: Let K be a subfield of the field of complex numbers, and let A be a symmetric n × n matrix over K. Then there is an invertible n × n matrix P over K such that P^T A P is diagonal.

Proof. We are given a symmetric n × n matrix A = [aij]_{n×n}. Take the standard basis B = {ε1, . . . , εn}, εj = (0, . . . , 1, . . . , 0), of Kⁿ over K. Define a bilinear form B on Kⁿ as in equation 6.1. Clearly [B]_B = A, and since A is symmetric, B is also symmetric. Then by theorem 6.17 there is an ordered basis B′ in which [B]_{B′} is diagonal. Now by theorem 6.5 there is an invertible matrix P such that [B]_{B′} = P^T A P. That is, P^T A P is diagonal. ✷
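Corollary 6.18 is constructive. The following rough sketch is my own implementation (using simultaneous row and column operations, sometimes called Lagrange's method, which is not spelled out in the text) and produces an invertible P with PᵀAP diagonal for a small symmetric rational matrix; the pivot fix-up step assumes characteristic zero:

```python
from fractions import Fraction

def add_row_col(M, P, i, j, f):
    """Row_i += f*Row_j, then Col_i += f*Col_j on M (one congruence
    step E^T M E); the column operation is accumulated into P."""
    n = len(M)
    for c in range(n):
        M[i][c] += f * M[j][c]
    for r in range(n):
        M[r][i] += f * M[r][j]
        P[r][i] += f * P[r][j]

def congruent_diagonalization(A):
    """Return (D, P) with P invertible and P^T A P = D diagonal,
    for a symmetric matrix A over the rationals."""
    n = len(A)
    M = [[Fraction(v) for v in row] for row in A]
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        if M[k][k] == 0:
            # create a nonzero pivot by mixing in a later row/column;
            # one of f = 1, -1 must work when some M[r][k] != 0
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    for f in (Fraction(1), Fraction(-1)):
                        if 2 * f * M[r][k] + f * f * M[r][r] != 0:
                            add_row_col(M, P, k, r, f)
                            break
                    break
        if M[k][k] == 0:
            continue
        for r in range(k + 1, n):
            add_row_col(M, P, r, k, -M[r][k] / M[k][k])
    return M, P

A = [[0, 1], [1, 0]]
D, P = congruent_diagonalization(A)
# verify P^T A P = D and that D is diagonal
PtAP = [[sum(P[a][i] * A[a][b] * P[b][j] for a in range(2) for b in range(2))
         for j in range(2)] for i in range(2)]
assert PtAP == D and D[0][1] == 0 and D[1][0] == 0
```

Note that, unlike eigenvalue diagonalization, the diagonal entries produced here are not invariants of A; only congruence class data (such as the rank) is preserved.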
7. Conclusion

In this project an introductory study of bilinear forms was made. Various types of bilinear forms such as symmetric, alternating etc. are discussed, and a brief study is made of their characterisations. Next we discussed the orthogonality of bilinear forms and some of its properties. The main discussion of bilinear forms in this project concerns finite dimensional vector spaces. Here we have defined the matrix of a bilinear form with respect to an ordered basis for the vector space, the rank of a bilinear form, and how the matrices are related when the basis is changed. Finally, symmetric bilinear forms and a characterisation of the diagonalization of bilinear forms over a field of characteristic not equal to two are discussed. Though, as said in the beginning, this is an introductory approach, we could explore some of the beauty of bilinear forms.
References

[2] Paul R. Halmos, Finite dimensional vector spaces, no. 7, Princeton University Press, 1947.
[6] V. Sahai and V. Bist, Linear algebra, Narosa Publishing House, 2002.