
Math 206, Spring 2016

Assignment 10 Solutions

Due: April 8, 2016

Part A.
(1) Below we give several sets V equipped with addition and scaling operations denoted by ⊕ and ⊙. For
each, determine if V endowed with the operations ⊕ and ⊙ is a vector space. If so, verify the existence
of a neutral element, the existence of additive inverses, associativity of addition, and distributivity of
scalar multiplication over vector addition. If not, give a specific example of vectors failing one of the
axioms.
(a) V = {r ∈ R : r > 0}, with a ⊕ b = ab (the expression on the right is the usual multiplication of real
numbers) and k ⊙ a = a^k (the expression on the right is the usual exponentiation of real numbers).
[You may use familiar properties of real arithmetic if you cite them properly.]
Solution. This is a vector space. We claim that 1 is the neutral element of V. To check this, let
r ∈ V be given (i.e., r is a positive real number). Then we have
    1 ⊕ r = 1 · r = r
using familiar rules from real multiplication. Hence 1 is indeed the neutral element.
Now we show that inverses under ⊕ exist. So let r ∈ V be given; since r > 0, we have 1/r is also a
positive real number. We then have
    r ⊕ (1/r) = r · (1/r) = 1.
We have already shown that 1 is the neutral element in V, and so this expression shows that 1/r is
the inverse of r under ⊕.
Next we verify associativity. So let a, b, c ∈ V be given. We then have
    a ⊕ (b ⊕ c) = a ⊕ (bc)       (definition of ⊕)
                = a(bc)          (definition of ⊕)
                = (ab)c          (real multiplication is associative)
                = (ab) ⊕ c       (definition of ⊕)
                = (a ⊕ b) ⊕ c    (definition of ⊕).

Finally we verify that scalar multiplication distributes over vector addition. So let a, b ∈ V be
given, and k ∈ R. We then have
    k ⊙ (a ⊕ b) = k ⊙ (ab)             (definition of ⊕)
                = (ab)^k               (definition of ⊙)
                = a^k b^k              (exponent rules for real arithmetic)
                = a^k ⊕ b^k            (definition of ⊕)
                = (k ⊙ a) ⊕ (k ⊙ b)    (definition of ⊙).
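As an informal sanity check (not part of the proof), the axioms above can be spot-checked numerically. The helper names vadd and smul below are ad hoc stand-ins for ⊕ and ⊙:

```python
import math

def vadd(a, b):   # the operation (+): usual real multiplication
    return a * b

def smul(k, a):   # the operation (.): usual real exponentiation
    return a ** k

samples = [0.5, 1.0, 2.0, 3.7]
for a in samples:
    assert math.isclose(vadd(1.0, a), a)           # 1 acts neutrally
    assert math.isclose(vadd(a, 1.0 / a), 1.0)     # 1/a is the inverse of a
    for b in samples:
        for c in samples:
            # associativity of the new addition
            assert math.isclose(vadd(a, vadd(b, c)), vadd(vadd(a, b), c))
        for k in [-1.0, 0.0, 2.0]:
            # scalar multiplication distributes over vector addition
            assert math.isclose(smul(k, vadd(a, b)), vadd(smul(k, a), smul(k, b)))
```

Of course a finite check like this proves nothing; the argument above is what establishes the axioms for all positive reals.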


(b) V = {M ∈ R^(n×n) : M is invertible}, with M1 ⊕ M2 the usual matrix addition, and k ⊙ M the usual
matrix scaling.

Solution. This is not a vector space. Recall that the identity matrix I is invertible; it's also true
that −I is invertible, since (−I)(−I) = (−1)^2 I = I. [Hence −I isn't just invertible, but is in fact
its own inverse.] The given definition of ⊕ would then give
    I ⊕ (−I) = I + (−I) = Z,
http://palmer.wellesley.edu/~aschultz/w16/math206

Page 1 of 6


where Z is the n × n matrix for which every entry is zero. But Z ∉ V (since, for example, its rank
is 0 instead of n), and so V is not closed under ⊕.
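The failure of closure is easy to witness concretely. As an illustration (not part of the graded solution), here is the n = 2 case, with ad hoc helpers mat_add and det2:

```python
# The invertible 2x2 matrices are not closed under matrix addition,
# since I + (-I) is the zero matrix.

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A))] for i in range(len(A))]

def det2(M):   # determinant of a 2x2 matrix; M is invertible iff det2(M) != 0
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

I = [[1, 0], [0, 1]]
neg_I = [[-1, 0], [0, -1]]

assert det2(I) != 0 and det2(neg_I) != 0       # both summands are invertible
Z = mat_add(I, neg_I)
assert Z == [[0, 0], [0, 0]] and det2(Z) == 0  # but their sum is not
```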

(c) Suppose that V is a vector space with operations + and ·. Define new operations ⊕ and ⊙ by
v1 ⊕ v2 = v1 + (−v2) (where on the right side, + is the original operation on V and −v2 is the
additive inverse of v2 under +), and k ⊙ v = k · v (i.e., the new scaling operation is the same as
the old scaling operation).
Solution. This is not always a vector space. Let V = R^2 and consider v = (1, 1). Then we have
    (v ⊕ v) ⊕ v = ((1, 1) + (−1, −1)) ⊕ (1, 1) = (0, 0) ⊕ (1, 1) = (0, 0) + (−1, −1) = (−1, −1),
whereas
    v ⊕ (v ⊕ v) = (1, 1) ⊕ ((1, 1) + (−1, −1)) = (1, 1) ⊕ (0, 0) = (1, 1) + (0, 0) = (1, 1).
Hence ⊕ is not associative.


[Note: one can prove a stronger result: if V is any vector space that contains more than just the
neutral element, then the operation ⊕ defined above will also fail to be associative. The proof of
this proceeds much like the above calculation, though v is chosen as any nonzero vector in V. The
two calculations then yield −v and v. One then has to argue that −v ≠ v, for which the axiom
1 · v = v is useful.
Finally: it's not correct to say that for all V, the operations ⊕ and ⊙ fail to make V into a vector
space. The reason is that if we let V = {0}, then the operations actually do make V into a
vector space. This happens essentially because addition and scaling on this vector space are always
trivial.]
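The two groupings computed above can be checked mechanically. This is a sketch for illustration only; the helper name osub stands in for the operation ⊕:

```python
# On V = R^2 with v1 (+) v2 := v1 - v2, the two ways of grouping
# a triple "sum" disagree, so the new addition is not associative.

def osub(v, w):   # the new addition: v (+) w = v + (-w)
    return [a - b for a, b in zip(v, w)]

v = [1, 1]
left  = osub(osub(v, v), v)   # (v (+) v) (+) v, which should equal -v
right = osub(v, osub(v, v))   # v (+) (v (+) v), which should equal v
assert left == [-1, -1]
assert right == [1, 1]
assert left != right          # associativity fails
```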

(d) Suppose that V1 is a vector space under the operations of + and ·, and that V2 is a vector space
under the operations of ⊞ and ⊡. Define V with operations ⊕ and ⊙ by
    V = V1 × V2 = {(v1, v2) : v1 ∈ V1, v2 ∈ V2}
    (v1, v2) ⊕ (v1′, v2′) = (v1 + v1′, v2 ⊞ v2′)
    k ⊙ (v1, v2) = (k · v1, k ⊡ v2).

Solution. This is a vector space. First we verify associativity of addition. Let v, w, u ∈ V be
given. This means that there are v1, w1, u1 ∈ V1 and v2, w2, u2 ∈ V2 so that v = (v1, v2), w =
(w1, w2) and u = (u1, u2). Observe that
    (v ⊕ w) ⊕ u = ((v1, v2) ⊕ (w1, w2)) ⊕ (u1, u2)
                = (v1 + w1, v2 ⊞ w2) ⊕ (u1, u2)           (definition of ⊕)
                = ((v1 + w1) + u1, (v2 ⊞ w2) ⊞ u2)        (definition of ⊕)
                = (v1 + (w1 + u1), v2 ⊞ (w2 ⊞ u2))        (associativity of + and ⊞)
                = (v1, v2) ⊕ (w1 + u1, w2 ⊞ u2)           (definition of ⊕)
                = (v1, v2) ⊕ ((w1, w2) ⊕ (u1, u2))        (definition of ⊕).
Hence ⊕ is associative.
Now we verify that there exists a neutral element. Let 01 be the neutral element of V1 , and 02
the neutral element of V2 . (These elements exist because V1 and V2 are vector spaces.) We claim

that (01, 02) ∈ V acts neutrally. To prove this, let v ∈ V be given; as before, this means that
v = (v1, v2) for some v1 ∈ V1 and v2 ∈ V2. Note that
    v ⊕ (01, 02) = (v1, v2) ⊕ (01, 02)
                 = (v1 + 01, v2 ⊞ 02)    (definition of ⊕)
                 = (v1, v2)              (neutrality of 01 and 02).
Hence (01, 02) acts neutrally, as claimed.


Now we show that additive inverses exist. Let v ∈ V be given; as before, this means v = (v1, v2)
for some v1 ∈ V1 and v2 ∈ V2. Let w1 ∈ V1 be the additive inverse of v1 (which we know exists
since V1 is a vector space), and let w2 ∈ V2 be the additive inverse of v2 (which we know exists
since V2 is a vector space). We claim that (w1, w2) is the additive inverse of v. To verify this,
observe that
    v ⊕ (w1, w2) = (v1, v2) ⊕ (w1, w2)
                 = (v1 + w1, v2 ⊞ w2)    (definition of ⊕)
                 = (01, 02)              (definition of additive inverse).
Hence (w1, w2) is an additive inverse, as claimed.


Finally, we show that scaling distributes across vector sums. So let k ∈ R be given, and v, w ∈ V.
As usual, we then have v = (v1, v2) and w = (w1, w2), where v1, w1 ∈ V1 and v2, w2 ∈ V2. We
then have
    k ⊙ (v ⊕ w) = k ⊙ ((v1, v2) ⊕ (w1, w2))
                = k ⊙ (v1 + w1, v2 ⊞ w2)                        (definition of ⊕)
                = (k · (v1 + w1), k ⊡ (v2 ⊞ w2))                (definition of ⊙)
                = ((k · v1) + (k · w1), (k ⊡ v2) ⊞ (k ⊡ w2))    (since · and ⊡ distribute over + and ⊞)
                = (k · v1, k ⊡ v2) ⊕ (k · w1, k ⊡ w2)           (definition of ⊕)
                = (k ⊙ (v1, v2)) ⊕ (k ⊙ (w1, w2))               (definition of ⊙).
This is the desired result.
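The componentwise construction can be illustrated concretely. The sketch below (not part of the proof) takes V1 = R with its usual operations and V2 the positive reals from part (a); the helper names padd and psmul are made up for this example:

```python
import math

def padd(v, w):    # (v1, v2) (+) (w1, w2) = (v1 + w1, v2 * w2)
    return (v[0] + w[0], v[1] * w[1])

def psmul(k, v):   # k (.) (v1, v2) = (k * v1, v2 ** k)
    return (k * v[0], v[1] ** k)

zero = (0.0, 1.0)  # the neutral element (0_1, 0_2): 0 in R, 1 in the positive reals
v, w, u = (2.0, 3.0), (-1.0, 0.5), (4.0, 2.0)

assert padd(v, zero) == v                             # neutrality
assert padd(padd(v, w), u) == padd(v, padd(w, u))     # associativity
lhs = psmul(2.0, padd(v, w))
rhs = padd(psmul(2.0, v), psmul(2.0, w))
assert all(math.isclose(a, b) for a, b in zip(lhs, rhs))  # distributivity
```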


(2) Let V be a vector space with operations ⊕ and ⊙, and let W be a vector space with operations ⊞ and
⊡. We say that a function T : V → W respects addition if for all v1, v2 ∈ V we have T(v1 ⊕ v2) =
T(v1) ⊞ T(v2), and we say that T : V → W respects scalars if for all k ∈ R and v ∈ V we have
T(k ⊙ v) = k ⊡ T(v). Now define
    L(V, W) = {T : V → W : T respects addition and scalars}.
We define addition and scaling by
• if T, S ∈ L(V, W), then we define T + S : V → W to be the function satisfying (T + S)(v) =
T(v) ⊞ S(v) for all v ∈ V
• if k ∈ R and T ∈ L(V, W), then we define k · T : V → W to be the function satisfying (k · T)(v) =
k ⊡ T(v) for all v ∈ V.
Carefully prove that for all T, S ∈ L(V, W) we have T + S ∈ L(V, W), and that for all k ∈ R we have
k · T ∈ L(V, W).

Solution. To show that T + S is linear, first let v1, v2 ∈ V be given. We need to argue that
(T + S)(v1 ⊕ v2) = (T + S)(v1) ⊞ (T + S)(v2). We do this as follows:
    (T + S)(v1 ⊕ v2) = T(v1 ⊕ v2) ⊞ S(v1 ⊕ v2)                  (definition of T + S)
                     = (T(v1) ⊞ T(v2)) ⊞ (S(v1) ⊞ S(v2))        (linearity of T and S)
                     = T(v1) ⊞ (T(v2) ⊞ (S(v1) ⊞ S(v2)))        (associativity of ⊞)
                     = T(v1) ⊞ ((T(v2) ⊞ S(v1)) ⊞ S(v2))        (associativity of ⊞)
                     = T(v1) ⊞ ((S(v1) ⊞ T(v2)) ⊞ S(v2))        (commutativity of ⊞)
                     = T(v1) ⊞ (S(v1) ⊞ (T(v2) ⊞ S(v2)))        (associativity of ⊞)
                     = (T(v1) ⊞ S(v1)) ⊞ (T(v2) ⊞ S(v2))        (associativity of ⊞)
                     = (T + S)(v1) ⊞ (T + S)(v2)                (definition of T + S).

To complete our proof of linearity of T + S, let k ∈ R and v ∈ V be given. Then we have
    (T + S)(k ⊙ v) = T(k ⊙ v) ⊞ S(k ⊙ v)        (definition of T + S)
                   = (k ⊡ T(v)) ⊞ (k ⊡ S(v))    (linearity of T and S)
                   = k ⊡ (T(v) ⊞ S(v))          (since ⊡ distributes over ⊞)
                   = k ⊡ ((T + S)(v))           (definition of T + S).
Hence T + S respects scalars. This completes the proof of linearity of T + S.


Now let ℓ ∈ R be given and T ∈ L(V, W), and we argue that ℓ · T is linear. First, we show it respects
addition. Let v1, v2 ∈ V be given. We need to argue that (ℓ · T)(v1 ⊕ v2) = (ℓ · T)(v1) ⊞ (ℓ · T)(v2).
We do this as follows:
    (ℓ · T)(v1 ⊕ v2) = ℓ ⊡ T(v1 ⊕ v2)                 (definition of ℓ · T)
                     = ℓ ⊡ (T(v1) ⊞ T(v2))            (linearity of T)
                     = (ℓ ⊡ T(v1)) ⊞ (ℓ ⊡ T(v2))      (⊡ distributes across ⊞)
                     = ((ℓ · T)(v1)) ⊞ ((ℓ · T)(v2))  (definition of ℓ · T).
Hence ℓ · T respects addition.


To complete our proof of linearity of ℓ · T, let k ∈ R and v ∈ V be given. Then we have
    (ℓ · T)(k ⊙ v) = ℓ ⊡ T(k ⊙ v)       (definition of ℓ · T)
                   = ℓ ⊡ (k ⊡ T(v))     (linearity of T)
                   = (ℓk) ⊡ T(v)        (scaling is associative)
                   = (kℓ) ⊡ T(v)        (real multiplication is commutative)
                   = k ⊡ (ℓ ⊡ T(v))     (scaling is associative)
                   = k ⊡ ((ℓ · T)(v))   (definition of ℓ · T).
Hence ℓ · T respects scalars. This completes the proof of linearity of ℓ · T.
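The closure properties just proved can be illustrated in the familiar case V = W = R^2 with the usual operations, where linear maps are given by 2 × 2 matrices. This sketch (names apply, TplusS, kT are ad hoc) tests the defining identities on sample inputs:

```python
def apply(M, v):   # apply a 2x2 matrix M to a vector v in R^2
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

T = [[1, 2], [3, 4]]
S = [[0, -1], [5, 2]]

def TplusS(v):     # (T + S)(v) = T(v) + S(v)
    return [a + b for a, b in zip(apply(T, v), apply(S, v))]

def kT(k, v):      # (k * T)(v) = k * T(v)
    return [k * a for a in apply(T, v)]

v1, v2, k = [1, -2], [3, 5], 7
vsum = [a + b for a, b in zip(v1, v2)]
# T + S respects addition and scalars:
assert TplusS(vsum) == [a + b for a, b in zip(TplusS(v1), TplusS(v2))]
assert TplusS([k * a for a in v1]) == [k * a for a in TplusS(v1)]
# k * T respects addition and scalars:
assert kT(3, vsum) == [a + b for a, b in zip(kT(3, v1), kT(3, v2))]
assert kT(3, [k * a for a in v1]) == [k * a for a in kT(3, v1)]
```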


Part B.
(1) (a) Use only vector space axioms to prove the following theorem (which we call the cancellative property): If V is a vector space with operations + and ·, then if v, w1, w2 ∈ V satisfy
v + w1 = v + w2 ,
then w1 = w2 .
Solution. Since V is a vector space, we know there exists an additive inverse of v; denote the
additive inverse by u. By definition, this means that v + u = 0, where 0 is the neutral element of
V . Observe then that
    w1 = 0 + w1          (definition of neutral element)
       = (v + u) + w1    (definition of additive inverse)
       = (u + v) + w1    (commutativity of addition)
       = u + (v + w1)    (associativity of addition)
       = u + (v + w2)    (hypothesis)
       = (u + v) + w2    (associativity of addition)
       = (v + u) + w2    (commutativity of addition)
       = 0 + w2          (definition of additive inverse)
       = w2              (definition of neutral element).




(b) Prove that if v ∈ V is given, then it has a unique additive inverse. [Note: the vector space axioms
say that v is guaranteed to have some additive inverse, but don't stipulate that the inverse is
unique; that's what you'll be proving here.]
Solution. Suppose that u and w are two elements of V with v + u = 0 and v + w = 0. Then
we have
    v + u = v + w.
The previous result then gives that u = w.

(c) Carefully use vector space axioms to prove that if v ∈ V is given, then −v = (−1) · v.
Solution. By the previous result, we can show that −v = (−1) · v by proving that v + ((−1) · v) =
0. Let's do it:
    v + ((−1) · v) = (1 · v) + ((−1) · v)    (vector space axiom)
                   = (1 + (−1)) · v          (distributivity across scalar sums)
                   = 0 · v                   (definition of −1)
                   = 0                       (result from class).
[Note that the results which weren't explicitly written in the language of axioms (our appeal
to part (b) and a result from class) were both proved using only axioms of a vector space. It is
in this sense that they are legal to use for this problem, since we could simply cut-and-paste the
previous arguments if we wanted to write everything fully in the language of axioms.]


(2) Use the subspace theorem to prove that the set of even functions E defined by
    E = {f ∈ F(R, R) : f(−x) = f(x) for all x ∈ R}
is a subspace of F(R, R).
Solution. First, let Z be the neutral element of F(R, R). As we saw in class, for all x ∈ R we have
Z(x) = 0. In particular, for any x ∈ R we have that
    Z(−x) = 0 = Z(x).
Hence Z ∈ E.
Now let f, g ∈ E be given. To show that f + g ∈ E, let x ∈ R be given. Then we have
    (f + g)(−x) = f(−x) + g(−x)    (definition of f + g)
                = f(x) + g(x)      (since f, g ∈ E)
                = (f + g)(x)       (definition of f + g).
Hence f + g ∈ E.
Finally, let k ∈ R be given, and f ∈ E. We aim to show that the function k · f ∈ E, so let x ∈ R be
given. Then we have
    (k · f)(−x) = k(f(−x))    (definition of k · f)
                = k(f(x))     (since f ∈ E)
                = (k · f)(x)  (definition of k · f).
Hence k · f ∈ E.
By the subspace test, we see that E is a subspace of F(R, R).
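The closure computations above can be spot-checked on concrete even functions. This is an illustration only (the particular functions and sample points are chosen arbitrarily):

```python
# Sums and scalar multiples of even functions are again even;
# we sample two even functions at several points.

f = lambda x: x ** 2          # even: (-x)^2 = x^2
g = lambda x: abs(x) + 3.0    # even: |-x| = |x|
fg = lambda x: f(x) + g(x)    # the sum f + g
kf = lambda x: 5.0 * f(x)     # the scalar multiple 5 * f

for x in [-2.0, -0.5, 0.0, 1.0, 3.5]:
    assert f(-x) == f(x) and g(-x) == g(x)  # f and g are even
    assert fg(-x) == fg(x)                  # so f + g is even
    assert kf(-x) == kf(x)                  # and 5 * f is even
```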
