
Linear Algebra 1

Tutorial Groups: T7 and T10 Final Exam Sample


Question 1: Solve the linear system of equations

$$x_1 + kx_2 = x_2 + kx_3 = x_3 + kx_1 = 1$$

where $k \in \mathbb{R}$ is a parameter. One deduces that there are $\binom{4}{2} = 6$ possible equations, and the first run-through gives:

$$x_1 + kx_2 = x_2 + kx_3 \qquad (1)$$
$$x_1 + kx_2 = x_3 + kx_1 \qquad (2)$$
$$x_2 + kx_3 = x_3 + kx_1 \qquad (3)$$
$$x_1 + kx_2 = 1 \qquad (4)$$
$$x_2 + kx_3 = 1 \qquad (5)$$
$$x_3 + kx_1 = 1 \qquad (6)$$

However, notice that the first 3 equations can be inferred from the last 3 equations, so that it is enough to consider the last 3 equations alone. So form the corresponding augmented matrix:

$$\left(\begin{array}{ccc|c} 1 & k & 0 & 1 \\ 0 & 1 & k & 1 \\ k & 0 & 1 & 1 \end{array}\right)$$

Begin with some case distinction. If $k = 0$, one has:

$$\left(\begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{array}\right)$$

So there is a unique solution. Hence suppose that $k \neq 0$. Perform some Gaussian elimination:

$$\left(\begin{array}{ccc|c} 1 & k & 0 & 1 \\ 0 & 1 & k & 1 \\ k & 0 & 1 & 1 \end{array}\right) \xrightarrow{R_3 \to R_3 - kR_1} \left(\begin{array}{ccc|c} 1 & k & 0 & 1 \\ 0 & 1 & k & 1 \\ 0 & -k^2 & 1 & 1 - k \end{array}\right) \xrightarrow{R_3 \to R_3 + k^2 R_2} \left(\begin{array}{ccc|c} 1 & k & 0 & 1 \\ 0 & 1 & k & 1 \\ 0 & 0 & 1 + k^3 & k^2 - k + 1 \end{array}\right)$$

Now if $k = -1$, then $k^3 + 1 = 0$ and $k^2 - k + 1 = 3$, so that one does NOT have a solution in this case. So assume that $k \notin \{0, -1\}$. Note that $k^2 - k + 1 > 0$ for all $k \in \mathbb{R}$. To see this, one uses some results from calculus. Take the derivative of $f(k) = k^2 - k + 1$, which is $f'(k) = 2k - 1$, so $f'(1/2) = 0$ tells us that $f$ has a minimum or maximum at $k = 1/2$. Finally, $f''(k) = 2 > 0$, which means that $k = 1/2$ is a minimum, and $f(1/2) = 3/4 > 0$. Then the last row tells us that:

$$x_3 = \frac{k^2 - k + 1}{1 + k^3}$$
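Before continuing to $x_2$ and $x_1$, the formula for $x_3$ admits a quick numeric sanity check (a Python/NumPy sketch, not part of the original solution; the sample value $k = 2$ is an arbitrary choice):

```python
import numpy as np

# Solve the reduced 3x3 system for a sample parameter k and compare
# the last unknown against the closed form x3 = (k^2 - k + 1)/(1 + k^3).
k = 2.0
A = np.array([[1.0, k,   0.0],
              [0.0, 1.0, k  ],
              [k,   0.0, 1.0]])
b = np.ones(3)

x = np.linalg.solve(A, b)
expected_x3 = (k**2 - k + 1) / (1 + k**3)
assert abs(x[2] - expected_x3) < 1e-12
```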

The second row then gives:

$$x_2 = 1 - kx_3 = 1 - \frac{k(k^2 - k + 1)}{1 + k^3} = 1 - \frac{k^3 + 1 - k^2 + k - 1}{1 + k^3} = 1 - 1 + \frac{k^2 - k + 1}{1 + k^3} = \frac{k^2 - k + 1}{1 + k^3}$$

And finally the first row has a similar computation:

$$x_1 = 1 - kx_2 = \frac{k^2 - k + 1}{1 + k^3}$$

Therefore, one has described all possible solutions to this system of linear equations.

Question 2: Consider the $2 \times 2$ matrix:

$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

Find all $2 \times 2$ matrices $X$ such that $AX = XA$. To do this, let:

$$X = \begin{pmatrix} x_1 & x_2 \\ x_3 & x_4 \end{pmatrix}$$

Then perform direct computation:

$$AX = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 & x_2 \\ x_3 & x_4 \end{pmatrix} = \begin{pmatrix} x_1 + x_3 & x_2 + x_4 \\ x_3 & x_4 \end{pmatrix}$$

$$XA = \begin{pmatrix} x_1 & x_2 \\ x_3 & x_4 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} x_1 & x_1 + x_2 \\ x_3 & x_3 + x_4 \end{pmatrix}$$

Comparing entries of $AX$ and $XA$:

$$x_1 + x_3 = x_1 \implies x_3 = 0$$
$$x_2 + x_4 = x_1 + x_2 \implies x_4 = x_1$$

(The remaining entries give $x_3 = x_3$ and $x_4 = x_3 + x_4$, which are consistent with $x_3 = 0$.)

These are all the constraints one can infer from $AX = XA$. Therefore, all possible matrices $X$ are of the form:

$$X = \begin{pmatrix} x_1 & x_2 \\ 0 & x_1 \end{pmatrix}$$

for $x_1, x_2 \in \mathbb{R}$. It is easy to check that such an $X$ commutes with $A$.
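The commutation claim can also be spot-checked numerically (a Python/NumPy sketch, not part of the original solution; the sample values of $x_1, x_2$ are arbitrary):

```python
import numpy as np

# Spot-check that every X of the form [[x1, x2], [0, x1]] commutes
# with A = [[1, 1], [0, 1]] for a few sample values of x1, x2.
A = np.array([[1, 1],
              [0, 1]])

for x1, x2 in [(3, -2), (0, 5), (7, 7)]:
    X = np.array([[x1, x2],
                  [0,  x1]])
    assert np.array_equal(A @ X, X @ A)
```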

Question 3: Given

$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

compute $A^{2013}$. Compute a few powers first:

$$A^2 = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \qquad A^3 = \begin{pmatrix} 1 & 3 \\ 0 & 1 \end{pmatrix}$$

Hence, one guesses that:

$$A^n = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}$$

And this can be proved by induction. Note that every $A^n$ commutes with $A$. One has already established the base cases. So suppose $A^n$ is given as above. Then a simple calculation shows:

$$A^{n+1} = A^n A = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & n+1 \\ 0 & 1 \end{pmatrix}$$

Hence, one has:

$$A^{2013} = \begin{pmatrix} 1 & 2013 \\ 0 & 1 \end{pmatrix}$$
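The closed form can also be verified directly (a Python/NumPy sketch, not part of the original solution; the integer entries stay well below the 64-bit limit, so there is no overflow here):

```python
import numpy as np

# Compute A^2013 with matrix_power and compare with the closed form.
A = np.array([[1, 1],
              [0, 1]])

P = np.linalg.matrix_power(A, 2013)
assert np.array_equal(P, np.array([[1, 2013],
                                   [0, 1]]))
```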

Question 4: Suppose that $V, W \subseteq \mathbb{R}^n$ are vector subspaces. For any subset $S \subseteq \mathbb{R}^n$, let $S^\perp$ be the orthogonal complement of $S$. Prove that $(V + W)^\perp = V^\perp \cap W^\perp$.

Proof. Show: $(V + W)^\perp \subseteq V^\perp \cap W^\perp$. Consider any $x \in (V + W)^\perp$. Then for any $v \in V \subseteq V + W$ and any $w \in W \subseteq V + W$, one has:

$$\langle x, v \rangle = 0 \implies x \in V^\perp$$
$$\langle x, w \rangle = 0 \implies x \in W^\perp$$
$$\implies x \in V^\perp \cap W^\perp$$

Show: $(V + W)^\perp \supseteq V^\perp \cap W^\perp$.

Consider any $x \in V^\perp \cap W^\perp$. Then for any $v \in V$ and $w \in W$, one has, for any $v + w \in V + W$:

$$\langle x, v + w \rangle = \langle x, v \rangle + \langle x, w \rangle = 0 \implies x \in (V + W)^\perp$$

Question 5: Let $V, W \subseteq \mathbb{R}^n$ be vector subspaces. Let $P_V : \mathbb{R}^n \to V$ be the orthogonal projection onto $V$. Find conditions such that the restricted map $P_V : W \to V$ has a trivial kernel $\ker(P_V)$.

Recall how one can construct $P_V$. First, one uses the fact that any subspace of $\mathbb{R}^n$ has an orthonormal basis, because $\mathbb{R}^n$ is a finite-dimensional inner product space. So take an orthonormal basis $\{v_1, \dots, v_k\}$ for $V$, with $k \leq n$. Clearly, one can assume that each $v_i \neq 0$. Some properties of such an orthonormal basis are:

1. $i \neq j \implies \langle v_i, v_j \rangle = 0$.
2. $\langle v_i, v_i \rangle = 1$.

Then, given any $x \in \mathbb{R}^n$, one has:

$$P_V(x) = \sum_{1 \leq i \leq k} \langle x, v_i \rangle v_i$$

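This projection formula can be sketched in code (a Python/NumPy illustration, not part of the original solution; the choice of $\mathbb{R}^3$ and of the orthonormal basis is arbitrary):

```python
import numpy as np

def project(x, onb):
    """Orthogonal projection P_V(x) = sum_i <x, v_i> v_i,
    where onb is an orthonormal basis of V."""
    return sum(np.dot(x, v) * v for v in onb)

# V = span(e1, e2) inside R^3; e1, e2 are already orthonormal.
basis = [np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0])]

x = np.array([3.0, -1.0, 4.0])
p = project(x, basis)
assert np.allclose(p, [3.0, -1.0, 0.0])
assert np.allclose(project(p, basis), p)  # P_V fixes vectors already in V
```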
Hence, one sees a potential condition. Suppose $W \subseteq V$, so that $W \subseteq \mathrm{span}(v_1, \dots, v_k)$. Then take any $w \in W$ such that $w \in \ker(P_V)$. One sees that:

$$P_V(w) = \sum_{1 \leq i \leq k} \langle w, v_i \rangle v_i = 0$$

Writing $w = \sum_{1 \leq i \leq k} \lambda_i v_i$, orthonormality gives $\langle w, v_i \rangle = \lambda_i$, and hence:

$$P_V(w) = \sum_{1 \leq i \leq k} \lambda_i v_i = w = 0$$

Hence, if $W \subseteq V$, then $\ker(P_V)$ is trivial.

In fact, suppose $P_V$ is injective. Then if $w \in W$ lies in $\ker(P_V)$, then $P_V(w) = 0 = P_V(0)$. By injectivity, $w = 0$, meaning that $\ker(P_V)$ is trivial. Moreover, if $\ker(P_V)$ is trivial, one can indeed show that $P_V$ is injective. For, suppose that $x, y \in W$ are such that $P_V(x) = P_V(y)$, so $P_V(x) - P_V(y) = 0$. Since $P_V$ is a linear transformation, one has $P_V(x - y) = 0$, so $x - y \in \ker(P_V) = \{0\}$. Therefore, $x - y = 0$, i.e. $x = y$, so that the map is injective. Hence, one has shown that $\ker(P_V)$ is trivial if and only if $P_V$ is injective.

Question 6: Use vectors to prove that the 2 diagonals of a parallelogram intersect each other at their midpoints.

Proof. Any parallelogram $P$ in $\mathbb{R}^n$ can be constructed in terms of 2 intersecting lines $L_1$ and $L_2$ in $\mathbb{R}^n$. A line $L$ in $\mathbb{R}^n$ is of the form:

$$L = \{x + \lambda v : \lambda \in \mathbb{R}\}, \qquad x, v \in \mathbb{R}^n$$

So suppose that:

$$L_1 = \{x_1 + \lambda_1 v_1\}, \qquad L_2 = \{x_2 + \lambda_2 v_2\}$$

Translations in $\mathbb{R}^n$ are bijections. Hence, without any loss of generality, translating the parallelogram if necessary, one can assume that $x_1 = 0 = x_2$. Then the 4 vertices of the parallelogram are given by $0$, $v_1$, $v_2$, $v_1 + v_2$. Therefore, the diagonals $D_1$ and $D_2$ of $P$ are described by lines of the form:

$$D_1 = \{\lambda v_1 + (1 - \lambda) v_2\}, \qquad \lambda \in [0, 1]$$
$$D_2 = \{\lambda \cdot 0 + (1 - \lambda)(v_1 + v_2)\}, \qquad \lambda \in [0, 1]$$

In other words, $D_1$ is the line segment joining $v_1$ and $v_2$, and $D_2$ is the line segment joining $0$ and $v_1 + v_2$. Now the midpoint of $D_1$ is attained precisely when $\lambda = 1/2$, and similarly, the midpoint of $D_2$ occurs when $\lambda = 1/2$. With these values, one has that:

$$\lambda = 1/2 \implies \lambda v_1 + (1 - \lambda) v_2 = \frac{v_1 + v_2}{2}$$
$$\lambda = 1/2 \implies (1 - \lambda)(v_1 + v_2) = \frac{v_1 + v_2}{2}$$

Hence, $D_1$ and $D_2$ do intersect at their midpoints.
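A numeric illustration of the same computation (a Python/NumPy sketch, not part of the original proof; the sample vectors $v_1, v_2$ are arbitrary):

```python
import numpy as np

# Parallelogram with vertices 0, v1, v2, v1 + v2: both diagonal
# midpoints coincide at (v1 + v2)/2.
v1 = np.array([2.0, 0.0])
v2 = np.array([1.0, 3.0])

lam = 0.5
mid_D1 = lam * v1 + (1 - lam) * v2    # midpoint of the segment v1 -- v2
mid_D2 = (1 - lam) * (v1 + v2)        # midpoint of the segment 0 -- v1+v2

assert np.allclose(mid_D1, mid_D2)
assert np.allclose(mid_D1, (v1 + v2) / 2)
```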
