
CORRECTION OF HOMEWORK

In all the following, we let F denote either R, the field of real numbers, or C, the field of complex numbers. We shall identify polynomials and polynomial functions. We let F[X] be the set of polynomials with coefficients in F, i.e. the set of functions P such that

    P(X) = a_0 + a_1 X + a_2 X^2 + ... + a_n X^n = Σ_{i=0}^{n} a_i X^i

for some a_0, a_1, ..., a_n in F. For such a function, if a_n ≠ 0, we call n the degree of P. Finally, we let F_n[X] be the set of elements of F[X] of degree less than or equal to n.

1. Exercise 9 page 123

Here, we give a possible correction of this exercise. See at the bottom for other possible proofs. On F[X], we define the following transformations T, D : F[X] → F[X] by
    (T(P))(X) = ∫_0^X P(x) dx = Σ_{i=0}^{n} (a_i/(i+1)) X^{i+1},    (1.1)

    (D(P))(X) = Σ_{i=1}^{n} i a_i X^{i-1},    (1.2)

where we have expressed

    P = a_0 + a_1 X + a_2 X^2 + ... + a_n X^n.    (1.3)
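As a concrete sanity check, the two operators (and the identities verified in the exercises below) can be spot-checked numerically. The coefficient-list encoding and all function names below are our own choices for illustration, not part of the exercise; this is a sketch, not a required part of the correction.

```python
from fractions import Fraction
from itertools import zip_longest

def T(p):
    """Integration (1.1): a_i X^i -> a_i/(i+1) X^(i+1).
    A polynomial is encoded as its coefficient list [a0, a1, ...]."""
    return [Fraction(0)] + [Fraction(a) / (i + 1) for i, a in enumerate(p)]

def D(p):
    """Differentiation (1.2): a_i X^i -> i*a_i X^(i-1)."""
    return [i * Fraction(a) for i, a in enumerate(p)][1:] or [Fraction(0)]

def pmul(p, q):
    """Product of two coefficient lists."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def psub(p, q):
    """Difference of two coefficient lists."""
    return [a - b for a, b in zip_longest(p, q, fillvalue=Fraction(0))]

def peq(p, q):
    """Equality of polynomials, ignoring trailing zero coefficients."""
    return all(a == b for a, b in zip_longest(p, q, fillvalue=Fraction(0)))

def mono(j):
    """The monomial X^j."""
    return [Fraction(0)] * j + [Fraction(1)]

# D(T(P)) = P, e.g. for P = 1 + 2X + 3X^2:
P = [Fraction(1), Fraction(2), Fraction(3)]
assert peq(D(T(P)), P)

# but T(D(X + 1)) = T(1) = X is not X + 1, so TD is not the identity:
assert not peq(T(D([Fraction(1), Fraction(1)])), [Fraction(1), Fraction(1)])

# spot-check of identity (1.4), T(T(f)g) = (Tf)(Tg) - T(f(Tg)), on monomials:
for j in range(4):
    for k in range(4):
        f, g = mono(j), mono(k)
        assert peq(T(pmul(T(f), g)),
                   psub(pmul(T(f), T(g)), T(pmul(f, T(g)))))
```

Since the checked identities are linear in f and g, verifying them on monomials is exactly the basis-testing principle used in the proof below.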

Show that T is nonsingular on F[X], but is not invertible.

T is nonsingular if and only if its null space is the zero space: null(T) = {0}. But if P as in (1.3) is such that T(P) = 0, then we deduce that

    a_0 X + (a_1/2) X^2 + ... + (a_n/(n+1)) X^{n+1} = 0,

and since (X, X^2, X^3, ..., X^{n+1}) is linearly independent, we deduce that

    a_0 = a_1/2 = ... = a_n/(n+1) = 0,

from which it follows that P expressed as (1.3) is 0. Consequently, null(T) = {0} and T is nonsingular. To prove that T is not invertible, we must prove that it is not onto (or at least, prove something that would imply it). Here, we remark that every polynomial Q in the range of T satisfies Q(0) = 0, and so it suffices to find a polynomial P such that P(0) ≠ 0. For example, P = X + 1 (or P = 1, or P = X^2 + 1, ...) is not in the range of T.

Find the null space of D.
Date: Wednesday March 25, 2008.

Suppose that P given by (1.3) satisfies D(P) = 0; then we get that

    a_1 + 2a_2 X + ... + n a_n X^{n-1} = 0,

and since (1, X, X^2, ..., X^{n-1}) is linearly independent, we get that

    a_1 = 2a_2 = ... = n a_n = 0,

and consequently P = a_0 is constant. So null(D) ⊆ Span(1) = F_0[X]. Conversely, using the definition of D, we get that every constant polynomial P = a_0 satisfies D(P) = 0. In conclusion, null(D) = Span(1).

Show that DT = Id_{F[X]} but that TD ≠ Id_{F[X]}.

Let P be any polynomial in F[X] written as in (1.3). Applying the definition of T in (1.1), we get that

    T(P) = a_0 X + (a_1/2) X^2 + ... + (a_n/(n+1)) X^{n+1},

and applying the rule in (1.2), we get that

    (D(T(P)))(X) = a_0 + 2 (a_1/2) X + ... + (n+1) (a_n/(n+1)) X^n = P.

Since P was an arbitrary polynomial, we get that DT(P) = P for all polynomials, hence DT = Id_{F[X]}. To prove that TD ≠ Id_{F[X]}, we need only find a polynomial Q such that TD(Q) ≠ Q. Inspired by the previous question, we can (for example) compute

    TD(X + 1) = T(1) = X ≠ X + 1.

Consequently, TD ≠ Id_{F[X]}.

Show that

    T(T(f)g) = (Tf)(Tg) - T(f(Tg))    (1.4)

for all f, g ∈ F[X].

One way to answer this question uses the following very useful principle in linear analysis: to verify that a linear relation/equality holds, it suffices to test it on a basis. And here, we want to check (1.4), which is linear both in f and in g. To see how this works, we first prove the following claim: if (1.4) holds for every polynomial g and every f in the basis B = (1, X, X^2, X^3, ...), then (1.4) holds for every pair of polynomials f and g. Indeed, let us assume the hypothesis of the claim, let P be a polynomial given by (1.3), and let g be another polynomial. We get, using linearity, that

    T(T(P)g) = T(T(a_0 + a_1 X + ... + a_n X^n)g)
             = T((a_0 T(1) + a_1 T(X) + ... + a_n T(X^n)) g)
             = T(a_0 T(1)g + a_1 T(X)g + ... + a_n T(X^n)g)
             = a_0 T(T(1)g) + a_1 T(T(X)g) + ... + a_n T(T(X^n)g),

and using the hypothesis for each term, we get that

    T(T(P)g) = a_0 ((T1)(Tg) - T(1(Tg))) + a_1 ((TX)(Tg) - T(X(Tg))) + ... + a_n ((TX^n)(Tg) - T(X^n(Tg)))
             = (a_0 (T1)(Tg) + a_1 (TX)(Tg) + ... + a_n (TX^n)(Tg))
               - (a_0 T(1(Tg)) + a_1 T(X(Tg)) + ... + a_n T(X^n(Tg)))
             = T(a_0 + a_1 X + ... + a_n X^n)(Tg) - T((a_0 + a_1 X + ... + a_n X^n)(Tg))
             = (TP)(Tg) - T(P(Tg)),

which proves that the claim is true. Hence, to prove (1.4), it suffices to prove it in the special case when f = X^j for some j ≥ 0 and g is any polynomial. Let us fix j, and prove that (1.4) holds for every polynomial g ∈ F[X]. Here again, this statement is linear in g. Applying a similar reasoning, we see that it suffices to prove it when g is an arbitrary element of the basis B, i.e. g = X^k for some k ≥ 0. To sum up, it suffices to prove that for j, k ≥ 0, we have

    T(T(X^j) X^k) = (T X^j)(T X^k) - T(X^j (T X^k)).

But this is now a simple matter of computation, since

    T(T(X^j) X^k) = T( (1/(j+1)) X^{j+1} X^k ) = (1/(j+1)) T(X^{j+k+1}) = (1/((j+1)(j+k+2))) X^{j+k+2},

and

    (T X^j)(T X^k) - T(X^j (T X^k)) = (1/(j+1)) X^{j+1} (1/(k+1)) X^{k+1} - (1/(k+1)) T(X^j X^{k+1})
                                    = (1/((j+1)(k+1))) X^{j+k+2} - (1/(k+1)) T(X^{j+k+1})
                                    = (1/((j+1)(k+1))) X^{j+k+2} - (1/((k+1)(j+k+2))) X^{j+k+2}
                                    = (1/((j+1)(j+k+2))) X^{j+k+2},    (1.5)

hence, we get that (1.5) holds, and consequently that (1.4) is true whenever g is any polynomial and f = X^j. Since we made no hypothesis on j, (1.4) is true for every polynomial g and every element of the basis B. Finally, by the claim, (1.4) is true for all polynomials f, g.

State and prove a similar statement for D.

We claim that for all polynomials f and g, there holds

    D(fg) = D(f)g + f D(g).    (1.6)

Here again, it suffices to prove this when f and g are arbitrary elements of the basis B = (1, X, X^2, X^3, ...). Hence, let f = X^j and g = X^k for j, k ≥ 0. We compute

    D(X^j X^k) = D(X^{j+k}) = (j+k) X^{j+k-1},

and

    D(X^j) X^k + X^j D(X^k) = j X^{j-1+k} + k X^{j+k-1} = (j+k) X^{j+k-1}.

Consequently, (1.6) holds whenever f and g are elements of the basis B, and by linearity it is also true whenever f and g are arbitrary polynomials.

Suppose that V is a nonzero subspace of F[X] such that TV ⊆ V (i.e. for all f ∈ V, there holds Tf ∈ V). Show that V is not finite dimensional.

To prove this, we prove the contrapositive, namely that if V is finite dimensional, then either V = {0} or there exists f ∈ V such that Tf does not belong to V. Let us suppose that V is finite dimensional, and let B be a basis of V. Then either B is empty and V = {0}, or there exists a polynomial P in B of maximal degree, with deg P = n ≥ 0. Then every element Q of the basis B belongs to F_n[X], and consequently

    V = Span(B) ⊆ F_n[X]

(i.e. every polynomial in V has degree less than or equal to n). But then T(P) is of degree n + 1, and hence belongs neither to F_n[X] nor to V. This finishes the proof.

Suppose V is a finite dimensional subspace of F[X]; prove that there exists m ≥ 0 such that D^m V = {0}.

Since V is finite dimensional, V admits a basis B. Either B is empty, in which case V = {0} and DV = {0}, or, as above, V ⊆ F_n[X] for some n ≥ 0. Now, for every polynomial P in F_n[X] (and in particular in V), we have that D^{n+1} P = 0 (indeed, D^n P is a polynomial of degree less than or equal to 0, hence a constant, hence D(D^n P) = 0).

Remark 1.1. (1) Another possibility to prove all the claims above was to use the tools from Calculus, since (because we have identified polynomials and polynomial functions) any polynomial is in particular a differentiable function. Then the formulas (1.1) allow very short proofs of the questions above.

(2) Introducing the notion of the valuation Val(P), which is the smallest index i such that a_i ≠ 0, where a_i is the coefficient of X^i in the expansion of P in (1.3), we see that Val(PQ) = Val(P) + Val(Q), and that Val(P + Q) ≥ min(Val(P), Val(Q)). Thus Val has some properties similar to the degree. Then we see that, if P is not constant,

    deg(TP) = deg P + 1,    deg(DP) = deg P - 1,
    Val(TP) = Val P + 1,    Val(DP) = Val P - 1.

Considering the valuation instead of the degree could provide alternate proofs.

2. p126-127

In this section, we will use the following notations: for S = (x_1, x_2, ..., x_{n+1}), n + 1 distinct points in F, we denote by P_i the i-th Lagrange interpolation polynomial associated to S. This is the only polynomial such that deg(P_i) ≤ n and P_i(x_k) = δ_{ik}. Besides, we have the explicit formula for P_i:

    P_i(X) = ((X - x_1)/(x_i - x_1)) ((X - x_2)/(x_i - x_2)) ... ((X - x_{i-1})/(x_i - x_{i-1})) ((X - x_{i+1})/(x_i - x_{i+1})) ... ((X - x_{n+1})/(x_i - x_{n+1})).    (2.1)
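Formula (2.1) translates directly into a computation. The following sketch (function names and the sample nodes/values are our own illustration, not part of the text) builds the Lagrange polynomials as evaluation functions and checks the defining property P_i(x_k) = δ_{ik}, as well as the fact that Σ f(x_i) P_i reproduces any prescribed values:

```python
from fractions import Fraction

def lagrange_basis(nodes):
    """The Lagrange polynomials of (2.1), returned as evaluation functions:
    P_i(x) = product over k != i of (x - x_k)/(x_i - x_k)."""
    polys = []
    for i, xi in enumerate(nodes):
        def Pi(x, i=i, xi=xi):          # bind loop variables at definition time
            prod = Fraction(1)
            for k, xk in enumerate(nodes):
                if k != i:
                    prod *= Fraction(x - xk, xi - xk)
            return prod
        polys.append(Pi)
    return polys

nodes = [-1, 0, 1, 2]                    # four sample interpolation points
P = lagrange_basis(nodes)

# defining property: P_i(x_k) = delta_ik
for i in range(4):
    assert [P[i](x) for x in nodes] == [int(i == k) for k in range(4)]

# interpolation: f = sum of values[i] * P_i matches any prescribed values
values = [-6, 2, -2, 6]                  # sample data, chosen for illustration
f = lambda x: sum(v * Pi(x) for v, Pi in zip(values, P))
assert [f(x) for x in nodes] == values
```

Exact rational arithmetic (Fraction) keeps the δ_{ik} check an exact equality rather than a floating-point approximation.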

2.1. Exercise 1. Find a polynomial f of degree less than or equal to 3 such that f(-1) = -6, f(0) = 2, f(1) = -2 and f(2) = 6.

We shall give two proofs for this exercise. First, using the Lagrange theorem and formula (2.1), we get that f is uniquely given by

    f = -6 P_1 + 2 P_2 - 2 P_3 + 6 P_4,

where the P_i are the Lagrange interpolation polynomials associated to S = (-1, 0, 1, 2). Then it suffices to compute the P_i to find that¹

    f = 2 - 2X - 6X^2 + 4X^3.    (2.2)

Another possibility, to find f directly without computing the P_i, is as follows.
¹At this point, it is convenient to take some time to check that f satisfies the interpolation problem.


Following what we did in class, we introduce the mapping Eval_S : R_3[X] → R^4 such that Eval_S(P) = (P(-1), P(0), P(1), P(2)). We saw in class (and it is not difficult to check, using the matrix of the application) that this is an isomorphism. Introducing B = (1, X, X^2, X^3), a basis of R_3[X], and BC, the canonical basis of R^4, we can compute the matrix of Eval_S in the bases B and BC:

    [Eval_S]_{BC,B} =
    [ 1  -1   1  -1 ]
    [ 1   0   0   0 ]
    [ 1   1   1   1 ]
    [ 1   2   4   8 ]

and we want to find the unique polynomial P = a_0 + a_1 X + a_2 X^2 + a_3 X^3 such that Eval_S(P) = (-6, 2, -2, 6), or in coordinates,

    [Eval_S]_{BC,B} (a_0, a_1, a_2, a_3)_B = (-6, 2, -2, 6)_{BC}.

Then it is easy to solve this system to get

    (a_0, a_1, a_2, a_3)_B = (2, -2, -6, 4)_B,

which gives (2.2).

2.2. Exercise 2. Let α, β, γ and δ be real numbers. Prove that it is possible to find a polynomial f ∈ R[X] of degree not more than 2 such that f(-1) = α, f(1) = β, f(3) = γ and f(0) = δ if and only if

    3α + 6β - γ - 8δ = 0.    (2.3)

In this case, we have a degenerate interpolation problem, since the allowed degree is smaller than Card(S) - 1, where S = (-1, 1, 3, 0). There are at least three different and interesting ways to attack this problem.

(1) One can set P = a_0 + a_1 X + a_2 X^2 and reduce the question to solving a system: we can rephrase the question as proving that the system

    P(-1) = a_0 - a_1 + a_2  = α
    P(1)  = a_0 + a_1 + a_2  = β
    P(3)  = a_0 + 3a_1 + 9a_2 = γ
    P(0)  = a_0              = δ

has a solution if and only if (2.3) holds.

(2) One can use the Lagrange interpolation theorem. Indeed, there exists a unique polynomial g of degree less than or equal to 2 that solves the (partial) interpolation problem g(-1) = α, g(1) = β and g(3) = γ, and we know g explicitly in terms of the Lagrange polynomials associated to (-1, 1, 3). Hence, if there is a solution to the full interpolation problem, it must be g (which we now know explicitly). And conversely, g is a solution if and only if g(0) = δ. This, again, gives (2.3).

(3) One can use our knowledge of matrices. Indeed, we can rephrase the question as follows: prove that (α, β, γ, δ) is in the range of the application Eval_S : R_2[X] → R^4 defined by Eval_S(P) = (P(-1), P(1), P(3), P(0)) if and only if (2.3) holds. But if B = (1, X, X^2) and BC is the canonical basis of R^4, we get that

    M = [Eval_S]_{BC,B} =
    [ 1  -1  1 ]
    [ 1   1  1 ]
    [ 1   3  9 ]
    [ 1   0  0 ]

and the vector (α, β, γ, δ) is in the range of Eval_S if and only if it belongs to Col(M). Column-reducing M, we get (2.3).

2.3. Exercise 3. Let

    A =
    [ 2  0  0  0 ]
    [ 0  2  0  0 ]
    [ 0  0  3  0 ]
    [ 0  0  0  1 ]

and P = (X - 2)(X - 3)(X - 1) = X^3 - 6X^2 + 11X - 6. Show that P(A) = 0.

Computing directly, we get the claim. Alternatively, remarking that for a polynomial applied to a diagonal matrix one can directly compute the image of each diagonal coefficient by P, one gets that

    P(A) =
    [ P(2)   0     0     0   ]
    [  0    P(2)   0     0   ]
    [  0     0    P(3)   0   ]
    [  0     0     0    P(1) ]
    = 0.

Let P_1, P_2 and P_3 be the Lagrange polynomials associated to S = (2, 3, 1); compute E_i = P_i(A).

We will give two proofs.

(1) Again, one can compute the P_i explicitly, and we will see how to do it very efficiently. Indeed, let B = (1, X, X^2) be a basis of R_2[X]; we want to find

    P_1 = (a, b, c)_B,    P_2 = (d, e, f)_B,    P_3 = (g, h, i)_B

such that Eval_S(P_i) is the i-th vector of BC, the canonical basis of R^3. But since

    [Eval_S]_{BC,B} = M =
    [ 1  2  4 ]
    [ 1  3  9 ]
    [ 1  1  1 ],

this amounts to

    [Eval_S]_{BC,B} ((P_1)_B  (P_2)_B  (P_3)_B) = I_3,

or equivalently,

    M
    [ a  d  g ]
    [ b  e  h ]
    [ c  f  i ]
    = I_3;

hence, it suffices to compute M^{-1} to read off the coefficients of the P_i. And this can be done efficiently by row-reducing the matrix M. This gives

    M^{-1} =
    [ a  d  g ]
    [ b  e  h ]
    [ c  f  i ]
    =
    [ -3    1     3   ]
    [  4  -3/2  -5/2  ]
    [ -1   1/2   1/2  ],

hence²

    P_1 = -3 + 4X - X^2,
    P_2 = 1 - (3/2) X + (1/2) X^2,    (2.4)
    P_3 = 3 - (5/2) X + (1/2) X^2.

Now that we know the P_i explicitly, we can see that

    E_1 =
    [ 1  0  0  0 ]
    [ 0  1  0  0 ]
    [ 0  0  0  0 ]
    [ 0  0  0  0 ],
    E_2 =
    [ 0  0  0  0 ]
    [ 0  0  0  0 ]
    [ 0  0  1  0 ]
    [ 0  0  0  0 ],
    E_3 =
    [ 0  0  0  0 ]
    [ 0  0  0  0 ]
    [ 0  0  0  0 ]
    [ 0  0  0  1 ].    (2.5)

(2) Another possibility was to use again the computation of polynomials of diagonal matrices, together with the fact that the P_i solve a known interpolation problem, to get

    E_1 = P_1(A) = diag(P_1(2), P_1(2), P_1(3), P_1(1)) = diag(1, 1, 0, 0),
    E_2 = P_2(A) = diag(P_2(2), P_2(2), P_2(3), P_2(1)) = diag(0, 0, 1, 0),
    E_3 = P_3(A) = diag(P_3(2), P_3(2), P_3(3), P_3(1)) = diag(0, 0, 0, 1),

and we recover (2.5).
²Once again, at this point it is convenient to pause and check that the P_i satisfy the interpolation problem.
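The check suggested in the footnote, together with the identities about the E_i established next, can be carried out mechanically. The sketch below (our own encoding: diagonal matrices are stored as tuples of their diagonal entries, which is legitimate here because diagonal matrices add and multiply entrywise) verifies (2.4) against the interpolation problem and recovers (2.5):

```python
from fractions import Fraction as Fr

# coefficients (2.4) of the Lagrange polynomials in the basis (1, X, X^2)
P1 = [Fr(-3), Fr(4),     Fr(-1)]
P2 = [Fr(1),  Fr(-3, 2), Fr(1, 2)]
P3 = [Fr(3),  Fr(-5, 2), Fr(1, 2)]

def ev(p, x):
    """Evaluate a coefficient list at x."""
    return sum(a * x ** i for i, a in enumerate(p))

# footnote check: P_i solves the interpolation problem at S = (2, 3, 1)
for i, p in enumerate([P1, P2, P3]):
    assert [ev(p, x) for x in (2, 3, 1)] == [int(i == k) for k in range(3)]

# E_i = P_i(A) for the diagonal matrix A = diag(2, 2, 3, 1): apply P_i entrywise
diagA = (2, 2, 3, 1)
E = [tuple(ev(p, x) for x in diagA) for p in [P1, P2, P3]]
assert E[0] == (1, 1, 0, 0)     # (2.5)
assert E[1] == (0, 0, 1, 0)
assert E[2] == (0, 0, 0, 1)

# the identities of the next question, checked on the diagonals:
add = lambda *ms: tuple(sum(t) for t in zip(*ms))
mul = lambda m, n: tuple(a * b for a, b in zip(m, n))
assert add(*E) == (1, 1, 1, 1)                        # E1 + E2 + E3 = I
assert all(mul(E[i], E[j]) == (E[i] if i == j else (0, 0, 0, 0))
           for i in range(3) for j in range(3))       # Ei Ej = delta_ij Ei
assert add(tuple(2 * e for e in E[0]),
           tuple(3 * e for e in E[1]), E[2]) == diagA  # A = 2E1 + 3E2 + E3
```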

Show that E_1 + E_2 + E_3 = I_3, E_i E_j = δ_{ij} E_i and A = 2E_1 + 3E_2 + E_3.

This could be done by direct computations. A more theoretical proof is below.

2.4. Exercise 4. Let P be as above, and let T be any linear transformation of R^4 such that P(T) = 0. Let P_1, P_2, P_3 be the Lagrange polynomials defined in Exercise 3 above, and let E_i = P_i(T). Prove that E_1 + E_2 + E_3 = Id_{R^4}, E_i E_j = δ_{ij} E_i and T = 2E_1 + 3E_2 + E_3.

We introduce the linear transformation (substitution X → T) Sub_T : R[X] → L(R^4) defined by Sub_T(Q) = Q(T), i.e.

    Sub_T(a_0 + a_1 X + a_2 X^2 + ... + a_n X^n) = a_0 Id_{R^4} + a_1 T + a_2 T^2 + ... + a_n T^n.

We recall that this is a linear transformation satisfying Sub_T(QR) = Sub_T(Q) Sub_T(R).

To prove the first formula, we need to show that Sub_T(P_1 + P_2 + P_3) = Id_{R^4} = Sub_T(1). If one has computed the P_i explicitly, one can compute P_1 + P_2 + P_3 = 1. Hence, evaluating both sides at T, we get E_1 + E_2 + E_3 = Id_{R^4}. If one has not computed the P_i before, one can still see that Q = P_1 + P_2 + P_3 is the only polynomial that satisfies the following interpolation problem: Q is of degree less than or equal to 2 and Q(2) = Q(3) = Q(1) = 1. But the polynomial 1 satisfies the same interpolation problem. By uniqueness of the solution of the Lagrange interpolation problem, we see that Q = 1, and we can conclude as before.

To prove the second formula, we first prove that

    E_i E_j = 0 if i ≠ j.    (2.6)

Thus, we suppose i ≠ j and we want to prove that Sub_T(P_i P_j) = 0. But using the explicit formula for P_i, P_j given by (2.1), we see, if {i, j, k} = {1, 2, 3} (but not necessarily in that order), that

    P_i = P / ((X - x_i)(x_i - x_j)(x_i - x_k)),

and hence that

    P_i P_j = P / ((X - x_i)(x_i - x_j)(x_i - x_k)) · P / ((X - x_j)(x_j - x_i)(x_j - x_k))
            = P · (P / ((X - x_i)(X - x_j))) · 1 / ((x_i - x_j)(x_i - x_k)(x_j - x_i)(x_j - x_k))
            = (X - x_k) P / ((x_i - x_j)(x_i - x_k)(x_j - x_i)(x_j - x_k))
            = Q_{ij} P,

and consequently, using the multiplicative property of Sub_T, we get that

    E_i E_j = P_i(T) P_j(T) = Q_{ij}(T) P(T) = 0,

where we have used that P(T) = 0 by hypothesis. Hence, we get (2.6).

Now, to consider the case i = j, we start from the fact that E_1 + E_2 + E_3 = Id_{R^4}. Multiplying both sides by E_i and using (2.6), all the terms on the left-hand side vanish except E_i^2, and on the right-hand side we get E_i. Hence E_i^2 = E_i.

Finally, to prove the last equality, we just need to prove that Sub_T(2P_1 + 3P_2 + P_3) = Sub_T(X); but we can check directly that 2P_1 + 3P_2 + P_3 = X, either by direct computation, or because these polynomials satisfy the same interpolation problem.

2.5. Exercise 6. Suppose that T is a linear transformation F[X] → F such that

    T(fg) = T(f)T(g).    (2.7)

Show that T = 0 or that there exists t ∈ F such that T(f) = f(t) for all polynomials f.

The goal of the exercise is to explain that, just as it suffices to check a linear identity on a basis (or on any family that generates the space under the operations we are given, addition and scalar multiplication), in the case of an algebra it suffices to check an identity on a generating set in the sense of algebras: a set such that any element can be expressed in terms of linear combinations of products of elements of the set.

More precisely, we let t = T(X). Then we know the image of all elements of the basis B = (1, X, X^2, ...), except for the image of 1, T(1). Indeed, using (2.7), we get that T(X^2) = T(X)T(X) = t^2, and more generally that T(X^i) = t^i for i ≥ 1. Now, to get the image of the last element of the basis, we remark, using (2.7), that T(1) = T(1 · 1) = T(1)T(1) = T(1)^2, so T(1) is a root of X^2 - X = X(X - 1) in F. Consequently:

(1) Either T(1) = 0. Then T(X^j) = T(1 · X^j) = T(1)T(X^j) = 0 for all j ≥ 0, so T coincides with the 0 transformation on a basis, hence on the whole space.

(2) Or T(1) = 1. Then, using linearity and (2.7), we get, for P given as in (1.3), that

    T(P) = T(a_0 + a_1 X + a_2 X^2 + ... + a_n X^n)
         = a_0 T(1) + a_1 T(X) + a_2 T(X^2) + ... + a_n T(X^n)
         = a_0 + a_1 t + a_2 t^2 + ... + a_n t^n
         = P(t).

In conclusion, either T = 0, or there exists an element t = T(X) ∈ F such that T(P) = P(t) for all polynomials P.
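Conversely, it is easy to believe (and to check) that evaluation at a point t really is linear and multiplicative, i.e. satisfies (2.7). The sketch below (coefficient-list encoding and sample values are our own illustration) spot-checks both properties over F = Q:

```python
from fractions import Fraction
from itertools import zip_longest

def pmul(p, q):
    """Product of two polynomials given as coefficient lists [a0, a1, ...]."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += Fraction(a) * b
    return out

def ev(p, t):
    """The candidate functional T(f) = f(t)."""
    return sum(Fraction(a) * t ** i for i, a in enumerate(p))

t = Fraction(3, 2)                 # a sample value of t = T(X)
f, g = [1, 0, 2], [-1, 4]          # f = 1 + 2X^2, g = -1 + 4X

# multiplicativity (2.7): T(fg) = T(f)T(g)
assert ev(pmul(f, g), t) == ev(f, t) * ev(g, t)

# linearity: T(f + g) = T(f) + T(g)
fg_sum = [a + b for a, b in zip_longest(f, g, fillvalue=0)]
assert ev(fg_sum, t) == ev(f, t) + ev(g, t)
```

The zero functional satisfies (2.7) trivially, so both alternatives of the exercise do occur.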
Brown University E-mail address: Benoit.Pausader@math.brown.edu
