Seki Kim
Mathematics Department, University of Iowa
Iowa City, IA 52242
December 1994
Abstract

A simple quadrature rule for the solution of second-kind singular integral equations with variable coefficients is constructed and investigated. The method evaluates the singular integrals numerically using classical Jacobi quadratures; its major advantage is its simplicity. The proposed method is convergent under a reasonable assumption on the smoothness of the solution, and in numerical experiments it exhibits a higher convergence rate than the one shown theoretically.
1. INTRODUCTION.
The singular integral equation with Cauchy kernel most often considered has the form

$$ a(x)\,\phi(x) + b(x)\int_{-1}^{1}\frac{\phi(t)}{t-x}\,dt + \int_{-1}^{1} k(x,t)\,\phi(t)\,dt = f(x), \qquad -1 < x < 1, \eqno(1.1) $$

where the first integral term is understood as a Cauchy principal value integral. It is possible to reduce singular integral equations (SIE's) to Fredholm integral equations (the indirect method), but direct solution methods are preferred in practice. Moreover, it has been proved that when the Gaussian numerical integration rule is used, the two approaches are equivalent in the sense that they provide the same numerical results for the same number of abscissae used in the numerical integrations [17]. Usually the unknown function is replaced by the product of a smooth function times a function taken as the weight of the quadrature. For SIE's with variable coefficients this weight is nonclassical, and the nodes and weights of the quadrature rule must be constructed from scratch; for SIE's with constant coefficients the rule reduces to Jacobi quadrature. In this paper we analyze the replacement of the possibly nonclassical weights and nodes by the weights and zeros of Jacobi polynomials. This is a much simpler approach than the methods using nonclassical weights and nodes.

We mention some existing methods for variable-coefficient SIE's. Theocaris and Tsamasphyros [16] attempt to apply a Gauss-Jacobi quadrature rule directly, but this results in the need to compute the zeros of a Jacobi function of the second kind. Dow and Elliott [4] developed an algorithm, with error analysis, for computing an approximate solution of (1.1) by replacing $f$ and $k$ by polynomial approximations. In [13] the solvability of the discrete system is proved for an arbitrary selection of quadrature and collocation nodes, but no error analysis is given there. Here we propose a simpler method, taking as nodes the well-known zeros of Jacobi polynomials, and we develop the error analysis for the proposed method. This study concerns only global polynomial approximation. In the error analysis a restrictive assumption is used, namely a bound on the size of the coefficients. Since we are unable to find a closed-form inverse of the matrix of the discretized system, we perform the error analysis by treating the singular operator as a perturbation of the identity. No problems were encountered in the experiments when this condition was lifted.
2. MATHEMATICAL PRELIMINARIES.
The second-kind singular integral equation with variable coefficients can be written as

$$ a(x)\,\phi(x) + \int_{-1}^{1}\frac{K(x,t)}{t-x}\,\phi(t)\,dt = f(x), \qquad -1 < x < 1. \eqno(2.1) $$

This equation is reduced to (1.1) by setting $K(x,t) = [K(x,t) - K(x,x)] + K(x,x)$, where

$$ k(x,t) = \frac{K(x,t) - K(x,x)}{t-x}, \qquad b(x) = K(x,x). $$

The dominant equation of (1.1) is

$$ a(x)\,\phi(x) + b(x)\int_{-1}^{1}\frac{\phi(t)}{t-x}\,dt = f(x), \qquad -1 < x < 1. \eqno(2.2) $$

Solutions of SIE's can be found under the following assumptions [11]:
(i) The functions $a$, $b$, $f$ and $k$ are Hölder continuous in each independent variable on $[-1,1]$.
(ii) The functions $S(x) = a(x) + b(x)$ and $D(x) = a(x) - b(x)$ do not vanish anywhere on $[-1,1]$.
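The regularity gained by the kernel splitting above can be checked numerically. The following sketch uses an assumed smooth example kernel, $K(x,t) = e^{xt}$ (not from the paper), and verifies that the quotient $k(x,t) = (K(x,t)-K(x,x))/(t-x)$ stays bounded as $t \to x$:

```python
import numpy as np

# Assumed smooth example kernel K(x, t) = exp(x*t); then
# k(x, t) = (K(x, t) - K(x, x)) / (t - x) is a difference quotient in t,
# so it tends to dK/dt(x, x) = x * exp(x^2) as t -> x, i.e. it is bounded.
K = lambda x, t: np.exp(x * t)
x = 0.3
ts = x + np.array([1e-2, 1e-4, 1e-6])
k_vals = (K(x, ts) - K(x, x)) / (ts - x)
print(k_vals)
```

As $t - x$ shrinks, the values approach $0.3\,e^{0.09} \approx 0.3283$, confirming that the split-off kernel $k$ carries no singularity; only the explicit Cauchy factor $1/(t-x)$ with coefficient $b(x) = K(x,x)$ remains singular.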
Also it is not restrictive to assume the coefficients to satisfy $r(x)^2 = a(x)^2 + b(x)^2 = 1$ and $b(x) \ne 0$ on $(-1,1)$. In the latter case, $b(x)$ may in fact vanish at a finite number of isolated points in $(-1,1)$ as long as it remains of one sign; however, we will assume $b(x)$ does not vanish in $(-1,1)$ for simplicity. A method for that case is proposed in Ch. VII of [20], by separating the integral at the zeros of $b(x)$ and using an appropriate change of variables. Following [4], let us define the continuous function

$$ \gamma(t) = -\frac{1}{2\pi i}\,\ln\frac{a(t) - i\,b(t)}{a(t) + i\,b(t)} = \frac{1}{\pi}\arctan\frac{b(t)}{a(t)} + N(t), \eqno(2.3) $$

where $N$ takes only integer values and may have discontinuities at the zeros of $a/b$, and $-\pi/2 < \arctan x < \pi/2$. In order to apply the Elliott-Paget quadrature [2, 3] we need to assume that $a(x)$ has no zeros in $(-1,1)$, but this is not necessary for Hunter's quadrature [9, 1]. The fundamental function $Z$ is defined as

$$ Z(t) = (1+t)^{n_1}(1-t)^{n_2}\exp\left(-\int_{-1}^{1}\frac{\gamma(\tau)}{\tau-t}\,d\tau\right) \qquad \text{for } t \in (-1,1), \eqno(2.4) $$

where $n_1$ and $n_2$ are integers. This function can be rewritten, after fixing $n_1$ and $n_2$, as

$$ Z(t) = (1-t)^{n_2-\gamma(1)}\,(1+t)^{n_1+\gamma(-1)}\,\rho(t), \eqno(2.5) $$

say, where

$$ \rho(t) = \exp\left\{ \big(\gamma(1)-\gamma(t)\big)\ln(1-t) + \big(\gamma(t)-\gamma(-1)\big)\ln(1+t) - \int_{-1}^{1}\frac{\gamma(\tau)-\gamma(t)}{\tau-t}\,d\tau \right\}. \eqno(2.6) $$

Here define $\alpha = n_2 - \gamma(1)$ and $\beta = n_1 + \gamma(-1)$. The behavior of $Z$ near the end points $-1$ and $1$ can be described as follows: $Z$ is bounded and $1/Z$ is infinite, but integrable, if $0 < \alpha\ (\beta) < 1$; $Z$ is infinite, but integrable, and $1/Z$ is bounded if $-1 < \alpha\ (\beta) < 0$. In all cases we define the index of the singular operator of the dominant equation in consideration by

$$ \kappa = -(n_1 + n_2). \eqno(2.7) $$

It turns out that $\kappa$ can have up to three values, depending upon whether $Z$ is chosen to be bounded or unbounded at the non-special ends, i.e. those for which $\gamma(1)$ (or $\gamma(-1)$)
is not an integer. The largest of these values of $\kappa$ is the index of the singular operator [6]. In [20] it is shown that this index can attain only the three values $-1$, $0$ and $1$ if $b(x) \ne 0$ on $[-1,1]$. If $\kappa = 1$, we need an extra condition to get a unique solution; usually it is of the form

$$ \int_{-1}^{1} \phi(t)\,dt = C_0. \eqno(2.8) $$

If $\kappa = -1$, we need the consistency condition for the existence of a solution of (2.2):

$$ \int_{-1}^{1}\frac{f(x)}{Z(x)}\,dx = 0. \eqno(2.9) $$

We use $\omega(x) = (1-x)^\alpha(1+x)^\beta$ as a Jacobi weight function. Then $Z(x) = \omega(x)\,\rho(x)$, where $\rho(x)$ is a positive continuous function on $[-1,1]$. The unknown $\phi(x)$ can be rewritten in terms of $\omega(x)$, which expresses explicitly the singular behavior at the end points, and a new unknown function $y(x)$, such that

$$ \phi(x) = Z(x)\,\varphi(x) = \omega(x)\,\rho(x)\,\varphi(x) = \omega(x)\,y(x). $$

Because the singular operator in (2.2) transforms any function $\phi(x)$ satisfying the Hölder condition into a new function which also satisfies the Hölder condition, the right side $f(x)$ must satisfy the Hölder condition for the solution to satisfy it [11]. Conversely, the solution $\phi(x)$ satisfies the Hölder condition if $f(x)$ is assumed to satisfy the Hölder condition [11]. We consider here the case $\kappa = 1$, i.e. $-1 < \alpha, \beta < 0$. Let $H_\mu[-1,1]$ denote the class of Hölder continuous functions of order $\mu$ on $[-1,1]$. Then clearly $\phi(x) \in H_\mu[-1,1]$ with $\mu = \min(-\alpha, -\beta)$ [11]. The integral in (2.2) can be discretized by a classical Gaussian quadrature. Let $t_i$ be the zeros of $P_n^{(\alpha,\beta)}(x)$ and $s_j$ the zeros of $P_{n-1}^{(-\alpha,-\beta)}(x)$, where $P_n^{(\alpha,\beta)}(x)$ denotes the Jacobi polynomial of degree $n$ relative to the weight function $\omega(x)$, and $P_{n-1}^{(-\alpha,-\beta)}(x)$ the one relative to $1/\omega(x)$. Since $b(x) \ne 0$, we can rewrite equation (2.2) as

$$ \frac{a(x)}{b(x)}\,\omega(x)\,y(x) + \int_{-1}^{1}\frac{\omega(t)\,y(t)}{t-x}\,dt = \frac{f(x)}{b(x)}. \eqno(2.10) $$

If we write $a^*(x)$ for $a(x)/b(x)$ and $f^*(x)$ for $f(x)/b(x)$, then (2.10) becomes

$$ a^*(x)\,\omega(x)\,y(x) + \int_{-1}^{1}\frac{\omega(t)\,y(t)}{t-x}\,dt = f^*(x). \eqno(2.11) $$

To evaluate the singular integral in (2.11), we will use two kinds of Gauss-Jacobi quadrature: Hunter's and Elliott-Paget's.
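Before describing the two quadratures, the index machinery can be made concrete with a small sketch. The constant coefficients $a = b = 1/\sqrt 2$ (normalized so $a^2 + b^2 = 1$) are an assumed example; the code computes $\gamma$, picks $n_1$, $n_2$ for the index-1 case, and recovers exponents $\alpha, \beta \in (-1,0)$:

```python
import numpy as np

# Assumed constant coefficients a(x) = b(x) = 1/sqrt(2); then
# gamma(t) = (1/pi) * arctan(b/a) + N(t) is constant (take N = 0).
a = b = 1.0 / np.sqrt(2.0)
gamma = np.arctan(b / a) / np.pi          # = 1/4 here

# alpha = n2 - gamma(1), beta = n1 + gamma(-1), kappa = -(n1 + n2).
# Choosing n1 = -1, n2 = 0 puts both exponents in (-1, 0): the index-1 case.
n1, n2 = -1, 0
alpha = n2 - gamma                         # -0.25
beta = n1 + gamma                          # -0.75
kappa = -(n1 + n2)                         # 1
print(alpha, beta, kappa)
```

Other admissible choices of $n_1$, $n_2$ would give the index-0 or index-$(-1)$ solution classes; the index-1 choice is the one treated in the rest of the paper.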
Both use Lagrange interpolation to derive quadrature rules. Let

$$ \Phi_n^{(\alpha,\beta)}(z) = \int_{-1}^{1}\omega(t)\,\frac{P_n^{(\alpha,\beta)}(t)}{t-z}\,dt = 2\,(z-1)^{\alpha}(z+1)^{\beta}\,q_n^{(\alpha,\beta)}(z), \qquad z \notin [-1,1], $$

where $q_n^{(\alpha,\beta)}$ represents the so-called Jacobi function of the second kind. We can define the values of the function $\Phi_n^{(\alpha,\beta)}(x)$ on the interval $[-1,1]$ as follows:

$$ \Phi_n^{(\alpha,\beta)}(x) = \frac{1}{2}\left\{\Phi_n^{(\alpha,\beta)}(x+i0) + \Phi_n^{(\alpha,\beta)}(x-i0)\right\}. $$

This function can be expressed explicitly by means of the hypergeometric function [14]. Then Hunter's method has the form

$$ Q_n(y;x) = \sum_{i=1}^{n}\frac{w_i\,y(t_i)}{t_i-x} + \frac{\Phi_n^{(\alpha,\beta)}(x)}{P_n^{(\alpha,\beta)}(x)}\,y(x), \eqno(2.12) $$

where

$$ \int_{-1}^{1}\frac{\omega(t)\,y(t)}{t-x}\,dt = Q_n(y;x) + G_h \qquad \text{for } x \in [-1,1]. $$

Note that this method is exact for polynomials of degree not greater than $2n$. The Elliott-Paget method for the singular integral is of the form

$$ Q_n(y;x) = \sum_{i=1}^{n}\left[w_i - \frac{\Phi_n^{(\alpha,\beta)}(x)}{P_n^{(\alpha,\beta)\prime}(t_i)}\right]\frac{y(t_i)}{t_i-x}, \eqno(2.13) $$

where

$$ \int_{-1}^{1}\frac{\omega(t)\,y(t)}{t-x}\,dt = Q_n(y;x) + G_e \qquad \text{for } x \in [-1,1]. $$

Note that this E-P method is exact for polynomials of degree not greater than $n-1$. Let us remark that, throughout the paper, $C_i$, $i = 0, 1, 2, \dots, 12$, represent different positive constants.
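In the Chebyshev special case $\alpha = \beta = -1/2$ the second-kind function has the closed form $\Phi_n^{(-1/2,-1/2)}(x) = \pi U_{n-1}(x)$ when $P_n$ is taken as $T_n$, so Hunter's rule (2.12) can be written down directly. The following sketch (a hypothetical test integrand, used only for illustration) checks the rule's exactness for a polynomial of degree $\le 2n$:

```python
import numpy as np

def hunter_cpv_chebyshev(y, x, n):
    # Hunter-type rule (Chebyshev case alpha = beta = -1/2) for the
    # Cauchy principal value  I(y;x) = CPV int_{-1}^1 y(t)/((t-x)sqrt(1-t^2)) dt.
    # Nodes t_i are the zeros of T_n; all Gauss-Chebyshev weights equal pi/n.
    # The correction term uses the closed form
    #   CPV int T_n(t)/((t-x)sqrt(1-t^2)) dt = pi * U_{n-1}(x).
    i = np.arange(1, n + 1)
    t = np.cos((2 * i - 1) * np.pi / (2 * n))       # zeros of T_n
    w = np.full(n, np.pi / n)
    theta = np.arccos(x)
    Tn = np.cos(n * theta)                           # T_n(x); assume T_n(x) != 0
    Un1 = np.sin(n * theta) / np.sin(theta)          # U_{n-1}(x)
    return np.sum(w * y(t) / (t - x)) + y(x) * np.pi * Un1 / Tn

# With y(t) = t^2 the principal value equals pi*x exactly, and the rule,
# being exact for degree <= 2n, reproduces it up to rounding.
val = hunter_cpv_chebyshev(lambda t: t**2, 0.3, 4)
print(val, np.pi * 0.3)
```

The same structure carries over to general $(\alpha,\beta)$, except that $\Phi_n^{(\alpha,\beta)}$ must then be evaluated through its hypergeometric representation.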
3. NUMERICAL SCHEME.
By applying Hunter's method to (2.11) and (2.8) and collocating at the node points $s_j$, we have

$$ a^*(s_j)\,\omega(s_j)\,y(s_j) + \frac{\Phi_n^{(\alpha,\beta)}(s_j)}{P_n^{(\alpha,\beta)}(s_j)}\,y(s_j) + \sum_{i=1}^{n}\frac{w_i\,y(t_i)}{t_i-s_j} + G_1 = f^*(s_j), \qquad j = 1,\dots,n-1, \eqno(3.1) $$

$$ \sum_{i=1}^{n} w_i\,y(t_i) = C_0, \eqno(3.2) $$

where (i) $G_1$ is the error of the Gauss-Jacobi quadrature at $s_j$ and (ii)

$$ w_i = \int_{-1}^{1}\frac{\omega(t)\,P_n^{(\alpha,\beta)}(t)}{(t-t_i)\,P_n^{(\alpha,\beta)\prime}(t_i)}\,dt $$
are the weights of the Gauss-Jacobi quadrature. Equation (3.2) comes from the normalization condition (2.8); recall that the equation is of index 1. We also transform (2.11) into the equivalent form

$$ a^*(x)\,\omega(x)\,y(x) + \int_{-1}^{1}\frac{1}{\omega(t)}\,\frac{\omega^2(t)\,y(t)}{t-x}\,dt = f^*(x) \eqno(3.5) $$

and then proceed similarly, using $1/\omega(t)$ as weight and collocating at $t_i$:
$$ a^*(t_i)\,\omega(t_i)\,y(t_i) + \frac{\Phi_{n-1}^{(-\alpha,-\beta)}(t_i)}{P_{n-1}^{(-\alpha,-\beta)}(t_i)}\,\omega^2(t_i)\,y(t_i) + \sum_{j=1}^{n-1}\frac{\bar w_j\,\omega^2(s_j)\,y(s_j)}{s_j-t_i} + G_2 = f^*(t_i), \qquad i = 1,\dots,n, \eqno(3.6) $$

where (i) $G_2$ is the error of this quadrature at $t_i$ and (ii) $\bar w_j$ are the weights of the Gauss-Jacobi quadrature relative to the weight $1/\omega(t)$.

Remark. In (3.6) we applied a different weight function, $1/\omega(x)$, to exactly the same equation (2.11); we look for the solution in $(-1,1)$ because we already know the endpoint behavior of the solution. Note that $\rho(x)$ is smooth on $(-1,1)$, i.e. $y(x)\,\omega^2(x)$ can be assumed to be smooth on $(-1,1)$ if $y(x)$ is smooth.

After deleting the error terms from (3.1) and (3.6), we add the nonzero constants $l$ (see the matrix $D_1$ and the bottom of the proof of Theorem 5) and $-h_0$ (see the last column of $G$) to both sides of (3.2) and (3.6) respectively. Then we obtain a square system for the unknown vector $\mathbf y = (y(s_1), \dots, y(s_{n-1}), 1, y(t_1), \dots, y(t_n))^T$ approximating the exact solution $\bar{\mathbf y} = (\bar y(s_1), \dots, \bar y(s_{n-1}), 1, \bar y(t_1), \dots, \bar y(t_n))^T$. The system can be written as

$$ M\,\mathbf y = \begin{pmatrix} D_1 & A \\ G & D_2 \end{pmatrix}\mathbf y = \mathbf f, \eqno(3.8) $$

where $\mathbf f = (f^*(s_1), \dots, f^*(s_{n-1}), C_0 + l, f^*(t_1) - h_0, \dots, f^*(t_n) - h_0)^T$, and $D_1$ is the diagonal matrix with

$$ (D_1)_{jj} = a^*(s_j)\,\omega(s_j) + \frac{\Phi_n^{(\alpha,\beta)}(s_j)}{P_n^{(\alpha,\beta)}(s_j)}, \quad j = 1,\dots,n-1, \qquad (D_1)_{nn} = l. $$

We will use $1/w_n$ for $l$ and $1/|b(1)|$ for $h_0$ later (see [7]). Further,

$$ D_2 = \operatorname{diag}\left( a^*(t_i)\,\omega(t_i) + \frac{\Phi_{n-1}^{(-\alpha,-\beta)}(t_i)}{P_{n-1}^{(-\alpha,-\beta)}(t_i)}\,\omega^2(t_i) \right), \qquad i = 1,\dots,n. $$
We assume $(D_1)_{ii}$ and $(D_2)_{ii}$ are not zero for all $i$ since, in general, the coefficient function $a^*(x)$ does not produce the weight function. But for the constant coefficient case we have $D_1 = D_2 = 0$. Also,

$$ A_{ij} = \begin{cases} \dfrac{w_j}{t_j - s_i}, & i \ne n,\ j = 1,\dots,n, \\[1ex] w_j, & i = n. \end{cases} $$
Note that the system is of order $2n$. Also we remark that $G$ can be written as the product

$$ G = B\,D_3, \eqno(3.9) $$

where

$$ B_{ij} = \begin{cases} \dfrac{\bar w_j}{s_j - t_i}, & j \ne n,\ i = 1,\dots,n, \\[1ex] -h_0, & j = n, \end{cases} $$

and $D_3$ is the diagonal matrix with $(D_3)_{jj} = \omega^2(s_j)$ for $j = 1,\dots,n-1$ and $(D_3)_{nn} = 1$. Let us introduce the diagonal matrices

$$ N_1 = \operatorname{diag}(w_1, \dots, w_n), \qquad N_2 = \operatorname{diag}\Big(\bar w_1, \dots, \bar w_{n-1}, \tfrac{1}{|b(1)|}\Big), $$

and the matrix

$$ S = \begin{pmatrix} \frac{1}{t_1-s_1} & \frac{1}{t_2-s_1} & \cdots & \frac{1}{t_n-s_1} \\ \vdots & \vdots & & \vdots \\ \frac{1}{t_1-s_{n-1}} & \frac{1}{t_2-s_{n-1}} & \cdots & \frac{1}{t_n-s_{n-1}} \\ 1 & 1 & \cdots & 1 \end{pmatrix}. $$

With these matrices we can rewrite $A$ and $B$ as

$$ A = S\,N_1, \qquad B = -S^t\,N_2. $$

The matrix $M$ is the main matrix of the discretized system. In this notation it is expressed as the product of two matrices:

$$ M = \begin{pmatrix} D_1 & S N_1 \\ -S^t N_2 D_3 & D_2 \end{pmatrix} = \begin{pmatrix} D_1 D_3^{-1} N_2^{-1} & S \\ -S^t & D_2 N_1^{-1} \end{pmatrix} \begin{pmatrix} N_2 D_3 & 0 \\ 0 & N_1 \end{pmatrix}. $$

Here we use the notations $P_1$ and $P_2$ for $D_1 D_3^{-1} N_2^{-1}$ and $D_2 N_1^{-1}$ respectively; these are diagonal matrices. Then $M = R\,T$, where

$$ R = \begin{pmatrix} P_1 & S \\ -S^t & P_2 \end{pmatrix}, \qquad T = \begin{pmatrix} N_2 D_3 & 0 \\ 0 & N_1 \end{pmatrix}. \eqno(3.10) $$
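As an illustration of the discretized system, consider the simplest constant-coefficient instance, an assumed example with $a = 0$, $b = 1$ (the Chebyshev case $\alpha = \beta = -1/2$, where the diagonal blocks vanish): the scheme reduces to the collocation rows (3.1) plus the normalization row (3.2), and it reproduces a polynomial solution exactly.

```python
import numpy as np

# Constant-coefficient sketch of (3.1)-(3.2): solve
#   CPV int_{-1}^1 phi(t)/(t-x) dt = f(x),   int_{-1}^1 phi(t) dt = C0,
# with phi(t) = y(t)/sqrt(1-t^2).  Nodes t_i are zeros of T_n; collocation
# points s_j are zeros of U_{n-1}, where the Hunter correction term vanishes.
# Data are manufactured from the exact solution y(t) = t^2, for which
# f(x) = pi*x and C0 = pi/2.
n = 4
i = np.arange(1, n + 1)
t = np.cos((2 * i - 1) * np.pi / (2 * n))        # zeros of T_n
w = np.full(n, np.pi / n)                        # Gauss-Chebyshev weights
j = np.arange(1, n)
s = np.cos(j * np.pi / n)                        # zeros of U_{n-1}

M = np.vstack([w / (t - s[:, None]), w])         # collocation rows + normalization
rhs = np.append(np.pi * s, np.pi / 2)
y = np.linalg.solve(M, rhs)
print(np.max(np.abs(y - t**2)))                  # near machine precision
```

For variable coefficients the diagonal blocks $D_1$, $D_2$ no longer vanish and the general factorization $M = R\,T$ above is what makes the error analysis tractable.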
Lemma 1.
(i) $\dfrac{\Phi_n^{(\alpha,\beta)}(s_j)}{P_n^{(\alpha,\beta)}(s_j)} \asymp \omega(s_j)$;
(ii) $\dfrac{\Phi_{n-1}^{(-\alpha,-\beta)}(t_i)}{P_{n-1}^{(-\alpha,-\beta)}(t_i)}\,\omega^2(t_i) = O(1)$.

Proof. We shall use the derivative estimate

$$ \left\{\Big(\frac{d}{dx}\Big)^k P_n^{(\alpha,\beta)}(x)\right\}_{x=\cos\theta} = \begin{cases} O(n^{2k+\alpha}), & 0 \le \theta \le c/n, \\ O\big(\theta^{-\alpha-k-1/2}\,n^{k-1/2}\big), & c/n \le \theta \le \pi/2. \end{cases} $$

For positive $s_j$, we have the following estimates from (3.4.3) and (15.3.14) in [14]:

$$ \Phi_n^{(\alpha,\beta)}(s_j) \asymp j^{\alpha-1/2}\,n^{-\alpha}, \qquad P_n^{(\alpha,\beta)}(s_j) \asymp j^{-\alpha-1/2}\,n^{\alpha}, $$

so that

$$ \frac{\Phi_n^{(\alpha,\beta)}(s_j)}{P_n^{(\alpha,\beta)}(s_j)} \asymp j^{2\alpha}\,n^{-2\alpha} \asymp \omega(s_j). $$

If $s_j$ is negative, we use $\beta$ instead of $\alpha$; this still gives the same estimate with $\omega(s_j)$. In a similar way,

$$ \frac{\Phi_{n-1}^{(-\alpha,-\beta)}(t_i)}{P_{n-1}^{(-\alpha,-\beta)}(t_i)}\,\omega^2(t_i) \asymp \frac{i^{-\alpha-1/2}\,n^{\alpha}}{i^{\alpha-1/2}\,n^{-\alpha}}\,\omega^2(t_i) \asymp \omega^{-1}(t_i)\,\omega^2(t_i) = \omega(t_i) = O(1). \qquad \Box $$

In the matrices $D_1$ and $D_2$, if the value of $a^*$ is large enough that it exceeds the value of $\Phi_n/P_n$, then $P_1$ and $P_2$ have positive entries, since $D_3$, $N_1$ and $N_2$ are positive diagonal matrices. Also, since $a^*(x)$ does not have any zero in $(-1,1)$, we may take $a^*(x)$ to have positive values: if $a^*(x)$ has negative values, we can multiply the system by $-1$ to make it positive.

Hence we may assume $P_1$ and $P_2$ are positive diagonal matrices.
Theorem 2. The matrix $M$ is nonsingular.

Proof. It suffices to show that $R$ is invertible, since $M = R\,T$ and $T$ is a diagonal matrix with positive entries. $R$ can be decomposed as below.
$$ R = \begin{pmatrix} P_1 & S \\ -S^t & P_2 \end{pmatrix} = \begin{pmatrix} P_1^{1/2} & 0 \\ 0 & P_2^{1/2} \end{pmatrix} \begin{pmatrix} I_n & P_1^{-1/2} S P_2^{-1/2} \\ -P_2^{-1/2} S^t P_1^{-1/2} & I_n \end{pmatrix} \begin{pmatrix} P_1^{1/2} & 0 \\ 0 & P_2^{1/2} \end{pmatrix}. $$

Here we observe that the middle matrix $Q$ in the above expression can be written $Q = I_{2n} + K$, where

$$ K = \begin{pmatrix} 0 & P_1^{-1/2} S P_2^{-1/2} \\ -P_2^{-1/2} S^t P_1^{-1/2} & 0 \end{pmatrix} $$

is skew-symmetric. Then $Q$ is nonsingular, since $K$ is skew-symmetric: $K$ has only pure imaginary eigenvalues, so the eigenvalues of $Q$ cannot be zero. This concludes the proof. $\Box$
We have some properties of the matrices $A$ and $B$ if $\alpha + \beta = -1$. In this case $AB = BA = -\frac{1}{b^2(1)}\,I_n$, i.e. $A$ is, up to the factor $-1/b^2(1)$, the inverse of $B$. Also $A N_1^{-1} A^t$ becomes the diagonal matrix $\frac{1}{b^2(1)}\,N_2^{-1}$. All properties of this kind are shown in Appendix A.
4. ERROR ANALYSIS.
In this section we show convergence of the method; we also determine the error bound in the uniform norm. Let $\mathbf G_1$ be the vector of quadrature errors at the node points $s_j$ and $\mathbf G_2$ the one at $t_i$. The system (3.8) can be rewritten with the errors $G_1$ and $G_2$ from (3.1) and (3.6) as

$$ M\,\bar{\mathbf y} = \bar{\mathbf f} = (\mathbf f_1^T - \mathbf G_1^T,\ \mathbf f_2^T - \mathbf G_2^T)^T, \eqno(4.1) $$

where

$$ \mathbf f_1^T = [f^*(s_1), \dots, f^*(s_{n-1}),\ C_0 + l], \qquad \mathbf f_2^T = [f^*(t_1) - h_0, \dots, f^*(t_n) - h_0], $$

and $\bar{\mathbf y}$ is the exact solution. Define the error vector $\mathbf e = \bar{\mathbf y} - \mathbf y$. We then have

$$ M\,\mathbf e = (-\mathbf G_1^T,\ -\mathbf G_2^T)^T =: \boldsymbol\eta. \eqno(4.2) $$

By Theorem 2 we have the exact error expression

$$ \mathbf e = T^{-1} R^{-1}\,\boldsymbol\eta. \eqno(4.3) $$

Taking Euclidean norms, we have the estimate

$$ \|\mathbf e\|_2 \le \|T^{-1}\|_2\,\|R^{-1}\|_2\,\|\boldsymbol\eta\|_2. \eqno(4.4) $$
Lemma 3. $R$ is nonsymmetric positive definite.

Proof. For any nonzero $x = (x_1^t, x_2^t)^t \in \mathbb R^{2n}$,

$$ x^t R x = (x_1^t\ x_2^t)\begin{pmatrix} P_1 & S \\ -S^t & P_2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = (x_1^t\ x_2^t)\begin{pmatrix} P_1 x_1 + S x_2 \\ -S^t x_1 + P_2 x_2 \end{pmatrix} = x_1^t P_1 x_1 + x_1^t S x_2 - x_2^t S^t x_1 + x_2^t P_2 x_2 = x_1^t P_1 x_1 + x_2^t P_2 x_2 > 0, $$

since $P_1$ and $P_2$ are positive diagonal matrices and $x_1^t S x_2 - x_2^t S^t x_1 = 0$, because

$$ x_1^t S x_2 = (S x_2, x_1) = (x_1, S x_2) = (S x_2)^t x_1 = x_2^t S^t x_1. \qquad \Box $$

Now let us define the symmetric part of the matrix $R$ by

$$ U = \tfrac12\,(R + R^t) = \begin{pmatrix} P_1 & 0 \\ 0 & P_2 \end{pmatrix}. $$

Note that $U$ is symmetric positive definite by Lemma 3.
Lemma 4. $\|R^{-1}\|_2 \le \|U^{-1}\|_2$.

Proof. It suffices to show $\sigma_n(R) \ge \lambda_n(U)$, where $\sigma_n(R)$ is the smallest singular value of $R$ and $\lambda_n(U)$ is the smallest eigenvalue of $U$. Since $U$ is symmetric positive definite, $\lambda_n(U)$ is positive, so $\sigma_n(R) \ge \lambda_n(U) > 0$ shows $\|R^{-1}\|_2 = \frac{1}{\sigma_n(R)} \le \frac{1}{\lambda_n(U)} = \|U^{-1}\|_2$. For any $x \in \mathbb R^{2n}$ with $\|x\|_2 = 1$ we have

$$ \lambda_n(U) \le x^t U x = \tfrac12\,(x^t R x + x^t R^t x) = x^t R x \le \|x\|_2\,\|Rx\|_2 = \|Rx\|_2, $$

and taking the minimum over all such $x$ gives $\lambda_n(U) \le \sigma_n(R)$. Consequently the lemma holds. $\Box$

By using Lemma 4, the estimate (4.4) becomes

$$ \|\mathbf e\|_2 \le \|T^{-1}\|_2\,\|U^{-1}\|_2\,\|\boldsymbol\eta\|_2. $$

We can then obtain the estimate for the error in the uniform norm.
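Lemmas 3 and 4 are purely algebraic facts about block matrices of this shape, and can be spot-checked numerically. The sketch below uses assumed random data, not quantities from the discretization:

```python
import numpy as np

# Random instance of R = [[P1, S], [-S^T, P2]] with positive diagonal P1, P2:
# its symmetric part is U = diag(P1, P2), and
#   x^T R x = x1^T P1 x1 + x2^T P2 x2 > 0      (Lemma 3)
#   ||R^{-1}||_2 <= ||U^{-1}||_2               (Lemma 4)
rng = np.random.default_rng(1)
n = 6
p1 = rng.uniform(0.5, 2.0, n)
p2 = rng.uniform(0.5, 2.0, n)
S = rng.normal(size=(n, n))
R = np.block([[np.diag(p1), S], [-S.T, np.diag(p2)]])

x = rng.normal(size=2 * n)
quad = x @ R @ x                                   # positive for any x != 0
norm_Rinv = 1.0 / np.linalg.svd(R, compute_uv=False).min()
norm_Uinv = 1.0 / np.concatenate([p1, p2]).min()   # U is diagonal here
print(quad > 0, norm_Rinv <= norm_Uinv + 1e-12)
```

Note that the skew part $S$ plays no role in either bound: only the positive diagonal $P_1$, $P_2$ matters, which is exactly why the "a* large enough" assumption enters the analysis.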
Theorem 5. $\|\mathbf e\|_\infty \le C_3\,n^{1/2}\,\|\boldsymbol\eta\|_\infty$, where $\boldsymbol\eta = (-\mathbf G_1^T, -\mathbf G_2^T)^T$ is the vector of quadrature errors.

Proof. We use (3.10) to estimate the uniform norm of $T^{-1}$. From (15.3.12) of [14], $w_i = O(n^{-1})$, and $\bar w_j\,\omega^2(s_j) \asymp w_j$, because from (15.3.14) of [14] it follows that

$$ \bar w_j \asymp j^{-2\alpha+1}\,n^{2\alpha-2}, \qquad w_j \asymp j^{2\alpha+1}\,n^{-2\alpha-2}. \eqno(4.6) $$

Then

$$ \bar w_j\,\omega^2(s_j) \asymp j^{-2\alpha+1}\,n^{2\alpha-2}\Big(\frac{j}{n}\Big)^{4\alpha} = j^{2\alpha+1}\,n^{-2\alpha-2} \asymp w_j, $$

so that

$$ \|T^{-1}\|_\infty = O(n). \eqno(4.7) $$

On the other hand,

$$ U^{-1} = \begin{pmatrix} P_1^{-1} & 0 \\ 0 & P_2^{-1} \end{pmatrix} = \begin{pmatrix} N_2 D_3 D_1^{-1} & 0 \\ 0 & N_1 D_2^{-1} \end{pmatrix}. $$

It is not restrictive to consider $t_i$ to be positive; indeed, if we use the negative $t_i$, it is enough to replace $\alpha$ by $\beta$. Thus, by Lemma 1, $(D_2)_{ii} \asymp \omega(t_i)$, so that

$$ (N_1 D_2^{-1})_{ii} \asymp \frac{w_i}{\omega(t_i)} \asymp i^{2\alpha+1}\,n^{-2\alpha-2}\Big(\frac{i}{n}\Big)^{-2\alpha} = i\,n^{-2}. $$

This gives

$$ \|N_1 D_2^{-1}\|_\infty = O(n^{-1}). \eqno(4.8) $$

Also, for $i \ne n$,

$$ (N_2 D_3 D_1^{-1})_{ii} \asymp \bar w_i\,\omega^2(s_i)\,\omega^{-1}(s_i) \asymp w_i\,\omega^{-1}(s_i) \asymp i\,n^{-2} $$

by (4.6). In the case $i = n$,

$$ (N_2 D_3 D_1^{-1})_{nn} = |b(1)|^{-1}\,\frac{1}{l} = |b(1)|^{-1}\,w_n = O(n^{-1}). $$

Similarly,

$$ \|N_2 D_3 D_1^{-1}\|_\infty = O(n^{-1}). \eqno(4.9) $$

From (4.8) and (4.9),

$$ \|U^{-1}\|_\infty = O(n^{-1}). \eqno(4.10) $$

It then follows that $\|\mathbf e\|_\infty \le C_3\,n^{1/2}\,\|\boldsymbol\eta\|_\infty$, since $\|T^{-1}\|_\infty\,\|U^{-1}\|_\infty \le C_4$ from (4.7) and (4.10). $\Box$

To get the error bound of the system, we need the error estimate for Hunter's method; the latter can be obtained from [2, 3, 15, 18].
From Lemma 6 and Theorem 5 the convergence follows; its rate is given by the following theorem.

Theorem 7. If $y^{(m)} \in H_\mu$ with $m + \mu \ge 1$, the following estimate holds for every $\varepsilon > 0$:

$$ \|\mathbf e\|_\infty \le C_5\,n^{-(m+\mu-1/2-\varepsilon)}. \eqno(4.11) $$

Remark. In this procedure we need $y \in H_1$ to obtain convergence [12, 3]. If we choose the Elliott-Paget method [3] instead of Hunter's, and proceed in a similar way, we also obtain convergence; the corresponding rate is given by (4.12).
Finally, we reconstruct the approximate solution over the whole interval $(-1,1)$ by using the Lagrange interpolatory polynomial $L_{2n-2}(x)$ on the nodes $\{t_1, s_1, \dots, s_{n-1}, t_n\}$:

$$ L_{2n-2}(x_0) = \sum_{i=1}^{2n-1} l_i(x_0)\,y(x_i), $$

where $x_i \in \{t_i\}$ or $\{s_j\}$. Let $\bar L_{2n-2}(x)$ be the polynomial of degree $2n-2$ interpolating the exact values $\bar y(x)$. Then

$$ |\bar L_{2n-2}(x_0) - L_{2n-2}(x_0)| \le \sum_{i=1}^{2n-1} |l_i(x_0)|\,\|\mathbf e\|_\infty \le \Lambda_p\,\|\mathbf e\|_\infty, $$

where $\Lambda_p$ is the Lebesgue constant. We will show that $\Lambda_p = O(\log n)$. For $-1 < x_0 < 1$, let $\delta$ be a fixed positive number with $\delta < 1 - |x_0|$. Then $P_n^{(\alpha,\beta)}(x_0) = O(n^{-1/2})$ from (8.9.6) in [14]; since $\alpha$, $\beta$ in that formula are arbitrary, we also have $P_{n-1}^{(-\alpha,-\beta)}(x_0) = O(n^{-1/2})$. Here we denote by $l_i(x)$ the following quantity:
$$ l_i(x_0) = \frac{P_n^{(\alpha,\beta)}(x_0)\,P_{n-1}^{(-\alpha,-\beta)}(x_0)}{(x_0 - x_i)\left[\big(P_n^{(\alpha,\beta)}(x)\,P_{n-1}^{(-\alpha,-\beta)}(x)\big)'\right]_{x=x_i}}, $$

where $\big[(P_n^{(\alpha,\beta)} P_{n-1}^{(-\alpha,-\beta)})'\big]_{x=x_i} = P_n^{(\alpha,\beta)}(x_i)\,P_{n-1}^{(-\alpha,-\beta)\prime}(x_i)$ if $x_i \in \{s_j\}$. It is easily shown that $\big[(P_n^{(\alpha,\beta)} P_{n-1}^{(-\alpha,-\beta)})'\big]_{x=x_i} = P_n^{(\alpha,\beta)\prime}(x_i)\,P_{n-1}^{(-\alpha,-\beta)}(x_i)$ if $x_i \in \{t_i\}$. Hence, for the nodes with $|x_i - x_0| > \delta$,

$$ \sum_{|x_i-x_0|>\delta} |l_i(x_0)| \le \frac{1}{\delta}\,O(n^{-1/2})\,O(n^{-1/2})\left( \sum_{i=1}^{n} \left|P_n^{(\alpha,\beta)\prime}(t_i)\,P_{n-1}^{(-\alpha,-\beta)}(t_i)\right|^{-1} + \sum_{j=1}^{n-1} \left|P_n^{(\alpha,\beta)}(s_j)\,P_{n-1}^{(-\alpha,-\beta)\prime}(s_j)\right|^{-1} \right). $$

Here, for the case $x_i \in \{t_i\}$,

$$ \sum_{i=1}^{n} \left|P_n^{(\alpha,\beta)\prime}(t_i)\,P_{n-1}^{(-\alpha,-\beta)}(t_i)\right|^{-1} = O(1)\sum_{i=1}^{n} \big(i^{\alpha+3/2}\,n^{-\alpha-2}\big)\big(i^{-\alpha+1/2}\,n^{\alpha}\big) = O(1)\sum_{i=1}^{n} i^{2}\,n^{-2} = O(n). $$

This gives the same result for the case $x_i \in \{s_j\}$. Consequently,

$$ \sum_{|x_i-x_0|>\delta} |l_i(x_0)| = O(1). $$
For the remaining nodes, in the case $x_i \in \{t_i\}$ we use $P_n^{(\alpha,\beta)}(x_i) = 0$ to write

$$ l_i(x_0) = \frac{P_n^{(\alpha,\beta)}(x_0)\,P_{n-1}^{(-\alpha,-\beta)}(x_0)}{(x_0 - x_i)\,P_n^{(\alpha,\beta)\prime}(x_i)\,P_{n-1}^{(-\alpha,-\beta)}(x_i)} = O(n^{-1/2})\,\frac{P_{n-1}^{(-\alpha,-\beta)}(x_0)}{P_{n-1}^{(-\alpha,-\beta)}(x_i)}\cdot\frac{1}{P_n^{(\alpha,\beta)\prime}(x_i)}\cdot\frac{P_n^{(\alpha,\beta)}(x_i) - P_n^{(\alpha,\beta)}(x_0)}{x_i - x_0} = O(1), $$

using the formula (8.8.2) of [14] for the last fraction. We have the same result for the case $x_i \in \{s_j\}$. If $x_0 = \cos\theta_0$ and $0 < \theta_0 < \pi$,

$$ \sum_{|x_i-x_0|\le\delta} |l_i(x_0)| = \sum_{\frac1n < |\theta_i-\theta_0| \le \delta_0} |l_i(x_0)| + O(1), $$

where $\delta_0$ is a fixed positive number. We know that

$$ \sum_{\frac1n < |\theta_i-\theta_0| \le \delta_0} |\theta_i - \theta_0|^{-1} = O(n\log n), $$

since $\theta_i \asymp i/n$ from (8.9.1) in [14] and $\sum_i n/i = O(n\log n)$. Hence we have the following estimate:

$$ \sum |l_i(x_0)| = O(\log n). $$

Note that all bounds are uniform for $-1+\delta'' \le x_0 \le 1-\delta''$. Consequently, we have an estimate for the difference between the exact solution and the approximate solution:

$$ \|\bar y(x) - L_{2n-2}(x)\|_\infty \le \|\bar y(x) - \bar L_{2n-2}(x)\|_\infty + \|\bar L_{2n-2}(x) - L_{2n-2}(x)\|_\infty \le C_7\,(1+\Lambda_p)\,\omega\!\Big(\bar y;\frac1n\Big) + C_8\,\Lambda_p\,\|\mathbf e\|_\infty, $$

where $\omega$ is the modulus of continuity. This allows us to determine the rate of convergence using the result of Theorem 7.
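The $O(\log n)$ behavior of the Lebesgue constant on interior subintervals can be observed numerically. In the Chebyshev instance $\alpha = \beta = -1/2$ (an assumed special case used only for illustration) the combined node set $\{t_i\} \cup \{s_j\}$ consists of the zeros of $T_n$ and $U_{n-1}$ together, i.e. the points $\cos(k\pi/2n)$, $k = 1, \dots, 2n-1$:

```python
import numpy as np

def lebesgue_const(nodes, a=-0.9, b=0.9, m=2001):
    # Maximum over [a, b] of the Lebesgue function sum_i |l_i(x)| for
    # polynomial interpolation at the given nodes.
    xs = np.linspace(a, b, m)
    L = np.zeros(m)
    for i, xi in enumerate(nodes):
        others = np.delete(nodes, i)
        li = np.prod((xs[:, None] - others) / (xi - others), axis=1)
        L += np.abs(li)
    return L.max()

consts = []
for n in (8, 16, 32):
    k = np.arange(1, 2 * n)
    nodes = np.cos(k * np.pi / (2 * n))   # zeros of T_n and U_{n-1} together
    consts.append(lebesgue_const(nodes))
print(consts)   # grows slowly with n on the interior interval [-0.9, 0.9]
```

The restriction to $[-0.9, 0.9]$ matches the statement above that the bounds are uniform only on $[-1+\delta'', 1-\delta'']$; near the endpoints the Lebesgue function of these interior node sets grows much faster.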
Proceeding in the same way, we introduce the new unknown $y(x)$ as in Section 2. Applying Hunter's method, we obtain the system

$$ \begin{pmatrix} D_1 & A + V_1 \\ G + V_2 & D_2 \end{pmatrix}\mathbf y = \mathbf f, \eqno(5.2) $$

with

$$ (V_1)_{ij} = \begin{cases} w_j\,k(s_i,t_j)/b(s_i), & i = 1,\dots,n-1, \\ 0, & i = n, \end{cases} \qquad (V_2)_{ij} = \begin{cases} \bar w_j\,k(t_i,s_j)\,\omega^2(s_j)/b(t_i), & j \ne n, \\ 0, & j = n. \end{cases} $$

Let us introduce the matrix

$$ V = \begin{pmatrix} 0 & V_1 \\ V_2 & 0 \end{pmatrix}. $$

With this matrix the system (5.2) can be rewritten as

$$ (M + V)\,\mathbf y = \mathbf f, \eqno(5.3) $$

where $M + V = R\,T + V = (R + V T^{-1})\,T$. Now,

$$ V\,T^{-1} = \begin{pmatrix} 0 & V_1 \\ V_2 & 0 \end{pmatrix} \begin{pmatrix} N_2^{-1} D_3^{-1} & 0 \\ 0 & N_1^{-1} \end{pmatrix} = \begin{pmatrix} 0 & V_1 N_1^{-1} \\ V_2 N_2^{-1} D_3^{-1} & 0 \end{pmatrix}. $$

Notice that

$$ (V_1 N_1^{-1})_{ij} = \big[w_j\,k(s_i,t_j)/b(s_i)\big]\,w_j^{-1} = k(s_i,t_j)/b(s_i). $$

In a similar way we can show that the entries of $V_2 N_2^{-1} D_3^{-1}$ are bounded. This allows us to obtain the following lemma under the assumption that $a^*$ is large enough.
Lemma. If $a^*$ is large enough, then $\|R^{-1} V T^{-1}\|_2 < 1$.

Proof. Since $R$ has an inverse by Theorem 2,

$$ \|R^{-1} V T^{-1}\|_2 \le \|R^{-1}\|_2\,\|V T^{-1}\|_2 \le \|U^{-1}\|_2\,\|V T^{-1}\|_2 \qquad \text{(by Lemma 4)} $$

$$ \le \|U^{-1}\|_2\,\sqrt{\|V T^{-1}\|_1\,\|V T^{-1}\|_\infty} \le \|U^{-1}\|_\infty\; n\,\max_{i,j}\,|(V T^{-1})_{ij}| \le C_{10}\,\max_{i,j}\,|(V T^{-1})_{ij}| < 1 $$

by (4.10). The last inequality holds by the assumption that $a^*$ is large enough, since the maximum of the entries of $V T^{-1}$ is bounded. $\Box$

Now we show the solvability of system (5.3).
Lemma 11. The matrix $M + V$ is nonsingular under the assumption that $a^*$ is large enough. Moreover, $\|(M + V)^{-1}\|_2$ is bounded.

Proof. We have

$$ M + V = (R + V T^{-1})\,T = R\,(I + R^{-1} V T^{-1})\,T. $$

From the previous lemma we know that $I + R^{-1} V T^{-1}$ is nonsingular; this implies that $M + V$ is also nonsingular. Hence

$$ \|(M+V)^{-1}\|_2 \le \|T^{-1}\|_2\,\|(I + R^{-1} V T^{-1})^{-1}\|_2\,\|R^{-1}\|_2 \le \|T^{-1}\|_\infty\,\|U^{-1}\|_\infty\,\big(1 - \|R^{-1} V T^{-1}\|_2\big)^{-1} \le C_{11}. \qquad \Box $$

With this estimate the following convergence rate (5.4) can be shown by using Lemma 6.
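The key step in Lemma 11 is the standard Neumann-series bound: if $\|X\|_2 < 1$ then $I + X$ is invertible with $\|(I+X)^{-1}\|_2 \le (1 - \|X\|_2)^{-1}$. A generic numerical check (assumed random data, not the actual matrices of the scheme):

```python
import numpy as np

# If ||X||_2 < 1, the Neumann series gives (I + X)^{-1} = sum_k (-X)^k and
# hence ||(I + X)^{-1}||_2 <= 1 / (1 - ||X||_2), the bound used in Lemma 11.
rng = np.random.default_rng(2)
X = rng.normal(size=(8, 8))
X *= 0.5 / np.linalg.norm(X, 2)            # rescale so that ||X||_2 = 0.5
inv_norm = np.linalg.norm(np.linalg.inv(np.eye(8) + X), 2)
bound = 1.0 / (1.0 - np.linalg.norm(X, 2))
print(inv_norm, bound)
```

In the proof, $X = R^{-1} V T^{-1}$, and the smallness of $\|X\|_2$ is what the "$a^*$ large enough" assumption buys.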
Remark. The above result can be strengthened with the Elliott-Paget method, i.e. we obtain the weaker condition $m + \mu > 1/2$. The rates of convergence of the two methods are almost the same, even though they have different restrictions on the smoothness of the function $y$.

In the numerical examples we choose the coefficients so that the restriction is lifted; in other words, $a^*(x)$ is taken to be small. Two functions are tested as solutions, namely $|x - 0.1|^{1.3}$ and $e^x$. The latter is a smooth function that gives a fast convergence rate. The results are in Tables 1 and 2.

Acknowledgment. The author thanks Professor Ezio Venturino for suggesting the problem and for useful discussions on this subject.
Table 1: Small coefficient $a^*$ with the solution of order 1.3 (errors $\|\mathbf e\|_\infty$).

Table 2: Small coefficient $a^*$ with the exponential function as solution (errors $\|\mathbf e\|_\infty$).
References
[1] M.M. Chawla, T.R. Ramakrishnan (1974), Modified Gauss-Jacobi quadrature formulas for the numerical evaluation of Cauchy type singular integrals, BIT 14, pp. 14-21.
[2] D. Elliott, D.F. Paget (1975), On the convergence of a quadrature rule for evaluating certain Cauchy principal value integrals, Numer. Math. 23, pp. 311-319.
[3] D. Elliott, D.F. Paget (1979), Gauss type quadrature rules for Cauchy principal value integrals, Math. of Computation, Vol. 33, pp. 301-309.
[4] M.L. Dow, D. Elliott (1979), The numerical solution of singular integral equations over (-1,1), SIAM J. Numer. Anal. 16, pp. 115-134.
[5] D. Elliott (1982), The classical collocation method for singular integral equations, SIAM J. Numer. Anal. 19, pp. 816-832.
[6] D. Elliott (1982), The numerical treatment of singular integral equations: a review, in Treatment of Integral Equations by Numerical Methods, C.T.H. Baker, G.F. Miller (Editors), Acad. Press, New York.
[7] A. Gerasoulis, R.P. Srivastav (1982), On the solvability of singular integral equations via Gauss-Jacobi quadrature, Intern. J. Computer Math., Vol. 12, pp. 59-75.
[8] A. Gerasoulis (1986), The singular value decomposition of the Gauss-Chebyshev and Lobatto-Chebyshev methods for Cauchy singular integral equations, Comp. and Maths. with Appls. 12A, pp. 895-907.
[9] D.B. Hunter (1972), Some Gauss-type formulae for the evaluation of Cauchy principal values of integrals, Numer. Math. 19, pp. 419-424.
[10] A.I. Kalandiya (1975), Mathematical Methods of Two-dimensional Elasticity, Mir Publishers, Moscow.
[11] N.I. Muskhelishvili (1953), Singular Integral Equations, P. Noordhoff, Groningen, Holland.
[12] P. Rabinowitz (1984), On the convergence and divergence of Hunter's method for Cauchy principal value integrals, in Numerical Solution of Singular Integral Equations, A. Gerasoulis, R. Vichnevetsky (Editors), Proceedings of IMACS, published by IMACS.
[13] R.P. Srivastav, Fenggang Zhang (1991), Solving Cauchy singular integral equations by using general quadrature-collocation nodes, Computers Math. Applic., Vol. 21, No. 9, pp. 59-71.
[14] G. Szegő (1975), Orthogonal Polynomials, AMS Colloquium Publications, Vol. XXIII, 4th Ed., Philadelphia.
[15] G. Tsamasphyros, P.S. Theocaris (1977), On the convergence of Gauss quadrature rule for evaluation of Cauchy type singular integrals, BIT 17, pp. 458-464.
[16] P.S. Theocaris, G. Tsamasphyros (1979), Numerical solution of systems of singular integral equations with variable coefficients, Applicable Analysis, Vol. 9, pp. 37-52.
[17] G. Tsamasphyros, P.S. Theocaris (1981), Equivalence and convergence of direct and indirect methods for the numerical solution of singular integral equations, Computing 27, pp. 71-80.
[18] G. Tsamasphyros, P.S. Theocaris (1983), On the convergence of some quadrature rules for Cauchy-value and finite-part integrals, Computing 31, pp. 105-114.
[19] E. Venturino (1992), Unconventional solution of singular integral equations, J. Integral Equations Appl., Vol. 4, pp. 443-463.
[20] S. Welstead (1982), Orthogonal Polynomials Applied to the Solution of Singular Integral Equations, Ph.D. Thesis, Purdue University.