
CHAPTER 1

High Speed Computation


1.1 INTRODUCTION
With the advent of modern high-speed electronic digital computers, numerical methods have been applied successfully to study problems in mathematics, engineering, computer science and the physical sciences, such as biophysics, physics, the atmospheric sciences and the geosciences. The art and science of preparing and solving scientific and engineering problems have undergone considerable changes, for the following two reasons. (i) The mathematical problem has to be reduced to a form amenable to machine solution. (ii) Several million operations are performed per minute on a high speed computer, and it is difficult to check the intermediate results for a possible build-up of large errors during the calculations. In what follows, we examine these two aspects in detail.

1.2 COMPUTER ARITHMETIC

The basic arithmetic operations performed by the computer are addition, subtraction, multiplication and division. Decimal numbers are first converted to machine numbers consisting of only the digits 0 and 1, with a base or radix depending on the computer. If the base is two, eight or sixteen, the number system is called binary, octal or hexadecimal respectively. The decimal number system has the base 10. The decimal integer 4987 actually means

(4987)10 = 4 × 10^3 + 9 × 10^2 + 8 × 10^1 + 7 × 10^0    (1.1)

Similarly, the fractional decimal number 0.6251 means

(0.6251)10 = 6 × 10^-1 + 2 × 10^-2 + 5 × 10^-3 + 1 × 10^-4    (1.2)

which is a polynomial in 10^-1. Combining (1.1) and (1.2), we may write the number 4987.6251 in the decimal system as

(4987.6251)10 = 4 × 10^3 + 9 × 10^2 + 8 × 10^1 + 7 × 10^0 + 6 × 10^-1 + 2 × 10^-2 + 5 × 10^-3 + 1 × 10^-4    (1.3)

which represents a polynomial in the base 10.

Numerical Methods for Scientific and Engineering Computation

where the subscripts denote the base of the number system. Thus, a number N = (d_{n-1} d_{n-2} ... d1 d0 . d_{-1} d_{-2} ... d_{-m}) in the decimal system can always be expressed in the form

(N)10 = d_{n-1} × 10^{n-1} + d_{n-2} × 10^{n-2} + ... + d1 × 10^1 + d0 × 10^0 + d_{-1} × 10^-1 + d_{-2} × 10^-2 + ... + d_{-m} × 10^-m    (1.4)

where d_{n-1}, d_{n-2}, ..., d_{-m} are any digits between 0 and 9.

Binary Number System

The binary number system has base 2 with the digits 0 and 1, called bits, and any number N can be written as

(N)2 = b_{n-1} b_{n-2} ... b1 b0 . b_{-1} b_{-2} ... b_{-m}    (1.5)

where b_{n-1}, b_{n-2}, ..., b_{-m} are binary bits (0 or 1) and the point is called the binary point. The corresponding decimal number is easily calculated by using the formula

(N)10 = b_{n-1} × 2^{n-1} + b_{n-2} × 2^{n-2} + ... + b1 × 2^1 + b0 × 2^0 + b_{-1} × 2^-1 + b_{-2} × 2^-2 + ... + b_{-m} × 2^-m.    (1.6)
Example 1.1

Find the decimal number corresponding to the binary number (111.011)2.

We have

(111.011)2 = 1 × 2^2 + 1 × 2^1 + 1 × 2^0 + 0 × 2^-1 + 1 × 2^-2 + 1 × 2^-3 = (7.375)10

We now consider the conversion of an integer N in the decimal system to the binary number b_{n-1} b_{n-2} ... b1 b0. We write

b_{n-1} × 2^{n-1} + b_{n-2} × 2^{n-2} + ... + b1 × 2^1 + b0 = N.    (1.7)

The last binary digit b0 in (1.7) is zero if and only if N is even. The second digit b1 is zero if and only if (N - b0)/2 is even, and so on. Thus, we have

N0 = N,
N_{k+1} = (N_k - b_k)/2,  k = 0, 1, 2, ...    (1.8)

until N_k = 0, where

b_k = 1, if N_k is odd
b_k = 0, if N_k is even
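The recursion (1.8) translates directly into a short routine. A minimal sketch in Python (the function name is ours, not the book's):

```python
def decimal_integer_to_binary(n):
    """Convert a non-negative decimal integer to its binary digit string
    using the recursion N_{k+1} = (N_k - b_k)/2 of (1.8)."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        b = 1 if n % 2 == 1 else 0   # b_k = 1 if N_k is odd, else 0
        bits.append(str(b))
        n = (n - b) // 2             # N_{k+1} = (N_k - b_k)/2
    return "".join(reversed(bits))   # digits are produced from b_0 upward

print(decimal_integer_to_binary(58))  # -> 111010, as in Example 1.2
```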
Example 1.2

Convert (58)10 to the corresponding binary number.

We have

N0 = 58,               b0 = 0
N1 = (58 - 0)/2 = 29,  b1 = 1
N2 = (29 - 1)/2 = 14,  b2 = 0
N3 = (14 - 0)/2 = 7,   b3 = 1
N4 = (7 - 1)/2 = 3,    b4 = 1
N5 = (3 - 1)/2 = 1,    b5 = 1
N6 = (1 - 1)/2 = 0

Thus, the binary number is 111010.

Next, we convert a fraction N in the decimal system to the binary fraction . b_{-1} b_{-2} ... b_{-m}. We write

b_{-1} × 2^-1 + b_{-2} × 2^-2 + ... + b_{-m} × 2^-m = N,  0 < N < 1    (1.9)

where b_{-1}, b_{-2}, ..., b_{-m} are 0 or 1. It is easily seen that b_{-1} is 1 if and only if 2N >= 1, and 0 if and only if 2N < 1. Similarly, using (1.9), we may determine the b_{-k} recursively as follows:

N1 = N,
b_{-k} = 1, if 2 N_k >= 1
b_{-k} = 0, if 2 N_k < 1    (1.10)
N_{k+1} = 2 N_k - b_{-k},  k = 1, 2, ...

Example 1.3

Convert (0.859375)10 to the corresponding binary fraction.

We have

k    2 N_k            b_{-k}    N_{k+1}
1    0.859375 × 2     1         0.718750
2    0.718750 × 2     1         0.437500
3    0.437500 × 2     0         0.875000
4    0.875000 × 2     1         0.750000
5    0.750000 × 2     1         0.500000
6    0.500000 × 2     1         0.000000

The required binary fraction becomes (0.859375)10 = (0.110111)2
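The fraction recursion (1.10) can be sketched the same way; this hypothetical helper returns the bits b_{-1} b_{-2} ... produced by repeated doubling:

```python
def decimal_fraction_to_binary(x, max_bits=24):
    """Convert a decimal fraction 0 < x < 1 to a binary bit string using (1.10):
    double the fraction and peel off the integer part as the next bit."""
    bits = []
    n = x
    for _ in range(max_bits):
        if n == 0:
            break                # the expansion terminates
        n *= 2
        b = 1 if n >= 1 else 0   # b_{-k} = 1 iff 2 N_k >= 1
        bits.append(str(b))
        n -= b                   # N_{k+1} = 2 N_k - b_{-k}
    return "." + "".join(bits)

print(decimal_fraction_to_binary(0.859375))  # -> .110111, as in Example 1.3
```

For a fraction such as 0.7 the loop runs to `max_bits`, reflecting the never-ending binary expansion discussed in Example 1.4 below.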


Example 1.4

Convert (0.7)10 to the corresponding binary fraction.

We have

k    2 N_k      b_{-k}    N_{k+1}
1    0.7 × 2    1         0.4
2    0.4 × 2    0         0.8
3    0.8 × 2    1         0.6
4    0.6 × 2    1         0.2
5    0.2 × 2    0         0.4
6    0.4 × 2    0         0.8
7    0.8 × 2    1         0.6
8    0.6 × 2    1         0.2
9    0.2 × 2    0         0.4

Thus we obtain

(0.7)10 = (.101100110 ...)2

which is a never-ending sequence. If only 7 bits are retained in the binary fraction, then the corresponding decimal number becomes

(0.1011001)2 = 1 × 2^-1 + 0 × 2^-2 + 1 × 2^-3 + 1 × 2^-4 + 0 × 2^-5 + 0 × 2^-6 + 1 × 2^-7 = 0.6953125

which is not exactly the same as the given number. The difference 0.7 - 0.6953125 = 0.0046875 is the round-off error.

Octal and Hexadecimal Systems

The octal system has base 8 and uses the digits 0, 1, 2, 3, 4, 5, 6, 7. Similarly, the hexadecimal system has base 16 and uses the digits 0 to 9 and A, B, C, D, E, F to represent 10, 11, 12, 13, 14, 15 respectively. Decimal numbers can be converted to octal or hexadecimal numbers in a similar manner. The conversion between binary and octal or between binary and hexadecimal is simple due to the


relationship between the bases: 8 = 2^3 for octal and 16 = 2^4 for hexadecimal. We convert a binary number to an octal number by grouping the binary bits in groups of three to the right and left of the binary point, adding sufficient zeros to complete the groups, and replacing each group of three bits by its octal equivalent. Similarly, to convert a binary number to a hexadecimal number, we form groups of four binary bits and replace each group by the corresponding digit in the hexadecimal system. The hexadecimal system is sometimes referred to as hex.
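The grouping rule can be checked mechanically. A small Python sketch (the helper and its signature are ours, not the book's):

```python
def binary_to_base(bits_int, bits_frac, group):
    """Convert a binary number, given as integer-part and fraction-part bit
    strings, to octal (group=3) or hexadecimal (group=4) by grouping bits."""
    digits = "0123456789ABCDEF"
    # pad the integer part on the left and the fraction part on the right
    bits_int = bits_int.zfill(((len(bits_int) + group - 1) // group) * group)
    bits_frac = bits_frac + "0" * ((-len(bits_frac)) % group)
    ip = "".join(digits[int(bits_int[i:i + group], 2)]
                 for i in range(0, len(bits_int), group))
    fp = "".join(digits[int(bits_frac[i:i + group], 2)]
                 for i in range(0, len(bits_frac), group))
    return ip.lstrip("0") + "." + fp

print(binary_to_base("1101001", "1110011", 3))  # -> 151.714 (octal)
print(binary_to_base("1101001", "1110011", 4))  # -> 69.E6 (hexadecimal)
```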
Example 1.5

Convert the binary number 1101001.1110011 to the octal and the hexadecimal systems.

We have

(1101001.1110011)2 = (001 101 001 . 111 001 100)2 = (151.714)8

and

(1101001.1110011)2 = (0110 1001 . 1110 0110)2 = (69.E6)16

Floating Point Arithmetic

The first step in computation with digital computers is to convert the decimal numbers to another number system with base (say) b understandable to that particular computer, and then to store these converted numbers in the computer memory. The memory of the digital computer is divided into separate cells called words. Each word can hold the same number of digits (called bits in the binary case) with respect to its base, plus a sign. Negative numbers are stored as absolute values plus a sign, or in complement form. The number of digits which can be stored in a computer word is called its word length. The word length varies from one computer to another. The numbers in a computer word can be stored in two forms: (i) fixed-point form, (ii) floating-point form.

In fixed-point form, a t digit number is assumed to have its decimal point at the left-hand end of the word. This implies that all numbers are assumed to be less than 1 in magnitude. The fixed-point number with base b and t digit word length may be written as

(. a1 a2 ... at)_b = a1 b^-1 + a2 b^-2 + ... + at b^-t

where 0 <= a_i < b for each i. If y is any real number and y* is its machine representation, then the error in y* satisfies

| y - y* | <= b^-t.    (1.11)

To avoid the difficulty of keeping every number less than 1 in magnitude during computation, most computers use floating-point representation for a real number. A floating-point number is characterized by four parameters, the base b, the number of digits t, and the exponent range (m, M ).


Definition 1.1

A floating-point number is a number represented in the form

. d1 d2 ... dt × b^e    (1.12)

where d1, d2, ..., dt are integers satisfying 0 <= d_i < b, and the exponent e is such that m <= e <= M. The fractional part . d1 d2 ... dt is called the mantissa, and it lies between -1 and +1. The number 0 is written as

+ . 00 ... 0 × b^e    (1.13)

Definition 1.2

A non-zero floating-point number as defined in (1.12) is in normal form if the value of the mantissa lies in the interval (-1, -1/b] or in the interval [1/b, 1).
Example 1.6

Subtract the floating-point numbers 0.36143447 × 10^7 and 0.36132346 × 10^7.

We have

  0.36143447 × 10^7
- 0.36132346 × 10^7
  0.00011101 × 10^7

The result is a floating-point number, but not a normalized floating-point number, due to the presence of three leading zeros. Shifting the fractional part three places to the left, we get the result 0.11101 × 10^4, which is a normalized floating-point number.

Definition 1.3

A non-zero floating-point number as defined in (1.12) is in t-digit-mantissa standard form if it is normalized and its mantissa consists of exactly t digits. If a number x has the representation

x = . d1 d2 ... dt d_{t+1} ... × b^e    (1.14)

then the floating-point number fl(x) in t-digit-mantissa standard form can be obtained in the following two ways:

(i) Chopping: we neglect d_{t+1}, d_{t+2}, ... in (1.14) and obtain

fl(x) = . d1 d2 ... dt × b^e    (1.15)

(ii) Rounding: the fractional part in (1.14) is replaced by

. d1 d2 ... dt d_{t+1} ... + (1/2) b^-t    (1.16)

and the first t digits of the result are taken to write the floating-point number.
Example 1.7

Find the sum of .123 × 10^3 and .456 × 10^2 and write the result in three-digit-mantissa form.

The number of smaller magnitude is adjusted so that its exponent is the same as that of the number of larger magnitude. We have

  .1230 × 10^3
+ .0456 × 10^3
  .1686 × 10^3

≈ .168 × 10^3, for chopping
≈ .169 × 10^3, for rounding.
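Chopping and rounding to a t-digit decimal mantissa can be mimicked in a few lines; a sketch for positive x in base b = 10 (the function is ours, not the book's):

```python
import math

def fl(x, t, mode):
    """t-digit decimal-mantissa standard form of x > 0 (base b = 10),
    as in (1.15) for mode="chop" and (1.16) for mode="round"."""
    e = math.floor(math.log10(x)) + 1        # exponent so the mantissa is in [0.1, 1)
    m = x / 10**e                            # normalized mantissa
    scaled = m * 10**t
    kept = math.floor(scaled) if mode == "chop" else math.floor(scaled + 0.5)
    return kept / 10**t * 10**e

s = 0.1230e3 + 0.0456e3       # = 0.1686e3, as in Example 1.7
print(fl(s, 3, "chop"))       # -> 168.0
print(fl(s, 3, "round"))      # -> 169.0
```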

1.3 ERRORS

A computer has a finite word length, and so only a fixed number of digits are stored and used during computation. This means that even in storing an exact decimal number in its converted form in the computer memory, an error is introduced. This error is machine dependent. After the computation is over, the result in the machine form (with base b) is again converted to the decimal form understandable to the users, and some more error may be introduced at this stage. We now discuss the effect of these errors on the results.

The quantity, True value - Approximate value, is called the error. In order to determine the accuracy of an approximate solution to a problem, we find a bound either of the

Relative error = Error / True value    (1.17)

or of the

Absolute error = | Error |.    (1.18)

Neglecting a blunder or mistake, the errors may be classified into the following types:

Definition 1.4

The inherent error is that quantity which is already present in the statement of the problem before its solution. The inherent error arises either due to the simplified assumptions in the mathematical formulation of the problem, or due to the errors in the physical measurements of the parameters of the problem.

Definition 1.5

The round-off error is the quantity R which must be added to the finite representation of a computed number in order to make it the true representation of that number. Thus, if x is the computed number given by (1.14),

x = . d1 d2 ... dt d_{t+1} ... × b^e

then the relative error (1.17) for the t-digit-mantissa standard form representation of x satisfies

| x - fl(x) | / | x |  <=  b^{1-t},        for chopping
                           (1/2) b^{1-t},  for rounding.    (1.19)

Thus, the bound on the relative error of a floating-point number is halved when rounding is used instead of chopping. It is for this reason that rounding is used on most computers. We write

fl(x) = x (1 + δ)    (1.20)

where δ = δ(x), some number depending on x, is called the relative round-off error in fl(x). The bound on | δ | is called the machine epsilon and is denoted by EPS. From (1.19) we have

| δ(x) | <= EPS =  b^{1-t},        for chopping
                   (1/2) b^{1-t},  for rounding.    (1.21)
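The machine epsilon of a particular machine can be estimated with the classical halving loop. A sketch for IEEE double-precision Python floats (b = 2, t = 53):

```python
# Halve eps until 1 + eps is no longer distinguishable from 1.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
# eps is now 2**-52 = b**(1-t) with b = 2, t = 53: the gap between 1.0 and
# the next representable float.  The rounding bound (1/2) b**(1-t) of (1.21)
# is half of this, 2**-53.
print(eps)  # -> 2.220446049250313e-16
```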

Definition 1.6 The truncation error is the quantity T which must be added to the true representation of the quantity in order that the result be exactly equal to the quantity we are seeking to generate. This error is a result of the approximate formulas used which are generally based on truncated series. The Taylor series with a remainder is an invaluable tool in the study of the truncation error.
Example 1.8

Obtain a second-degree polynomial approximation to

f(x) = (1 + x)^{1/2},  x ∈ [0, 0.1]

using the Taylor series expansion about x = 0. Use the expansion to approximate f(0.05), and find a bound of the truncation error.

We have

f(x) = (1 + x)^{1/2},            f(0) = 1
f'(x) = (1/2)(1 + x)^{-1/2},     f'(0) = 1/2
f''(x) = -(1/4)(1 + x)^{-3/2},   f''(0) = -1/4
f'''(x) = (3/8)(1 + x)^{-5/2}

Thus, the Taylor series expansion with the remainder term may be written as

(1 + x)^{1/2} = 1 + x/2 - x^2/8 + (x^3/16)(1 + ξ)^{-5/2},  0 < ξ < x.

The truncation term is given by

T = (1 + x)^{1/2} - (1 + x/2 - x^2/8) = (x^3/16)(1 + ξ)^{-5/2},  0 < ξ < 0.1.

We have

f(0.05) ≈ 1 + 0.05/2 - (0.05)^2/8 = 0.10246875 × 10^1.

The bound of the truncation error, for x ∈ [0, 0.1], is

| T | <= max over 0 <= ξ <= 0.1 of (0.1)^3 / (16 (1 + ξ)^{5/2}) <= (0.1)^3 / 16 = 0.625 × 10^-4.
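The approximation and the error bound of Example 1.8 are easy to check numerically; a small sketch:

```python
# Second-degree Taylor approximation of sqrt(1 + x) about x = 0,
# and the truncation-error bound (0.1)**3 / 16 from Example 1.8.
def p2(x):
    return 1 + x / 2 - x**2 / 8

print(p2(0.05))  # -> 1.0246875

# Largest actual error on a fine grid over [0, 0.1]:
worst = max(abs((1 + x)**0.5 - p2(x))
            for x in [i / 1000 * 0.1 for i in range(1001)])
print(worst)     # stays below the bound 0.625e-4
assert worst <= 0.1**3 / 16
```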

Significant Digits and Numerical Instability

When a number x is written in normalized floating-point form with t digits in base b, as given in (1.15), we say that the number has t significant digits. The leading digit d1 is called the most significant digit. In other words, a number x* is an approximation to x to t significant digits if

| x - x* | / | x | <= (1/2) b^{1-t}    (1.22)

in the base b. As an example, x* = 0.3 agrees with 1/3 to one significant digit, and x* = 0.3333 agrees with 1/3 to four significant digits, in the base b = 10.

Suppose now that x* and y* are approximations to x and y to t significant digits, and we wish to calculate the number z = x - y. Then z* = x* - y* is an approximation to z which is also correct to t significant digits, unless x* and y* have one or more leading digits the same. In that case there will be cancellation of digits during the subtraction, and z* will not be accurate to t significant digits. For example, if we have

x* = 0.178693 × 10^1
y* = 0.178439 × 10^1

each correct to six digits in the decimal system, then z* = 0.000254 × 10^1 is correct to only three significant digits. It may be noted that the relative error in x is (1/2) × 10^-5 / (0.178693 × 10^1) ≈ 2.8 × 10^-6, while that in x - y is ((1/2) × 10^-5 + (1/2) × 10^-5) / (0.000254 × 10^1) ≈ 3.9 × 10^-3. When this number z* is used in further arithmetic calculations, the error may increase considerably. Thus, we find that subtraction of two nearly equal numbers causes a considerable loss of significant digits and may greatly magnify the error in later calculations.

A similar loss of significant digits occurs when a number is divided by a small number. For example, consider

f(x) = 1 / (1 - x^2)

and suppose we want to calculate f(x) for x = 0.9. The exact value of f(x) for x = 0.9, correct to six digits, is f(x) = 0.526316 × 10^1. If x is approximated by x* = 0.900005, that is, if an error is introduced in the sixth decimal place, we obtain the value f(x*) = 0.526341 × 10^1. Therefore, an error in the sixth place in x has caused an error in the fifth place in f(x). Thus, when a number is divided by a very small number (or multiplied by a large number) there is a loss of significant digits and a magnification of errors in the result.

It is noticed, therefore, that every arithmetic operation performed during computation gives rise to some error which, once generated, may decay or grow in subsequent calculations. In some cases, errors may grow so large as to make the computed result totally unreliable, and we call such a procedure numerically unstable. In some cases instability can be avoided by changing the calculation procedure so as to avoid subtraction of nearly equal numbers or division by a small number, or by retaining more digits in the mantissa.

While finding numerical solutions of problems of a scientific nature, we often encounter problems with inherent instability or induced instability. Inherent instability may arise due to the ill-conditioning of the problem. We cannot avoid inherent instability by changing the method of solution; it is a property of the problem itself. Sometimes, we can avoid this instability by a suitable reformulation of the problem. For example, consider Wilkinson's example of finding the zeros of a polynomial. The polynomial

P20(x) = (x - 1)(x - 2) ... (x - 20) = x^20 - 210 x^19 + ... + 20!    (1.23)

has the zeros 1, 2, ..., 20. Let the coefficient of x^19 be changed from -210 to -(210 + 2^-23). This is a very small absolute change, of magnitude 10^-7 approximately. Most computers neglect a change this small, which occurs after 23 binary bits. If the solution of the new equation is now obtained, we find that the smaller roots are obtained with good accuracy, while the roots of larger magnitude are changed by a large amount. The largest change occurs in the roots 16 and 17. They are now obtained as the complex pair 16.73... ± 2.81...i, whose magnitude is approximately 17. The change is very substantial, and it is due to the inherent instability, or ill-conditioning, of the polynomial. If the problem cannot be reformulated, then the method that we are using must at least provide some information about the degree of ill-conditioning.

The second type of instability that we encounter is induced instability, which usually arises because of a wrong choice of the method of solution; the problem itself is often well conditioned in this case. Induced instability can be avoided by a suitable modification or change of the method of solution.
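The cancellation and division effects described above are easy to reproduce directly in double precision; a small sketch using the numbers from the text:

```python
# Cancellation: subtracting nearly equal six-digit numbers leaves only
# three significant digits.
x_star = 0.178693e1
y_star = 0.178439e1
z_star = x_star - y_star
print(z_star)        # about 0.254e-2: only the three surviving digits are reliable

# Sensitivity of f(x) = 1/(1 - x**2) near x = 0.9: a change in the sixth
# decimal place of x moves the fifth decimal place of f(x).
f = lambda x: 1 / (1 - x**2)
print(f(0.9))        # -> 5.26315...
print(f(0.900005))   # -> 5.26340...
```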
For example, to evaluate the integral

I_n = ∫_0^1 x^n / (x + 6) dx,  n = 1, 2, ..., 10

we may use the recurrence relation

I_n = 1/n - 6 I_{n-1},  n = 1, 2, ..., 10    (1.24)

where

I_0 = ∫_0^1 dx / (x + 6) = ln(7/6) = 0.15415.

Using the recurrence relation (1.24) we obtain

I_1 ≈ 0.07510,    I_2 ≈ 0.04940
I_3 ≈ 0.03693,    I_4 ≈ 0.02842
I_5 ≈ 0.02948,    I_6 ≈ -0.01021
I_7 ≈ 0.20412,    I_8 ≈ -1.09972
I_9 ≈ 6.70943,    I_10 ≈ -40.15658.

The exact value is I_10 = 0.01449. This explosion has occurred because of the induced instability. The problem is well conditioned, and accurate solutions can be obtained by choosing a suitable method. We may write the recurrence relation (1.24) as

I_{n-1} = (1/6)(1/n - I_n),  n = 10, 9, ..., 1.    (1.25)

Since I_n decreases as n increases, choose I_10 = 0. We obtain

I_9 ≈ 0.01666,    I_8 ≈ 0.01574
I_7 ≈ 0.01821,    I_6 ≈ 0.02077
I_5 ≈ 0.02432,    I_4 ≈ 0.02928
I_3 ≈ 0.03679,    I_2 ≈ 0.04942
I_1 ≈ 0.07510,    I_0 ≈ 0.15415.

The exact value is I_0 = 0.15415.

The criterion for stopping an infinite sequence of computations should be given to the computer carefully. To illustrate this point, consider finding the sum of the series

1 - 1/2 + 1/3 - 1/4 + ...

If the computer is asked to stop when the absolute value of the next term is less than the tolerable error, then this criterion works well and we get an accurate result. However, if the same criterion is applied to the series

1 + 1/2 + 1/3 + 1/4 + ...

the computer would give a finite result, while the sum is actually infinite.

Example 1.9

Find the smaller root of the equation

x^2 - 400 x + 1 = 0

using four-digit arithmetic. For the equation a x^2 - b x + c = 0, b > 0, the smaller root is given by

x = (b - sqrt(b^2 - 4ac)) / (2a).

Here

a = 1 = 0.1000 × 10^1
b = 400 = 0.4000 × 10^3
c = 1 = 0.1000 × 10^1
b^2 - 4ac = 0.1600 × 10^6 - 0.4000 × 10^1 = 0.1600 × 10^6 (to four-digit accuracy)
sqrt(b^2 - 4ac) = 0.4000 × 10^3.

Substituting in the above formula, we get x = 0.0000. However, if we write the formula in the form

x = 2c / (b + sqrt(b^2 - 4ac))

we get

x = 0.2000 × 10^1 / (0.4000 × 10^3 + 0.4000 × 10^3) = 0.2000 × 10^1 / 0.8000 × 10^3 = 0.0025

which is the exact root.

Example 1.10

Compute the middle value of the numbers

a = 4.568 and b = 6.762

using four-digit arithmetic. If we take the middle value as the mean of the numbers, we have

c = (a + b)/2 = (0.4568 × 10^1 + 0.6762 × 10^1)/2 = (0.1133 × 10^2)/2 = 0.5660 × 10^1.

However, if we use the formula

c = a + (b - a)/2

we get

c = 0.4568 × 10^1 + (0.6762 × 10^1 - 0.4568 × 10^1)/2 = 0.4568 × 10^1 + 0.1097 × 10^1 = 0.5665 × 10^1

which is the correct result.
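The induced instability of the forward recurrence (1.24), and its cure by the backward recurrence (1.25), can be sketched directly (the five-digit starting value matches the text's computation):

```python
import math

# Forward recurrence (1.24), I_n = 1/n - 6 I_{n-1}, starting from the
# five-digit value I_0 = 0.15415.  The initial error (about 7e-7) is
# multiplied by 6 at every step, so by n = 10 it has grown by 6**10.
I = 0.15415
for n in range(1, 11):
    I = 1 / n - 6 * I
print(I)   # about -41: hugely wrong, although the true I_10 = 0.01449

# Backward recurrence (1.25), I_{n-1} = (1/n - I_n)/6, starting from I_10 = 0.
# The starting error is divided by 6 at every step and dies out.
J = 0.0
for n in range(10, 0, -1):
    J = (1 / n - J) / 6
print(J)   # -> 0.15415..., an accurate value of I_0 = ln(7/6)
```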

1.4 MACHINE COMPUTATION

To obtain meaningful results for a given problem using computers, there are five distinct phases: (i) Choice of a method (ii) Designing the algorithm (iii) Flow charting (iv) Programming (v) Computer execution


A method is a mathematical formula for finding the solution of a given problem. There may be more than one method available to solve the same problem; we should choose the method which best suits the given problem, and the inherent assumptions and limitations of the method must be studied carefully. Once the method has been decided, we must describe a complete and unambiguous set of computational steps to be followed in a particular sequence to obtain the solution. This description is called an algorithm. It may be emphasized that the computer is concerned with the algorithm and not with the method. The algorithm tells the computer where to start, what information to use, what operations are to be carried out and in which order, what information is to be printed, and when to stop.
Example 1.11

Design an algorithm to find the real roots of the equation

a x^2 + b x + c = 0,  a, b, c real

for 10 sets of values of a, b, c, using the method

x1 = (-b + e) / (2a),  x2 = (-b - e) / (2a)    (1.26)

where e = sqrt(b^2 - 4ac).

The following computational steps are involved:

Step 1: Set I = 1.
Step 2: Read a, b, c.
Step 3: Check: is a = 0? If yes, print wrong data and go to Step 9; otherwise go to the next step.
Step 4: Calculate d = b^2 - 4ac.
Step 5: Check: is d < 0? If yes, print complex roots and go to Step 9; otherwise go to the next step.
Step 6: Calculate e = sqrt(d).
Step 7: Calculate x1 and x2 using the method (1.26).
Step 8: Print x1 and x2.
Step 9: Increase I by 1.
Step 10: Check: is I <= 10? If yes, go to Step 2; otherwise go to the next step.
Step 11: Stop.
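The steps above, for a single data set, can be sketched in Python (the function name and return convention are ours, not the book's):

```python
import math

def real_roots(a, b, c):
    """Steps 3-8 of the algorithm of Example 1.11 for one set of a, b, c:
    returns (x1, x2), or None in the wrong-data / complex-roots cases."""
    if a == 0:
        print("wrong data")      # Step 3
        return None
    d = b * b - 4 * a * c        # Step 4
    if d < 0:
        print("complex roots")   # Step 5
        return None
    e = math.sqrt(d)             # Step 6
    # Step 7: the method (1.26)
    return (-b + e) / (2 * a), (-b - e) / (2 * a)

print(real_roots(1, -3, 2))  # -> (2.0, 1.0)
```

A driver loop over ten data sets would wrap this function, as Steps 1, 2, 9 and 10 describe.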

On execution of the above eleven steps, or instructions, in the given order, the problem is completely solved. These eleven steps constitute the algorithm of the method (1.26). An algorithm has five important features:

1. Finiteness: an algorithm must terminate after a finite number of steps.
2. Definiteness: each step of an algorithm must be clearly defined, or the action to be taken must be unambiguously specified.
3. Inputs: an algorithm must specify the quantities which must be read before the algorithm can begin. In the algorithm of Example 1.11 the three input quantities are a, b, c.
4. Outputs: an algorithm must specify the quantities which are to be output, and their proper place. In the algorithm of Example 1.11 the two output quantities are x1, x2.


5. Effectiveness: an algorithm must be effective, which means that all of its operations are executable. For example, in the algorithm of Example 1.11 we must avoid the case a = 0, as division by zero is not possible. Similarly, if b^2 - 4ac < 0, some alternate path must be defined to avoid taking the square root of a negative number.

A flow chart is a graphical representation of a specific sequence of steps (an algorithm) to be followed by the computer to produce the solution of a given problem. It makes use of flow chart symbols to represent the basic operations to be carried out. The various symbols are connected by arrows to indicate the flow of information and processing. While drawing a flow chart, any logical error in the formulation of the problem or in the application of the algorithm can be easily seen and corrected. Some of the symbols used in drawing flow charts are given in Table 1.1.
Table 1.1 Flow Chart Symbols (the symbol graphics are not reproduced here)

- A processing symbol, such as addition or subtraction of two numbers, or movement of data in computer memory.
- A decision-taking symbol. Depending on the answer, yes or no, a particular path is chosen.
- An input symbol, specifying quantities which are to be read before processing can proceed.
- An output symbol, specifying quantities which are to be output.
- A terminating symbol, indicating the start or end of the flow chart. This symbol is also used as a connector.

Example 1.12

Draw a flow chart to find the real roots of the equation a x^2 + b x + c = 0, a, b, c real, using the method

x1 = (-b + sqrt(b^2 - 4ac)) / (2a),  x2 = (-b - sqrt(b^2 - 4ac)) / (2a)

for ten sets of values of a, b, c. The flow chart is given in Fig. 1.1. The flow chart can be easily translated into any high-level language, for example C, Fortran, Pascal, Algol or Basic, and can be executed on the computer.


Fig. 1.1 Flow chart for finding the real roots of the quadratic equation.

1.5 COMPUTER SOFTWARE

The purpose of computer software is to provide a useful computational tool for users. The writing of computer software requires a good understanding of numerical analysis and of the art of programming. Good computer software must satisfy certain criteria: self-starting, accuracy and reliability, a minimum number of levels, good documentation, ease of use, and portability.


Computer software should be self-starting as far as possible. A numerical method very often involves some parameters whose values are determined by the properties of the problem to be solved. For example, in finding the roots of an equation using an iterative method, one or more initial approximations to the root have to be given. The program will be more acceptable if it can be made automatic, in the sense that the program selects the initial approximations itself rather than requiring the user to specify them.

Accuracy and reliability are measures of the performance of an algorithm on all similar problems. Once an error criterion is fixed, the program should produce solutions of all similar problems to that accuracy, and it should be able to prevent and handle most exceptional conditions, such as division by zero and infinite loops.

The structure of the program should avoid many levels. For example, many programs used to find the roots of an equation have three levels:

Program calls zero-finder (parameters, function)
Zero-finder calls function
Function subprogram

The more levels there are in the program, the more time is wasted in interlinking and in the transfer of parameters.

Good documentation and ease of use are two very important criteria. The program must have comment lines or comment paragraphs at various places, giving an explanation and clarification of the method used and the steps involved. Good documentation should clarify what kinds of problems can be solved using the software, what parameters are to be supplied, what accuracy can be achieved, which method has been used, and other relevant details.

The criterion of portability means that the software should be made independent of the computer being used, as far as possible. Since most machines have different hardware configurations, complete independence of the machine may not be possible. However, the aim when writing computer software should be that the same program is able to run on any machine with minimum modifications. Machine-dependent constants, for example the machine epsilon EPS, must be avoided or generated automatically. A standard dialect of the programming language should be used rather than a local dialect.

Most of the numerical methods discussed in this book are available in the form of software, that is, packages of thoroughly tested, portable and self-documented subprograms. The general-purpose packages contain a number of subroutines for solving a variety of mathematical problems that commonly arise in scientific and engineering computation. The special-purpose packages deal with specific problem areas. Many computer installations acquire one or both types of packages and make them available, on-line, to their users. Most of the software packages are available for PCs also.
