
Numerical Methods: Chapter 4

Instructor: Robel Metiku

Chapter 4: Curve Fitting

Chapter Objectives

  • 1. Understand the fundamental difference between regression and interpolation

  • 2. Understand the derivation of linear least-squares regression and be able to assess the reliability of the fit using graphical and quantitative assessments

  • 3. Know how to linearize data by transformation

  • 4. Understand situations where polynomial, multiple, and nonlinear regression are appropriate

  • 5. Be able to recognize general linear models, understand the general matrix formulation of linear least squares, and know how to compute confidence intervals for parameters

  • 6. Understand that there is one and only one polynomial of degree n or less that passes exactly through n + 1 points

  • 7. Know how to derive the first-order Newton’s interpolating polynomial

  • 8. Formulate the Newton and Lagrange interpolating polynomials and understand their respective advantages and disadvantages

4.1 Introduction

Data is often given for discrete values along a continuum. However, you may require estimates at points between the discrete values. This chapter describes techniques to fit curves to such data to obtain intermediate estimates. In addition, you may require a simplified version of a complicated function. One way to do this is to compute values of the function at a number of discrete values along the range of interest. Then, a simpler function may be derived to fit these values. Both of these applications are known as curve fitting.

There are two general approaches for curve fitting that are distinguished from each other on the basis of the amount of error associated with the data. First, where the data exhibits a significant degree of error, the strategy is to derive a single curve that represents the general trend of the data. Because any individual data point may be incorrect, we make no effort to intersect every point. Rather, the curve is designed to follow the pattern of the points taken as a group. One approach of this nature is called least-squares regression (Fig. 4.1a). Second, where the data is known to be very precise, the basic approach is to fit a curve or a series of curves that pass directly through each of the points. Such data usually originates from tables. Examples are values for the density of water or for the heat capacity of gases as a function of temperature. The estimation of values between well-known discrete points is called interpolation (Fig. 4.1b and c).


Fig. 4.1 (a) Least-squares regression, (b) linear interpolation, (c) curvilinear interpolation

The simplest method for fitting a curve to data is to plot the points and then sketch a line that visually conforms to the data. Although this is a valid option when quick estimates are required, the results are dependent on the subjective viewpoint of the person sketching the curve. For example, Fig. 4.1 shows sketches developed from the same set of data by three engineers. The first did not attempt to connect the points, but rather characterized the general upward trend of the data with a straight line (Fig. 4.1a). The second engineer used straight-line segments, or linear interpolation, to connect the points (Fig. 4.1b). This is a very common practice in engineering. If the values are truly close to being linear or are spaced closely, such an approximation provides estimates that are adequate for many engineering calculations. However, where the underlying relationship is highly curvilinear or the data is widely spaced, significant errors can be introduced by such linear interpolation.


The third engineer used curves to try to capture the data (Fig. 4.1c). A fourth or fifth engineer would likely develop alternative fits. Obviously, our goal here is to develop systematic and objective methods for deriving such curves.

Before we proceed to numerical methods for curve fitting, note that the prerequisite mathematical background for least-squares regression is the field of statistics. You should be familiar with the concepts of the mean, standard deviation, residual sum of the squares, normal distribution, and confidence intervals. If you are unfamiliar with these concepts or are in need of a review, check reference materials in your library for a brief introduction to these topics.

Least-squares regression: We will first learn how to fit the "best" straight line through a set of uncertain data points. This technique is called linear regression. Besides discussing how to calculate the slope and intercept of this straight line, we also present quantitative and visual methods for evaluating the validity of the results. In addition to fitting a straight line, we also present a general technique for fitting a "best" polynomial. Thus, you will learn to derive a parabolic, cubic, or higher-order polynomial that optimally fits uncertain data. Linear regression is a subset of this more general approach, which is called polynomial regression. We will also discuss nonlinear regression, which is designed to compute a least-squares fit of a nonlinear equation to data.

An alternative curve-fitting technique called interpolation is also described. Interpolation is used for estimating intermediate values between precise data points. We introduce the basic concept of polynomial interpolation by using straight lines and parabolas to connect points. Then, we develop a generalized procedure for fitting an nth-order polynomial. Two formats are presented for expressing these polynomials in equation form. The first, called Newton's interpolating polynomial, is preferable when the appropriate order of the polynomial is unknown. The second, called the Lagrange interpolating polynomial, has advantages when the proper order is known beforehand.

4.2 Least-Squares Regression

Where substantial error is associated with data, polynomial interpolation is inappropriate and may yield unsatisfactory results when used to predict intermediate values.


Experimental data is often of this type. For example, Fig. 4.2a shows seven experimentally derived data points exhibiting significant variability. Visual inspection of the data suggests a positive relationship between y and x. That is, the overall trend indicates that higher values of y are associated with higher values of x. Now, if a sixth-order interpolating polynomial is fitted to this data (Fig. 4.2b), it will pass exactly through all of the points. However, because of the variability in the data, the curve oscillates widely in the interval between the points. In particular, the interpolated values at x = 1.5 and x = 6.5 appear to be well beyond the range suggested by the data. A more appropriate strategy for such cases is to derive an approximating function that fits the shape or general trend of the data without necessarily matching the individual points. Figure 4.2c illustrates how a straight line can be used to generally characterize the trend of the data without passing through any particular point.


Fig. 4.2 (a) Data exhibiting significant error, (b) polynomial fit oscillating beyond the range of the data, (c) more satisfactory result using the least-squares fit

One way to determine the line in Fig. 4.2c is to visually inspect the plotted data and then sketch a "best" line through the points. Although such "eyeball" approaches have commonsense appeal and are valid for "back-of-the-envelope" calculations, they are deficient because they are arbitrary. That is, unless the points define a perfect straight line (in which case, interpolation would be appropriate), different analysts would draw different lines. To remove this subjectivity, some criterion must be devised to establish a basis for the fit. One way to do this is to derive a curve that minimizes the discrepancy between the data points and the curve. One technique for accomplishing this objective is called least-squares regression.


Linear Regression

The simplest example of a least-squares approximation is fitting a straight line to a set of paired observations: (x_1, y_1), (x_2, y_2), . . . , (x_n, y_n). The mathematical expression for the straight line is

y = a_0 + a_1 x + e    (4.1)

where a_0 and a_1 are coefficients representing the intercept and the slope, respectively, and e is the error, or residual, between the model and the observations, which can be represented by rearranging Eq. (4.1) as

e = y - a_0 - a_1 x

Thus, the error, or residual, is the discrepancy between the true value of y and the approximate value, a_0 + a_1 x, predicted by the linear equation.

Criteria for a “Best” Fit

One strategy for fitting a "best" line through the data would be to minimize the sum of the residual errors for all the available data, as in

\sum_{i=1}^{n} e_i = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)    (4.2)

where n = total number of points. However, this is an inadequate criterion, as illustrated by Fig. 4.3a, which depicts the fit of a straight line to two points. Obviously, the best fit is the line connecting the points. However, any straight line passing through the midpoint of the connecting line (except a perfectly vertical line) results in a minimum value of Eq. (4.2) equal to zero because the errors cancel.


Fig. 4.3 Criteria examples for “best fit” inadequate for regression: (a) minimizes the sum of the residuals, (b) minimizes the sum of the absolute values of the residuals, and (c) minimizes the maximum error of any individual point.


Therefore, another logical criterion might be to minimize the sum of the absolute values of the discrepancies, as in

\sum_{i=1}^{n} |e_i| = \sum_{i=1}^{n} |y_i - a_0 - a_1 x_i|

Figure 4.3b demonstrates why this criterion is also inadequate. For the four points shown, any straight line falling within the dashed lines will minimize the sum of the absolute values. Thus, this criterion also does not yield a unique best fit.

A third strategy for fitting a best line is the minimax criterion. In this technique, the line is chosen that minimizes the maximum distance that an individual point falls from the line. As depicted in Fig. 4.3c, this strategy is ill-suited for regression because it gives undue influence to an outlier, that is, a single point with a large error. It should be noted that the minimax principle is sometimes well-suited for fitting a simple function to a complicated function.

A strategy that overcomes the shortcomings of the aforementioned approaches is to minimize the sum of the squares of the residuals between the measured y and the y calculated with the linear model

S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2    (4.3)

This criterion has a number of advantages, including the fact that it yields a unique line for a given set of data. Before discussing these properties, we will present a technique for determining the values of a_0 and a_1 that minimize Eq. (4.3).

Least-Squares Fit of a Straight Line

To determine values for a_0 and a_1, Eq. (4.3) is differentiated with respect to each coefficient:

\frac{\partial S_r}{\partial a_0} = -2 \sum (y_i - a_0 - a_1 x_i)

\frac{\partial S_r}{\partial a_1} = -2 \sum (y_i - a_0 - a_1 x_i) x_i


Note that we have simplified the summation symbols; unless otherwise indicated, all summations are from i = 1 to n. Setting these derivatives equal to zero will result in a minimum S_r. If this is done, the equations can be expressed as

0 = \sum y_i - \sum a_0 - \sum a_1 x_i

0 = \sum x_i y_i - \sum a_0 x_i - \sum a_1 x_i^2

Now, realizing that \sum a_0 = n a_0, we can express the equations as a set of two simultaneous linear equations with two unknowns (a_0 and a_1):

n a_0 + \left( \sum x_i \right) a_1 = \sum y_i    (4.4)

\left( \sum x_i \right) a_0 + \left( \sum x_i^2 \right) a_1 = \sum x_i y_i    (4.5)

These are called the normal equations. They can be solved simultaneously for

a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left( \sum x_i \right)^2}    (4.6)

This result can then be used in conjunction with Eq. (4.4) to solve for

a_0 = \bar{y} - a_1 \bar{x}    (4.7)

where \bar{y} and \bar{x} are the means of y and x, respectively.

Example 4.1: Linear Regression

Fit a straight line to the x and y values in the first two columns of the table below.

[Table of x and y values not reproduced in this extract.]
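Since the data table is not reproduced here, the short Python sketch below applies Eqs. (4.6) and (4.7) to a small hypothetical data set purely for illustration; the data values and the function name are assumptions, not the Example 4.1 data or a prescribed routine.

```python
# Minimal sketch of linear least-squares regression using Eqs. (4.6) and (4.7).
# The x, y values below are hypothetical placeholders, not the Example 4.1 data.

def linear_regression(x, y):
    n = len(x)
    sx = sum(x)
    sy = sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi ** 2 for xi in x)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)  # slope, Eq. (4.6)
    a0 = sy / n - a1 * (sx / n)                     # intercept, Eq. (4.7)
    return a0, a1

if __name__ == "__main__":
    x = [1, 2, 3, 4, 5, 6, 7]                       # hypothetical data
    y = [0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5]
    a0, a1 = linear_regression(x, y)
    print(f"best-fit line: y = {a0:.4f} + {a1:.4f} x")
```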

Quantification of Error of Linear Regression

Any line other than the one computed in Example 4.1 results in a larger sum of the squares of the residuals. Thus, the line is unique and, in terms of our chosen criterion, is a "best" line through the points. A number of additional properties of this fit can be elucidated by examining more closely the way in which residuals were computed. Recall that the sum of the squares is defined as [Eq. (4.3)]


S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2    (4.8)

Notice the similarity between the residual sum of squares used in statistics and Eq. (4.8). In the statistics case, the square of the residual represented the square of the discrepancy between the data and a single estimate of the measure of central tendency, the mean. In Eq. (4.8), the square of the residual represents the square of the vertical distance between the data and another measure of central tendency, the straight line (Fig. 4.4).


Fig. 4.4 The residual in linear regression represents the vertical distance between a data point and the straight line

The analogy can be extended further for cases where (1) the spread of the points around the line is of similar magnitude along the entire range of the data and (2) the distribution of these points about the line is normal. It can be demonstrated that if these criteria are met, least-squares regression will provide the best (that is, the most likely) estimates of a_0 and a_1 (Draper and Smith, 1981). This is called the maximum likelihood principle in statistics. In addition, if these criteria are met, a "standard deviation" for the regression line can be determined as

S_{y/x} = \sqrt{\frac{S_r}{n - 2}}    (4.9)


where S_{y/x} is called the standard error of the estimate. The subscript notation "y/x" designates that the error is for a predicted value of y corresponding to a particular value of x. Also, notice that we now divide by n − 2 because two data-derived estimates, a_0 and a_1, were used to compute S_r; thus, we have lost two degrees of freedom. Another justification for dividing by n − 2 is that there is no such thing as the "spread of data" around a straight line connecting two points. Thus, for the case where n = 2, Eq. (4.9) yields a meaningless result of infinity.

Just as was the case with the standard deviation, the standard error of the estimate quantifies the spread of the data. However, S_{y/x} quantifies the spread around the regression line, as shown in Fig. 4.5b, in contrast to the original standard deviation S_y, which quantified the spread around the mean (Fig. 4.5a).


Fig. 4.5 Regression data showing (a) the spread of the data around the mean of the dependent variable and (b) the spread of the data around the best-fit line. The reduction in the spread in going from (a) to (b), as indicated by the bell-shaped curves at the right, represents the improvement due to linear regression


Fig. 4.6 Examples of linear regression with (a) small and (b) large residual errors

The above concepts can be used to quantify the "goodness" of our fit. This is particularly useful for comparison of several regressions (Fig. 4.6). To do this, we return to the original data and determine the total sum of the squares around the mean for the dependent variable (in our case, y). This quantity is designated S_t. This is the magnitude of the residual error associated with the dependent variable prior to regression. After performing the regression, we can compute S_r, the sum of the squares of the residuals around the regression line. This characterizes the residual error that remains after the regression. It is, therefore, sometimes called the unexplained sum of the squares. The difference between the two quantities, S_t − S_r, quantifies the improvement or error reduction due to describing the data in terms of a straight line rather than as an average value. Because the magnitude of this quantity is scale-dependent, the difference is normalized to S_t to yield

r^2 = \frac{S_t - S_r}{S_t}    (4.10)

where r^2 is called the coefficient of determination and r is the correlation coefficient. For a perfect fit, S_r = 0 and r = r^2 = 1, signifying that the line explains 100% of the variability of the data. For r = r^2 = 0, S_r = S_t and the fit represents no improvement. An alternative formulation for r that is more convenient for computer implementation is

r = \frac{n \sum x_i y_i - \left( \sum x_i \right)\left( \sum y_i \right)}{\sqrt{n \sum x_i^2 - \left( \sum x_i \right)^2} \, \sqrt{n \sum y_i^2 - \left( \sum y_i \right)^2}}    (4.11)

.. Note: Although the correlation coefficient provides a handy measure of goodness-of-fit, you should be careful not to ascribe more meaning to it than is warranted. Just because r is “close” to 1 does not mean that the fit is necessarily “good.” For example, it is possible to obtain a relatively high value of r when the underlying relationship between y and x is not even linear.


Example 4.2: Estimation of error for the linear least-squares fit

Compute the total standard deviation, the standard error of the estimate, and the correlation coefficient for the data in Example 4.1.
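For illustration, a minimal Python sketch of the error measures in Eqs. (4.8) through (4.10) is given below. It assumes a slope and intercept obtained from a prior fit (for instance, the hypothetical linear_regression helper sketched after Example 4.1); it is not the worked solution of Example 4.2.

```python
import math

# Sketch of the error measures for a fitted line y = a0 + a1*x.

def regression_errors(x, y, a0, a1):
    n = len(y)
    y_mean = sum(y) / n
    st = sum((yi - y_mean) ** 2 for yi in y)                    # total sum of squares around the mean
    sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))  # residual sum of squares, Eq. (4.8)
    s_yx = math.sqrt(sr / (n - 2))                              # standard error of the estimate, Eq. (4.9)
    r2 = (st - sr) / st                                         # coefficient of determination, Eq. (4.10)
    return st, sr, s_yx, r2
```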

Linearization of Non-linear Relationships


Linear regression provides a powerful technique for fitting a best line to data. However, it is predicated on the fact that the relationship between the dependent and independent variables is linear. This is not always the case, and the first step in any regression analysis should be to plot and visually inspect the data to ascertain whether a linear model applies. For example, Fig. 4.7 shows data that is obviously curvilinear. In some such cases, techniques like polynomial regression (Sec. 4.3) are appropriate. In others, transformations can be used to express the data in a form that is compatible with linear regression.


Fig. 4.7 (a) Data ill-suited for linear least-squares regression, (b) parabola is preferable

One example is the exponential model

y = \alpha_1 e^{\beta_1 x}    (4.12)

where α_1 and β_1 are constants. This model is used in many fields of engineering to characterize quantities that increase (positive β_1) or decrease (negative β_1) at a rate that is directly proportional to their own magnitude. For example, population growth or radioactive decay can exhibit such behavior. As depicted in Fig. 4.8a, the equation represents a nonlinear relationship (for β_1 ≠ 0) between y and x.

Another example of a nonlinear model is the simple power equation

y = \alpha_2 x^{\beta_2}    (4.13)

where α_2 and β_2 are constant coefficients. This model has wide applicability in all fields of engineering. As depicted in Fig. 4.8b, the equation (for β_2 ≠ 0 or 1) is nonlinear.

A third example of a nonlinear model is the saturation-growth-rate equation

y = \alpha_3 \frac{x}{\beta_3 + x}    (4.14)


where α_3 and β_3 are constant coefficients. This model, which is particularly well-suited for characterizing population growth rate under limiting conditions, also represents a nonlinear relationship between y and x (Fig. 4.8c) that levels off, or "saturates," as x increases.


Fig. 4.8 (a) The exponential equation, (b) the power equation, and (c) the saturation-growth-rate equation. Parts (d), (e), and (f) are linearized versions of these equations that result from simple transformations.

Nonlinear regression techniques are available to fit these equations to experimental data directly. However, a simpler alternative is to use mathematical manipulations to transform the equations into a linear form. Then, simple linear regression can be employed to fit the equations to the data. For example, Eq. (4.12) can be linearized by taking its natural logarithm to yield

\ln y = \ln \alpha_1 + \beta_1 x \ln e


But because ln e = 1,

\ln y = \ln \alpha_1 + \beta_1 x    (4.15)

Thus, a plot of ln y versus x will yield a straight line with a slope of β_1 and an intercept of ln α_1 (Fig. 4.8d). Equation (4.13) is linearized by taking its base-10 logarithm to give

\log y = \beta_2 \log x + \log \alpha_2    (4.16)

Thus, a plot of log y versus log x will yield a straight line with a slope of β_2 and an intercept of log α_2 (Fig. 4.8e). Equation (4.14) is linearized by inverting it to give

\frac{1}{y} = \frac{\beta_3}{\alpha_3} \frac{1}{x} + \frac{1}{\alpha_3}    (4.17)

Thus, a plot of 1/y versus 1/x will be linear, with a slope of β_3/α_3 and an intercept of 1/α_3 (Fig. 4.8f).

In their transformed forms, these models can be fit with linear regression to evaluate the constant coefficients. They can then be transformed back to their original form and used for predictive purposes. Example 4.3 illustrates this procedure for Eq. (4.13).

Example 4.3: Linearization of a Power Equation

Fit Eq. (4.13) to the data given below using a logarithmic transformation of the data.

Data to be fit to the power equation

[Table of x and y values not reproduced in this extract.]
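Because the data table is not reproduced, the sketch below fits the power model of Eq. (4.13) to hypothetical data by regressing log y on log x, as in Eq. (4.16), and then transforming the intercept back to α_2. The data values are assumptions for illustration only.

```python
import math

# Sketch of fitting y = alpha2 * x**beta2 via the log-log transformation of Eq. (4.16).
# The x, y values are hypothetical placeholders, not the Example 4.3 data.

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.5, 1.7, 3.4, 5.7, 8.4]

u = [math.log10(xi) for xi in x]   # log x
v = [math.log10(yi) for yi in y]   # log y

n = len(u)
su, sv = sum(u), sum(v)
suv = sum(ui * vi for ui, vi in zip(u, v))
suu = sum(ui ** 2 for ui in u)

beta2 = (n * suv - su * sv) / (n * suu - su ** 2)   # slope of the log-log fit, Eq. (4.6)
alpha2 = 10 ** (sv / n - beta2 * su / n)            # 10**(intercept), Eq. (4.7)

print(f"power fit: y = {alpha2:.4f} * x**{beta2:.4f}")
```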

General Comments on Linear Regression

Before proceeding to curvilinear and multiple linear regression, we must emphasize the introductory nature of the foregoing material on linear regression. We have focused on the simple derivation and practical use of equations to fit data. You should be cognizant of the fact that there are theoretical aspects of regression that are of practical importance but are beyond the scope of this course. For example, some statistical assumptions that are inherent in the linear least-squares procedures are

  • 1. Each x has a fixed value; it is not random and is known without error.

  • 2. The y values are independent random variables and all have the same variance.

  • 3. The y values for a given x must be normally distributed.

Such assumptions are relevant to the proper derivation and use of regression. For example, the first assumption means that (1) the x values must be error-free and (2) the regression of y versus x is not the same as x versus y.

4.3 Polynomial Regression

In Sec. 4.2, a procedure was developed to derive the equation of a straight line using the least-squares criterion. Some engineering data, although exhibiting a marked pattern such as seen in Fig. 4.7, is poorly represented by a straight line. For these cases, a curve would be better suited to fit the data. As discussed in the previous section, one method to accomplish this objective is to use transformations. Another alternative is to fit polynomials to the data using polynomial regression.

The least-squares procedure can be readily extended to fit the data to a higher-order polynomial. For example, suppose that we fit a second-order polynomial or quadratic:

y = a_0 + a_1 x + a_2 x^2 + e

For this case the sum of the squares of the residuals is

S_r = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i - a_2 x_i^2)^2    (4.18)

Following the procedure of the previous section, we take the derivative of Eq. (4.18) with respect to each of the unknown coefficients of the polynomial, as in

\frac{\partial S_r}{\partial a_0} = -2 \sum (y_i - a_0 - a_1 x_i - a_2 x_i^2)

\frac{\partial S_r}{\partial a_1} = -2 \sum x_i (y_i - a_0 - a_1 x_i - a_2 x_i^2)

\frac{\partial S_r}{\partial a_2} = -2 \sum x_i^2 (y_i - a_0 - a_1 x_i - a_2 x_i^2)

These equations can be set equal to zero and rearranged to develop the following set of normal equations:

n a_0 + \left( \sum x_i \right) a_1 + \left( \sum x_i^2 \right) a_2 = \sum y_i

\left( \sum x_i \right) a_0 + \left( \sum x_i^2 \right) a_1 + \left( \sum x_i^3 \right) a_2 = \sum x_i y_i

\left( \sum x_i^2 \right) a_0 + \left( \sum x_i^3 \right) a_1 + \left( \sum x_i^4 \right) a_2 = \sum x_i^2 y_i    (4.19)

where all summations are from i = 1 through n. Note that the above three equations are linear and have three unknowns: a_0, a_1, and a_2. The coefficients of the unknowns can be calculated directly from the observed data. For this case, we see that the problem of determining a least-squares second-order polynomial is equivalent to solving a system of three simultaneous linear equations.

The two-dimensional case can be easily extended to an mth-order polynomial as

y = a_0 + a_1 x + a_2 x^2 + \cdots + a_m x^m + e

The foregoing analysis can be easily extended to this more general case. Thus, we can recognize that determining the coefficients of an mth-order polynomial is equivalent to solving a system of m + 1 simultaneous linear equations. For this case, the standard error is formulated as

S_{y/x} = \sqrt{\frac{S_r}{n - (m + 1)}}    (4.20)

This quantity is divided by n − (m + 1) because (m + 1) data-derived coefficients, a_0, a_1, . . . , a_m, were used to compute S_r; thus, we have lost m + 1 degrees of freedom. In addition to the standard error, a coefficient of determination can also be computed for polynomial regression with Eq. (4.10).


Example 4.4: Polynomial Regression

Fit a second-order polynomial to the data in the first two columns of the table below.

[Table of x and y values not reproduced in this extract.]
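Since the table is not reproduced, the sketch below sets up and solves the normal equations of Eq. (4.19) for a quadratic fit to hypothetical data, using NumPy to solve the 3-by-3 linear system, and then evaluates Eqs. (4.20) and (4.10). The data values are assumptions for illustration, not the Example 4.4 data.

```python
import numpy as np

# Sketch of second-order polynomial regression via the normal equations, Eq. (4.19).
# The x, y values are hypothetical placeholders, not the Example 4.4 data.

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 7.7, 13.6, 27.2, 40.9, 61.1])
n = len(x)

# Build the 3x3 system of Eq. (4.19): A @ [a0, a1, a2] = b
A = np.array([
    [n,            x.sum(),      (x**2).sum()],
    [x.sum(),      (x**2).sum(), (x**3).sum()],
    [(x**2).sum(), (x**3).sum(), (x**4).sum()],
])
b = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])

a0, a1, a2 = np.linalg.solve(A, b)

# Error measures: Eq. (4.20) with m = 2, and Eq. (4.10)
residuals = y - (a0 + a1 * x + a2 * x**2)
sr = (residuals**2).sum()
st = ((y - y.mean())**2).sum()
s_yx = np.sqrt(sr / (n - 3))
r2 = (st - sr) / st

print(f"y = {a0:.4f} + {a1:.4f} x + {a2:.4f} x^2")
print(f"s_y/x = {s_yx:.4f}, r^2 = {r2:.4f}")
```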

4.4 Interpolation

You will frequently have occasion to estimate intermediate values between precise data

points. The most common method used for this purpose is polynomial interpolation.

Recall that the general formula for an nth-order polynomial is

f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n    (4.21)

For n + 1 data points, there is one and only one polynomial of order n that passes through all the points. For example, there is only one straight line (that is, a first-order polynomial) that connects two points (Fig. 4.11a).

Similarly, only one parabola connects a set of three points (Fig. 4.11b). Polynomial interpolation consists of determining the unique nth-order polynomial that fits n + 1 data points. This polynomial then provides a formula to compute intermediate values.

Although there is one and only one nth-order polynomial that fits n + 1 points, there are a variety of mathematical formats in which this polynomial can be expressed. In this chapter, we will describe two alternatives that are well-suited for computer implementation: the Newton and the Lagrange polynomials.



Fig. 4.11 Examples of interpolating polynomials: (a) first-order (linear) connecting two points, (b) second-order (quadratic or parabolic) connecting three points, and (c) third-order (cubic) connecting four points.

Newton's Divided-Difference Interpolating Polynomials

As stated above, there are a variety of alternative forms for expressing an interpolating polynomial. Newton's divided-difference interpolating polynomial is among the most popular and useful forms. Before presenting the general equation, we will introduce the first- and second-order versions because of their simple visual interpretation.

Linear Interpolation

The simplest form of interpolation is to connect two data points with a straight line. This technique, called linear interpolation, is depicted graphically in Fig. 4.12. Using similar triangles,

\frac{f_1(x) - f(x_0)}{x - x_0} = \frac{f(x_1) - f(x_0)}{x_1 - x_0}

which can be rearranged to yield

f_1(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}(x - x_0)    (4.22)

which is a linear-interpolation formula. The notation f_1(x) designates that this is a first-order interpolating polynomial. Notice that besides representing the slope of the line connecting the points, the term [f(x_1) − f(x_0)]/(x_1 − x_0) is a finite-divided-difference approximation of the first derivative. In general, the smaller the interval between the data points, the better the approximation. This is due to the fact that, as the interval decreases, a continuous function will be better approximated by a straight line. This characteristic is demonstrated in the following example.


Fig. 4.12 Graphical depiction of linear interpolation; the shaded areas indicate the similar triangles used to derive the linear-interpolation formula [Eq. (4.22)].

Example 4.5: Linear Interpolation

Estimate the natural logarithm of 2 using linear interpolation. First, perform the computation by interpolating between ln 1 = 0 and ln 6 = 1.791759. Then, repeat the procedure, but use a smaller interval from ln 1 to ln 4 (1.386294). Note that the true value of ln 2 is 0.6931472.
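The data in Example 4.5 is given explicitly, so the short sketch below simply applies Eq. (4.22) for both intervals and compares the results with the true value; the function name is an illustrative choice, not a prescribed routine.

```python
import math

# Sketch of linear interpolation, Eq. (4.22), applied to Example 4.5.

def linear_interp(x0, f0, x1, f1, x):
    return f0 + (f1 - f0) / (x1 - x0) * (x - x0)

# Interpolate ln 2 between ln 1 and ln 6, then between ln 1 and ln 4.
wide = linear_interp(1.0, 0.0, 6.0, 1.791759, 2.0)
narrow = linear_interp(1.0, 0.0, 4.0, 1.386294, 2.0)

print(f"estimate using ln 1 and ln 6: {wide:.7f}")
print(f"estimate using ln 1 and ln 4: {narrow:.7f}")
print(f"true value of ln 2:           {math.log(2):.7f}")
```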

Quadratic Interpolation

The error in Example 4.5 resulted from our approximating a curve with a straight line.

Consequently, a strategy for improving the estimate is to introduce some curvature into the line connecting the points. If three data points are available, this can be accomplished with a second-order polynomial (also called a quadratic polynomial or a parabola). A particularly convenient form for this purpose is

f_2(x) = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1)    (4.23)

Note that although Eq. (4.23) might seem to differ from the general polynomial [Eq. (4.21)], the two equations are equivalent.


This can be shown by multiplying the terms in Eq. (4.23) to yield

f_2(x) = b_0 + b_1 x - b_1 x_0 + b_2 x^2 + b_2 x_0 x_1 - b_2 x x_0 - b_2 x x_1

or, collecting terms,

f_2(x) = a_0 + a_1 x + a_2 x^2

where

a_0 = b_0 - b_1 x_0 + b_2 x_0 x_1
a_1 = b_1 - b_2 x_0 - b_2 x_1
a_2 = b_2

Thus, Eqs. (4.21) and (4.23) are alternative, equivalent formulations of the unique second-order polynomial joining the three points. A simple procedure can be used to determine the values of the coefficients. For b_0, Eq. (4.23) with x = x_0 can be used to compute

b_0 = f(x_0)    (4.24)

Equation (4.24) can be substituted into Eq. (4.23), which can be evaluated at x = x_1 for

b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}    (4.25)

Finally, Eqs. (4.24) and (4.25) can be substituted into Eq. (4.23), which can be evaluated at x = x_2 and solved (after some algebraic manipulations) for

b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}    (4.26)

Notice that, as was the case with linear interpolation, b_1 still represents the slope of the line connecting points x_0 and x_1. Thus, the first two terms of Eq. (4.23) are equivalent to linear interpolation from x_0 to x_1, as specified previously in Eq. (4.22). The last term, b_2(x − x_0)(x − x_1), introduces the second-order curvature into the formula.

Before illustrating how to use Eq. (4.23), we should examine the form of the coefficient b_2. It is very similar to a finite-divided-difference approximation of the second derivative. Thus, Eq. (4.23) is beginning to manifest a structure that is very similar to the Taylor series expansion. This observation will be explored further when we relate Newton's interpolating polynomials to the Taylor series later. But first, we will do an example that shows how Eq. (4.23) is used to interpolate among three points.

Example 4.6: Quadratic Interpolation

Fit a second-order polynomial to the three points used in Example 4.5:

x_0 = 1,  f(x_0) = 0
x_1 = 4,  f(x_1) = 1.386294
x_2 = 6,  f(x_2) = 1.791759

Use the polynomial to evaluate ln 2.
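A short sketch of Eqs. (4.24) through (4.26) applied to the three points of Example 4.6 is given below; it computes b_0, b_1, and b_2 and then evaluates Eq. (4.23) at x = 2. The variable names are illustrative choices.

```python
# Sketch of quadratic (Newton) interpolation, Eqs. (4.23)-(4.26), for Example 4.6.

x0, f0 = 1.0, 0.0
x1, f1 = 4.0, 1.386294
x2, f2 = 6.0, 1.791759

b0 = f0                                              # Eq. (4.24)
b1 = (f1 - f0) / (x1 - x0)                           # Eq. (4.25)
b2 = ((f2 - f1) / (x2 - x1) - b1) / (x2 - x0)        # Eq. (4.26)

x = 2.0
f2x = b0 + b1 * (x - x0) + b2 * (x - x0) * (x - x1)  # Eq. (4.23)
print(f"quadratic estimate of ln 2: {f2x:.7f}")
```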

General Form of Newton’s Interpolating Polynomials

The preceding analysis can be generalized to fit an nth-order polynomial to n + 1 data points. The nth-order polynomial is

f_n(x) = b_0 + b_1 (x - x_0) + \cdots + b_n (x - x_0)(x - x_1) \cdots (x - x_{n-1})    (4.27)

As was done previously with the linear and quadratic interpolations, data points can be used to evaluate the coefficients b_0, b_1, . . . , b_n. For an nth-order polynomial, n + 1 data points are required: [x_0, f(x_0)], [x_1, f(x_1)], . . . , [x_n, f(x_n)].

We use these data points and the following equations to evaluate the coefficients:

b_0 = f(x_0)    (4.28)

b_1 = f[x_1, x_0]    (4.29)

b_2 = f[x_2, x_1, x_0]    (4.30)

. . .

b_n = f[x_n, x_{n-1}, . . . , x_1, x_0]    (4.31)

where the bracketed function evaluations are finite divided differences. For example, the first finite divided difference is represented generally as

f[x_i, x_j] = \frac{f(x_i) - f(x_j)}{x_i - x_j}    (4.32)

The second finite divided difference, which represents the difference of two first divided differences, is expressed generally as

f[x_i, x_j, x_k] = \frac{f[x_i, x_j] - f[x_j, x_k]}{x_i - x_k}    (4.33)


Similarly, the nth finite divided difference is

f[x_n, x_{n-1}, . . . , x_1, x_0] = \frac{f[x_n, x_{n-1}, . . . , x_1] - f[x_{n-1}, x_{n-2}, . . . , x_0]}{x_n - x_0}    (4.34)


Fig. 4.15 The use of quadratic interpolation to estimate ln 2; the linear interpolation from x = 1 to 4 is also included for comparison

These differences can be used to evaluate the coefficients in Eqs. (4.28) through (4.31), which can then be substituted into Eq. (4.27) to yield the interpolating polynomial

f_n(x) = f(x_0) + (x - x_0) f[x_1, x_0] + (x - x_0)(x - x_1) f[x_2, x_1, x_0] + \cdots + (x - x_0)(x - x_1) \cdots (x - x_{n-1}) f[x_n, x_{n-1}, . . . , x_0]    (4.35)

which is called Newton's divided-difference interpolating polynomial. It should be noted that it is not necessary that the data points used in Eq. (4.35) be equally spaced or that the abscissa values necessarily be in ascending order, as illustrated in the following example. Also, notice how Eqs. (4.32) through (4.34) are recursive, that is, higher-order differences are computed by taking differences of lower-order differences (Fig. 4.15).

Example 4.7: Newton’s Divided-Difference Interpolating Polynomials

In Example 4.6, data points at x_0 = 1, x_1 = 4, and x_2 = 6 were used to estimate ln 2 with a parabola. Now, adding a fourth point [x_3 = 5; f(x_3) = 1.609438], estimate ln 2 with a third-order Newton's interpolating polynomial.
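A sketch of a general Newton divided-difference routine [Eqs. (4.32) through (4.35)] applied to the four points of Example 4.7 is given below; the divided-difference table is built recursively and the polynomial is evaluated at x = 2. The function name is an illustrative choice.

```python
# Sketch of Newton's divided-difference interpolating polynomial, Eq. (4.35),
# applied to the four points of Example 4.7.

def newton_interp(xs, ys, x):
    """Evaluate the Newton interpolating polynomial through (xs, ys) at x."""
    n = len(xs)
    # Divided-difference table: row 0 holds f(x_i); row j holds the jth divided differences.
    fdd = [list(ys)]
    for j in range(1, n):
        prev = fdd[j - 1]
        fdd.append([(prev[i + 1] - prev[i]) / (xs[i + j] - xs[i]) for i in range(n - j)])

    # Accumulate Eq. (4.35): f(x0) + (x - x0) f[x1, x0] + ...
    result = fdd[0][0]
    product = 1.0
    for j in range(1, n):
        product *= (x - xs[j - 1])
        result += fdd[j][0] * product
    return result

xs = [1.0, 4.0, 6.0, 5.0]
ys = [0.0, 1.386294, 1.791759, 1.609438]
print(f"third-order estimate of ln 2: {newton_interp(xs, ys, 2.0):.7f}")
```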

Lagrange Interpolating Polynomials

The Lagrange interpolating polynomial is simply a reformulation of the Newton polynomial that avoids the computation of divided differences. It can be represented concisely as

f_n(x) = \sum_{i=0}^{n} L_i(x) f(x_i)    (4.36)


where

L_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x - x_j}{x_i - x_j}    (4.37)

where Π designates the "product of." For example, the linear version (n = 1) is

f_1(x) = \frac{x - x_1}{x_0 - x_1} f(x_0) + \frac{x - x_0}{x_1 - x_0} f(x_1)    (4.38)

and the second-order version is

f_2(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} f(x_0) + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} f(x_1) + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} f(x_2)    (4.39)

Equation (4.36) can be derived directly from Newton's polynomial. However, the rationale underlying the Lagrange formulation can be grasped directly by realizing that each term L_i(x) will be 1 at x = x_i and 0 at all other sample points. Thus, each product L_i(x) f(x_i) takes on the value of f(x_i) at the sample point x_i. Consequently, the summation of all the products designated by Eq. (4.36) is the unique nth-order polynomial that passes exactly through all n + 1 data points.

Example 4.8: Lagrange Interpolating Polynomials

Use a Lagrange interpolating polynomial of the first and second order to evaluate ln 2 on the basis of the data given in Example 4.6:

x_0 = 1,  f(x_0) = 0
x_1 = 4,  f(x_1) = 1.386294
x_2 = 6,  f(x_2) = 1.791760
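A minimal sketch of Eqs. (4.36) and (4.37) evaluated for the Example 4.8 data is given below; it prints both the first- and second-order estimates of ln 2. The function name is an illustrative choice.

```python
# Sketch of Lagrange interpolation, Eqs. (4.36)-(4.37), for the Example 4.8 data.

def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                L *= (x - xj) / (xi - xj)   # Eq. (4.37)
        total += L * yi                     # Eq. (4.36)
    return total

xs = [1.0, 4.0, 6.0]
ys = [0.0, 1.386294, 1.791760]

print(f"first-order estimate of ln 2:  {lagrange_interp(xs[:2], ys[:2], 2.0):.7f}")
print(f"second-order estimate of ln 2: {lagrange_interp(xs, ys, 2.0):.7f}")
```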