
International Journal of Computational and Applied Mathematics. ISSN 1819-4966, Volume 5, Number 4 (2010), pp. 459-478.
Research India Publications, http://www.ripublication.com/ijcam.htm

Response Surface Approximation using Sparse Grid Design


P. Beena and Ranjan Ganguli
Research Assistant and Professor, respectively, Department of Aerospace Engineering, Indian Institute of Science, Bangalore-560012, India
E-mail: beena@aero.iisc.ernet.in, ganguli@aero.iisc.ernet.in

Abstract

An approach to simplify an optimization problem is to create metamodels, or surrogates, of the objective function with respect to the design variables. Response surface approximations yield low-order polynomial metamodels which are very effective in engineering analysis and optimization. However, response surface approximations based on the design of experiments require a large number of sampling points. In this paper, response surface approximations are investigated using the Sparse Grid Design (SGD). SGD requires significantly fewer analysis runs than the full grid design for the construction of response surfaces. It is found, using several test functions, that the SGD is able to capture the basic trends of the analysis using second-order polynomial response surfaces and gives a good estimate of the actual minimum point.

Keywords: Response surface approximation; Sparse grids; Metamodels; Polynomial response surfaces; Function approximations; Optimization.

Introduction
For a thorough understanding of physical, economic, and other complex systems, developing mathematical models and performing numerical simulations play a key role [1]. With the increasing capabilities of modern computers, these models are becoming more sophisticated and realistic. However, it is difficult to link optimization algorithms to complex computational models. Considerable research has therefore been done on using polynomial response surface approximations, based on sampling points from the theory of design of experiments, to decouple the analysis and optimization problems. A drawback is that a large number of sampling points is needed by the design of experiments.


In this paper, we propose to investigate the use of sparse grids [2] for response surface construction. Sparse grids provide sampling points which mitigate the curse of dimensionality. Regression models are used to fit the data and construct response surfaces for the objective function. Once the response surfaces are obtained, the optimum can be found at low cost because the response surfaces are merely algebraic expressions.

Response surface approximations are a collection of statistical and mathematical techniques originally created for developing, improving and optimizing products and processes. They are the most widely used metamodels in optimization. The response surface method constructs global approximations to system behaviour based on results calculated at various points in the design space. The response surface seeks a functional relationship between an output variable and a set of input variables. Typically, second-order polynomials are used for response surfaces, although some studies have also used higher-order polynomial approximations. Because response surface approximations are global in nature, they have witnessed widespread application in recent years [3-5]. An excellent introduction to response surface methods can be found in reference [6]. An important objective in response surface construction is to achieve an acceptable level of accuracy while minimizing the computational effort, i.e., the number of function evaluations [5].

Sparse grids have been developed to approximate general smooth functions of many variables. They provide a means of reducing the dimensionality burden of high-dimensional function approximation. The advantage of sparse grids over other grid-based methods is that they use fewer parameters, which makes the sparse grid approach particularly attractive for the numerical solution of moderate- and higher-dimensional problems.
The sparse grids approach was first described by Smolyak [7] and adapted for partial differential equations by Zenger [8]. Subsequently, Griebel et al. [9] developed an algorithm known as the combination technique, prescribing how a collection of simple grids can be combined to approximate high-dimensional functions. More recently, Garcke and Griebel [10, 11] demonstrated the feasibility of sparse grids in data mining by using the combination technique in predictive modelling. Sparse grids have also been used successfully for integral equations [12, 13] and for interpolation and approximation [14-18]. Furthermore, there is work on stochastic differential equations [19-20] and on differential forms in the context of the Maxwell equations [21], and a wavelet-based sparse grid discretization of parabolic problems is treated in [22]. A tutorial introduction to sparse grids is available in [2], and sparse grids are studied in detail in [23-24].

Sparse Grids
The sparse grid method is a special discretization technique. It is based on a hierarchical basis [25-27], a representation of a discrete function space which is equivalent to the conventional nodal basis, and on a sparse tensor product construction. Sparse grids represent a very flexible predictive modeling and analysis system [28]. Sparse grid methods are known under various names, such as hyperbolic cross points, discrete blending, boolean interpolation or splitting extrapolation, as the concept is closely related to hyperbolic crosses [29-31], boolean methods [32-33] and splitting extrapolation methods [34].

The distribution of the points in a sparse grid is shown in Figure 1. The SGD for a problem of dimension d=2 and level n=2 is shown in Figure 1(a). The number of degrees of freedom in each coordinate direction is determined by N = 2^n + 1; thus, an n=2 problem has five degrees of freedom in each direction. The SGD for a 3-D problem (d=3) with level n=2 is shown in Figure 1(b). The SGD for a problem with dimension d=2 and level n=4 is shown in Figure 1(c); an n=4 problem has seventeen degrees of freedom in each direction. The SGD for a 3-D problem (d=3) with level n=4 is shown in Figure 1(d). Thus, as the dimension and the level are increased, the total number of points in an SGD and their distribution change.

[Figure 1 panels: (a) n=2, d=2: 13 nodes; (b) n=2, d=3: 25 nodes; (c) n=4, d=2: 65 nodes; (d) n=4, d=3: 177 nodes. Axis ranges are 0 to 1 in each coordinate.]

Figure 1: Distribution of points in a sparse grid.

A comparison of the experimental runs required by factorial designs and sparse grids is given in Table 1. It can be seen from the table that the sparse grid approach overcomes the disadvantage of the full factorial design as d increases: it employs just 221 experimental runs for d = 10, as against roughly 10^7 for the five-level full factorial design.


Table 1: Number of experiments required by sparse grid, full factorial and CCD.

Method                          n    N    d=2   d=3   d=4   d=10     d=100
Sparse grid                     1    3    5     7     9     21       201
Full factorial                  -    2    4     8     16    1024     ~10^30
Central composite design (CCD)  -    2    9     15    25    1045     ~10^30
Sparse grid                     2    5    13    25    41    221      20201
Full factorial                  -    5    25    125   625   ~10^7    ~7x10^69
Central composite design (CCD)  -    5    36    141   646   ~10^7    ~7x10^69
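The factorial and CCD columns of Table 1 follow from closed-form counts: an L-level full factorial needs L^d runs, and a two-level CCD needs 2^d + 2d + 1 runs (factorial corners, axial points, and a center point). A small sketch reproducing the d = 10 column (the function names here are ours):

```python
def full_factorial_runs(levels: int, d: int) -> int:
    """L^d runs for an L-level full factorial design in d variables."""
    return levels ** d

def ccd_runs(d: int) -> int:
    """Central composite design: 2^d corners + 2d axial points + 1 center."""
    return 2 ** d + 2 * d + 1

# reproduce the d = 10 column of Table 1
print(full_factorial_runs(2, 10))  # 1024
print(ccd_runs(10))                # 1045
print(full_factorial_runs(5, 10))  # 9765625, i.e. roughly 10^7
```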

Selection of a suitable sparse grid is essential for response surface approximation. We have examined three possibilities to construct the sparse grid:

1. The classical maximum-norm-based sparse grid H^M, including the boundary. The points x_j^i comprising the set of support nodes X^i are defined by

   m_i = 2^i + 1,
   x_j^i = (j - 1)/(m_i - 1) for j = 1, ..., m_i and i >= 1.

2. The maximum-norm-based sparse grid excluding the points on the boundary, denoted by H^NB. Here the x_j^i are defined by

   m_i = 2^i - 1,
   x_j^i = j/(m_i + 1) for j = 1, ..., m_i.

3. The Clenshaw-Curtis-type sparse grid H^CC, with equidistant nodes, as described in [35-36]. Here the x_j^i are defined by

   m_i = 1 if i = 1, and m_i = 2^(i-1) + 1 if i > 1,
   x_j^i = 0.5 (j = 1) if m_i = 1, and x_j^i = (j - 1)/(m_i - 1) for j = 1, ..., m_i if m_i > 1.

Figure 2 illustrates the grids H^M_{4,2}, H^NB_{4,2} and H^CC_{4,2} for d = 2. Figure 3 illustrates the grids H^M_{4,3}, H^NB_{4,3} and H^CC_{4,3} for d = 3. It can be seen from these figures that the number of grid points grows much faster with increasing n (level) and d (dimension) for H^M, while the number of points of the Clenshaw-Curtis grid H^CC increases the slowest. In this paper, we use the Clenshaw-Curtis sparse grids for the study of response surfaces.


Figure 2: Different sparse grids for d=2.

Figure 3: Different sparse grids for d=3.

To further illustrate how the number of nodes grows with n and d depending on the chosen grid type, we have included Table 2. Note that the grid H^M is not suited for higher-dimensional problems, since at least 3^d support nodes are needed.
Table 2: Comparison of nodes in different sparse grids.

         d=2                     d=4                       d=8
n    M      NB     CC       M      NB     CC        M       NB      CC
0    9      1      1        81     1      1         6561    1       1
1    21     5      5        297    9      9         41553   17      17
2    49     17     13       945    49     41        1.9e5   161     145
3    113    49     29       2769   209    137       7.7e5   1121    849
4    257    129    65       7681   769    401       2.8e6   6401    3937
5    577    321    145      20481  2561   1105      9.3e6   31745   15713
6    1281   769    321      52993  7937   2929      3.0e7   141569  56737
7    2817   1793   705      1.3e5  23297  7537      9.1e7   5.8e5   1.9e5
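The Clenshaw-Curtis column of Table 2 can be reproduced by direct enumeration: form the union, over all multi-indices (i_1, ..., i_d) with i_k >= 1 and i_1 + ... + i_d <= n + d, of the tensor products of the 1-D node sets defined above, and count the distinct points. A Python sketch (the function names are ours):

```python
from itertools import product

def cc_nodes_1d(i):
    """1-D Clenshaw-Curtis support nodes for level index i >= 1."""
    if i == 1:
        return [0.5]
    m = 2 ** (i - 1) + 1
    return [j / (m - 1) for j in range(m)]

def cc_sparse_grid(n, d):
    """Distinct nodes of the Clenshaw-Curtis sparse grid of level n in d dimensions."""
    pts = set()
    for idx in product(range(1, n + 2), repeat=d):   # i_k <= n + 1 suffices
        if sum(idx) <= d + n:
            pts.update(product(*(cc_nodes_1d(i) for i in idx)))
    return pts

# reproduce a few Clenshaw-Curtis entries of Table 2 and Figure 1
print(len(cc_sparse_grid(2, 2)), len(cc_sparse_grid(4, 2)), len(cc_sparse_grid(2, 4)))
# 13 65 41
```

Since the 1-D nodes are dyadic fractions, identical points generated by different multi-indices coincide exactly in floating point, so the set-based deduplication is reliable.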


Response Surface Method Using Sparse Grids


Response surfaces are smooth analytical functions that are most often approximated by low-order polynomials. The approximation can be expressed as

y(x) = f(x) + ε    (1)

where y(x) is the unknown function of interest, f(x) is a known polynomial function of x, and ε is a random error term. If the response is well modeled by a linear function of the k independent variables, the approximating function is the first-order model

f = β0 + β1 x1 + β2 x2 + ... + βk xk + ε    (2)

When nonlinearities are present, a second-order model is used.


y = β0 + Σ_{i=1}^{k} βi xi + Σ_{i=1}^{k} βii xi^2 + Σ_{i<j} βij xi xj + ε    (3)

The parameters β0, βi, βii and βij of the polynomials in Equations (2) and (3) are determined through least-squares regression, which minimizes the sum of the squares of the deviations of the predicted values, ŷ(x), from the actual values, y(x). For example, the second-order response surface for two design variables is

ŷ(x1, x2) = β0 + β1 x1 + β2 x2 + β11 x1^2 + β12 x1 x2 + β22 x2^2    (4)

The sparse grid for two design variables with 13 nodes (data points) is shown in Figure 4. The data points lie at five levels for each variable. An important consideration in the choice of nodes is their distribution over the design space: a poor distribution can strongly affect the fidelity of the fitted response surface.

Figure 4: Sparse grid with 13 data points.


Once the design points are obtained, we need a least-squares response surface. To evaluate the parameters β0, etc., Equations (2) and (3) can be written as

y = Xβ + ε    (5)

where y is an n × 1 vector of responses and X is an n × p matrix built from the sample data points. For the second-order model in two variables,

        | 1  x11  x12  x11^2  x12^2  x11 x12 |
        | 1  x21  x22  x21^2  x22^2  x21 x22 |
    X = | .   .    .     .      .       .    |    (6)
        | 1  xn1  xn2  xn1^2  xn2^2  xn1 xn2 |

Here β is a p × 1 vector of regression parameters, ε is an n × 1 vector of error terms, n is the number of design points, and p is the number of regression coefficients. The parameters β0, βi, βii and βij are obtained by minimizing the least-squares error from Equation (5) [6]:
L = Σ_{i=1}^{n} εi^2 = ε^T ε = (y − Xβ)^T (y − Xβ) = y^T y − 2 β^T X^T y + β^T X^T X β    (7)

where L is the sum of squared errors. To minimize L, Equation (7) is differentiated with respect to β and the result set to zero:

∂L/∂β = −2 X^T y + 2 X^T X β = 0

which gives the least-squares estimator

β̂ = (X^T X)^(-1) X^T y    (8)

Therefore, the fitted regression model is

ŷ = X β̂    (9)
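A minimal sketch of the least-squares fit of Equations (6)-(9) for the two-variable second-order model, using NumPy; the helper names are ours, and `lstsq` replaces the explicit inverse of Equation (8) for numerical stability:

```python
import numpy as np

def quadratic_design_matrix(pts):
    """Design matrix of Equation (6): columns 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), x1, x2, x1**2, x2**2, x1 * x2])

def fit_response_surface(pts, y):
    """Least-squares estimate of beta in y = X beta + eps, Equation (8)."""
    X = quadratic_design_matrix(pts)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# sanity check: an exactly quadratic response is recovered exactly
rng = np.random.default_rng(0)
pts = rng.random((13, 2))
true_beta = np.array([1.0, -2.0, 3.0, 0.5, -1.5, 4.0])
y = quadratic_design_matrix(pts) @ true_beta
print(np.allclose(fit_response_surface(pts, y), true_beta))  # True
```

Because a quadratic response lies exactly in the column space of X, the fit recovers it exactly; this is a useful sanity check before fitting the non-polynomial test functions below.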

Numerical Studies
Response surfaces are constructed using sparse grids for a variety of test functions given in [37-38]. A MATLAB implementation of sparse grids [39-40] is available at http://www.ians.uni-stuttgart.de/spinterp/. We have used this toolbox to generate sparse grid coordinates for the required dimension and level. The response surface approximations are then minimized, and the stationary points obtained are compared with those of the actual function.

Problem 1: Rosenbrock's function


f = 100(x2 − x1^2)^2 + (1 − x1)^2

By setting the gradient equal to zero, the minimum point of this function is found to be at x1 = 1 and x2 = 1. We use an (n=2, d=2) SGD to construct the response surface for this function, where n is the level and d the problem dimension. As mentioned before, the number of degrees of freedom in each coordinate direction is five and is determined by N = 2^n + 1. Let y1 and y2 represent the coded SGD points in the domain (0, 1). We obtain the physical points x1 and x2 by using 20% and 40% perturbations on the design variables y1 and y2, respectively; i.e., the coded variables are mapped to the physical ranges 0.8 to 1.2 and 1.0 to 1.4. The relation between the coded and the physical variables is the linear transformation

x1 = 0.4 y1 + 0.8,    x2 = 0.4 y2 + 1    (10)


Figure 5: Sparse grid design (2, 2) with physical points.

The value of Rosenbrock's function is evaluated at these data points, and the second-order response surface obtained after solving for the regression coefficients is

f̂ = 13.0889 − 52.3276 y1 + 28.8813 y2 − 64 y1 y2 + 58.1943 y1^2 + 16.2743 y2^2

Setting the gradient equal to zero shows that the response surface has a minimum at y1 = 0.47 and y2 = 0.04. Substituting these values of y1 and y2 into Equation (10) gives x1 = 0.988 and x2 = 1.016; at this point, f = 0.15.


Next, we use an (n=3, d=2) SGD, as in Figure 6, and a higher-order cubic response surface for a better fit. Using the data generated, the third-order response surface obtained is

f̂ = 12.97 − 45.6 y1 + 28.8 y2 + 26.40 y1^2 + 15.94 y2^2 − 51.20 y1 y2 − 12.80 y1^2 y2 + 25.6 y1^3

Finding the stationary points of the cubic function yields three candidate solutions: (0.15, −0.64), (0.81, 0.67) and (0.499, −4.45e-15). Since a positive definite Hessian matrix is the sufficient condition for a local minimum, we evaluate the Hessian at these stationary points: (0.15, −0.64) and (0.81, 0.67) are saddle points, while (0.499, −4.45e-15) is the minimum. At y1 = 0.499 and y2 ≈ 0, Equation (10) gives x1 = 0.999 and x2 = 1. At this point, f = 0.000064, which is much smaller than at the starting design. Thus, for Rosenbrock's function a quadratic response surface provides an adequate fit and a cubic response surface provides an excellent approximation to the objective function. More accurate approximations can be created by using higher values of the level n; however, this also requires more sampling points.

Figure 6: Sparse grid design (3, 2) with physical points.
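The complete Problem 1 workflow can be sketched as follows: generate the 13 coded Clenshaw-Curtis nodes, map them through Equation (10), fit the quadratic model, and solve the 2x2 stationarity system. The helper names are ours, and rather than asserting the paper's exact coefficients we only check that the recovered point is a good Rosenbrock design:

```python
import numpy as np
from itertools import product

def cc_sparse_grid(n, d):
    """Coded Clenshaw-Curtis sparse grid nodes in [0, 1]^d."""
    def nodes(i):
        return [0.5] if i == 1 else [j / 2 ** (i - 1) for j in range(2 ** (i - 1) + 1)]
    pts = set()
    for idx in product(range(1, n + 2), repeat=d):
        if sum(idx) <= d + n:
            pts.update(product(*(nodes(i) for i in idx)))
    return np.array(sorted(pts))

def rosen(x1, x2):
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

Y = cc_sparse_grid(2, 2)                             # 13 coded nodes
x1, x2 = 0.4 * Y[:, 0] + 0.8, 0.4 * Y[:, 1] + 1.0    # Equation (10)

# columns: 1, y1, y2, y1^2, y2^2, y1*y2, as in Equation (6)
A = np.column_stack([np.ones(len(Y)), Y[:, 0], Y[:, 1],
                     Y[:, 0] ** 2, Y[:, 1] ** 2, Y[:, 0] * Y[:, 1]])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(A, rosen(x1, x2), rcond=None)[0]

# stationary point of the quadratic: [2*b11, b12; b12, 2*b22] y = -[b1, b2]
ys = np.linalg.solve([[2 * b11, b12], [b12, 2 * b22]], [-b1, -b2])
x_star = (0.4 * ys[0] + 0.8, 0.4 * ys[1] + 1.0)
print(x_star, rosen(*x_star))   # the paper reports x = (0.988, 1.016), f = 0.15
```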

Problem 2: Powell's badly scaled function


f = (10000 x1 x2 − 1)^2 + [exp(−x1) + exp(−x2) − 1.0001]^2

This function has a minimum at x1 = 1.098×10^-5 and x2 = 9.106, where f(x1, x2) = 0. We use the (n=2, d=2) SGD of Figure 4 to construct a response surface for this function. We scale the design variables using the linear transformation of Equation (11), which relates the physical points x1 and x2 to the coded SGD points y1 and y2 in the domain (0, 1):

x1 = 0.00004 y1 + 0.00001,    x2 = 4 y2 + 8    (11)

The value of the function is evaluated at these data points, and the second-order response surface obtained after solving for the regression coefficients is

f̂ = 0.27 − 7.51 y1 − 2.16 y2 + 16.36 y1^2 + 1.80 y2^2 + 16 y1 y2

Setting the gradient equal to zero shows that the response surface has a minimum at y1 = 0.05 and y2 = 0.35. Substituting these values of y1 and y2 into Equation (11) gives x1 = 0.000012 and x2 = 9.4; at this point, f = 0.016, a good approximation to the original function.

Problem 3: Brown's badly scaled function

f = (x1 − 10^6)^2 + (x2 − 2×10^-6)^2 + (x1 x2 − 2)^2

This function has a minimum at x1 = 10^6 and x2 = 2×10^-6, where f(x1, x2) = 0. We now fit a second-order response surface using the SGD of Figure 4, scaling the design variables with the linear transformation

x1 = 10^6 y1,    x2 = 5×10^-6 y2    (12)

The second-order response surface obtained after solving for the regression coefficients is

f̂ = 1×10^12 − 2×10^12 y1 − 10.52 y2 + 5 y1 y2 + 1×10^12 y1^2 + 9.8 y2^2

This response surface has a minimum at y1 = 1 and y2 = 0.28. Substituting these values of y1 and y2 into Equation (12) gives x1 = 1×10^6 and x2 = 1.4×10^-6; at this point, f = 0.36. As this value is higher than the actual minimum, we use an SGD with n=3 and d=2 (Figure 6), with the same linear transformation of Equation (12), to seek a better fit. The second-order response surface is now

f̂ = 1×10^12 − 2×10^12 y1 − 11.96 y2 + 5 y1 y2 + 1×10^12 y1^2 + 7.71 y2^2

The coefficients of the linear and quadratic terms in y2 have changed. Solving for y1 and y2 and substituting into Equation (12) gives x1 = 1×10^6 and x2 = 2.25×10^-6; at this point, f = 0.062, a better approximation. Next, we form another second-order response surface using (n=4, d=2), as in Figure 1(c), and get x1 = 1×10^6 and x2 = 2.15×10^-6; at this point, f = 0.02, very close to the actual minimum. Thus, by increasing the number of points in the SGD we have obtained a better fit to the objective function.



Problem 4: Powell's quartic function


f = (x1 + 10 x2)^2 + 5(x3 − x4)^2 + (x2 − 2 x3)^4 + 10(x1 − x4)^4

This function has a minimum at (x1, x2, x3, x4) = (0, 0, 0, 0), where f = 0. The sparse grid points are generated with (n=2, d=4). For this problem, the physical and coded points are identical, i.e.,

x1 = y1,    x2 = y2,    x3 = y3,    x4 = y4    (13)

The value of the original function at these grid points is determined and a response surface is constructed from the data obtained:

f̂ = 4.21 + 3.03 y1 + 3.28 y2 + 3.82 y3 + 3.03 y4 + 20 y1 y2 − 20 y1 y4 − 16 y2 y3 − 10 y3 y4 + 7.96 y1^2 + 101.86 y2^2 + 15.01 y3^2 + 11.96 y4^2

Solving for y1, y2, y3 and y4 and substituting into Equation (13) gives x1 = 1.45, x2 = −0.14, x3 = 0.18, x4 = 1.16; at this point, f = 13.05. As this value is higher than the actual minimum, we use higher levels of SGD to obtain a better fit. With an (n=3, d=4) SGD, we obtain (x1, x2, x3, x4) = (0.16, 0.01, 0.07, 0.14) and f = 0.092. With an (n=4, d=4) SGD, we obtain (x1, x2, x3, x4) = (0.019, 0.005, 0.006, 0.015) and f = 0.005, a much better design than the starting one.
Problem 5: Beale's function

f = [1.5 − x1(1 − x2)]^2 + [2.25 − x1(1 − x2^2)]^2 + [2.625 − x1(1 − x2^3)]^2

This function has a minimum at (x1, x2) = (3, 0.5), where f = 0. The sparse grid points are generated with n=2 and d=2. For this problem, the physical points are related to the coded points by

x1 = 4 y1 + 1,    x2 = y2    (14)

The second-order response surface is obtained as

f̂ = 4.85 − 9.58 y1 − 19.82 y2 − 21 y1 y2 + 29.41 y1^2 + 29.93 y2^2

Solving for y1 and y2 and substituting into Equation (14) gives x1 = 2.28, x2 = 0.44; at this point, f = 0.50. With an (n=3, d=2) SGD, we obtain the second-order response surface

f̂ = 3.07 − 5.23 y1 − 14.38 y2 − 30.70 y1 y2 + 23.03 y1^2 + 34.15 y2^2

Solving for y1 and y2 and substituting into Equation (14) gives x1 = 2.44, x2 = 0.37; at this point, f = 0.11, which is certainly a good


approximation. However, we also use the (n=3, d=2) SGD to fit a cubic polynomial and see whether a better fit results. Using the data generated, the third-order response surface is

f̂ = 3.36 − 32.05 y1 + 24.48 y2 − 13.89 y1 y2 + 65.69 y1^2 − 63.16 y2^2 + 39.64 y1 y2^2 − 56.45 y1^2 y2 − 9.62 y1^3 + 51.66 y2^3

Finding the stationary points of the cubic function yields three candidate solutions: (0.48, 0.50), (0.33, 0.19) and (−0.18, 1.24). Evaluating the Hessian at these stationary points shows that (0.33, 0.19) and (−0.18, 1.24) are saddle points, while (0.48, 0.50) is the minimum. At y1 = 0.48 and y2 = 0.50, Equation (14) gives x1 = 2.92 and x2 = 0.5. At this point, f = 0.01.
Problem 6: Booth's function

f = (x1 + 2 x2 − 7)^2 + (2 x1 + x2 − 5)^2

This function has a global minimum at x1 = 1 and x2 = 3, where f(x) = 0. We use an (n=2, d=2) SGD to construct a response surface for this function. Equation (15) relates the physical points to the coded SGD points:

x1 = 4 y1,    x2 = 4 y2    (15)

Solving for the regression coefficients yields the second-order response surface

f̂ = 74 − 136 y1 − 152 y2 + 128 y1 y2 + 80 y1^2 + 80 y2^2

Solving for y1 and y2 and substituting into Equation (15) gives x1 = 1, x2 = 3 and f(x1, x2) = 0, exactly as for the original function.
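Because Booth's function is itself quadratic, the fitted surface must reproduce it exactly, which makes Problem 6 a convenient end-to-end check of the sparse-grid regression; a sketch (helper names are ours):

```python
import numpy as np
from itertools import product

def cc_sparse_grid(n, d):
    """Coded Clenshaw-Curtis sparse grid nodes in [0, 1]^d."""
    def nodes(i):
        return [0.5] if i == 1 else [j / 2 ** (i - 1) for j in range(2 ** (i - 1) + 1)]
    pts = set()
    for idx in product(range(1, n + 2), repeat=d):
        if sum(idx) <= d + n:
            pts.update(product(*(nodes(i) for i in idx)))
    return np.array(sorted(pts))

def booth(x1, x2):
    return (x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2

Y = cc_sparse_grid(2, 2)
A = np.column_stack([np.ones(len(Y)), Y[:, 0], Y[:, 1],
                     Y[:, 0] ** 2, Y[:, 1] ** 2, Y[:, 0] * Y[:, 1]])
# physical points via Equation (15): x = 4y
beta = np.linalg.lstsq(A, booth(4 * Y[:, 0], 4 * Y[:, 1]), rcond=None)[0]
print(np.round(beta, 6))   # coefficients [74, -136, -152, 80, 80, 128], matching the text
```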

Problem 7: Wood's function

f = [10(x2 − x1^2)]^2 + (1 − x1)^2 + 90(x4 − x3^2)^2 + (1 − x3)^2 + 10(x2 + x4 − 2)^2 + 0.1(x2 − x4)^2

This function has a minimum at (x1, x2, x3, x4) = (1, 1, 1, 1), where f = 0. We use an n=2, d=4 SGD to construct a response surface for this problem. The physical and coded points are related as

x1 = 0.4 y1 + 0.8,    x2 = 0.4 y2 + 0.8,    x3 = 0.4 y3 + 0.8,    x4 = 0.4 y4 + 0.8    (16)

The second-order response surface is obtained as

f̂ = 4.67 − 26.62 y1 + 11.77 y2 − 23.97 y3 + 10.20 y4 − 64 y1 y2 + 3.2 y2 y4 − 57.6 y3 y4 + 64.82 y1^2 + 17.63 y2^2 + 58.36 y3^2 + 16.03 y4^2


Solving for y1, y2, y3 and y4 and substituting into Equation (16) gives x1 = 0.86, x2 = 0.77, x3 = 0.98, x4 = 1.004; at this point, f = 0.68. With an (n=3, d=4) SGD and a second-order response surface, we obtain (x1, x2, x3, x4) = (0.88, 0.81, 0.99, 1.01) and f = 0.44, a somewhat better fit.
Problem 8: A nonlinear function of three variables

f = 1/[1 + (x1 − x2)^2] + sin((π/2) x2 x3) + exp(−[(x1 + x3)/x2 − 2]^2)

This function has a maximum at (x1, x2, x3) = (1, 1, 1), where f = 3. We use an n=3, d=3 SGD to construct a response surface for this problem. The physical and coded points are related as

x1 = 0.8 y1 + 0.6,    x2 = 0.8 y2 + 0.6,    x3 = 0.8 y3 + 0.6    (17)

The second-order response surface is obtained as

f̂ = 2.21 + 0.32 y1 + 1.38 y2 + 1.30 y3 + 1.40 y1 y2 − 0.59 y1 y3 − 0.47 y2 y3 − 0.72 y1^2 − 1.74 y2^2 − 1.022 y3^2

Solving for y1, y2 and y3 and substituting into Equation (17) gives x1 = 1.20, x2 = 1.12, x3 = 0.8; at this point, f = 2.97, a reasonable approximation.
Problem 9: Extended Raydan function

This function is defined for any dimension d as f = Σ_{i=1}^{d} (e^{xi} − xi). Several cases, d = 2, 3, 5, 10 and 20, are evaluated next using SGD.

Case 1: f = Σ_{i=1}^{2} (e^{xi} − xi)

By setting the gradient equal to zero, the minimum of this function is found to be at x1 = 0 and x2 = 0, where f(x1, x2) = 2. For this problem, the physical and coded points are identical. Using the (2, 2) SGD, the second-order response surface obtained is

f̂ = 2.0058 − 0.1311 x1 − 0.1311 x2 + 0.8435 x1^2 + 0.8435 x2^2

Solving for x1 and x2 gives x1 = 0.077 and x2 = 0.077 and f(x1, x2) = 2.006, a good approximation to the objective function.
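The Case 1 numbers can be verified the same way: fit the quadratic model on the 13 coded nodes (coded and physical points coincide here) and evaluate the true function at the fitted minimizer. A sketch with our own helper names:

```python
import numpy as np
from itertools import product

def cc_sparse_grid(n, d):
    """Coded Clenshaw-Curtis sparse grid nodes in [0, 1]^d."""
    def nodes(i):
        return [0.5] if i == 1 else [j / 2 ** (i - 1) for j in range(2 ** (i - 1) + 1)]
    pts = set()
    for idx in product(range(1, n + 2), repeat=d):
        if sum(idx) <= d + n:
            pts.update(product(*(nodes(i) for i in idx)))
    return np.array(sorted(pts))

def raydan(x):
    return float(np.sum(np.exp(x) - x))

X = cc_sparse_grid(2, 2)                 # physical = coded points here
f = np.array([raydan(p) for p in X])
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(A, f, rcond=None)[0]
x_star = np.linalg.solve([[2 * b11, b12], [b12, 2 * b22]], [-b1, -b2])
print(x_star, raydan(x_star))            # near (0.077, 0.077), f about 2.006
```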

Case 2: f = Σ_{i=1}^{3} (e^{xi} − xi)

This function has a minimum at (x1, x2, x3) = (0, 0, 0), where f = 3. Using the (2, 3) SGD, the second-order response surface obtained is

f̂ = 3.0051 − 0.1283 x1 − 0.1283 x2 − 0.1283 x3 + 0.8435 x1^2 + 0.8435 x2^2 + 0.8435 x3^2

Solving for x1, x2 and x3 gives x1 = x2 = x3 = 0.076 and f(x1, x2, x3) = 3.008.
Case 3: f = Σ_{i=1}^{5} (e^{xi} − xi)

This function has a minimum at (x1, ..., x5) = (0, 0, 0, 0, 0), where f = 5. Using the (2, 5) SGD, the second-order response surface obtained is

f̂ = 5.0054 − 0.1268 (x1 + x2 + x3 + x4 + x5) + 0.8435 (x1^2 + x2^2 + x3^2 + x4^2 + x5^2)

Solving for (x1, ..., x5) we obtain (0.075, 0.075, 0.075, 0.075, 0.075) and f(x1, ..., x5) = 5.01.

Case 4: f = Σ_{i=1}^{10} (e^{xi} − xi)

This function has a minimum at (x1, ..., x10) = (0, 0, ..., 0), where f = 10. For a function with d = 10 and five degrees of freedom in each coordinate direction, the total number of runs required by the sparse grid approach is 221; a two-level factorial design would require 2^10 = 1024 points and a CCD would require 2^10 + 2×10 + 1 = 1045 points. Using the (2, 10) SGD, the second-order response surface obtained is

f̂ = 10.0073 − 0.1260 (x1 + x2 + ... + x10) + 0.8435 (x1^2 + x2^2 + ... + x10^2)

Solving for (x1, ..., x10), we get (0.074, 0.074, ..., 0.074) and f(x1, ..., x10) = 10.02.

Case 5: f = Σ_{i=1}^{20} (e^{xi} − xi)

This function has a minimum at (x1, ..., x20) = (0, 0, ..., 0), where f = 20. Using the (2, 20) SGD, the second-order response surface obtained is

f̂ = 20.0118 − 0.1256 (x1 + x2 + ... + x20) + 0.8435 (x1^2 + x2^2 + ... + x20^2)

Solving for (x1, ..., x20), we obtain (0.074, 0.074, ..., 0.074) and f(x1, ..., x20) = 20.05. For d = 20, SGD requires 841 runs, compared with 2^20 = 1048576 for the two-level factorial design and 2^20 + 2×20 + 1 = 1048617 for the CCD. Thus, for the extended Raydan function we obtain very good approximations using SGD. Next, we use the diagonal function to further test SGD at higher dimensions.
Problem 10: Extended diagonal function

This function is defined for a d-dimensional problem as f = (Σ_{i=1}^{d} xi)^2 + Σ_{i=1}^{d} (i/100) xi^2. We evaluate SGD for the cases d = 2, 3, 5, 10 and 20.

Case 1: f = (Σ_{i=1}^{2} xi)^2 + Σ_{i=1}^{2} (i/100) xi^2

The minimum of this function is at x1 = 0 and x2 = 0, where f(x1, x2) = 0. Using the (2, 2) SGD, the second-order response surface obtained is

f̂ = 2 x1 x2 + 1.01 x1^2 + 1.02 x2^2

Solving for x1 and x2 gives x1 = 0 and x2 = 0 and f(x1, x2) = 0, a good approximation to the objective function.

Case 2: f = (Σ_{i=1}^{3} xi)^2 + Σ_{i=1}^{3} (i/100) xi^2

This function has a minimum at (x1, x2, x3) = (0, 0, 0), where f = 0. Using the (2, 3) SGD, the second-order response surface obtained is

f̂ = 2 x1 x2 + 2 x1 x3 + 2 x2 x3 + 1.01 x1^2 + 1.02 x2^2 + 1.03 x3^2

Solving for x1, x2 and x3 gives x1 = x2 = x3 = 0 and f(x1, x2, x3) = 0.
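Like Booth's function, the extended diagonal function is exactly quadratic, so the sparse-grid regression should recover its coefficients essentially to machine precision; a quick d = 2 check (helper names are ours):

```python
import numpy as np
from itertools import product

def cc_sparse_grid(n, d):
    """Coded Clenshaw-Curtis sparse grid nodes in [0, 1]^d."""
    def nodes(i):
        return [0.5] if i == 1 else [j / 2 ** (i - 1) for j in range(2 ** (i - 1) + 1)]
    pts = set()
    for idx in product(range(1, n + 2), repeat=d):
        if sum(idx) <= d + n:
            pts.update(product(*(nodes(i) for i in idx)))
    return np.array(sorted(pts))

def diagonal(x):   # f = (sum_i x_i)^2 + sum_i (i/100) x_i^2
    i = np.arange(1, x.shape[-1] + 1)
    return np.sum(x, axis=-1) ** 2 + np.sum(i / 100.0 * x ** 2, axis=-1)

X = cc_sparse_grid(2, 2)
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
beta = np.linalg.lstsq(A, diagonal(X), rcond=None)[0]
print(np.round(beta, 6))   # approximately [0, 0, 0, 1.01, 1.02, 2], as in the text
```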

Case 3: f = (Σ_{i=1}^{5} xi)^2 + Σ_{i=1}^{5} (i/100) xi^2

This function has a minimum at (x1, ..., x5) = (0, 0, 0, 0, 0), where f = 0. Using the (2, 5) SGD, the second-order response surface obtained is

f̂ = 2 Σ_{i<j} xi xj + 1.01 x1^2 + 1.02 x2^2 + 1.03 x3^2 + 1.04 x4^2 + 1.05 x5^2

with all linear terms zero. Solving for (x1, ..., x5) we obtain (0, 0, 0, 0, 0) and f(x1, ..., x5) = 0.

Case 4: f = (Σ_{i=1}^{10} xi)^2 + Σ_{i=1}^{10} (i/100) xi^2

This function has a minimum at (x1, ..., x10) = (0, 0, ..., 0), where f = 0. Using the (2, 10) SGD, the second-order response surface obtained is

f̂ = 2 Σ_{i<j} xi xj + Σ_{i=1}^{10} (1 + i/100) xi^2

Solving for (x1, ..., x10) we obtain (0, 0, ..., 0) and f(x1, ..., x10) = 0.

Case 5: f = (Σ_{i=1}^{20} xi)^2 + Σ_{i=1}^{20} (i/100) xi^2

This function has a minimum at (x1, ..., x20) = (0, 0, ..., 0), where f = 0. For d = 20 with five degrees of freedom in each coordinate direction, SGD employs 841 runs; a two-level factorial design would require 1048576 points and the CCD 1048617 points. Using the (2, 20) SGD, the second-order response surface obtained is

f̂ = 2 Σ_{i<j} xi xj + Σ_{i=1}^{20} (1 + i/100) xi^2

Solving for $(x_1, x_2, \ldots, x_{20})$ we obtain $(0, 0, \ldots, 0)$ and $\hat{f}(x_1, x_2, \ldots, x_{20}) = 0$.
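The sampling-cost comparison quoted above can be checked with a short script. The two-level factorial and CCD counts are standard; the level-2 sparse grid formula below is an assumed Clenshaw-Curtis-type construction, included only because it reproduces the 841 runs reported for d = 20:

```python
from math import comb

def two_level_factorial(d):
    # Full two-level factorial: every corner of the d-dimensional cube.
    return 2 ** d

def central_composite(d):
    # CCD: 2^d factorial corners + 2d axial (star) points + 1 centre point.
    return 2 ** d + 2 * d + 1

def sparse_grid_level2(d):
    # Assumed level-2 sparse grid count: 1 centre point, 4 points on each
    # coordinate axis, and 4 points for each pair of coordinate directions.
    return 1 + 4 * d + 4 * comb(d, 2)

d = 20
print(two_level_factorial(d))  # 1048576
print(central_composite(d))    # 1048617
print(sparse_grid_level2(d))   # 841
```

The sparse grid count grows quadratically in d, while both classical designs grow as 2^d, which is why the gap widens so dramatically at d = 20.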

Conclusions
Response surface approximation is carried out using sparse grid design. Several test functions are used to illustrate the method by fitting response surfaces to the data from the sampled points. The stationary points of the fitted surfaces are compared with the actual values, and higher levels of the SGD are used in some cases to refine the response surfaces. It is found that the second-order response surface obtained using the SGD provides an adequate approximation to the functions considered. The SGD is attractive at higher dimensions because it requires far fewer runs than the two-level factorial and central composite designs of the theory of design of experiments.
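The overall pipeline summarized above (sample the function, fit a second-order polynomial by least squares, then locate the stationary point of the fit) can be sketched in a few lines. The test function, the small full grid of sample points standing in for the SGD points, and all variable names here are illustrative assumptions, not the paper's actual cases:

```python
import numpy as np

# Illustrative quadratic test function (not one of the paper's cases).
def f(x, y):
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + 0.5 * x * y

# Sample points on a small grid over [-4, 4]^2 (a stand-in for SGD points).
pts = np.array([(x, y) for x in np.linspace(-4, 4, 5)
                       for y in np.linspace(-4, 4, 5)])
z = np.array([f(x, y) for x, y in pts])

# Second-order polynomial basis: 1, x, y, x^2, x*y, y^2.
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] ** 2, pts[:, 0] * pts[:, 1], pts[:, 1] ** 2])
b = np.linalg.lstsq(X, z, rcond=None)[0]

# Stationary point of the fitted surface: solve grad f_hat = 0,
# a 2x2 linear system built from the fitted coefficients.
H = np.array([[2 * b[3], b[4]], [b[4], 2 * b[5]]])
g = -np.array([b[1], b[2]])
x_star = np.linalg.solve(H, g)
print(x_star)  # approximately [1.6, -2.4], the true minimizer of f
```

Because this test function is itself quadratic, the least-squares fit recovers it exactly and the stationary point of the surrogate coincides with the true minimum; for non-quadratic functions the surrogate minimum is only an estimate, which is why the paper refines with higher SGD levels in some cases.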

Acknowledgement
We would like to thank Prof. M. Masmoudi of the University of Toulouse for his lecture on using sparse grids for response surface methods as part of the Indo-French cyber university.
