
Assessment of Multi-objective Genetic Algorithms with Different Niching Strategies and Regression Methods for Engine Optimization and Design

Yu Shi and Rolf Reitz
Engine Research Center, University of Wisconsin-Madison
ICES2009-76015

Outline
Research background
Models and methodologies
  o Convergence and diversity metrics
  o NSGA II with different niching strategies
  o Assessment of regression methods
Results and discussion
Conclusions

Research background
Advanced engine combustion strategies (PCCI, MK, etc.) involve a vast number of parameters to be optimized.
Engine design optimization aided by low-cost CFD modeling is becoming more popular, but it is still very time-consuming.

Research background
Engine optimization is usually a Multi-objective Optimization Problem (MOP), such as simultaneously reducing emissions (NOx, soot, CO, UHC) and fuel consumption.
The Pareto optimal solutions consist of the solutions that are not dominated by any other solution.

[Figure: NOx-soot trade-off illustrating dominated solutions and the non-dominated (Pareto) front]

The goals in an MOP:
1 To find a set of solutions as close as possible to the Pareto-optimal front.
2 To find a set of solutions that are as diverse as possible: diversified designs vs. diversified objectives.
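To make the dominance idea concrete, the following is a minimal sketch (not from the paper) of a non-dominated filter for a minimization MOP; the array layout and function names are illustrative assumptions.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(objs):
    """Return indices of non-dominated rows in an (n_solutions, n_objectives) array."""
    n = objs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objs[j], objs[i]):
                keep[i] = False
                break
    return np.where(keep)[0]

# Toy example: columns are (NOx, soot); lower is better for both.
objs = np.array([[1.0, 5.0], [2.0, 4.0], [3.0, 3.0], [2.5, 4.5]])
print(pareto_front(objs))  # the last point is dominated by the second one
```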

Research background
Evolutionary methods
Mimic the evolutionary processes of ecosystems, which follow the Darwinian principle of survival of the fittest. They are suitable for finding globally optimal solutions.
  o Single-Objective Genetic Algorithm (SOGA, e.g., micro-GA): maximizes or minimizes a single specified objective as the optimization process evolves.
  o Multi-Objective Genetic Algorithm (MOGA, e.g., NSGA II): moves the Pareto front towards the ideal optimal set of solutions as the optimization process evolves.

Research background

[Figure: optimization loop - the optimization methodology (NSGA II) supplies the input design parameters (SOI, spray angle, injection pressure, swirl ratio) to the engine CFD code, whose outputs (NOx, soot, GISFC) are fed back to the optimizer]

Task 1: When does the optimization loop converge?
Task 2: How can we obtain diversified objectives or designs? Can this be addressed within the GA?
Task 3: Can we use a regression method to partially replace expensive CFD evaluations?

Models and methodologies - Convergence metric (Task 1)

Why do we need a convergence metric? The movement of the Pareto front cannot be visualized when the number of objectives exceeds three; defining a convergence metric can be regarded as reducing the dimensionality of the Pareto front to one dimension.

For each solution i of the current front, its distance to a reference Pareto set (e.g., the front of another generation) is

d_i = \min_{j} \sqrt{ \sum_{k=1}^{M} \left( \frac{f_{k,i} - f_{k,j}}{f_k^{\max} - f_k^{\min}} \right)^2 }

where M is the number of objectives and f_k^max, f_k^min bound the k-th objective for normalization. Averaging d_i over the front gives the convergence metric.

[Figure: Pareto fronts of generations G1-G6 in the objective space; the front-to-front distances shrink (d12 > d23 > d34 > d45 > d56) as the optimization converges]
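A minimal sketch of how such a convergence metric could be computed, assuming the reference set is the Pareto front of the previous generation and the averaged minimum normalized distance is reported (function names and normalization bounds are illustrative):

```python
import numpy as np

def convergence_metric(current, reference, f_min, f_max):
    """Average over the current front of the minimum normalized distance
    to a reference Pareto set; arrays are (n_points, M) objective values."""
    span = np.asarray(f_max, float) - np.asarray(f_min, float)
    cur = np.asarray(current, float) / span
    ref = np.asarray(reference, float) / span
    # distance from every current point to every reference point
    d = np.sqrt(((cur[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2))
    return d.min(axis=1).mean()

# Toy usage: fronts of two successive generations for two objectives (NOx, soot)
g_prev = np.array([[0.30, 0.20], [0.25, 0.28]])
g_curr = np.array([[0.26, 0.18], [0.22, 0.25]])
print(convergence_metric(g_curr, g_prev, f_min=[0.0, 0.0], f_max=[0.5, 0.5]))
```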

Models and methodologies - Diversity metric (Task 1)

[Figure: Pareto solutions plotted in the design space (design variable 1 vs. design variable 2), with the space divided into sub-grids; a well-spread set occupies many sub-grids]

To quantify the diversity metric, different weights are assigned to the sub-grids that contain different numbers of Pareto solutions; sub-grids containing fewer solutions are given higher weights (1/n, where n is the number of Pareto solutions located in that sub-grid).
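A minimal sketch of a grid-based diversity metric of this kind; the normalization (dividing by the number of solutions so a perfectly spread set scores 1) is an assumption and may differ from the paper's exact definition:

```python
import numpy as np

def diversity_metric(points, lower, upper, n_bins=5):
    """Grid-based diversity: each solution carries a weight 1/n, where n is the
    number of Pareto solutions in its sub-grid, so clustered solutions count less.
    Normalized here by the number of solutions (an assumed normalization)."""
    points = np.asarray(points, float)
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    cells = np.floor((points - lower) / (upper - lower) * n_bins).astype(int)
    cells = np.clip(cells, 0, n_bins - 1)            # points on the upper bound
    occupied = np.unique(cells, axis=0)
    # summing 1/n over the n members of each occupied sub-grid contributes 1 per sub-grid
    return len(occupied) / len(points)

# Toy usage: 4 Pareto designs in a 2-D design space; two of them share a sub-grid
designs = [[0.1, 0.2], [0.9, 0.8], [0.5, 0.5], [0.52, 0.48]]
print(diversity_metric(designs, lower=[0.0, 0.0], upper=[1.0, 1.0]))   # 0.75
```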

Models and methodologies - NSGA II with different niching strategies (Task 2)


The Non-dominated Sorting Genetic Algorithm II (NSGA II) [1] performs better for engine optimization in terms of efficiency and optimality of the results [2]. A niching technique is used in most genetic algorithms to avoid converging to locally optimal results.
1 Cases of lower rank are preferred for mating to generate the next generation.
2 Among cases of the same rank, those located in less crowded regions are preferred (niching technique).

[1] Deb et al., IEEE Trans. Evol. Comput., 2002
[2] Shi and Reitz, SAE Trans., 2008-01-0949

Models and methodologies - NSGA II with different niching strategies (Task 2)


Objective niching: the crowding distance is calculated based on the values of the objectives (i.e., emissions and fuel consumption) - expected to yield more diversified objectives.

Design niching: the crowding distance is calculated based on the values of the design parameters (i.e., SOI, injection pressure, etc.) - expected to yield more diversified designs. A sketch of both variants is given below.
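A minimal sketch of the crowding-distance part of NSGA II, written so the same routine can be fed either the objective matrix (objective niching) or the design-parameter matrix (design niching); this illustrates the idea and is not the authors' implementation:

```python
import numpy as np

def crowding_distance(values):
    """NSGA II-style crowding distance for one rank.
    values: (n_points, n_columns) array; columns are either objectives
    (objective niching) or design parameters (design niching)."""
    n, m = values.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(values[:, k])
        col = values[order, k]
        span = col[-1] - col[0]
        if span == 0:
            continue
        dist[order[0]] = dist[order[-1]] = np.inf   # boundary points are always kept
        # interior points: normalized gap between their two neighbors along this column
        dist[order[1:-1]] += (col[2:] - col[:-2]) / span
    return dist

# Objective niching vs. design niching differ only in which matrix is passed in:
objectives = np.array([[0.3, 20.], [0.1, 45.], [0.2, 30.]])    # e.g. soot, NOx
designs    = np.array([[-10., 1.2], [-14., 0.6], [-12., 1.9]]) # e.g. SOI, swirl
print(crowding_distance(objectives))
print(crowding_distance(designs))
```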

Models and methodologies - Regression methods (Task 3)

Response surfaces of the objective functions with respect to the design parameters can be built using regression methods, based on the knowledge learned from existing datasets. The response surfaces can then be employed to approximate the objectives of a new design from its design parameters. The time spent on learning and evaluating a response surface is much less than that of a CFD evaluation.
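As an illustration of the response-surface idea (not the modeFRONTIER implementation used in the paper), here is a sketch using scikit-learn's Gaussian-process regressor as a Kriging-like surrogate mapping design parameters to one objective; the dataset is synthetic:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Existing dataset: design parameters (SOI, swirl ratio) and a CFD-evaluated objective.
# The numbers below are made up purely to show the workflow.
X = np.array([[-15., 0.5], [-14., 1.0], [-13., 1.5], [-15., 2.0], [-13., 0.8]])
y = np.array([195., 198., 203., 200., 205.])        # e.g. GISFC in g/kW-h

surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)

# Cheap "virtual" evaluation of a new design instead of a full CFD run
new_design = np.array([[-14.5, 1.2]])
predicted, std = surrogate.predict(new_design, return_std=True)
print(predicted, std)   # prediction plus an uncertainty estimate
```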

Models and methodologies - Regression methods (Task 3)

Static learning process: a design of experiments (DoE) or a fixed set of previous generations is evaluated with CFD once to produce the training dataset; the regression methods learn from it and create a response surface, which then partially replaces the CFD evaluations in all subsequent generations (G2-G6).

Models and methodologies - Regression methods (Task 3)

Dynamic learning process: the response surface is rebuilt at every generation by learning from all previously evaluated generations, and the updated surface partially replaces the CFD evaluations of the following generation (G2-G6).
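A minimal sketch of the dynamic-learning idea under simplifying assumptions: a hypothetical cfd_evaluate stands in for the KIVA run, the surrogate is retrained each generation on everything evaluated so far, and a fixed fraction of each new generation is scored by the surrogate instead of CFD (skipping the retraining step would reduce this to the static process above):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def cfd_evaluate(designs):
    """Stand-in for the expensive KIVA/CFD evaluation - purely illustrative."""
    return designs.sum(axis=1) + 0.1 * np.random.randn(len(designs))

rng = np.random.default_rng(0)
X_all = np.empty((0, 4))      # all CFD-evaluated design parameters so far
y_all = np.empty(0)           # their CFD-evaluated objective values
virtual_fraction = 0.5        # share of each new generation scored by the surrogate

for generation in range(6):
    new_designs = rng.uniform(-1.0, 1.0, size=(24, 4))
    if generation == 0:
        cfd_designs = new_designs                     # first generation: all real CFD
    else:
        # dynamic learning: retrain on everything evaluated in previous generations
        surrogate = KNeighborsRegressor(n_neighbors=3).fit(X_all, y_all)
        n_virtual = int(virtual_fraction * len(new_designs))
        virtual_objectives = surrogate.predict(new_designs[:n_virtual])
        cfd_designs = new_designs[n_virtual:]         # the rest still go through CFD
        # virtual_objectives would be fed back to the GA together with the CFD results
    cfd_objectives = cfd_evaluate(cfd_designs)
    X_all = np.vstack([X_all, cfd_designs])           # training set grows every generation
    y_all = np.concatenate([y_all, cfd_objectives])
```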

Models and methodologies - Regression methods (Task 3)

Four regression methods are assessed:
  o K-nearest neighbor (KN) method
  o Kriging (KR) method
  o Neural Networks (NN) method
  o Radial Basis Functions (RBF) method
All are available in the commercial software modeFRONTIER 4.0 (www.esteco.com).
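For readers without modeFRONTIER, here is a sketch of roughly comparable open-source regressors (scikit-learn / SciPy stand-ins, not the modeFRONTIER implementations), including the log transformation of the objective that is examined later in the paper; the data are synthetic:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor               # ~ K-nearest neighbor (KN)
from sklearn.gaussian_process import GaussianProcessRegressor   # ~ Kriging (KR)
from sklearn.neural_network import MLPRegressor                 # ~ Neural Network (NN)
from scipy.interpolate import RBFInterpolator                   # ~ Radial Basis Functions (RBF)

# Synthetic training data: design parameters -> one positive objective (e.g. soot)
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(120, 6))
y = np.exp(X[:, 0] + 0.5 * X[:, 1]) + 0.01 * rng.random(120)
X_new = rng.uniform(0.0, 1.0, size=(5, 6))

models = {
    "KN": KNeighborsRegressor(n_neighbors=5),
    "KR": GaussianProcessRegressor(normalize_y=True),
    "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0),
}
for name, model in models.items():
    model.fit(X, np.log(y))                 # log transformation of the objective
    print(name, np.exp(model.predict(X_new)))

rbf = RBFInterpolator(X, np.log(y))
print("RBF", np.exp(rbf(X_new)))
```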

Results and discussion - Optimization problem [1]

Operating conditions (Mode 4, high load):
  Speed [rev/min]: 1672
  IVC temperature [K]: 385
  IVC pressure [kPa]: 310
  Load [%]: 95
  Injection quantity [mg/cyc]: 229
  EGR level [%]: 25
  Global equivalence ratio: 0.60
  O2 concentration [vol. %]: 17.65

Design parameters and their ranges (high load):
  A (% bowl depth): 65-75
  B (% bowl diameter): 74-80
  C (% cylinder diameter): 71-84
  Bezier curve control point 1: 0.1-0.7
  Bezier curve control point 2: 0.3-0.9
  Bezier curve control point 3: 0.8-1.5
  Injector spray half-angle: 60-85
  Swirl ratio: 0.5-2.0
  SOI (ATDC): -15 to -13

Setup: Caterpillar 3400 series heavy-duty diesel engine; NSGA II (objective vs. design niching); population of 24 evolved over 51 generations (1224 evaluations in total); objectives: NOx, soot, GISFC.

[1] Shi and Reitz, IJER 2008

Results and discussion - Convergence metric

[Figure: (a) Normalized convergence metric vs. number of generations (0-55) for objective niching and design niching; (b) Pareto front (GISFC [g/kW-h] vs. NOx [g/kg-fuel] vs. soot [g/kg-fuel]) at generations 20 and 51 from the optimization using the objective niching; (c) the corresponding Pareto front from the optimization using the design niching]

Results and discussion - Diversity metric

[Figure: (a) Diversity metric in the objective space (NOx, soot, GISFC) vs. number of generations for objective niching and design niching; (b) Diversity metric in the design space (spray angle, SOI, swirl ratio) vs. number of generations for objective niching and design niching]

Results and discussion - Comparison of regression methods

Comparison methodology:
1. The regression methods learn from existing datasets (either from a DoE or from real GA generations).
2. Virtual designs are calculated based on the response surfaces; the corresponding real designs are calculated by the KIVA code.
3. The closeness of the virtual designs to the corresponding real designs is compared using the mean error, maximum error, median error, minimum error, and standard deviation of the error (in percentage). See the sketch after this list.
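A minimal sketch of how such percentage-error statistics could be tabulated for one objective, assuming `real` holds KIVA results and `virtual` the response-surface predictions for the same designs (both arrays are illustrative):

```python
import numpy as np

def error_statistics(real, virtual):
    """Percentage errors of surrogate ('virtual') predictions vs. CFD ('real') results."""
    err = 100.0 * np.abs(np.asarray(virtual) - np.asarray(real)) / np.abs(real)
    return {
        "mean": err.mean(),
        "max": err.max(),
        "median": np.median(err),
        "min": err.min(),
        "std": err.std(),
    }

# Toy example for one objective (e.g. GISFC in g/kW-h)
real    = np.array([195.0, 201.0, 198.5, 207.0])
virtual = np.array([197.0, 199.5, 202.0, 205.5])
print(error_statistics(real, virtual))
```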

Results and discussion - Comparison of regression methods (DoE-generated dataset, 120 cases, static learning)

[Figure: mean prediction error for GISFC, NOx, and soot at predicted generations 11, 21, 31, 41, and 51, for the K-nearest neighbor (KN), Kriging (KR), Neural Network (NN), and Radial Basis Functions (RBF) methods, each with and without a log transformation]

Results and discussion - Comparison of regression methods (GA-generated dataset, dynamic learning): GISFC

[Figure: mean, maximum, median, minimum, and standard deviation of the GISFC prediction error at predicted generations 11, 21, 31, 41, and 51, for the K-nearest neighbor, Kriging, Neural Network, and Radial Basis Functions methods under both objective niching and design niching]

Results and discussion - Comparison of regression methods (GA-generated dataset, dynamic learning): NOx

[Figure: mean, maximum, median, minimum, and standard deviation of the NOx prediction error at predicted generations 11, 21, 31, 41, and 51, for the four regression methods under both niching strategies]

Results and discussion - Comparison of regression methods (GA-generated dataset, dynamic learning): Soot

[Figure: mean, maximum, median, minimum, and standard deviation of the soot prediction error at predicted generations 11, 21, 31, 41, and 51, for the four regression methods under both niching strategies]

Conclusions
The niching strategy does not influence the convergence performance of NSGA II. The present convergence metric indicates when the optimization process can be terminated.
Using design niching, more diversified design parameters were produced, as shown by the diversity metric.

Conclusions
By dynamically learning from all previous generations in the GA optimization process, the regression methods, especially the K-nearest neighbor and Kriging methods, predicted results in good agreement with the KIVA evaluations for the next generation. A logarithmic transformation in the objective space improved the prediction accuracy.
These findings support a methodology in which part of the real CFD evaluations can be replaced by virtual designs obtained by learning from previously existing data.

Contact: Yu Shi, shi5@wisc.edu; Rolf Reitz, reitz@engr.wisc.edu
