
A CONCEPT EXPLORATION METHOD FOR PRODUCT FAMILY DESIGN

Ph.D. Dissertation
Timothy W. Simpson

Abstract
Current design research is directed at improving the efficiency and effectiveness of designers in
the product realization process, and until recently, the focus has been predominantly on
designing a single product. However, today’s market—characterized by words such as mass
customization, rapid innovation, and make-to-order products—requires a new approach to
provide the necessary product variety to remain competitive. The answer advocated in this
dissertation is the design and development of scalable product platforms around which a family
of products can be realized to satisfy a variety of market niches. In particular, robust design
principles, statistical metamodeling techniques, and the market segmentation grid, an attention
directing tool from management science, are synthesized into the Product Platform Concept
Exploration Method (PPCEM); the PPCEM is an efficient and effective method for designing
scalable product platforms, the cornerstone of an effective product family. The efficiency and
effectiveness of the method are tested and verified through application to three example
problems: the design of a family of oil filters, the design of a family of absorption chillers, and the
design of a family of General Aviation aircraft.

Research Questions and Hypotheses


The principal goal in this dissertation is to develop a method to facilitate the design of scalable
product platforms around which a family of products can be developed. Decision-Based
Design, robust design principles, and the Robust Concept Exploration Method provide the
foundation on which this work is built. Given this foundation and goal, the motivation for this
research is embodied in one primary research question and two secondary research questions
which are stated as follows.

Primary Research Question:

Q1. How can scalability be modeled and realized in product family design?

Secondary Research Questions:

Q2. Is kriging a viable metamodeling technique for building approximations of
deterministic computer analyses?
Q3. Are space filling designs better suited for building approximations of deterministic
computer analyses than classical experimental designs?
There are three hypotheses which are investigated throughout this dissertation in an effort to
answer the proposed research questions.

Hypothesis 1: The Product Platform Concept Exploration Method provides an
efficient and effective method for developing a scalable product platform.
Hypothesis 2: Kriging is a viable alternative for building metamodels of deterministic
computer analyses.
Hypothesis 3: Space filling experimental designs are better suited for building
metamodels of deterministic computer experiments than classical experimental
designs.

There is a one-to-one correspondence between the hypotheses and research questions. The
Product Platform Concept Exploration Method mentioned in Hypothesis 1 is developed to
answer the first research question, providing a method to model and realize scalability in product
family design. Hypotheses 2 and 3 entail affirmative answers to Questions 2 and 3 which are
explicitly tested and verified in Chapter 4. Confirmation of Hypothesis 1 is not contingent upon
verification of Hypotheses 2 and 3; Hypotheses 2 and 3 help to support Hypothesis 1 but have
implications which extend beyond product family design. These implications are discussed more
thoroughly in the concluding chapter of this dissertation, Section 8.1.

Since Question 1 is quite broad, three supporting research questions and sub-hypotheses are
proposed to facilitate the verification of Hypothesis 1. As with the preceding research questions
and hypotheses, there is a one-to-one correspondence between each supporting
question and the correspondingly numbered sub-hypothesis. The supporting questions
and sub-hypotheses are stated as follows.

Q1.1. How can product platform scaling opportunities be identified from overall
design requirements?
Q1.2. How can robust design principles be used to design a scalable product
platform?
Q1.3. How can individual targets for product variants be aggregated and modeled for
product platform design?

Sub-Hypothesis 1.1: The market segmentation grid can be utilized to identify scale
factors for a product platform.
Sub-Hypothesis 1.2: Robust design principles can be used to design a scalable
product platform by minimizing the sensitivity of a product platform to variations in
scale factors.
Sub-Hypothesis 1.3: Individual targets for product variants can be aggregated into an
appropriate mean and variance and used in conjunction with robust design
principles to effect a product family.

As with the main hypotheses, the sub-hypotheses are stated here to provide context for the
literature review in the next chapter and development of the PPCEM in Section 3.1.
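For illustration, the aggregation described in Sub-Hypothesis 1.3 can be sketched in a few lines of Python; the targets below are hypothetical numbers, not values from any of the example problems.

```python
import numpy as np

# Hypothetical performance targets for four product variants in a family
# (e.g., a target capacity for each variant).
variant_targets = np.array([10.0, 12.5, 15.0, 20.0])

# Aggregate the individual targets into a single mean and variance, which
# then serve as the "bring the mean on target" and "minimize the variation"
# goals in a robust design formulation.
target_mean = variant_targets.mean()
target_variance = variant_targets.var(ddof=1)  # sample variance

print(f"aggregate mean target:     {target_mean:.3f}")
print(f"aggregate target variance: {target_variance:.3f}")
```

The platform is then designed so that its response stays close to the aggregate mean while remaining insensitive to the spread captured by the variance.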

Supporting Posits
There are several posits which support the research hypotheses. Six posits support Hypothesis
1 and Sub-Hypotheses 1.1-1.3; they are the following.

Posit 1.1: The RCEM provides an efficient and effective means for developing robust
top-level design specifications for complex systems design.
Posit 1.2: Metamodeling techniques, specifically, design of experiments and response
surface methodology, can be used to facilitate concept exploration and optimization,
thus increasing a designer’s efficiency.
Posit 1.3: Robust design principles can be used to minimize the sensitivity of a design
to variations in uncontrollable (i.e., noise) factors and/or variations in design
parameters (i.e., control factors).
Posit 1.4: Robust design principles can be used effectively in the early stages of the
design process by modeling the response itself with separate goals for “bringing the
mean on target” and “minimizing the variation.”
Posit 1.5: The compromise DSP is capable of effecting robust design solutions through
separate goals for “bringing the mean on target” and “minimizing variation” of noise
factors and/or variations in the design variables.
Posit 1.6: The market segmentation grid can be used to identify opportunities for
platform leveraging in product family design.

These posits are founded on and substantiated through the following.


• Posit 1.1 is substantiated by Chen (1995) by explicitly testing and verifying the
efficiency and effectiveness of the RCEM for developing robust top-level design
specifications for complex systems design.
• Posit 1.2 is substantiated through the numerous applications of design of
experiments and response surface models that have been used to illustrate the
usefulness of metamodeling techniques in design, facilitating concept exploration and
optimization and increasing a designer’s efficiency; over twenty-five such
applications are reviewed in (Simpson, et al., 1997). Furthermore, Chen (1995)
explicitly tests and verifies this posit as part of the development of the RCEM.
• Posit 1.3, Posit 1.4, and Posit 1.5 are substantiated by the work in (Chen, et al.,
1996); Chen and her coauthors describe a general robust design procedure which
can minimize the sensitivity of a design to variations in noise factors and/or design
parameters (Posit 1.3) by having separate goals for “bringing the mean on target”
and “minimizing the variation” (Posit 1.4) of each response in the compromise DSP
(Posit 1.5). These posits are further substantiated (Chen, 1995) as part of the
development of the RCEM.
• Posit 1.6 is substantiated by the discussion in Section 2.___, i.e., the market
segmentation grid can be used as an attention directing tool to identify leveraging
opportunities in product platform design (cf., Meyer, 1997; Meyer and Lehnerd,
1997); identifying these leveraging opportunities provided the initial impetus for
developing the market segmentation grid in the first place.
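The separate-goals formulation referenced in Posits 1.3 through 1.5 can be illustrated with a small numerical sketch; the response function and noise distribution below are hypothetical, chosen only to show how "bringing the mean on target" and "minimizing the variation" are evaluated as independent goals.

```python
import numpy as np

rng = np.random.default_rng(0)

def response(x, z):
    """Hypothetical response: design variable x, noise factor z."""
    return (x - 2.0) ** 2 + x * z

def robust_goals(x, target=1.0, n_samples=2000):
    """Evaluate the two separate robust design goals: bring the mean on
    target, and minimize the variation caused by the noise factor."""
    z = rng.normal(0.0, 0.1, n_samples)   # sampled noise-factor variation
    y = response(x, z)
    goal_mean = abs(y.mean() - target)    # "bring the mean on target"
    goal_variation = y.std(ddof=1)        # "minimize the variation"
    return goal_mean, goal_variation

# Both designs hit the target mean, but x = 1 is far less sensitive to
# the noise factor than x = 3, i.e., it is the more robust design.
print(robust_goals(1.0))   # near (0, 0.1): on target, low variation
print(robust_goals(3.0))   # near (0, 0.3): on target, 3x the variation
```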
These six posits help to support Hypothesis 1 and Sub-Hypotheses 1.1-1.3. The strategy for
testing and verifying these hypotheses is outlined in Section ___. Before the verification strategy
is outlined, however, posits for Hypotheses 2 and 3 are given.

The following two posits have been identified in support of Hypothesis 2.

Posit 2.1: Building an (interpolative) kriging model is not predicated on the assumption
of underlying random error in the data.
Posit 2.2: Kriging provides very flexible modeling capabilities based on the wide
variety of spatial correlation functions which can be selected to model the data.

• Posit 2.1 is more fact than assumption; it can be substantiated by Sacks, et al.
(1989); Koehler and Owen (1996); and Cressie (1993).
• Posit 2.2 is substantiated by many researchers, most notably: Sacks, et al. (1989);
Welch, et al. (1992); Cressie (1993); and Barton (1992; 1994).
The testing of Posit 2.2 helps to verify Hypothesis 2; the strategy for testing Hypothesis 2 (and
Posit 2.2) is outlined in Section ___.
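To make the flexibility claimed in Posit 2.2 concrete, a few common spatial correlation function families are sketched below. These are illustrative examples from the kriging literature, not necessarily the five functions investigated in this dissertation (see Section 2.3.1).

```python
import numpy as np

# Common families of spatial correlation functions R(d; theta) used in
# kriging, each a function of the distance d between two sample points.
def gaussian(d, theta):
    return np.exp(-theta * d ** 2)

def exponential(d, theta):
    return np.exp(-theta * np.abs(d))

def linear(d, theta):
    return np.maximum(0.0, 1.0 - theta * np.abs(d))

# Every valid correlation function equals 1 at d = 0 (a point is perfectly
# correlated with itself) and decays as the distance d grows.
for corr in (gaussian, exponential, linear):
    print(corr.__name__, corr(0.0, 2.0), corr(1.0, 2.0))
```

The choice of family controls how quickly correlation decays with distance, and hence how smooth or localized the resulting kriging model is.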

Finally, the following posit supports Hypothesis 3.

Posit 3.1: The experimental design conditions of replication, blockability, and
rotatability have no significant meaning when sampling deterministic computer
experiments.

• Posit 3.1 is taken from Sacks, et al. (1989) who state that the “classical notions of
experimental blocking, replication and randomization are irrelevant” for deterministic
computer experiments which contain no random error. Moreover, any experimental
design text (see, e.g., Montgomery, 1991) can verify that experimental design
properties such as replication, blockability, and rotatability are developed explicitly
to handle and account for random (measurement) error in a physical experiment for
which classical experimental designs have been developed.
Furthermore, since kriging (using an underlying constant model) is being advocated in this
dissertation for metamodeling deterministic computer experiments, an additional posit in support
of Hypothesis 3 is the following.

Posit 3.2: Since kriging models (with an underlying constant model) rely on the spatial
correlation between data, confounding and aliasing of main effects and two-factor
interactions have no significant meaning when predicting a response.

• Posit 3.2 is substantiated by Sacks, et al. (1989); Currin, et al. (1991); Welch, et
al. (1990); and Barton (1992; 1994). In physical experimentation, great care is
taken to ensure that aliasing and confounding of main effects and two-factor
interactions do not occur to ensure accurate estimation of coefficients of the
polynomial response surface model (see, e.g., Montgomery, 1991).
The experimental procedure for testing Hypothesis 3 is discussed in the next section along with
the specific strategy for verification and testing of all of the hypotheses.

Verification and Testing of the Hypotheses


Verification of the hypotheses has already begun through the stating and substantiating of
posits in support of each hypothesis. What is tested in the remainder of the dissertation is the
“intellectual leap of faith” required to jump from the posits to the hypotheses. The strategy for
verification and testing of Hypothesis 1 and the related sub-hypotheses is outlined in Section
___. The strategy for verification and testing of Hypotheses 2 and 3 is also outlined in Section
___ and examined more thoroughly in Section 4.2 wherein the experimental procedure for
testing Hypotheses 2 and 3 is detailed.

Testing Hypothesis 1 and Sub-Hypotheses 1.1-1.3

First and foremost, the Product Platform Concept Exploration Method (PPCEM) is
hypothesized as a method for designing scalable product platforms. In addition, the PPCEM is
hypothesized to be efficient and effective. The efficiency of the PPCEM is attained by using:
• metamodels to:
– create inexpensive approximations of computer analyses, and
– facilitate the implementation of robust design; and
• robust design principles to design simultaneously a family of products around a
scalable product platform.
The effectiveness of the PPCEM is attained by using:
• robust design principles to design a scalable product platform, and
• lexicographic minimum concept to generate a solution portfolio to maintain design
flexibility.
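A minimal sketch of the lexicographic minimum concept follows; the deviation vectors are hypothetical, with one entry per priority level of a compromise DSP, and lower deviations are preferred at every level.

```python
def lex_preferred(dev_a, dev_b):
    """Return True if deviation vector dev_a is lexicographically preferred
    to dev_b: compare priority levels in order and decide at the first
    level where the two vectors differ."""
    for a, b in zip(dev_a, dev_b):
        if a != b:
            return a < b
    return False  # identical deviation vectors: no preference

# Hypothetical deviation vectors (one entry per priority level) for three
# candidate platform designs.
candidates = {
    "design A": (0.00, 0.40, 0.10),
    "design B": (0.00, 0.25, 0.90),
    "design C": (0.05, 0.00, 0.00),
}

# B beats A at the second priority level; C loses at the first level even
# though it is best at the lower-priority levels.  Python's tuple ordering
# is itself lexicographic, so min() finds the lexicographic minimum.
best = min(candidates, key=lambda k: candidates[k])
print(best)  # design B
```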
To verify the effectiveness of the PPCEM as a method to design scalable product platforms,
three example problems are utilized. In each example, the resulting product platform obtained
using the PPCEM is compared to the results obtained from designing each product in the family
separately and then aggregating the products into a common set of specifications. The three
example problems for testing the PPCEM are:

• the design of a family of oil filters,
• the design of a family of absorption chillers, and
• the design of a family of General Aviation aircraft.
These three problems have been selected for three reasons. First, they are well-suited for
demonstrating the PPCEM; each problem entails the design of a family of products around a
scalable product platform. Second, these examples represent engineering design problems of
varying size and complexity, ranging from the design of a family of oil filters, the smallest and
least complex of the problems, to the design of a family of General Aviation aircraft, the
largest and most complex of the three. The design of a family of absorption chillers falls
somewhere in the middle. Finally, these problems have all been substantial enough for three
Master’s thesis projects. The design of a family of oil filters is part of (Uma’s Thesis), the
design of a family of absorption chillers is part of , and the design of a family of General Aviation
aircraft is part of the author’s previous work (Simpson, 1995). As such, the previous
work in the cited theses provides additional verification for the PPCEM and the related
hypotheses.

Efficiency of the PPCEM is verified by comparing the time involved in building, validating, and
using the necessary metamodels against the time required to perform the same analyses
without the metamodels. Similarly, the efficiency achieved through implementation of robust
design principles to design simultaneously a family of products is discussed at the end of each
example problem.
Verification and testing of the sub-hypotheses related to Hypothesis 1 entails the following.
Testing Sub-Hypothesis 1.1 - The procedure for using the market segmentation grid
to identify scale factors for a product platform is shown in Figure ___ and described
in Section ___. Further verification of this sub-hypothesis requires demonstrating
that this procedure can indeed be used to identify scale factors for a product
platform. This is demonstrated in all three examples wherein the appropriate scale
factors are identified for leveraging the product platform in the product family.
Testing Sub-Hypothesis 1.2 - If appropriate scale factors can be identified for a
product platform (i.e., Sub-Hypothesis 1.1 is true) then the principles of robust
design can be employed to develop a design which is robust with respect to these
noise factors much in the same way that Chang, et al. (1994) use robust design
principles to develop “conceptually robust” designs with regard to appropriate
“conceptual noise” factors resulting from distributed, concurrent engineering
practices. Verification of this sub-hypothesis requires demonstration of the
approach, and the three examples provide such a demonstration.
Testing Sub-Hypothesis 1.3 - The procedure for aggregating the individual targets of
the product variants is outlined in Section ___. As with Sub-Hypothesis 1.1, further
verification of this sub-hypothesis requires demonstrating that this procedure can
indeed be used; the three examples are used to demonstrate just that.
Verification of these sub-hypotheses helps to further verify Hypothesis 1. The strategy for
testing Hypotheses 2 and 3 is outlined in the next section.

Testing Hypotheses 2 and 3

Testing of Hypotheses 2 and 3 proceeds as follows, occurring predominantly in Chapter 4.


First, an initial feasibility study of the utility of kriging is presented in Section 4.1. A simple, yet
realistic engineering example—the multidisciplinary design of an aerospike nozzle—is used to
compare the predictive capability of a kriging model against that of second-order response
surfaces. The specific aspect of Hypothesis 2 being tested in Section 4.1 is whether or not
kriging, using an underlying constant global model in combination with a Gaussian correlation
function (one of the five being investigated in this dissertation, see Section 2.3.1), is as accurate
as a full second-order response surface.
After this initial feasibility study, an extensive investigation into the utility of kriging and space
filling designs is conducted to test Hypotheses 2 and 3. The details of the investigation are given
in Section 4.2. To summarize, six engineering test problems—outlined in Section 4.2.1 and
detailed in Appendix C—are selected to:
• test the effect of five different correlation functions, Equations 2.___ in Section
2.3.1, on the accuracy of the kriging model for a wide variety of engineering analysis
equations (linear, quadratic, cubic, reciprocal, exponential, etc.);
• correlate the types of functions (analysis equations) which kriging models can and
cannot accurately approximate; and
• test the effect of eleven different experimental designs on the accuracy of the
resulting kriging model.
Of the eleven experimental designs mentioned in the last bullet, two are classical designs—
central composite and Box-Behnken—and the remaining nine are space filling; see Section
4.2.2, and refer to Section 2.3.2 for a description of each type of design used in this dissertation.
In this manner, Hypothesis 3 is explicitly tested by comparing the accuracy of the kriging model
built from a space filling design and a classical experimental design. The first two bullets relate
to testing Hypothesis 2, and the particulars of that study (types of designs, number of points,
types of equations, etc.) are detailed in Section 4.2.
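For readers unfamiliar with space filling designs, a random Latin hypercube sample, one of the space filling families considered, can be generated as follows. This sketch is illustrative only and is not the exact construction used for the designs in Section 4.2.2.

```python
import numpy as np

def latin_hypercube(n_points, n_dims, rng):
    """Random Latin hypercube sample on [0, 1]^n_dims: each variable's
    range is split into n_points equal strata, and every stratum of every
    variable contains exactly one point (a space filling projection
    property that classical designs such as central composite lack)."""
    design = np.empty((n_points, n_dims))
    for j in range(n_dims):
        # one point per stratum, randomly placed within its stratum,
        # with the strata randomly paired across dimensions
        strata = rng.permutation(n_points)
        design[:, j] = (strata + rng.random(n_points)) / n_points
    return design

rng = np.random.default_rng(0)
X = latin_hypercube(9, 2, rng)

# Projection property: each of the 9 strata of each variable holds one point.
print(np.sort((X * 9).astype(int), axis=0))  # each column is [0..8]
```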
OVERVIEW OF THE DISSERTATION

The remaining chapters of the dissertation flow as shown in Figure 1.1. Having laid the
foundation in Decision-Based Design, robust
design, and the RCEM and presented the research questions for the work in this chapter,
Chapter 2 contains a literature review of related work. Based on the discussion in Section 1.1,
three research areas are reviewed: (1) product family and product platform design, (2) robust
design, and (3) metamodeling in Sections 2.1, 2.2, and 2.3, respectively.
The PPCEM is then introduced in Chapter 3 as the tools, approaches, and philosophies
from Chapters 1 and 2 are synthesized into an efficient and effective method for designing
scalable product platforms for a product family. As noted in Figure 1.1, the PPCEM is
overviewed in Section 3.1 with the individual
steps elaborated in Sections 3.1.1 through 3.1.5. After the PPCEM is introduced, the research
hypotheses are revisited in Section 3.2, and supporting posits are stated and substantiated.
Section 3.3 contains an outline of the strategy for verification and testing of the hypotheses. This
includes a preview of Chapter 4—wherein Hypotheses 2 and 3 are tested—and Chapters 5
through 7 wherein the PPCEM is applied to three example problems, verifying Hypotheses 1
and Sub-Hypotheses 1.1 through 1.3.
Chapter 4 entails a brief departure from product platform design yet is an integral part
of the development of the PPCEM. In Section 4.1, an initial feasibility study of the usefulness of
kriging is performed, comparing the predictive capability of kriging models to that of second-
order response surfaces, the current “standard” for metamodeling. The experimental set-up for
testing Hypotheses 2 and 3 is then introduced in Section 4.2. An extensive study of six
engineering test problems selected from the literature is conducted to determine the utility of
various spatial correlation functions and space filling designs; results are presented and
discussed in Section 4.3.
Once the kriging and space filling design study is completed in Chapter 4, the first of the
three example problems used to demonstrate the PPCEM and verify its associated hypotheses is
given in Chapter 5: the design of a family of oil filters. The second and third example problems
are the design of a family of absorption chillers, Chapter 6, and the design of a family of General
Aviation aircraft, Chapter 7. In each chapter, an overview of the problem is given along with
pertinent analysis information. Then, the steps of the PPCEM are performed and a summary
and discussion of the results is given.
Chapter 8 is the final chapter in the dissertation. It begins in Section 8.1 with a recap of
the dissertation, emphasizing the research hypotheses and resulting contributions from the work.
A critical review of the dissertation is given in Section 8.2; limitations and shortcomings of the
work are addressed. This is followed by a discussion of possible future work to refine the
PPCEM and the associated metamodeling techniques.
Finally, there are three appendices which supplement the dissertation, specifically the
work in Chapter 4. A description of the minimax Latin hypercube design algorithm which is
unique to this work is given in Appendix A. Appendix B outlines the kriging algorithm
developed and utilized in this dissertation. In addition, a step-by-step example of building and
using a kriging model is also included. Finally, the six test problems used in Section 4.2 for
investigating the utility of different experimental design techniques and kriging models are
detailed in Appendix C.
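The minimax criterion behind the Appendix A generator can be sketched as follows. Note that the crude random search below merely stands in for the genetic algorithm of Appendix A, and the grid approximation of the design region is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_lhd(n, k):
    """Random Latin hypercube design with points at stratum centers."""
    return np.column_stack(
        [(rng.permutation(n) + 0.5) / n for _ in range(k)]
    )

def minimax_score(design, grid):
    """Minimax criterion: the largest distance from any point of the
    region (approximated by a dense grid) to its nearest design point.
    Smaller is better: no part of the space is far from a sample."""
    d = np.linalg.norm(grid[:, None, :] - design[None, :, :], axis=2)
    return d.min(axis=1).max()

# Dense grid over [0, 1]^2 approximating the design region.
g = np.linspace(0.0, 1.0, 21)
grid = np.array([(a, b) for a in g for b in g])

# Crude random search: keep the best of many random LHDs.  Appendix A
# replaces this loop with a genetic algorithm.
best = min((random_lhd(9, 2) for _ in range(200)),
           key=lambda d: minimax_score(d, grid))
print(round(minimax_score(best, grid), 3))
```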

[Figure 1.1, Pictorial Overview of the Dissertation: the foundations (Decision-Based Design
and the Robust Concept Exploration Method) and the three reviewed areas (Product Family
Design, §2.1; Robust Design Principles, §2.2; Metamodeling, §2.3) feed into the Product
Platform Concept Exploration Method (Chp 3) and the Kriging/DoE Testbed (Chp 4), which
lead to the Product Platform Design Examples of Chps 5-7 (Oil Filters, Absorption Chillers,
and General Aviation Aircraft) and, finally, to Chp 8, Achievements and Future Work.]
Chapter Outline

CHAPTER 1
FOUNDATIONS FOR PRODUCT FAMILY AND PRODUCT PLATFORM
DESIGN

The principal objective in this dissertation is to develop the Product Platform Concept
Exploration Method (PPCEM) for efficient and effective design of scalable product platforms
for a product family. As the title of this chapter implies, the foundations for developing the
PPCEM are presented here. The heart of the chapter lies in Section 1.3 wherein the research
objectives, hypotheses, and contributions for the work are described; this section sets the stage
for the chapters that follow, culminating in the development of the PPCEM in Chapter 3.
Specifically, Sections 1.1 and 1.2 provide the motivation and foundation for
investigating the proposed research and serve to establish context for the reader. More
specifically, in Section 1.1 the concepts of product family and product platform design are
introduced, and opportunities for advancing this nascent research area are identified. In Section
1.2, the foundations for the work are presented, namely, Decision-Based Design, robust design
principles, and the Robust Concept Exploration Method. Section 1.4 contains an overview of
the dissertation.

1.1 FRAME OF REFERENCE: PRODUCT FAMILY AND PRODUCT PLATFORM
DESIGN
1.1.1 Engineering Examples of Product Families and Product Platforms
1.1.2 Opportunities in Product Family and Product Platform Design
1.2 FOUNDATIONS FOR DESIGNING SCALABLE PRODUCT PLATFORMS FOR A
PRODUCT FAMILY
1.2.1 Decision-Based Design, the Decision Support Problem Technique, and the
Compromise Decision Support Problem
1.2.2 Robust Design Principles
1.2.3 The Robust Concept Exploration Method
1.3 RESEARCH FOCUS IN THE DISSERTATION
1.3.1 Research Questions and Hypotheses in the Dissertation
1.3.2 Contributions from the Research
1.4 OVERVIEW OF THE DISSERTATION

CHAPTER 2
STATE-OF-THE-ART IN PRODUCT FAMILY DESIGN, ROBUST DESIGN, AND
METAMODELING: LITERATURE REVIEW

In this chapter a survey of relevant work in product family and product platform design, robust
design, and metamodeling is presented in Sections 2.1, 2.2, and 2.3, respectively. In Section
2.1, the descriptors, tools, and current methods for designing product families and product
platforms are discussed. Section 2.2 contains a review of robust design principles, tracing the
evolution of robust design from parameter design to the early stages of product design. This
segues into a discussion of metamodeling techniques for building inexpensive approximations of
computer analyses to facilitate robust design and concept exploration. In particular, the kriging
approach to metamodeling is introduced in Section 2.3.1, and a variety of space filling
experimental designs for querying the computer code to build these models are described in
Section 2.3.2. Section 2.4 concludes the chapter with a summary of what has been presented
and a preview of what is next.

2.1 THE STATE-OF-THE-ART IN PRODUCT FAMILY AND PRODUCT PLATFORM
DESIGN
2.1.1 Descriptors for Product Families
2.1.2 Mapping a Product Family and Its Evolution
2.1.3 Tools for Product Platform Design
2.1.4 Measuring Commonality in a Product Family
2.1.5 Current Product Family Design Methods
2.2 ROBUST DESIGN
2.3 METAMODELING TECHNIQUES
2.3.1 The Kriging Approach to Metamodeling
2.3.2 Space Filling and Classical Experimental Designs
2.4 A LOOK BACK AND A LOOK AHEAD

CHAPTER 3
THE PRODUCT PLATFORM CONCEPT EXPLORATION METHOD

The work in this chapter is a synthesis of the previous chapters and represents the principal
objective in this dissertation, namely, to develop the Product Platform Concept Exploration
Method (PPCEM) for efficient and effective design of scalable product platforms for a product
family. To start, an overview of the PPCEM and its associated steps and tools is given in
Section 3.1 with each step of the PPCEM and its constituent elements elaborated in Sections
3.1.1 through 3.1.5; the resulting infrastructure of the PPCEM is presented in Section 3.1.6. In
Section 3.2 the research hypotheses on which the PPCEM is founded are revisited from
Section 1.3.1. More specifically, in Section 3.2.1 the relationship between the research
hypotheses and modifications to the RCEM are detailed, and in Section 3.2.2 supporting posits
for the research hypotheses are stated. Section 3.3 follows with an outline of the strategy for
verification and testing of the research hypotheses. Section 3.4 concludes the chapter with a
recap of what has been presented and a look ahead to the metamodeling study in Chapter 4
and the example problems in Chapters 5, 6, and 7 which are used to test the research
hypotheses and demonstrate the application of the PPCEM.

3.1 OVERVIEW OF THE PPCEM AND RESEARCH HYPOTHESES


3.1.1 Step 1 - Create the Market Segmentation Grid
3.1.2 Step 2 - Classify Factors and Ranges
3.1.3 Step 3 - Build and Validate Metamodels
3.1.4 Step 4 - Aggregate Product Platform Specifications
3.1.5 Step 5 - Develop Product Platform Portfolio
3.1.6 Infrastructure of the PPCEM
3.2 RESEARCH HYPOTHESES AND POSITS FOR THE PPCEM
3.2.1 Relationship of the Research Hypotheses to the RCEM
3.2.2 Supporting Posits for the Research Hypotheses
3.3 STRATEGY FOR VERIFICATION AND TESTING OF THE RESEARCH
HYPOTHESES
3.3.1 Testing Hypothesis 1 and Sub-Hypotheses 1.1-1.3
3.3.2 Testing Hypotheses 2 and 3
3.4 A LOOK BACK AND A LOOK AHEAD

CHAPTER 4
THE UTILITY OF KRIGING AND SPACE FILLING EXPERIMENTAL DESIGNS

In this chapter, Hypotheses 2 and 3 are tested, verifying the utility of kriging and space filling
experimental designs for building metamodels of deterministic computer analyses. An initial
feasibility study of kriging as a metamodeling technique is given in Section 4.1; the study involves
comparing kriging and response surface models in the multidisciplinary design of an aerospike
nozzle. Once kriging is established as a viable alternative, a detailed study is set up to test
Hypotheses 2 and 3, Section 4.2. Six problems are introduced in Section 4.2.1 to serve as the
test bed for benchmarking kriging and space filling designs. In Sections 4.2.2 and 4.2.3,
experimental design choices and error assessment measures are discussed, respectively.
Section 4.3 contains the results of the study and a discussion of the ramifications of the results.
A summary of the chapter is then given in Section 4.4 along with a discussion of the relevance of
this chapter to the development of the PPCEM.
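As a preview of the error assessment in Section 4.2.3, two validation-point error measures commonly used for metamodels, root mean square error and maximum absolute error, can be computed as follows. The data values are hypothetical; Section 4.2.3 defines the specific metrics used in the study.

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """Assess metamodel accuracy at validation points: root mean square
    error (average accuracy over the region) and maximum absolute error
    (worst-case local accuracy)."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    rmse = np.sqrt(np.mean(err ** 2))
    max_abs = np.max(np.abs(err))
    return rmse, max_abs

# Hypothetical true responses and metamodel predictions at five
# validation points not used to fit the metamodel.
y_true = [1.0, 2.0, 3.0, 4.0, 5.0]
y_pred = [1.1, 1.9, 3.0, 4.2, 4.9]
rmse, max_abs = error_metrics(y_true, y_pred)
print(f"RMSE = {rmse:.4f}, max |error| = {max_abs:.4f}")
```

Because the computer analyses are deterministic, the validation points must be distinct from the fitting points; an interpolating metamodel such as kriging has zero error at the points used to build it.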
4.1 INITIAL FEASIBILITY STUDY OF KRIGING: THE MULTIDISCIPLINARY
DESIGN OF AN AEROSPIKE NOZZLE
4.1.1 Background for the Aerospike Nozzle Problem
4.1.2 Metamodeling of the Aerospike Nozzle Problem
4.1.3 Optimization using the Response Surface and Kriging Metamodels
4.2 EXPERIMENTAL SET-UP: KRIGING AND SPACE FILLING EXPERIMENTAL
DESIGN TESTBED
4.2.1 Overview of Testbed Problems
4.2.2 Experimental Design Choices for Test Problems
4.2.3 Validation Points and Error Metrics for Assessing Model Accuracy
4.2.4 Summary of Kriging Study
4.3 THE UTILITY OF KRIGING AND SPACE FILLING DESIGNS
4.3.1 Experimental Set Up
4.3.2 Which Correlation Function is Best?
4.3.3 Which Types of Experimental Designs are Best?
4.4 A LOOK BACK AND A LOOK AHEAD

CHAPTER 5
DESIGN OF A FAMILY OF OIL FILTERS

5.1 OVERVIEW OF THE OIL FILTER PLATFORM PROBLEM


5.1.1 Problem Statement and Leveraging Strategy
5.1.2 Relevant Analyses for Oil Filter Problem
5.2 STEPS 1 AND 2: CREATE MARKET SEGMENTATION GRID AND
CLASSIFY FACTORS FOR OIL FILTER PLATFORM
5.3 STEP 3: BUILD AND VALIDATE METAMODELS
5.4 STEP 4: AGGREGATE PRODUCT SPECIFICATIONS AND FORMULATE OIL
FILTER PLATFORM COMPROMISE DSP
5.5 STEP 5: DEVELOP THE OIL FILTER PLATFORM PORTFOLIO
5.6 RAMIFICATIONS OF THE RESULTS OF THE OIL FILTER EXAMPLE PROBLEM
5.7 A LOOK BACK AND A LOOK AHEAD

CHAPTER 6
DESIGN OF A FAMILY OF ABSORPTION CHILLERS
6.1 OVERVIEW OF THE ABSORPTION CHILLER PLATFORM PROBLEM
6.1.1 Problem Statement and Leveraging Strategy
6.1.2 Relevant Analyses for Absorption Chillers
6.2 STEPS 1 AND 2: CREATE MARKET SEGMENTATION GRID AND
CLASSIFY FACTORS FOR ABSORPTION CHILLER PLATFORM
6.3 STEP 3: BUILD AND VALIDATE METAMODELS
6.4 STEP 4: AGGREGATE PRODUCT SPECIFICATIONS AND FORMULATE
ABSORPTION CHILLER PLATFORM COMPROMISE DSP
6.5 STEP 5: DEVELOP THE ABSORPTION CHILLER PLATFORM PORTFOLIO
6.6 RAMIFICATIONS OF THE RESULTS OF THE ABSORPTION CHILLER
EXAMPLE PROBLEM
6.7 A LOOK BACK AND A LOOK AHEAD

CHAPTER 7
DESIGN OF A FAMILY OF GENERAL AVIATION AIRCRAFT

7.1 OVERVIEW OF THE GENERAL AVIATION AIRCRAFT PLATFORM PROBLEM


7.2 STEPS 1 AND 2: CREATE MARKET SEGMENTATION GRID AND
CLASSIFY FACTORS FOR GAA PLATFORM
7.3 STEP 3: BUILD AND VALIDATE METAMODELS
7.3.1 Relevant Analyses for General Aviation Aircraft
7.3.2 Metamodel Construction and Validation for GAA Responses
7.4 STEP 4: AGGREGATE PRODUCT SPECIFICATIONS AND FORMULATE GAA
PLATFORM COMPROMISE DSP
7.5 STEP 5: DEVELOP THE GAA PLATFORM PORTFOLIO
7.6 RAMIFICATIONS OF THE RESULTS OF THE GAA EXAMPLE PROBLEM
7.6.1 GAA Platform Design Effectiveness
7.6.2 Comparison of PPCEM Results to Previous Results
7.7 A LOOK BACK AND A LOOK AHEAD

CHAPTER 8
ACHIEVEMENTS AND RECOMMENDATIONS
8.1 RESEARCH OBJECTIVES AND HYPOTHESES REVISITED
8.2 CRITICAL REVIEW OF THE DISSERTATION
8.3 FUTURE WORK
8.3.1 Future Work in Kriging
8.3.2 Future Work with Space Filling Experimental Designs
8.3.3 Future Work in Product Family and Product Platform Design

APPENDIX A
A MINIMAX LATIN HYPERCUBE DESIGN GENERATOR USING A GENETIC
ALGORITHM

A.1 WHY A MINIMAX LATIN HYPERCUBE DESIGN?


A.2 A GENETIC ALGORITHM FOR GENERATING MINIMAX LATIN HYPERCUBE
DESIGNS
A.3 EXAMPLE MINIMAX LATIN HYPERCUBE DESIGNS

APPENDIX B
KRIGING STEP-BY-STEP

This appendix is intended to supplement the brief description of kriging which is given in Section
2.4.1. In Section B.1, the question of “What is kriging?” is addressed. As part of this section,
three other questions the reader might be asking him/herself are addressed:
• Section B.1.1 - “Why use kriging?”
• Section B.1.2 - “What is a spatial correlation function?”
• Section B.1.3 - “How is a kriging model built, validated, and implemented?”
After these questions are addressed, a simple one-dimensional example is presented, going
step-by-step through the process of building, validating, and using a kriging model.
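The mechanics outlined above can be sketched in a few lines of code. The following is a minimal illustration of ordinary kriging with a Gaussian spatial correlation function and a fixed correlation parameter theta; it is not the implementation used in this dissertation (where theta is found by maximum likelihood estimation, as described in Section B.1.3), and all function names are illustrative.

```python
import numpy as np

def fit_kriging(X, y, theta=10.0):
    """Fit a 1-D ordinary kriging model with a Gaussian correlation function.

    theta is held fixed here for simplicity; in practice it is chosen by
    maximum likelihood estimation."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    # Spatial correlation matrix: R_ij = exp(-theta * (x_i - x_j)^2)
    R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2)
    R_inv = np.linalg.inv(R + 1e-10 * np.eye(len(X)))  # tiny nugget for stability
    ones = np.ones(len(X))
    # Generalized least-squares estimate of the constant trend term beta
    beta = (ones @ R_inv @ y) / (ones @ R_inv @ ones)
    return X, y, theta, R_inv, beta

def predict(model, x):
    """Kriging predictor: constant trend plus a correlation-weighted correction."""
    X, y, theta, R_inv, beta = model
    r = np.exp(-theta * (x - X) ** 2)  # correlations between x and the samples
    return beta + r @ R_inv @ (y - beta)

# 1-D example: sample a test function, build the model, and check interpolation
X = np.linspace(0.0, 1.0, 6)
y = np.sin(2 * np.pi * X)
model = fit_kriging(X, y)
print([round(float(predict(model, x)), 4) for x in X])  # reproduces y at the samples
```

Because kriging interpolates its sample data, validation is typically done by predicting at points withheld from the fit, as described in the appendix.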

B.1 WHAT IS KRIGING?


B.1.1 Why Use Kriging?
B.1.2 What are Spatial Correlation Functions?
B.1.3 How is a Kriging Model Built, Validated, and Implemented?
B.2 KRIGING STEP-BY-STEP: A 1-D EXAMPLE
APPENDIX C
KRIGING TEST PROBLEMS

Six engineering test problems—two two-variable problems, two three-variable problems, and two
four-variable problems—have been selected from the literature to further test the utility of kriging as a
metamodeling technique. The analysis of these problems is simple enough that building kriging
models of the responses is overkill to say the least. However, these problems do serve a
purpose; they have been selected because: (a) they have been well studied, (b) the behavior of
the system and the underlying equations are known, and (c) the optimum solution is also known.
Thus, because the underlying equations and optimum solution are known, it is easy to determine
the utility of kriging on a variety of problems, hence testing Hypotheses 2 and 3 from Section
1.3.1. Each example is described in turn along with the corresponding constraints, bounds, and
objectives. The optimum solution for each problem is also given, and only the pertinent
equations have been numbered.

C.1 DESIGN OF A TWO-BAR TRUSS


C.2 DESIGN OF A SYMMETRIC THREE-BAR TRUSS
C.3 DESIGN OF A HELICAL COMPRESSION SPRING
C.3.1 Shear Stress
C.3.2 Free Length
C.3.3 Preload Deflection
C.3.4 Combined Deflections
C.3.5 Deflection Requirement
C.3.6 Geometric Constraints
C.4 DESIGN OF A TWO-MEMBER FRAME
C.5 DESIGN OF A WELDED BEAM
C.5.1 Weld Stress
C.5.2 Bar Bending Stress
C.5.3 Bar Buckling Load
C.5.4 Bar Deflection
C.6 DESIGN OF A PRESSURE VESSEL
CHAPTER 1

FOUNDATIONS FOR PRODUCT FAMILY AND


PRODUCT PLATFORM DESIGN

The principal objective in this dissertation is to develop the Product Platform Concept

Exploration Method (PPCEM) to facilitate the design of a common scalable product platform

for a product family. As the title of this chapter implies, the foundations for developing this

method are presented here. The heart of the chapter lies in Section 1.3 wherein the research

objectives, hypotheses, and contributions for the work are described; this sets the stage for the

chapters that follow, culminating in the development of the PPCEM in Chapter 3. Sections 1.1

and 1.2 contain the motivation and foundation for investigating the proposed research

and serve to establish context for the reader. Specifically, in Section 1.1 the concepts of

product family and product platform design are introduced and defined, and opportunities for

advancing this nascent research area are identified. In Section 1.2, the foundations for the

work—Decision-Based Design and the Robust Concept Exploration Method—are presented.

Finally, Section 1.4 contains an overview of the dissertation.

1.1 FRAME OF REFERENCE: PRODUCT FAMILY AND PRODUCT
PLATFORM DESIGN

Today’s competitive and highly volatile market is redefining the way companies do

business. “Customers can no longer be lumped together in a huge homogeneous market, but

are individuals whose individual wants and needs can be ascertained and fulfilled” (Pine, 1993).

Companies are being called upon to deliver better products faster and at less cost for customers

who are more demanding in a market which is characterized by words such as mass

customization and rapid innovation. Even government agencies like NASA are re-examining the

way they operate and do business, adopting slogans such as “better, faster, cheaper.”

Erens (1997) refers to the market as a “buyer’s market” in which manufacturing

companies must satisfy individual customer requirements; he writes as follows:

“The sellers’ market of the fifties and sixties was characterized by high demand and a
relative shortage of supply. Firms produced large volumes of identical products,
supported by mass production techniques. ... The buyer’s market of the eighties and
beyond is forcing companies making specific high-volume products to manufacture a
growing range of products tailored to individual customer’s needs at the cost of
standard mass-produced goods.”

So why the growing concern for satisfying the individual customer? Stan Davis,

the person who coined the term mass customization, captures it best: “The more a company can

deliver customized goods on a mass basis relative to their competition, the greater is their

competitive advantage” (Davis, 1987). Simply stated, companies which offer customized goods

at minimal extra cost have a competitive advantage over those that do not. Pine (1993)

attributes the increasing attention on product variety and customer demand to the saturation of

the market and the need to improve customer satisfaction:

“Today, demand for new products frequently has to be diverted from older ones. It is
therefore important for new products to meet customer needs more completely, to be of
higher quality, and simply to be different from what is already in the marketplace.”

Similar themes pervade the texts by Wortmann, et al., (1997) who examine industry’s response

in Europe to the “customer-driven” market, and Anderson (1997) who examines the role of

agile product development for mass customization.

This increasing need to distinguish and differentiate products from competitors is further

evidenced by Hollins and Pugh (1990):

"The customer now has plenty of choice for almost every product within a price range.
With this increased choice, consumers have become more aware of the good and bad
features of a product...they select the product that most closely fulfills their opinion of
being the best value for the money. This is not just price but a wide range of non-price
factors such as quality, reliability, aesthetics..."

Chinnaiah, et al. (1998) also examine the trend toward mass customized goods, citing more

demanding customers and market saturation as impetus for the shift. Uzumeri and Sanderson

(1997) state that “The emergence of global markets has fundamentally altered competition as

many firms have known it” with the resulting market dynamics “forcing the compression of

product development times and expansion of product variety.” The study by Womack, et al.

(1990) of the automobile industry in the 1980s provides just one of numerous examples of this

trend.

Since many companies typically design new products one at a time, Meyer and Lehnerd

(1997) have found that the focus on individual customers and products results in “a failure to

embrace commonality, compatibility, standardization, or modularization among different

products or product lines.” Similarly, Erens (1997) states that “If sales engineers and designers

focus on individual customer requirements, they feel that sharing components compromises the

quality of their products.” The end result is a “mushrooming” or diversification of products and

parts with proliferating variety and costs. Mather (1995) states that “Rarely does the full

spectrum of product offerings get reviewed at one time to ensure it is optimal for the business.”

Consequently, companies are being faced with the challenge of providing as much

variety as possible for the market with as little variety as possible between products.

Toward this end, the approach advocated in this dissertation and by many strategic

marketing/management researchers and designers/engineers alike is to design and develop a

family of products with as much commonality between products as possible and minimal

compromise in quality and performance. Several engineering examples are presented in the

next section to provide context and foster a better understanding of the product family concept

and how product families have been successfully developed and realized. Research

opportunities in product family and product platform design then are discussed in Section 1.1.2.

1.1.1 Engineering Examples of Successful Product Families

The following examples from Sony, Lutron, Nippondenso, Black & Decker, Canon,

and Rolls-Royce exemplify successful product families and have been studied as such.

Additional examples which might interest the reader include: Swiss army knives and Swatch

watches (Ulrich and Eppinger, 1995), Xerox copiers (Paula, 1997), Anderson windows

(Stevens, 1995), Hewlett-Packard printers (see, e.g., Lee, et al., 1993), the Boeing 747 family

of aircraft (see, e.g., Rothwell and Gardiner, 1990), and the Kodak single use camera (see,

e.g., Clark and Wheelwright, 1993).

Sony - The Sony Walkman

The design of the Sony Walkman is a classic example of managing the design of a product

family (Sanderson and Uzumeri, 1997). Sony first introduced the Walkman in 1979; it has

dominated the personal portable stereo market for over a decade and has remained the leader

both technically and commercially despite fierce competition from world-class competitors, e.g.,

Matsushita, Toshiba, Sanyo, and Sharp. Sony built all of its Walkman models around key

modules and platforms and used modular design and flexible manufacturing to produce a wide

variety of quality products at low cost. Incremental design changes accounted for only 20-30 of

the 250+ models Sony introduced in the U.S. in the 1980s. “The remaining 85% of Sony's

models were produced from minor rearrangements of existing features and cosmetic redesigns

of the external case...topological changes [such as these] can be made with little cost or risk”

(Sanderson and Uzumeri, 1995). The basic mechanisms in each platform were refined

continually, and Sony followed a disciplined and creative approach, focusing its families on clear

design goals and targeting models at distinct market segments.

Lutron - Electronic Lighting Control Systems

When engineers at Lutron design a new product line, they begin with a fairly standard product

with very few options (see, e.g., Spira, 1993). They then work with individual customers to

extend the product line until they eventually have a hundred or so models which customers can

purchase. Then engineering and production work together to redesign the product line with 15-

20 standardized components that can be configured into the same hundred models from which

customers could initially choose. Additional customization work can be performed to meet

individual customer requirements; in its electronic lighting systems line, used in conference

rooms, ballrooms, and hotel lobbies, Lutron has rarely shipped the same system twice

(Spira, 1993).

Nippondenso - Automotive Panel Meters

Nippondenso Co. Ltd. makes automotive components for Toyota, other Japanese car makers,

and car makers in other countries. They design their panel meters using a combinatoric strategy

as illustrated in Figure 1.1. A panel meter is composed of six parts (in rare cases, only five),

and in order to reduce inventory and production costs, each type of part has been redesigned

so that its mating features to its neighbors are identical across the part type. This was done by

standardizing the design (denoted by SD in the figure) in an effort to reduce the number of

variants of each part. Inventory and manufacturing costs were reduced without sacrificing the

product offering. Each zigzag line on the right hand side of Figure 1.1 represents a valid type of

meter, and as many as 288 types of meters can be assembled from 17 different components

(Whitney, 1993).

Figure 1.1 Nippondenso Panel Meter Components (from Whitney, 1993)
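The arithmetic behind such a combinatoric strategy is simple: the number of distinct meters is the product of the number of variants at each part position, while inventory grows only with their sum. The breakdown of the 17 components across the six part types below is hypothetical, chosen only to be consistent with the totals reported by Whitney (1993).

```python
# Hypothetical variant counts for the six panel-meter part types; chosen so that
# they are consistent with the totals reported by Whitney (1993): 17 components
# combining into 288 distinct meters.
variants_per_part = [3, 4, 4, 3, 2, 1]

distinct_components = sum(variants_per_part)   # parts to design, make, and stock
assemblable_meters = 1
for n in variants_per_part:
    assemblable_meters *= n                    # each position is chosen independently

print(distinct_components, assemblable_meters)  # 17 components -> 288 meter types
```

The asymmetry between the sum and the product is precisely what makes part standardization so effective at containing inventory cost while preserving the product offering.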

Black & Decker - Universal Motor Platform

The most common component in all power tools is the universal motor which Black & Decker

redesigned in the early 1970s. The redesign was in response to the threat of required double

insulation on electrical devices to protect the user from electrical shock if the main insulation

system fails. Double insulation was incorporated into 122 basic tools with hundreds of

variations, from jig saws and grinders to edgers and hedge trimmers. Through standardization

of the product line, Black & Decker was able to produce all of its power tools using a line of

motors that varied only in the stack length and the amount of copper wrapped within the motor.

As a result, all of the motors could be produced on a single machine with stack lengths varying

from 0.8 in to 1.75 in and power output ranging from 60 to 650 watts. Furthermore, new

designs were developed using standardized components such as the redesigned motor, which

allowed products to be introduced, exploited and retired with minimal expense related to

product development; see (Lehnerd, 1987) for additional information.

Canon - Copiers

Canon has successfully dominated the low-volume end of the copier market since the

mid-1980s. Canon's copiers offer a wide range of functions and market uses, including: 500-70,000

copies per month, 8-200 copies per minute, 35-400% reduction/enlargement,

fixed/variable/automatic exposure control, single sheets to double-sided, stapled, collated

copies in either black and white or as many as six different colors. To provide this variety,

Canon has a number of different series (base models or platforms) from which variant

derivatives are created to cover most of the customer's economic and technical requirements.

About 80 percent of the components of these copiers are standard; the remaining 20 percent

are altered and modified to produce product variants within the product family, see (Rothwell

and Gardiner, 1990) for additional information.

Rolls-Royce - Aircraft Engine Platforms

Rolls-Royce designs its aircraft engines around a common platform and then “derates” or

“upgrades” the platform to suit specific customer needs (cf., Rothwell and Gardiner, 1990). An

example is the RTM322 engine which was designed to allow several versions to be produced to

cater to different market requirements and power outputs. As shown in Figure 1.2, the

RTM322 platform is common to multiple versions of the engine, namely, the turboshaft,

turbofan and turboprop. When the RTM322 engine is scaled by a factor of 1.8, the engine

platform becomes the core for the RB550 series which is produced in two versions: turboprop

and turbofan.

Figure 1.2 Rolls-Royce RTM322 Engine (Rothwell and Gardiner, 1990)

In light of these examples, the following definitions for product family, product platform,

and derivatives and product variants are offered to provide context for the remainder of the

dissertation.

A product family is a group of products which share common form features and
function(s), targeting one or multiple market niches. Here, form features refer
generally to the shape and characterizing features of a product; function refers generally
to the utilization intent of a product. The Sony Walkman product family is one such
example; it contains a variety of models with different features and functions, e.g.,
graphic equalizer, auto-reverse, and waterproof casing, to target specific market niches.
A product platform, in this dissertation, is the common set of design variables around
which a family of products can be developed. In general terms, a product platform is
the common technological base from which a product family is derived through
modification and instantiation of the product platform to target specific market niches
(cf., Erens, 1997; McGrath, 1995; Meyer and Lehnerd, 1997). The universal motor
platform developed by Black & Decker is an example of a successful product platform.
Product platforms are also prevalent in the automobile industry, for example, where
several car models are typically derivatives of a common platform (cf., Siddique and
Rosen, 1998); Kobe (1997) and Naughton (1997) describe GM’s and Honda’s global
platform strategies, respectively.

A derivative or product variant is a specific instantiation of a product platform within a


product family which possesses unique form features and function(s) from other
members in the product family. Paper copiers are good examples of products derived
from a common product platform; in addition to the Canon example discussed
previously, Xerox’s 1090 copier is a derivative of its 1075 model while both copiers
are part of Xerox’s 10 series of copiers (Jacobson and Hillkirk, 1986). Furthermore,
the Boeing 747-200, 747-300, and 747-400 are derivatives of the Boeing 747
(Rothwell and Gardiner, 1990).

A single product or individual product is a unique product that has no pre-defined


relationships to other products; any resemblance to other products is strictly through
coincidence or producer’s preference (Erens, 1997). A single product contrasts with a
derivative product, which has similarities to other products in the product family, having
been derived from the same product platform.

In light of these examples and definitions, opportunities for making contributions in product

family and product platform design are discussed in the next section.

1.1.2 Opportunities in Product Family Design and Product Platform Design

To understand some of the research opportunities in product family and product

platform design, a closer look at the previous examples is needed. The examples from Lutron,

Nippondenso, and Black & Decker exemplify an a posteriori or bottom-up approach to

product family design. Each company redesigned or consolidated a group of distinct products

to create a more “efficient and effective” product family. Here, efficient and effective refers to

the increased economies of scale each company was able to realize by standardizing

components to reduce manufacturing and inventory costs without significantly compromising

product quality and performance.

The main drivers for this type of approach are as follows:

• simplify the product offering and reduce part variety by

• standardizing components so as to

• reduce manufacturing costs and inventory costs and

• reduce manufacturing variability (i.e., the variety of parts that are produced in a given
manufacturing facility) and thereby

• improve quality and customer satisfaction.

While the cost savings in manufacturing and inventory begin almost immediately from this type of

approach, the rewards are typically long-term since the capital investments and redesign costs

can be significant. Black & Decker, for example, estimated that it would take seven years to

reach the break-even point when they redesigned their universal motor platform for double

insulation. As Lehnerd (1987) discusses, between capital expenditures, development, and

tooling, Black & Decker spent $17M to redesign their motors; however, by paying attention to

standardization and exploiting platform scaling around the motor stack length, all of their motors

could be produced on the same machines. As a result, material costs dropped from $0.77 to

$0.42 per motor while labor costs fell from $0.248 to $0.045 per motor, yielding savings

of $1.82M per year. The cost of Black & Decker tools decreased by as much as 62%,

boosting sales, increasing production volumes, and further improving savings.
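These figures can be checked with a little arithmetic. The production volume computed below is not reported by Lehnerd (1987); it is merely implied by the stated per-motor savings and annual total.

```python
# Per-motor savings from the Black & Decker motor redesign (Lehnerd, 1987)
material_savings = 0.77 - 0.42    # $ per motor
labor_savings = 0.248 - 0.045     # $ per motor
per_motor = material_savings + labor_savings

annual_savings = 1.82e6           # $ per year, as reported
# Implied annual volume (inferred here, not reported in the source)
implied_volume = annual_savings / per_motor

print(round(per_motor, 3), round(implied_volume / 1e6, 2))
# ~ $0.553 saved per motor, implying roughly 3.3 million motors per year
```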

But must a company spend millions of dollars in costly redesign to achieve a good

product family? The answer is obviously no, and the examples from Rolls Royce, Canon, and

Sony demonstrate such an approach. These three companies exemplify an a priori or top-

down approach to product family design, i.e., strategically manage and develop a family of

products based on a common platform and its derivatives. McGrath (1995) states that “A clear

platform strategy leverages the resulting products, enabling them to be deployed rapidly and

consistently.” Furthermore, Wheelwright and Clark (1992) write as follows:

“Companies target new platforms to meet the needs of a core group of customers but
design them for easy modification into derivatives through the addition, substitution, or
removal of features. Well-designed platforms also provide a smooth migration path
between generations so neither the customer nor the distribution channel is disrupted.”

Finally, commonality and standardization across product families allow new designs to be

introduced, exploited, and retired with minimal expense related to product development

(Lehnerd, 1987).

As discussed in Section 1.1.1, Sony and Canon have been able to dominate their

respective markets despite serious local and global competition through a well-managed product

platform implementation strategy. The Sony Walkman has been the leader in the personal

stereo market for decades; Sanderson and Uzumeri (1995) studied the success of the Sony

Walkman, commenting as follows:

“Sony's strategy employed a judicious mix of design projects, ranging from large team
efforts that produced major new model 'platforms' to minor tweaking of existing
designs. Throughout, Sony followed a disciplined and creative approach to focus its
sub-families on clear design goals and target models to distinct market segments. Sony
supported its design efforts with continuous innovation in features and capabilities, as
well as key investments in flexible manufacturing.”

Similarly, Canon was able to steal, and thereafter dominate, the low-end copier market from

Xerox through careful development and realization of a family of products derived from

common platforms (Jacobson and Hillkirk, 1986). Companies like Xerox now are in the

process of re-engineering their product development processes to facilitate the design and

development of new families of copiers in record time (Paula, 1997). Along these same lines,

Rolls Royce can boast similar success. By scaling the RTM322 engine platform to satisfy a

range of thrust and power requirements, Rolls Royce was able to (a) reduce manufacturing and

inventory costs by using similar modules and components from one engine to the next and, more

importantly, (b) facilitate the costly certification phase of its engine development process.

Good product platforms do not just come off the shelf; they must be carefully planned,

designed, and developed. This requires intimate knowledge of customer requirements and a

thorough understanding of the market. However, as discussed in the literature review in Section

2.2.1, many of the tools and methods which have been developed to facilitate the

management and development of effective product platforms and product families are at

too high of a level of abstraction to be useful to engineering designers particularly for

modeling and design synthesis. Meanwhile, engineering design methods and tools for

synthesizing product families and product platforms are limited or slowly evolving. Consider the

brief summary in Table 1.1 of the product family examples from Section 1.1.1 and the

availability of design support. The majority of the examples from Section 1.1.1 require modular

design to facilitate upgrading and derating product variants through the addition and removal of

modules; a survey of many of these approaches is offered in Section 2.2.3. In addition,

clustering approaches have been developed to reduce variability within a product family and

facilitate redesigning product families to improve component commonality; see Section 2.2.3.

Meanwhile, little to no attention has been paid to platform scaling issues for product

family design. The notion of a “scalable” or “stretchable” product platform is introduced by

Rothwell and Gardiner (1990) and may be loosely defined as follows:

Scalable refers to the capability of a product platform to be “scaled,” “stretched,” or


“leveraged” to satisfy specific market niches. For example, the Boeing 747 is a scalable
product platform. It has been “scaled up” and “scaled down” to create the Boeing
747-200, 747-300, and 747-400 to satisfy different market niches based on number of
passengers, flight range, etc. (Rothwell and Gardiner, 1990). The Rolls-Royce
RTM322 aircraft engine and the Black & Decker universal motor examples discussed
in Section 1.1.1 heavily exploit platform scaling.

Table 1.1 Product Family Examples: Approach and Available Support

Example            Top-Down or  Product Family                 Availability of
from §1.1.1        Bottom-Up    Composition                    Design Support
-----------------  -----------  -----------------------------  -----------------------------
Sony:              Top-Down     Product platform with          Modular design
Walkman                         predominantly modular
                                design innovations
Lutron: Lighting   Bottom-Up    Combinatoric strategy based    Modular design and
Control Systems                 on modular design and part     clustering approaches
                                standardization
Nippondenso:       Bottom-Up    Similar to Lutron              Clustering approaches and
Panel Meters                                                   modular design
Black & Decker:    Bottom-Up    Product platform scaled        Modular design to
Universal Motor                 around stack length            standardize interfaces; no
                                                               support for scalability
Canon: Copiers     Top-Down     Product platform with          Modular design
                                predominantly modular
                                design for variety
Rolls Royce:       Top-Down     Product platform which is      Modular design for some
RTM322 Engine                   both scaled and modular        components; no support
                                for upgrading                  for scalability

There are several reasons to investigate scalability in product platform design:

• While modular design has received considerable attention in engineering design


research, the design of parametrically scalable product platforms for a product family
has received little to none.

• In many product families, scalability can be exploited from both a technical standpoint
and a manufacturing standpoint to increase the potential benefits of having a common
product platform. The Rolls Royce RTM322 engine and the Black & Decker universal
motor are excellent examples of this.

• Finally, and perhaps most importantly, the concept of scalability and scalable product
platforms provides an excellent inroad into product family and product platform design

through the synthesis of current research efforts in Decision-Based Design and the
Robust Concept Exploration Method (described in Sections 1.2.1 and 1.2.2,
respectively), robust design (described in Section 2.3) and tools from
marketing/management science (described in Section 2.2.1).

Consequently, the primary research question investigated in this dissertation is as follows:

How can a common scalable product platform be modeled and designed for a

product family?

To address this question, the Product Platform Concept Exploration Method

(PPCEM) is developed in this dissertation to provide a Method which facilitates the synthesis

and Exploration of a common Product Platform Concept which can be scaled into an

appropriate family of products. The PPCEM and its associated tools and steps are introduced

in Section 3.1. The underlying assumption behind the PPCEM is that a common set of

specifications (i.e., design variable settings) can be found for a product platform which can then

be scaled in one or more of its “dimensions” to realize a product family. This product family can

then satisfy a wide variety of customer requirements with minimal compromise in individual

product quality and performance even though the product family is derived from a common

platform through scaling. Although the PPCEM is predominantly a method for parametric or

variant design, it is asserted that commonality of product dimensions and specifications

promotes commonality of components which leads to reduced manufacturing and inventory

costs through better economies of scale and amortization of capital investment over a wider

variety of derivative products based on the common product platform. In special cases, such as

the Rolls Royce RTM322 engine platform mentioned earlier and the Boeing 747 series of

aircraft, an added benefit of scaling a common product platform is to expedite the testing and

certification phase of development (cf., Rothwell and Gardiner, 1990). The foundation for

developing this approach is presented in the next section. The specific research focus for the

dissertation then is outlined in Section 1.3.
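The idea of a common platform scaled in one of its "dimensions" can be illustrated with a small sketch, loosely patterned after the Black & Decker motor example. The relationship between stack length and power output below is a made-up linear interpolation over the ranges reported by Lehnerd (1987), not an actual motor model, and the platform variables are invented for illustration.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MotorPlatform:
    # Common (platform) design variables, shared by every family member;
    # these particular variables are hypothetical.
    wire_gauge: int = 18
    lamination_profile: str = "standard"
    # Scaling variable: stack length in inches
    stack_length: float = 0.8

def power_watts(motor):
    # Made-up linear interpolation over the reported ranges (0.8 in -> 60 W,
    # 1.75 in -> 650 W); a real motor model would be nonlinear.
    return 60 + (650 - 60) * (motor.stack_length - 0.8) / (1.75 - 0.8)

platform = MotorPlatform()
# A family of derivatives: identical platform settings, different stack lengths
family = [replace(platform, stack_length=s) for s in (0.8, 1.1, 1.4, 1.75)]
for m in family:
    print(f"stack {m.stack_length:.2f} in -> {power_watts(m):6.1f} W")
```

The sketch makes the PPCEM assumption concrete: the common design variable settings stay fixed across the family, and only the scaling variable is instantiated per market niche.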

1.2 FOUNDATIONS FOR DESIGNING SCALABLE PRODUCT PLATFORMS


FOR A PRODUCT FAMILY

The technology base for the dissertation is described in this section. An overview of

Decision-Based Design, the design paradigm subscribed to in this dissertation, and the

compromise Decision Support Problem is given in Section 1.2.1. This is followed by an

overview of the Robust Concept Exploration Method (from which the Product Platform

Concept Exploration Method is derived) in Section 1.2.2.

1.2.1 Decision-Based Design, the Decision Support Problem Technique, and the
Compromise Decision Support Problem

Decision-Based Design (DBD) is rooted in the notion that the principal role of a

designer in the design of an artifact is to make decisions (see, e.g., Muster and Mistree, 1988).

This role is useful in providing a starting point for developing design methods based on

paradigms that spring from the perspective of decisions made by designers (who may use

computers) as opposed to design that is predicated on the use of computers, optimization

methods (computer-aided design optimization), or methods that evolve from specific analysis

tools such as finite element analysis.

The implementation of Decision-Based Design that is employed in this dissertation is the

Decision Support Problem (DSP) Technique (see, e.g., Bras and Mistree, 1991), a technique

that supports human judgment in designing systems that can be manufactured and maintained.

In the DSP Technique, designing is defined as the process of converting information that

characterizes the needs and requirements for a product into knowledge about a product

(Mistree, et al., 1990). This definition is extended easily to product family design: the process

of converting information that characterizes the needs and requirements for a product family into

knowledge about a product family, or as is the case of this work, a common scalable product

platform. A complete description of the DSP Technique can be found in, e.g., (Mistree, et

al., 1990).

Among the tools available within the DSP Technique, the compromise DSP (Mistree, et

al., 1993) is a general framework for solving multiobjective, nonlinear optimization problems.

In this dissertation, the compromise DSP is central to modeling multiple design objectives and

assessing the tradeoffs pertinent to product family and product platform design. Examples of

these tradeoffs are discussed in the context of the two example problems in Chapters 6 and 7.

Mathematically, the compromise DSP is a multiobjective decision model which is a

hybrid formulation based on Mathematical Programming and Goal Programming (Mistree, et al.,

1993), see Figure 1.3. The compromise DSP is used to determine the values of the design

variables which satisfy a set of constraints and bounds and achieve as closely as possible a set

of conflicting goals. The compromise DSP is solved using the Adaptive Linear Programming

(ALP) algorithm which is based on sequential linear programming and is part of the DSIDES

(Decision Support in Designing Engineering Systems) software (Mistree, et al., 1993).

In the compromise DSP, goals either may be weighted in an Archimedean solution

scheme or rank-ordered into priority levels using a preemptive approach to effect a solution on

the basis of preference. For the preemptive approach, the lexicographic minimum concept

(Ignizio, 1985) is used to evaluate different design scenarios quickly by changing the priority

levels of the goals to be achieved. The capabilities of the lexicographic minimum concept are

employed to develop the product platform portfolio as discussed in Section 3.1.4, with further

examples in Sections 6.4 and 7.5. Differences between the Archimedean and preemptive

deviation functions and a description of the ALP algorithm, design and deviation variables,

system constraints, goals, and bounds are discussed in, e.g., (Mistree, et al.,

1993).
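For the preemptive case, the lexicographic minimum amounts to comparing priority-ordered deviation vectors level by level, so a design is preferred as soon as it achieves a strictly smaller deviation at a higher priority level. A small sketch follows; the candidate designs are invented for illustration, and Python's built-in tuple ordering stands in for the formal definition in (Ignizio, 1985).

```python
def lex_min(candidates):
    """Return the candidate whose priority-ordered deviation tuple is the
    lexicographic minimum; Python's tuple comparison is exactly this rule."""
    return min(candidates, key=lambda c: tuple(c["deviations"]))

designs = [
    {"name": "A", "deviations": (0.0, 0.4, 0.1)},   # level 1 met exactly
    {"name": "B", "deviations": (0.0, 0.2, 0.9)},   # better at level 2
    {"name": "C", "deviations": (0.1, 0.0, 0.0)},   # worse at level 1
]
print(lex_min(designs)["name"])  # prints "B": the tie at level 1 is broken at level 2
```

Note that design C achieves two goals perfectly, yet loses to B at the first priority level; this is precisely how changing the priority ordering of the goals produces different design scenarios.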

Given
An alternative to be improved. Assumptions used to model the domain of interest.
The system parameters:
n       number of system variables
p + q   number of system constraints (p equality constraints, q inequality constraints)
m       number of system goals
gi(x)   system constraint functions
fk(di)  function of deviation variables to be minimized at priority level k for the
        preemptive case
Find
The values of the independent system variables:
xi, i = 1, …, n
The values of the deviation variables:
di-, di+, i = 1, …, m
Satisfy
System constraints (linear, nonlinear):
gi(x) = 0, i = 1, …, p; gi(x) ≥ 0, i = p+1, …, p+q
System goals (linear, nonlinear):
Ai(x) + di- − di+ = Gi, i = 1, …, m
Bounds:
ximin ≤ xi ≤ ximax, i = 1, …, n
di-, di+ ≥ 0, i = 1, …, m; di- · di+ = 0, i = 1, …, m
Minimize
Preemptive deviation function (lexicographic minimum):
Z = [ f1(di-, di+), …, fk(di-, di+) ]

Figure 1.3 Mathematical Form of a Compromise DSP


(Mistree, et al., 1993)
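The Archimedean form of the construct in Figure 1.3 can be sketched numerically. In the sketch below, the two goals, weights, constraint, and bounds are invented for illustration (this is not one of the dissertation's example problems), and the complementarity condition di- · di+ = 0 is enforced implicitly by computing each deviation directly from Ai(x).

```python
from scipy.optimize import minimize

# Each goal is (achievement function A_i(x), target G_i, weight w_i); the
# values are hypothetical and chosen only to make the goals conflict.
GOALS = [
    (lambda x: x[0] + x[1], 10.0, 0.6),
    (lambda x: x[0] - x[1],  2.0, 0.4),
]

def deviation_function(x):
    """Archimedean deviation function: weighted sum of underachievement
    (d-) and overachievement (d+) of each goal."""
    z = 0.0
    for A, G, w in GOALS:
        z += w * (max(0.0, G - A(x)) + max(0.0, A(x) - G))
    return z

# One system constraint g(x) = 20 - x1*x2 >= 0 and bounds on the variables.
constraints = [{"type": "ineq", "fun": lambda x: 20.0 - x[0] * x[1]}]
bounds = [(0.0, 10.0), (0.0, 10.0)]

result = minimize(deviation_function, x0=[5.0, 4.0], method="SLSQP",
                  bounds=bounds, constraints=constraints)
```

Because the constraint excludes the point at which both goals could be met exactly, the solver trades the two deviations off according to their weights, which is the essential behavior of the compromise DSP.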

A solution to the compromise DSP is called a satisficing (Simon, 1996) solution,

because it is a feasible point that achieves the system goals to the “best” extent that is possible.

This notion of satisficing solutions is in philosophical harmony with the notion of developing a

broad and robust set of top-level design specifications. The efficacy of the compromise DSP in

creating ranged sets of top-level design specifications has been demonstrated in both aircraft

design (Lewis, et al., 1994; Simpson, et al., 1996) and ship design (Smith and Mistree, 1994).

Developing ranged sets of top-level design specifications is generalized into the notion of

developing a product platform portfolio which is discussed in Section 3.1.5. By finding a

“portfolio” of solutions rather than a single point solution, greater design flexibility can be

maintained during the design process. Finally, the compromise DSP also provides the

cornerstone of the Robust Concept Exploration Method which is reviewed in the next section.

1.2.2 The Robust Concept Exploration Method

The Robust Concept Exploration Method (RCEM) has been developed to facilitate

quick evaluation of different design alternatives and generation of top-level design specifications

with quality considerations in the early stages of design (see, e.g., Chen, et al., 1996a). It is

primarily useful for designing complex systems and facilitating computationally expensive design

analysis. The RCEM is created by integrating several methods and tools—robust design

methods (see, e.g., Phadke, 1989), the Response Surface Methodology (see, e.g., Myers and

Montgomery, 1995), and Suh's Design Axioms (Suh, 1990)—within the compromise DSP

(Mistree, et al., 1993). A review of the wide variety of applications that have successfully

employed the RCEM is given in (Simpson, et al., 1997b).

The RCEM is a four-step process as illustrated in Figure 1.4. The corresponding

computer infrastructure is illustrated in Figure 1.5. The steps are described as follows.

Step 1 - Classify Design Parameters: Given the overall design requirements, this step
involves the use of Processor A, see Figure 1.5, to (a) classify different design

parameters as either control factors, noise factors, or responses following the
terminology used in robust design, and (b) define the concept exploration space.

Step 2 - Screening Experiments: This step requires the use of the point generator
(Processor B), simulation programs (Processor C), and an experiment analyzer
(Processor D) shown in Figure 1.5 to set up and perform initial screening
experiments and analyze the results. The results of the screening experiments are used
to (a) fit low-order response surface models, (b) identify significant main effects, and (c)
reduce the design region.

Step 3 - Elaborate the Response Surface Model: This step also requires the use of the
point generator (Processor B), simulation programs (Processor C), and experiment
analyzer (Processor D) to set up and perform secondary experiments and analyze the
results. The results from the secondary experiments are used to (a) fit second-order
response surface models (using Processor E) which replace the original computer
analyses, (b) identify key design drivers and the significance of different design factors
and their interactions, and (c) quickly evaluate different design alternatives and answer
"what-if" questions in Step 4.

Step 4 - Generate Top-Level Design Specifications with Quality Considerations:


Once accurate response surface models have been created, Step 4 involves the use of
the compromise DSP (Processor F in Figure 1.5) to determine top-level design
specifications with quality considerations. The original analyses or simulation programs
are replaced by response surfaces which are functions of both control and noise factors.
Different quality considerations and multiple objectives are incorporated in the
compromise DSP which is then solved to determine robust, top-level design
specifications.
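In miniature, Steps 2 and 3 amount to sampling the simulation at designed points and regressing a polynomial model on the results. The sketch below uses an invented stand-in for the simulation program and a small face-centered central composite design; it is illustrative only, not one of the RCEM processors.

```python
import numpy as np

def simulation(x1, x2):
    """Hypothetical stand-in for an expensive analysis code."""
    return 4.0 + 2.0 * x1 - x2 + 0.5 * x1 * x2 + 0.3 * x2**2

# A face-centered central composite design on [-1, 1]^2:
# 4 corner points, 4 axial points, and a center point.
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = np.array([simulation(a, b) for a, b in pts])

# Second-order response surface terms: 1, x1, x2, x1*x2, x1^2, x2^2.
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] * pts[:, 1], pts[:, 0]**2, pts[:, 1]**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Once fitted, evaluating the polynomial in place of the simulation is what makes the "what-if" exploration of Step 4 computationally inexpensive.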

[Figure 1.4 pairs the RCEM steps with their methods, tools, and mathematical constructs: given the overall design requirements, Step 1 (classify design parameters) draws on robust design principles and techniques; Step 2 (conduct "screening experiments") and Step 3 (elaborate response surface models) draw on response surface methods and DOE/ANOVA statistical methods; and Step 4 (generate robust top-level design specifications) employs the compromise Decision Support Problem.]

Figure 1.4 Steps and Tools of the RCEM (adapted from Chen, et al., 1996a)

[Figure 1.5 depicts the processors of the RCEM computer infrastructure: (A) factors and ranges, in which control factors x and noise factors z enter the product/process response y = f(x, z); (B) the point generator, offering Plackett-Burman, full factorial, fractional factorial, Taguchi orthogonal array, central composite, and other experimental designs; (C) simulation programs (rigorous analysis tools); (D) the experiments analyzer, used to eliminate unimportant factors, reduce the design space to the region of interest, and plan additional experiments; (E) the response surface model, which supplies the estimated response ŷ = f(x, z) and the first-order deviation estimate (Δŷ)² = Σ_{i=1..k} (∂f/∂z_i)² (Δz_i)² + Σ_{i=1..l} (∂f/∂x_i)² (Δx_i)²; and (F) the compromise DSP, which takes the overall design requirements and finds control variable values that satisfy the constraints, goals ("mean on target," "minimize deviation," "maximize the independence"), and bounds while minimizing the deviation function, yielding robust, top-level design specifications.]

Figure 1.5 RCEM Computer Infrastructure (adapted from Chen, et al., 1996a)
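Processor E of Figure 1.5 estimates the deviation of the response through a first-order Taylor expansion. Assuming the squared-sum form of that estimate (an assumption here) and an invented response function, it can be exercised numerically as follows.

```python
import numpy as np

def f(x, z):
    """Hypothetical response; its exact partials are easy to check by hand."""
    return x[0] * z[0] + x[1] ** 2 + 3.0 * z[1]

def deviation_estimate(f, x, z, dx, dz, h=1e-5):
    """Return Delta-y from (Delta-y)^2 = sum_i (df/dz_i)^2 (Dz_i)^2
    + sum_i (df/dx_i)^2 (Dx_i)^2, using central-difference partials."""
    total = 0.0
    for i in range(len(z)):                 # noise-factor contributions
        zp, zm = z.copy(), z.copy()
        zp[i] += h
        zm[i] -= h
        total += ((f(x, zp) - f(x, zm)) / (2 * h)) ** 2 * dz[i] ** 2
    for i in range(len(x)):                 # control-factor contributions
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        total += ((f(xp, z) - f(xm, z)) / (2 * h)) ** 2 * dx[i] ** 2
    return np.sqrt(total)

x = np.array([2.0, 3.0])   # control factors
z = np.array([1.0, 4.0])   # noise factors
dy = deviation_estimate(f, x, z, dx=np.array([0.1, 0.1]), dz=np.array([0.2, 0.2]))
```

In the RCEM, f is the fitted response surface rather than an explicit expression, so its partials (and hence the deviation estimate) are cheap to obtain.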

The RCEM is taken as the foundation for the research work in this dissertation for

several reasons, namely,


• integration of robust design principles with the compromise DSP (see Section 2.3),

• demonstrated effectiveness for complex systems and robust design, see, e.g., (Chen, et
al., 1997),

• increased computational efficiency achieved through metamodeling (see Section 2.4),


and because it is a

• domain-independent method which is particularized easily through specification of a


simulation program or analysis code, see, e.g., (Chen, et al., 1997).

The usefulness of these features of the RCEM to this research work is elaborated throughout

the dissertation, particularly in Sections 3.1 and 3.2 wherein the PPCEM is introduced. The

research objectives for the dissertation are described in the next section.

1.3 RESEARCH FOCUS IN THE DISSERTATION

The research focus in this dissertation is embodied as follows:

• a set of research questions that capture motivation and specific issues to be addressed,

• a set of corresponding research hypotheses that offer a context by which the research
proceeds, defining the structure of the verification studies performed in this work, and

• a set of resulting research contributions that embody the deliverables from the research
in terms of intellectual value, a repeatable method of solution, limitations, and avenues of
further investigation.

The research questions are presented in Section 1.3.1 along with the corresponding research

hypotheses. The research hypotheses (and supporting posits) are discussed in more detail in

Section 3.2 along with issues of verification and validation. The resulting research contributions

are introduced in Section 1.3.2.

1.3.1 Research Questions and Hypotheses in the Dissertation

The principal goal in this dissertation is the development of a method to facilitate the

design of a scalable product platform around which a family of products can be developed. As

discussed in the previous section, Decision-Based Design and the RCEM provide the

foundation on which this work is built. Given this foundation and goal, the motivation for this

research is embodied in the primary research question identified in Section 1.1.2 which is

repeated here.

Primary Research Question:

Q1. How can a common scalable product platform be modeled and designed for a

product family?

This research question is related directly to the principal goal in this research which is to

advance product family design through the development of a method to design a scalable

product platform for a product family. The following hypothesis is investigated in this

dissertation in response to the primary research question.

Hypothesis 1: The Product Platform Concept Exploration Method provides a method

for designing a common product platform which can be scaled to realize a product

family.

Since Question 1 is quite broad, three supporting research questions and sub-

hypotheses are proposed to facilitate the verification of Hypothesis 1. The supporting questions

and sub-hypotheses are stated as follows.

Q1.1. How can product platform scaling opportunities be identified from overall

design requirements?

Q1.2. How can robust design principles be used to facilitate designing a common

scalable product platform?

Q1.3. How can individual targets for product variants be aggregated and modeled for

product platform design?

Sub-Hypothesis 1.1: The market segmentation grid can be utilized to help identify

scale factors for a product platform.

Sub-Hypothesis 1.2: Robust design principles can be used to facilitate the design of a

common scalable product platform by minimizing the sensitivity of a product

platform to variations in scale factors.

Sub-Hypothesis 1.3: Individual targets for product variants can be aggregated into an

appropriate mean and variance and used in conjunction with robust design

principles to effect a common product platform for a product family.
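As a toy illustration of the aggregation posited in Sub-Hypothesis 1.3, individual variant targets collapse into a mean and variance that a robust design formulation can drive toward ("mean on target" while minimizing variation); the target values below are invented.

```python
import statistics

# Hypothetical performance targets for four product variants.
variant_targets = [0.72, 0.75, 0.80, 0.78]

# Aggregate the individual targets into the quantities a robust design
# formulation works with: a mean to bring on target and a variance to minimize.
mean_target = statistics.mean(variant_targets)
target_variance = statistics.pvariance(variant_targets)
```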

There is a one-to-one correspondence between each supporting question and sub-

hypothesis. The sub-hypotheses are stated here primarily to provide context for the literature

review in the next chapter and the development of the PPCEM in Section 3.1. The strategy for

verification and testing of the hypotheses is presented in Section 3.3.

In addition to the primary research question related to the design of scalable product

platforms, two secondary research questions are also investigated in this dissertation.

Secondary Research Questions:

Q2. Is kriging a viable metamodeling technique for building approximations of

deterministic computer analyses?

Q3. Are space filling designs better suited for building approximations of deterministic

computer analyses than classical experimental designs?

As discussed in Section 1.2.2, metamodeling techniques—design of experiments and

response surface models—are employed in the RCEM to facilitate concept exploration and the

implementation of robust design. As is discussed in Section 2.4, alternative metamodeling

techniques such as kriging may be better suited for building approximations of deterministic

computer analyses than the response surface models currently employed in Steps 2 and 3 of the

RCEM (see Section 1.2.2). Moreover, the traditional or “classical” experimental designs which

are typically used to sample the design space by querying the computer code to generate data

to build these approximations may not be well-suited for deterministic computer analyses either;

hence, alternative “space filling” designs also are investigated as part of the research in this

dissertation. The specific hypotheses, which are investigated in response to the secondary

research questions, entail affirmative answers to each question.

Hypothesis 2: Kriging is a viable metamodeling technique for building approximations

of deterministic computer analyses.
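For context, the ordinary kriging predictor investigated under this hypothesis can be sketched in a few lines. The Gaussian correlation function and the fixed correlation parameter theta below are simplifying assumptions for illustration; in practice theta is estimated, e.g., by maximum likelihood, as discussed in Section 2.4.2.

```python
import numpy as np

def corr(a, b, theta=10.0):
    """Gaussian spatial correlation between 1-D sample locations."""
    return np.exp(-theta * (a - b) ** 2)

def krige(x_train, y_train, x_new, theta=10.0):
    """Ordinary kriging predictor mu + r'R^{-1}(y - mu*1); it interpolates
    the training data exactly, a natural fit for deterministic analyses."""
    R = corr(x_train[:, None], x_train[None, :], theta)
    R_inv = np.linalg.inv(R)
    ones = np.ones(len(x_train))
    mu = (ones @ R_inv @ y_train) / (ones @ R_inv @ ones)   # generalized mean
    r = corr(x_train, x_new, theta)                          # correlation to x_new
    return mu + r @ R_inv @ (y_train - mu * ones)

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.sin(2.0 * np.pi * x)        # deterministic "computer analysis"
y_hat = krige(x, y, 0.375)         # prediction between sample points
```

The interpolation property is the key contrast with least-squares response surfaces, which smooth over the sample points even though a deterministic code has no random error to smooth.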

Hypothesis 3: Space filling experimental designs are suited better for building

metamodels of deterministic computer experiments than classical experimental

designs.
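As one example of the space filling designs at issue in Hypothesis 3, a random Latin hypercube stratifies each dimension into n equal bins and samples each bin exactly once; a minimal sketch (illustrative only, not the minimax variant developed later in the dissertation) follows.

```python
import numpy as np

def latin_hypercube(n, k, rng=None):
    """n points in [0, 1)^k with exactly one point in each of the n equal
    strata along every dimension."""
    if rng is None:
        rng = np.random.default_rng(0)
    samples = np.empty((n, k))
    for j in range(k):
        strata = rng.permutation(n)                   # shuffle stratum order
        samples[:, j] = (strata + rng.random(n)) / n  # jitter within stratum
    return samples

X = latin_hypercube(8, 2)
```

Unlike classical designs, which concentrate points on the boundary of the region, such designs spread points through its interior, which is why they are candidates for sampling deterministic computer analyses.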

The motivation for these last two research questions and hypotheses is discussed in

Section 2.4 wherein the limitations of response surface modeling and design of experiments

techniques within the RCEM are discussed in greater detail. It is worth noting that Hypotheses

2 and 3 are related to Hypothesis 1 but have implications which extend beyond product family

and product platform design, see Section 3.2.2.

The relationship between the hypotheses and the various sections of the dissertation is

summarized in Table 1.2. The hypotheses are elaborated further in the literature review in the

next chapter in the sections listed in the table and revisited in Chapter 3 after the Product

Platform Concept Exploration Method is presented. Verification and validation issues are

discussed in Section 3.3, and testing of the individual hypotheses commences in Chapter 4,

continuing through Chapter 7. Although it is not noted in the table, Chapter 8 contains a review of the

hypotheses and their verification. The resulting contributions from these hypotheses are

described in the next section to provide context for the development of the research in the

dissertation.

Table 1.2 Relationship Between Hypotheses and Dissertation Sections

Hypothesis                                                 Sections Discussed            Sections Tested
H1     Product Platform Concept Exploration Method         Chp 3                         Chp 6 & 7
SH1.1  Usefulness of market segmentation grid              §2.2.1, §3.1.1, §3.1.2, §3.2  §6.2, §7.1.3
SH1.2  Robust design of scalable product platform          §2.3, §3.1.2, §3.1.4, §3.2    §6.3-6.5, §7.4-7.6
SH1.3  Aggregating product family specifications           §2.3.3, §3.1.4, §3.2          §6.3-6.5, §7.4-7.6
H2     Utility of kriging for metamodeling deterministic   §2.4.1, §2.4.2, §3.1.3, §3.2  Chp 4, §5.2, §7.3
       computer experiments
H3     Utility of space filling experimental designs       §2.4.3, §3.1.3, §3.2          §5.3

1.3.2 Contributions from the Research

The hypotheses and sub-hypotheses, taken together, define the research presented in

this dissertation and hence the contributions from the research. As evidenced by the principal

goal in the dissertation and Hypothesis 1, the PPCEM is the primary contribution in the

dissertation. Additional contributions from the dissertation are the following:

Contributions related to Hypothesis 1 and Sub-Hypotheses 1.1-1.3:

• The notion of scale factors in product platform design and a means of identifying them
for a product platform: Sections 2.3, 3.1.1, 3.1.2, 6.2, and 7.1-7.2.

• An abstraction of robust design principles for realizing scalable product platforms for
product family design: Sections 2.3, 3.1.2, 3.1.4, 6.3-6.5, and 7.4-7.6.

• Non-commonality and performance deviation measures for performing product variety


tradeoff studies, see Sections 3.1.5 and 7.6.2.

Contributions related to Hypothesis 2:

• An algorithm to build, validate, and use a kriging model: Section 2.4.2, Chapters 4, 5,
and 7, and Appendix A.

• A preliminary comparison of the predictive capability of second-order response surface


models and kriging models: Chapter 4.

• An investigation of the effect of five different spatial correlation functions on the


accuracy of a kriging model: Section 2.4.2 and Chapter 5.

Contributions related to Hypothesis 3:

• An investigation of the effect of eleven different experimental designs on building an


accurate kriging model: Section 2.4.3 and Chapter 5.

• An algorithm for generating minimax Latin hypercube designs: Section 2.4.3 and
Appendix C.

Since this is the first chapter of the dissertation, these contributions cannot yet be substantiated;

therefore, they are revisited in Section 8.1 after all of the research findings have been

documented and discussed. An overview of the dissertation is presented next.

1.4 OVERVIEW OF THE DISSERTATION

To facilitate this discussion, an overview of the chapters in the dissertation is shown in

Figure 1.6. Having laid the foundation by introducing the research questions and hypotheses for

the work in this chapter, the next chapter contains a literature review of related research,

elucidating the problems and opportunities in product family and product platform design.

Three research areas are reviewed: (1) product family and product platform design with

particular emphasis on scalability and sizing, (2) robust design and its application in engineering

design, and (3) statistical metamodeling and its role in engineering design, see Sections 2.2, 2.3,

and 2.4, respectively. A discussion of how these disparate research areas relate to one another

is offered in Section 2.1.

The PPCEM is introduced in Chapter 3 as elements from Chapters 1 and 2 are

synthesized into a method for designing a scalable product platform for a product family. The

PPCEM and its associated steps are presented in Section 3.1. After the PPCEM is presented,

the research hypotheses are revisited in Section 3.2, and supporting posits are stated and

substantiated. Section 3.3 contains an outline of the strategy for verification and testing of the

hypotheses which includes a preview of Chapters 4 and 5—wherein Hypotheses 2 and 3 are

tested—and Chapters 6 and 7 wherein the PPCEM is applied to two example problems,

verifying Hypothesis 1 and Sub-Hypotheses 1.1 through 1.3.

Testing of the hypotheses begins in Chapter 4, but Chapters 4 and 5 entail a brief

departure from product platform design yet are an integral part of the development of the

PPCEM. In Chapter 4, an initial feasibility study of the usefulness of kriging is performed to

familiarize the reader with the method and to begin to verify Hypothesis 2 by comparing the

accuracy of kriging models to second-order response surface models, the current standard in

metamodeling. In Chapter 5, an extensive study of six engineering test problems selected from

the literature is conducted to determine the utility of kriging metamodels and various

experimental designs, testing and verifying Hypotheses 2 and 3.

Once the kriging/DOE study is completed in Chapter 5, the first of two examples used

to demonstrate the PPCEM and verify its associated hypotheses is given in Chapter 6: the

design of a family of universal electric motors. This first example employs the PPCEM without

any metamodeling, providing “proof of concept” that the method works. Then, in Chapter 7 the

PPCEM is applied to the design of a family of General Aviation aircraft, making full use of the

kriging metamodels and robust design capabilities. In each chapter, an overview of the problem

is given along with pertinent analysis information, the steps of the PPCEM are performed, and

the ramifications of the results are discussed.

[Figure 1.6 pairs each chapter with its relevance and the hypotheses addressed:
• Chapter 1, Foundations for Product Family and Product Platform Design (Problem Identification): introduction, motivation, and technical foundation; identify research objectives, hypotheses, and contributions (hypotheses introduced).
• Chapter 2, Literature Review: Product Family Design, Robust Design, and Metamodeling: elaborate opportunities in product family and platform design; extend robust design principles; introduce metamodeling concerns, namely kriging and experimental designs (hypotheses elaborated).
• Chapter 3, Product Platform Concept Exploration Method (Method): introduce PPCEM and its steps; review hypotheses and posits; outline verification strategy (hypotheses revisited).
• Chapter 4, Initial Kriging Feasibility Study (Hypothesis Testing): familiarize reader with kriging; compare kriging and response surface metamodels; set the stage for Chapter 5 study (verify H2).
• Chapter 5, Kriging/DOE Study (Hypothesis Testing): investigate utility of kriging and space filling DOE; make kriging metamodeling recommendations (verify H2 and H3).
• Chapter 6, Design of a Family of Universal Motors (Hypothesis Testing): demonstrate implementation of PPCEM without metamodels; provide proof of concept and initial verification of method (verify H1; SH1.1, SH1.2, and SH1.3).
• Chapter 7, Design of a Family of General Aviation Aircraft (Hypothesis Testing): demonstrate full implementation of PPCEM, including metamodels; provide further verification of method (verify H1 and H2; SH1.1, SH1.2, and SH1.3).
• Chapter 8, Closing Remarks (Closure): summarize research findings, contributions, and limitations; identify avenues of future work.]

Figure 1.6 Overview of Dissertation Chapters

Chapter 8 is the final chapter in the dissertation and contains a summary of the

dissertation, emphasizing answers to the research questions and resulting research contributions

in Sections 8.1 and 8.2, respectively. Possible avenues of future work are discussed in Section

8.3. Finally, some closing remarks are given in Section 8.4.

There are six appendices which supplement the dissertation. Appendix A contains a

description of the kriging algorithm which is employed in Chapters 4, 5, and 7. Appendix B

contains detailed descriptions of the experimental designs investigated in the kriging/DOE study

in Chapter 5; the minimax Latin hypercube design, which is introduced in Section 2.4.3 and

investigated in Chapter 5, is described separately in Appendix C as it is unique to this

dissertation. Appendix D contains descriptions of the six engineering test problems used in the

kriging/DOE study, and supplemental information for the kriging/DOE study is given in

Appendix E. Supplemental information for the General Aviation aircraft problem in Chapter 7 is

given in Appendix F.

Finally, a pictorial overview of the dissertation is illustrated in Figure 1.7. It proceeds

from bottom to top, beginning with the foundation provided in this chapter: Decision-Based

Design and the Robust Concept Exploration Method. This figure provides a road map for the

dissertation, and it is referred to at the end of each chapter to help guide the reader through the

work as the research progresses from chapter to chapter.

[Figure 1.7 is read from bottom to top. The foundations, Decision-Based Design and the Robust Concept Exploration Method, support the three areas reviewed in Chapter 2: product family design (market segmentation grid; scalable product platform), robust design principles (noise factors; modeling mean and variance), and metamodeling (kriging; space filling DoE). These elements are synthesized into the Product Platform Concept Exploration Method of Chapter 3, which is supported by the nozzle design study of Chapter 4 and the kriging/DOE testbed of Chapter 5, demonstrated on the family of universal electric motors in Chapter 6 and the family of General Aviation aircraft in Chapter 7, and capped by the achievements and recommendations of Chapter 8.]

Figure 1.7 Pictorial Overview of the Dissertation

CHAPTER 2

A LITERATURE REVIEW: PRODUCT FAMILY AND


PRODUCT PLATFORM DESIGN, ROBUST DESIGN,
AND METAMODELING

Given the research focus identified in Section 1.3, a survey of relevant work in product

family and product platform design, robust design, and metamodeling is presented in this chapter

in Sections 2.2, 2.3, and 2.4, respectively. A thorough description of what is in this chapter and

how these disparate fields of research relate to each other is offered in Section 2.1. In Section

2.2, the tools and methods for designing product families and product platforms introduced in

Section 1.1.2 are discussed in more detail. Section 2.3 then contains a review of robust design

principles, focusing on robust design opportunities in product family and product platform

design. This segues into a discussion of metamodeling and approximation techniques in Section

2.4 to facilitate the implementation of robust design. In particular, the kriging approach to

metamodeling is introduced in Section 2.4.2 as a viable alternative for building approximations

of deterministic computer experiments, and a variety of space filling experimental designs for

querying a computer code to build kriging models are described in Section 2.4.3. Section 2.5

concludes the chapter with a summary of what has been presented and a preview of what is

next.

2.1 WHAT IS PRESENTED IN THIS CHAPTER

In the preceding chapter, product families and product platforms were introduced along

with several illustrative examples. In this chapter, a literature review of tools and methods which

facilitate the development of product families and product platforms is presented; the focus is on

three areas: (1) approaches for product family and product platform design, (2) robust design

principles and their implementation, and (3) metamodeling, in Sections 2.2, 2.3, and 2.4,

respectively. At first glance, these three research areas appear unrelated; however, transitional

elements presented at the end of each section preface the discussion in the section that follows

as the literature review moves from the general area of product family design to the specific area

of metamodeling, see Figure 2.1. The relevant hypotheses covered in each section are noted in

Figure 2.1.

[Figure 2.1 positions the literature review atop the foundations of Decision-Based Design and the Robust Concept Exploration Method: product family design (§2.2), with the market segmentation grid (SH1.1) and the scalable product platform (H1); robust design principles (§2.3), with conceptual noise factors (SH1.2) and modeling mean and variance (SH1.3); and metamodeling (§2.4), with kriging (H2) and space filling DoE (H3).]

Figure 2.1 Transition of Literature Review in Chapter 2

As shown in Figure 2.1, the discussion in Section 2.2 explores in greater depth some of

the tools and approaches for product family and product platform design including: product

family maps and the market segmentation grid (Meyer, 1997); approaches to product family

and product platform design; and finally, the notion of a scalable product platform (Rothwell and

Gardiner, 1990). The work by Rothwell and Gardiner then is used to provide a transition to a

discussion of robust design principles in Section 2.3 by relating Rothwell and Gardiner’s

concept of “robust design” for product families to the idea of a “conceptual noise factor” in a

distributed design environment as introduced in (Chang and Ward, 1995; Chang, et al., 1994).

This notion of a “conceptual noise factor” then is extended to scale factors within a scalable

product platform, providing a means to abstract robust design principles for application in

product family design.

In Section 2.3, the focus also shifts from extending Taguchi’s robust design to its

implementation within the Robust Concept Exploration Method (RCEM), i.e., through the use

of metamodels and design capability indices. A brief overview of metamodeling is presented in

the beginning of Section 2.4, providing a transition from robust design to utilizing metamodels to

facilitate its implementation as alluded to in Figure 2.1. The general approach to metamodeling

also is discussed in the beginning of Section 2.4, followed by a closer look at some of the

limitations of second-order response surface models in engineering design in Section 2.4.1. This

discussion provides the impetus for a closer look at two specific aspects of metamodeling—

model selection and experimental sampling—which also are investigated as part of this research.

Specifically, kriging and space filling experimental designs are examined as potential alternatives

to the response surface methods and classical design of experiments (DOE) currently employed

in the RCEM. Taken together, this literature review provides the necessary elements for the

development of the Product Platform Concept Exploration Method for designing scalable

product platforms for a product family as presented in Chapter 3. Toward this end, the state-

of-the-art in product family and product platform design is discussed in the next section.

2.2 PRODUCT FAMILY AND PRODUCT PLATFORM DESIGN TOOLS AND
METHODS

As stated in Section 1.1, in order to provide as much variety as possible for the market

with as little variety as possible between products, many researchers advocate a product

platform and product family approach to satisfy effectively a wide range of customer needs. In

Section 2.2.1, several attention directing tools developed to facilitate product family and

product platform design are presented. In Section 2.2.2, metrics for assessing product platform

effectiveness are discussed. Finally, in Section 2.2.3, methods for product family design are

reviewed.

2.2.1 Attention Directing Tools for Product Family and Product Platform Design

A large portion of the work in strategic marketing and management is focused on either

categorizing or mapping the evolution and development of product families. These maps

typically are applied a posteriori to a product family but can be used a priori to identify new

directions for product development within the product family. Examples of product family maps

include the work by Meyer and Utterback (1993) and Wheelwright and Sasser (1989); a brief

description of each follows.

Meyer and Utterback (1993) use the Product Family Map shown in Figure 2.2 to trace

the evolution of a product family. In their map, each generation of the product family employs a

platform as the foundation for targeting specific products at different (or complementary)

markets. Improved designs and new technologies spawn successive generations, and cost

reductions and the addition and removal of features can lead to new products. Multiple

generations can be planned from existing ones, expanding to different markets or revitalizing old

ones. A more formal map, with four levels of hierarchy in the product family (i.e., product

family, product platforms, product extensions, and specific products) also is introduced in their

work in an effort to assess the dynamics of a firm’s core capabilities for product development;

several examples can be found in their paper.

[Figure 2.2 maps a product family over time: platform development for Family A yields Products 1 through 4; cost reduction and new features lead to a new generation Family A platform with Products 1' through 4' and, for new niches, Products 5 and 6, with multiple generations planned from existing ones; and adaptation of the core technologies to new markets yields platform development for Family B with Products 7 and 8.]

Figure 2.2 Product Family Map (adapted from Meyer and Utterback, 1993)

In related work, Wheelwright and Sasser (1989) have developed the Product

Development Map to trace the evolution of a company’s product lines, see Figure 2.3. In

addition to mapping the evolution of the product line, they also categorize a product line into

“core” and “leveraged” products, dividing leveraged products into “enhanced,” “customized,”

“cost reduced,” and “hybrid” products.

“These distinctions—core, hybrid, and the others—are immediately useful because they
give managers a way of thinking about their products more rigorously and less
anecdotally. But the various turns on the product map—the various “leverage points”—
also serve as crucial indicators of previous management assumptions about the
corporate strengths and market forces shaping product evolutions.” (Wheelwright and
Sasser, 1989, p. 114)

[Figure 2.3 sketches a product line on axes of time (horizontal) and increasing functionality, value, and price (vertical): a core product, derived from a prototype, anchors the line, with enhanced, customized, cost-reduced, and hybrid products leveraged from it.]

Figure 2.3 Generic Product Development Map (adapted from Wheelwright and Sasser, 1989)

As shown in Figure 2.3, the core product, typically derived from an engineering

prototype, provides the engineering platform upon which further enhancements are made.

Enhanced products are developed from the core by adding distinctive features to target specific

market niches; enhanced products are typically the first products leveraged from the core

product. Enhanced products can be customized further to provide more choice if necessary.

Cost-reduced products are “scaled” or “stripped” down versions (e.g., less expensive materials

and fewer features) of the core which are targeted at price-sensitive markets. Finally, the hybrid

product is an entirely new design, resulting from the combination of characteristics of two or

more core products. As an example, the evolution of three generations of a family of vacuum

cleaners is mapped and discussed in their article.

These product family maps are useful attention directing tools for product family design

and development but offer little direction for designing a scalable product platform. Toward this

end, the market segmentation grid developed by Meyer (1997) facilitates identifying leveraging

strategies for a product platform, see Figure 2.4.

[Figure: a grid asking "What market niches will your product platforms serve?" with market segments A, B, and C along the horizontal axis and low-cost/low-performance, mid-range, and high-cost/high-performance tiers along the vertical axis; derivative products rest on a product platform.]

Figure 2.4 Product Platform Market Segmentation Grid (adapted from Meyer, 1997)

In a market segmentation grid, the major market segments serviced by a company’s

products are listed horizontally in the grid. The vertical axis reflects different tiers of price and

performance within each market segment. Several example instantiations of this grid can be

found in (Meyer, 1997; Meyer and Lehnerd, 1997) for companies such as Hewlett Packard,

Compaq, Steelcase, and Herman Miller.

This simple market segmentation grid can be used by firms to segment their markets,

helping to define a clear product platform strategy. For instance, a marketing strategy which

employs no leveraging is shown in Figure 2.5a. Companies which fail to maintain a good

platform leveraging strategy often have too many products that share too little technology,

resulting in a myriad of products, higher costs, and lower margins.

Three types of platform leveraging strategies can be identified within the market

segmentation grid: horizontal leveraging, vertical leveraging, and a beachhead approach as

shown in Figure 2.5b-d. All three leveraging strategies enable a more efficient and effective

product family to be developed. Examples of these leveraging strategies include the following

(Meyer, 1997):

Horizontal leveraging - subsystems and components within a product family are


leveraged from one market segment to the next within a given price/performance tier.
The main benefit of a horizontal leveraging strategy is to facilitate the introduction of new
products across a series of related market niches without having to “reinvent the wheel.”
Black & Decker often employs such a strategy: “We don’t need to reinvent the power
tool in every country, but rather, we have a common product and adapt it to individual
markets” states one of their top executives (DiCamillo, 1988). Another example given
by Meyer and his coauthors is the Gillette Sensor-Excel razor which uses exactly the
same razor cartridge in the male and female market segments while the shape, color,
and general design of the handles are completely different for the men's and women's versions.

Vertical leveraging - a product platform is leveraged to address a range of
price/performance tiers within a specific market segment. A company which excels in
the high-end segment of its market may scale down its platform into lower
price/performance tiers by removing functionality from its high-end platform to achieve
lower price products. The other option is to scale up a low-end platform by adding
more powerful component technologies or modules to meet the higher performance
demands for the higher tiers. The main benefit of this approach is the capability of the
company to leverage its knowledge about a particular market niche without having to
develop a new platform for each price/performance tier. The Rolls Royce RTM322
engine and Canon’s low-end copiers discussed in Section 1.1.1 exemplify this
approach.

[Figure: four market segmentation grids (segments A, B, and C versus low-cost/low-performance through high-cost/high-performance tiers) illustrating (a) no leveraging, with a separate platform in every cell; (b) horizontal leveraging of high-end and low-end platforms across segments; (c) vertical leveraging, with a platform scaled up or down across tiers within a segment; and (d) a beachhead strategy combining both.]

Figure 2.5 Platform Leveraging in the Market Segmentation Grid (adapted from Meyer, 1997)

Beachhead approach - combines horizontal and vertical leveraging to achieve perhaps the
most powerful platform leveraging strategy. In a beachhead approach, a company
develops a low-cost effective platform for a particular market segment and then scales
up the performance characteristics of the platform and adds other features to target new
market segments. The example of Compaq computers is offered in (Meyer, 1997).
Compaq entered the personal computer market in 1982 and, after establishing a
foothold in the portable computer market niche, slowly introduced a stream of new
products for other market segments and different price/performance tiers, including a
line of desktop PCs for business and home use. Of the examples discussed in Section
1.1.1, the Sony Walkman, Black & Decker’s universal electric motor platform, and
Lutron’s lighting systems also exemplify this type of approach to platform leveraging.
Sony initiated a beachhead approach from the start with their Walkman product lines.
The same is not true for Black & Decker and Lutron. Both companies began with no
leveraging strategy and only after redesigning their product lines as discussed in Section
1.1.2 were they able to achieve a more efficient and effective beachhead approach.
Consequently, they are now both leaders in their respective fields.

The market segmentation grid provides a useful attention directing tool to help map and

identify product platform leveraging opportunities within a product family, providing an answer

to the question:

Q1.1. How can product platform scaling opportunities be identified from overall

design requirements?

Keep in mind, however, that the market segmentation grid is only an attention directing tool;

considerable engineering “know-how” and planning are needed to develop a successful product

leveraging strategy and exploit scaling opportunities within a product family. The market

segmentation grid is simply a way of representing that strategy, providing a clear mapping of

product leveraging opportunities within the product family. Use of the market segmentation grid

to help identify scaling opportunities within the Product Platform Concept Exploration Method is

further elaborated in Section 3.1.1. In the next section, metrics for assessing product platforms

are discussed.

2.2.2 Product Platform Assessments and Cost Models

Several metrics and cost models have been developed to assess either the efficiency

and effectiveness of a product platform or the commonality between a group of products within

a product family. Meyer, et al. (1997), in particular, define two metrics—platform efficiency

and platform effectiveness—to manage the research and development costs of product

platforms and product families. Platform efficiency is defined as follows:

Platform Efficiency = (R&D Costs for Derivative Product) / (R&D Costs for Platform Version)   [2.1]

which assesses how much it costs to develop derivative products relative to how much it costs

to develop the product platform within the product family. The platform efficiency metric can

also be used to compare different platforms across different product families to assess the

capability of R&D teams to develop robust and efficient platforms.

Platform effectiveness is a ratio of the revenue a product platform and its derivatives

creates to the cost required to develop them and is measured as follows:

Platform Effectiveness = (Product Sales) / (Product Development Costs)   [2.2]

where the effectiveness of the platform can be assessed at the individual product level or for a

group of products within distinct platform versions.
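For illustration, Eqs. 2.1 and 2.2 can be computed directly; the cost and revenue figures below are hypothetical, not taken from Meyer, et al. (1997):

```python
def platform_efficiency(derivative_rd_cost, platform_rd_cost):
    """Eq. [2.1]: R&D cost of a derivative product relative to the R&D
    cost of the platform.  Values well below 1.0 indicate inexpensive
    derivatives, i.e., an efficient platform."""
    return derivative_rd_cost / platform_rd_cost

def platform_effectiveness(product_sales, product_development_costs):
    """Eq. [2.2]: revenue generated per unit of development cost; can be
    applied to a single product or aggregated over a platform version."""
    return product_sales / product_development_costs

# Hypothetical figures (in, say, millions of dollars):
print(platform_efficiency(derivative_rd_cost=2.0, platform_rd_cost=10.0))  # 0.2
print(platform_effectiveness(product_sales=50.0, product_development_costs=12.0))
```

As the sketch suggests, both metrics are simple ratios; their difficulty in practice lies in obtaining the cost and revenue data, which, as noted below, are typically known only after the fact.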

These metrics require costing and revenue information which is typically known only

after the product platform and its derivatives have been developed and reached the market.

These metrics prove useful for managing research and development within the product family

and determining when to renew or re-focus product platform efforts; however, they offer little

for designers during the concept exploration and design process.

A more relevant measure of the effectiveness of a product platform is to measure the

commonality of parts within a product family. Many commonality indices have been proposed

for assessing the degree of commonality within a product family. Products which share more

parts and modules within a product family achieve greater inventory reductions, exhibit less part

variability, improve standardization, and shorten development and lead times because more

parts are reused and fewer new parts have to be designed (cf., Collier, 1981). McDermott and

Stock (1994) discuss the benefits of commonality on new product development time, inventory,

and manufacturing; they also cite several researchers who have shown that part commonality

across a range of products has reduced inventory costs while maintaining a desired level of

customer service. Particular measures for assessing commonality include the following:

• Collier (1981; 1982) proposes the Degree of Commonality Index, an analytical


measure based on a firm’s bills of materials, which can be applied to a single product, a
product line, or a product family. He uses regression analysis to relate the Degree of
Commonality Index to inventory carrying cost, set-up cost, total costs, average work
center load, and the variability in work center loads. He concludes that component part
commonality can have a significant impact on system performance, reducing
manufacturing costs, total costs, and delivery performance.

• Kota and Sethuraman (1998) introduce the Product Commonality Index for determining
the level of part commonality in a product family. Through the study of a family of
portable personal stereos, they illustrate methods to “measure and eliminate non-value
added variations, suggest robust design strategies including modularity and
postponement of product differentiation.” Their approach provides a means to
benchmark product families based on their capability to simultaneously share parts
effectively and reduce the total number of parts.

• Siddique, et al. (1998) propose a commonality index to aid in the configuration design
of common automotive platforms. They are working with an automobile manufacturer
to reduce the number of platforms they utilize across their entire range of cars and
trucks in an effort to reduce development times, costs, and product variety. Ongoing
research efforts for measuring the “goodness” of a common platform are discussed in
(Siddique, 1998).
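As a concrete illustration of the first bullet, the following is a minimal sketch of a Collier-style degree-of-commonality computation over a toy bill of materials; the products and parts are hypothetical, and this simplified index counts only immediate parent-component links:

```python
def degree_of_commonality(bom):
    """Collier-style Degree of Commonality Index: the average number of
    immediate parents per distinct component.  `bom` maps each parent
    item to the set of distinct components it uses.  A value of 1 means
    no component is shared; larger values indicate more commonality."""
    parent_counts = {}
    for parent, components in bom.items():
        for c in components:
            parent_counts[c] = parent_counts.get(c, 0) + 1
    return sum(parent_counts.values()) / len(parent_counts)

# Hypothetical two-product family sharing a motor and a housing:
bom = {
    "drill":  {"motor", "housing", "chuck"},
    "sander": {"motor", "housing", "pad"},
}
print(degree_of_commonality(bom))  # (2 + 2 + 1 + 1) / 4 = 1.5
```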

Commonality measures such as these are based primarily on the ratio of the number of

shared parts, components, and modules to the total number of parts, components, and modules

in the product family. Taking this one step further, Martin and Ishii (1996) seek to assess the

cost of producing product variety through the measurement of three indices: commonality,

differentiation point, and set-up costs. The commonality index is similar to that proposed by

Collier (1981) and measures the percentage of common components within a group of products

in a product family. The second index measures the differentiation point for product variety

within an assembly or manufacturing process; the idea being that the later the differentiation

point can be postponed, the lower the costs of producing the necessary variety (cf., Lee and

Billington, 1994; Lee and Tang, 1997). Finally, the set-up cost index assesses the cost

contributions needed to provide variety compared to the total cost for the product. The indirect

costs of providing product variety are then taken as a weighted linear combination of these

indices; the weightings for the individual indices may vary from industry to industry. The direct

costs of providing product variety, they assert, are relatively straightforward to determine.

Generalizations are made regarding the costs of product variety based on these indices;

however, there is no work to substantiate their claims or the usefulness of the indices. The

generalizations could be made just as easily without the indices.
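The weighted combination described above can be sketched as follows; the equal weights and index values are hypothetical, since Martin and Ishii (1996) leave the weightings industry-dependent:

```python
def indirect_variety_cost(commonality_index, differentiation_index,
                          setup_cost_index, weights=(1/3, 1/3, 1/3)):
    """Weighted linear combination of the three indices proposed by
    Martin and Ishii.  Each index is assumed here to be normalized to
    [0, 1]; the weights are industry-dependent and should sum to 1."""
    w_c, w_d, w_s = weights
    return (w_c * commonality_index
            + w_d * differentiation_index
            + w_s * setup_cost_index)

# Hypothetical index values for one product family:
print(round(indirect_variety_cost(0.4, 0.7, 0.5), 3))  # 0.533
```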

In later work, Martin and Ishii (1997) introduce a process sequence graph which

provides a qualitative assessment of the flow of a product through the assembly process and its

differentiation point. A product family of eighteen instrument panels is analyzed, citing that

differentiation for product variety begins in the second step in the assembly process. This leads

them to investigate process re-sequencing to improve component commonality and postpone

differentiation to reduce production costs and lead-times. The end result is a graph of Variety

Voice of the Customer (V2OC) versus percentage commonality for the family of instrument

panels as shown in Figure 2.6.

Figure 2.6 V2OC Rating vs. Commonality (from Martin and Ishii, 1997)

In Figure 2.6, commonality is the ratio of the number of common or standardized assemblies shared between products to the total number of assemblies in the product family and is again very similar to that of Collier (1981). The V2OC measure assesses “the importance of

a component’s variety to the aggregated market—not the individual buyer. V2OC is a measure

the importance of a component to a customer, as well as the heterogeneity of the market with

response to that component” (Martin and Ishii, 1997). They do not describe how to measure

V2OC or explain how the V2OC ratings for the instrument panel family are created;

consequently, V2OC does not provide a useful measure for product variety. The resulting

graph, however, is insightful and similar to the product variety tradeoff graph which is introduced

in Section 3.1.5 and illustrated in Section 7.6.2 in the context of the General Aviation aircraft

example. The reasoning behind the target region in the figure is not discussed in their paper

either; however, intuition suggests that components with low V2OC rating (i.e., are not

important to the customer) can be common from one product to the next while it is important to

customize components (i.e., decrease their commonality) that have a high V2OC. This idea is

explored in greater depth in Sections 3.1.5 and 7.6.2 wherein a non-commonality index based

on the dissimilarity of a family of products defined by a parameter set is introduced for

assessing and studying product variety tradeoffs. In the meantime, methods for designing

families of products are reviewed in the next section.

2.2.3 Engineering Methods for Product Family Design

The majority of engineering design research has been directed at improving the

efficiency and effectiveness of designers in the product realization process, and until recently, the

focus has been on designing a single product. For instance, Suh (1990) offers his two axioms

for design: (1) maintain independence of functional requirements, and (2) minimize the

information content of a design. Pahl and Beitz (1988; 1996) offer their four phase approach to

product design which involves the following: clarification of the task, conceptual design,

embodiment design, and detail design. Similarly, Hubka and Eder (1988; 1996) advocate an

approach which involves the following: elaboration of the assigned problem, conceptual design,

laying out, and elaboration. Pugh (1991) introduces the notion of total design which has at its

core market/user needs and demands, the product design specification, conceptual design,

detail design, manufacturing, and selling. In the well-known review of mechanical engineering

design research conducted by Finger and Dixon (1989a; 1989b), scant trace of product family

and product platform design is found.

Perhaps the most developed method for product family design which currently exists is

the work by Erens (1997). Erens, in conjunction with several of his colleagues (Erens and

Breuls, 1995; Erens and Verhulst, 1997; Erens and Hegge, 1994; McKay, et al., 1996),

develops a product modeling language for product variety. The primary focus is on the product

modeling language as an efficient representation of product architecture and modularity; it offers

little aid for design synthesis and analysis, only representation. The product modeling language

allows product families to be represented in three domains: functional, technological, and

physical. Use of the product modeling langurage is demonstrated in the context of a family of

office chairs, a family of overhead projectors, and a family of cardio-vascular Xray machines.

Excerpts from the family of office chairs example is illustrated in Figure 2.7. The office chair

itself is shown in Figure 2.7a, and the variety of options from which to choose: upholstery,

materials, colors, fixtures, etc., are shown in Figure 2.7b. In Figure 2.7c, the general

representation of the product architecture for the office chair is depicted, and the hierarchy in

the product variety model is illustrated in Figure 2.7d. As illustrated in this example, the product

modeling language provides an effective means for representing product variety but offers little

aid for design synthesis and analysis.

(a) An office chair (b) Office chair options

(c) Office chair architecture (d) Office chair product variety model

Figure 2.7 Representing a Family of Office Chairs (Erens, 1997)

In other work, Fujita and Ishii (1997) outline a series of tasks—design specification

analysis, system structure synthesis, configuration, and model instantiation—for product variety

design as their foundation for a formal approach for the design and synthesis of product families.

They decompose product families into systems, modules, and attributes as shown in Figure 2.8.

Under this hierarchical representation scheme, product variety can be implemented at different

levels within the product architecture. For instance, two shared modules and two sets of shared

attributes are shown in Figure 2.8. A formal algorithm has not yet been developed however.

[Figure: a product family decomposed into a system, its modules, and their attributes (configuration/geometry and functional/physical), with two shared modules and two sets of shared attributes across otherwise different architectures.]

Figure 2.8 Product Variety Decomposed into Systems, Modules, and Attributes (from
Fujita and Ishii, 1997)

Clustering approaches based on similarity or commonality of products have been

investigated for product family design. Stadzisz and Henrioud (1995) cluster products based on

geometric similarities to obtain product families, decreasing product variability within a product family so as to minimize the required flexibility of the associated assembly system. A

similar Design for Mass Customization approach is developed in (Tseng, et al., 1996) which

groups similar products into families based on product topology or manufacturing and assembly

similarity and provides a series of steps to formulate an optimal product family architecture

based on grouping according to fulfillment of functional requirements. Shirley (1990) describes

a process for redesigning a set of related products through similarity and clustering of common

products around a “core product concept,” i.e., a product platform. The resulting product

family is composed of a set of product variants which share characteristics in common with the

core product; the redesign of a family of hydraulic cylinders is used as an example.

Numerous researchers have focused on the implications of modularity on product

variety and in the context of a product platform and product family. Modularity greatly

facilitates the addition and removal of features to upgrade and derate a product platform (cf.,

Ulrich, 1995). Ulrich and Tung (1991), Ulrich (1995), and Ulrich and Eppinger (1995)

investigate product architecture and modularity and its impact on product change, product

variety, component standardization, product performance, and product development

management. Similarly, Uzumeri and Sanderson (1995) emphasize flexibility and

standardization as a means for enhancing product flexibility and offering a wide variety of

products. Meanwhile, Chen, et al. (1994) suggest designing flexible products which can be

adapted readily in response to large changes in customer requirements by changing a small

number of components or modules. Sanderson (1991) investigates how modular designs

reduce the cost of offering product variety. Rosen (1996) investigates the use of discrete

mathematics as a formal foundation for configuration design of modular product architectures.

He emphasizes, as do Ulrich and Eppinger (1995), that the design of product architectures is

“critical in being able to mass customize products to meet differentiated market niches and

satisfy requirements on local content, component carry-over between generations, recyclability,

and other strategic issues.” A Product Module Reasoning System (Newcomb, et al., 1996)

currently is being developed “to reason about sets of product architectures, to translate design

requirements into constraints on these sets, to compare architecture modules from different

viewpoints, and to directly enumerate all feasible modules without generate-and-test or heuristic

approaches” (Rosen, 1996).

Pahl and Beitz (1996) also discuss the advantages and limitations of modular products

to fulfill various overall functions through the combination of distinct modules. Because such

modules often come in various sizes, modular products often involve size ranges where the initial

size is the basic design and derivative sizes are sequential designs. In the context of a scalable

product platform, the initial size constitutes the product platform and the derivative sizes are its

product variants. Their approach for designing size ranges is as follows (Pahl and Beitz, 1996):

• Prepare the basic design for the range either from a new or existing product;

• Use similarity laws to determine the physical relationships between geometrically similar
product ranges;

• Determine appropriate “theoretical” step sizes within the desired size range;

• Adapt “theoretical” step sizes to overriding standards or technical requirements;

• Check the product size range against assembly layouts, checking any critical
dimensions; and

• Improve and document the design and prepare production drawings.

In the context of their approach, the method developed in this dissertation facilitates the

development of the basic design (i.e., the platform) and the sequential designs (i.e., derivative

products) simultaneously.
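The “theoretical” step sizes in the steps above are commonly laid out as a decimal-geometric series of preferred numbers; the following sketch assumes that convention, and the basic size, step count, and choice of an R5-style series are illustrative rather than taken from Pahl and Beitz’s examples:

```python
def size_range(basic_size, steps_per_decade, n_sizes):
    """Decimal-geometric size range in the spirit of Pahl and Beitz:
    each size is the previous size multiplied by the constant step
    factor 10**(1/steps_per_decade), so geometrically similar variants
    are spaced evenly on a logarithmic scale and repeat every decade."""
    factor = 10 ** (1 / steps_per_decade)
    return [basic_size * factor ** k for k in range(n_sizes)]

# Hypothetical R5-style range (5 steps per decade) from a 10 mm basic design:
print([round(s, 1) for s in size_range(10.0, 5, 6)])
# -> [10.0, 15.8, 25.1, 39.8, 63.1, 100.0]
```

These theoretical sizes would then be adapted to overriding standards or technical requirements, per the fourth step in the list above.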

The concept of sizing leads into an area of product platform design that has received

little attention—product platforms that can be “scaled” or “stretched” into derivative products

for a product family (in addition to being upgraded or derated through the addition/removal

of modules). The implications of design “stretching” and “scaling” within the context of

developing a family of products are discussed first in (Rothwell and Gardiner, 1988; 1990), see

Figure 2.9. Rothwell and Gardiner (1988) use the term “robust designs” to refer to designs that

have sufficient inherent design flexibility or “technological slack” to enable them to evolve into a

design family of variants that meet a variety of changing market requirements by “uprating,”

“rerating,” and “derating” a platform design as shown in Figure 2.9. The process of developing

these designs is shown in Figure 2.9 and consists of three phases, namely, composite,

consolidated, and stretched designs as explained in the bottom of each column.

Figure 2.9 Robust Designs (from Rothwell and Gardiner, 1990)

Rothwell and Gardiner (1990) provide several examples of successful robust designs

and discuss how they “allow for change because essentially they contain the basis for not just a

single product but rather a whole product family of uprated or derated variants.” Consider the

Rolls Royce RB211 engine family illustrated in Figure 2.10. The original RB211 consisted of

seven modules which could be easily upgraded or scaled down to improve or derate the engine.

For example, by replacing the large front low pressure fan with a scaled down fan, the lower

thrust, derated, 535C engine was derived. Further improvements are made by scaling different

components of the engine to improve fuel consumption while increasing thrust. Rolls Royce

takes advantage of similar stretching and scaling in its RTM322 engine which was discussed

previously in Section 1.1.1.

Figure 2.10 Rolls Royce RB211 Engine Family
(from Rothwell and Gardiner, 1990)

Several other products also have benefited from platform scaling. For example, Black

& Decker scales the stack length of their universal motor platform to vary the output power of

the motor for a wide variety of applications, see Section 1.1.1 and (Lehnerd, 1987). The

Boeing 747-200, 747-300, and 747-400 are scaled derivatives of the Boeing 747 (Rothwell

and Gardiner, 1990). Many automobile manufacturers also scale their passenger car platforms

to offer, for example, two-door coupes, two- and four-door sedans, three- and five-door

hatchbacks, and maybe a wagon which are all derived from the same platform (Rothwell and

Gardiner, 1990). Honda, for instance, is taking full advantage of platform scaling to compete in

today’s global market by developing two scaled versions of their Accord for the U.S. and

Japanese markets from one platform (Naughton, et al., 1997). Siddique, et al. (1998)

document efforts at Ford to improve the commonality of their product platforms to capitalize on

commonality and stretching within their automotive product families.

Despite the apparent advantages of scalable product platforms, a formal approach for

the design and synthesis of stretchable and scalable platforms does not exist. Rothwell and

Gardiner state that it has “become increasingly possible to develop a robust design which has

the deliberate designed-in capability of being stretched;” however, they only offer the process

shown in Figure 2.9 as a guide to designers. Consequently, developing a method to model and

design scalable product platforms around which a family of products can be developed through

scaled derivatives of the product platform is the principal objective in this dissertation. In an

effort to realize such a method, an extension of robust design principles is offered in the next

section, providing a means to turn Rothwell and Gardiner’s idea of “robust design” for scalable

product platforms into a reality.

2.3 ROBUST DESIGN

Generally speaking, the fundamental motive underlying robust design, as originally

proposed by Taguchi, is to improve the quality of a product or process by not only striving to

achieve performance targets but also by minimizing performance variation. Taguchi’s methods

have been widely used in industry (see, e.g., Byrne and Taguchi, 1987; Phadke, 1989) for

parameter and tolerance design. Reviews of such applications can be found in, e.g., (Nair,

1992).

In robust design, the relationship between different types of design parameters or

factors can be represented with a P-diagram as shown in Figure 2.11, where P represents either

product or process (Phadke, 1989). The three types of factors which serve as inputs to the P-

diagram and that influence the (output) response y are as follows:

• Control Factors (x) – parameters that can be specified freely by a designer; the
settings for the control factors are selected to minimize the effects of noise factors on the
response y.

• Noise Factors (z) – parameters not under a designer’s control or whose settings are
difficult or expensive to control. Noise factors cause the response, y, to deviate from
its target and lead to quality loss through performance variation. Noise factors may
include system wear, variations in the operating environment, uncertain design
parameters, and economic uncertainties.

• Signal Factors (M) – parameters set by the designer to express the intended value for
the response of the product; signal factors are those factors used to adjust the mean of
the response but which have no effect on the variation of the response.

[Figure: P-diagram with control factors x and signal factors M as inputs to the product/process, noise factors z (mean μz, standard deviation σz) as uncontrolled inputs, and the response y (mean μy, standard deviation σy) as output.]

Figure 2.11 P-Diagram of a Product/Process in Robust Design (adapted from Phadke, 1989)

This robust design terminology is used to classify design parameters and responses and

to identify sources of variability. The objective in robust design is to reduce the variation of

system performance caused by uncertain design parameters, thereby reducing system sensitivity.

Variations in noise factors, shown in Figure 2.11 as normally distributed with mean μz and standard deviation σz, lead to variation in performance responses, which are represented in Figure 2.11 as normally distributed with mean μy and standard deviation σy.

In an effort to generalize robust design for product design, Chen, et al. (1996a) develop

a general robust design procedure based on two sources of variation:

Type I - Robust design associated with the minimization of the deviation of performance
caused by the deviation of noise factors (uncontrollable parameters).

Type II - Robust design associated with the minimization of the deviation of performance
caused by the deviation of control factors (design variables).

The idea behind the two major types of robust design applications is illustrated in Figure 2.12.

As indicated by the P-diagrams for Type I and Type II applications, the deviation of the

response is caused by variations in the noise factor, z, the uncontrollable parameter in Type I

applications. Type II is different from Type I in that its input does not include a noise factor.

The variation in performance is caused solely by variations in control factors or design variables

in the region ±Δx.

The traditional Taguchi robust design method is of Type I as shown in the top half of

Figure 2.12. A designer adjusts control factors, x, to dampen the variations caused by the

noise factor, z. The two curves represent the performance variation as a function of noise factor

when x is at two different levels, x = a and x = b. If the design objective is to achieve a

performance as closely as possible to the target, M, the designs at both levels are acceptable

because their means are on the target, M. When robustness is considered, however, the performance at x = a varies significantly with the deviation of the noise factor, z, while at x = b the

performance deviates much less. Therefore, x = b is more robust than x = a as a design

solution because x = b dampens the effect of the noise factors more than when x = a.

[Figure: in the top half, Type I robust design is shown as the response y versus the noise factor z for two control-factor settings, x = a (steep, sensitive) and x = b (flat, robust); in the bottom half, Type II is shown as an objective function versus the design variable x, where the optimal solution (x = a) sits on a sharp peak and the robust solution (x = b) sits on a flat region, so a ±Δx deviation of the design variable produces far less response variation.]

Figure 2.12 Two Types of Robust Design (adapted from Chen, et al., 1996b)

61
The concept behind Type II robust design is represented in the lower half of Figure

2.12. For purposes of illustration, assume that performance is a function of only one variable, x.

In general, for this type of robust design, to reduce the variation of the response caused by the

deviations of design variables, a designer is interested in the flat part of a curve near the

performance target instead of seeking the peak or optimum value. If the objective is to move

the performance function towards target M and if a robust design is not sought, then the point x

= a is chosen. However, for a robust design, x = b is a better choice: if the design variable varies within ±Δx of its mean, the resulting variation of the response of the design at x = b is much smaller than that at x = a, while the means of the two responses are essentially equal. The implementation of these two types of robust design is discussed in the next section.
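The preference for the flat region can be illustrated numerically. The performance curve below is hypothetical (not from the dissertation): a narrow peak stands in for the optimal solution, x = a, and a broad plateau for the robust solution, x = b; sweeping each setting over an assumed deviation ±Δx compares the induced response variation.

```python
import numpy as np

# Hypothetical performance curve: a narrow peak near x = 1 (the "optimal"
# solution, x = a) and a broad, flat region near x = 3 (the "robust"
# solution, x = b), both reaching a similar performance level.
def performance(x):
    return 2.0 * np.exp(-((x - 1.0) / 0.1) ** 2) + 1.8 * np.exp(-((x - 3.0) / 1.0) ** 2)

dx = 0.2  # assumed deviation of the design variable, +/- dx

for label, x0 in [("x = a (peak)", 1.0), ("x = b (flat)", 3.0)]:
    xs = np.linspace(x0 - dx, x0 + dx, 201)  # sweep the deviation range
    ys = performance(xs)
    print(f"{label}: mean = {ys.mean():.3f}, spread = {ys.max() - ys.min():.3f}")
```

The response at the flat setting stays near its nominal level across the entire sweep, while the peaked setting loses most of its performance at the edges of the deviation range, which is the essence of the Type II argument.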

2.3.1 Implementation of Robust Design: Taguchi’s Method, in the Robust Concept


Exploration Method, and with Design Capability Indices

Designed experiments, specifically orthogonal arrays (OAs), are typically employed in

Taguchi’s robust design method to systematically vary and test the different levels of each of the

control factors. Taguchi advocates the use of an inner-array and outer-array approach to

implement robust design (cf., e.g., Byrne and Taguchi, 1987). The inner-array consists of an

OA which contains the control factor settings; the outer-array consists of the OA which contains

the noise factors and their settings which are under investigation. The combination of the inner-

array and outer-array constitutes the product array. The product array is used to test various

combinations of the control factor settings systematically over all combinations of noise factors

after which the mean response and standard deviation may be approximated for each run using

the equations:

• Response mean: \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i [2.3]

• Standard deviation: S = \sqrt{\frac{\sum_{i=1}^{n}(y_i - \bar{y})^2}{n - 1}} [2.4]

Preferred parameter values then can be determined through analysis of the signal-to-noise (SN)

ratio; factor levels that maximize the appropriate SN ratio are optimal. There are three

“standard” types of SN ratios (see, e.g., Phadke, 1989):

• Nominal the best (for reducing variability around a target):

SN_T = 10\log\!\left(\frac{\bar{y}^2}{S^2}\right) [2.5]

• Smaller the better (for making the system response as small as possible):

SN_S = -10\log\!\left(\frac{1}{n}\sum_{i=1}^{n} y_i^2\right) [2.6]

• Larger the better (for making the system response as large as possible):

SN_L = -10\log\!\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^2}\right) [2.7]
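As a concrete sketch (the response values here are invented for illustration), Equations 2.3 through 2.7 can be computed for one inner-array run from the responses observed across the outer-array noise combinations:

```python
import numpy as np

# Illustrative responses for one control-factor setting (one inner-array
# run) observed across the outer-array noise-factor combinations.
y = np.array([10.2, 9.8, 10.5, 9.9, 10.1])

y_bar = y.mean()       # Equation 2.3: response mean
s = y.std(ddof=1)      # Equation 2.4: sample standard deviation (n - 1 divisor)

sn_nominal = 10 * np.log10(y_bar**2 / s**2)       # Eq. 2.5: nominal the best
sn_smaller = -10 * np.log10(np.mean(y**2))        # Eq. 2.6: smaller the better
sn_larger = -10 * np.log10(np.mean(1.0 / y**2))   # Eq. 2.7: larger the better

print(y_bar, s)
print(sn_nominal, sn_smaller, sn_larger)
```

Repeating this for every inner-array run yields the SN ratios that are then analyzed by ANOVA or plotted in the graphical "pick the winner" approach described below.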

Once all of the SN ratios have been computed for each run of an experiment, there are

two common options for analysis: Analysis of Variance (ANOVA) and a graphical approach.

ANOVA can be used to determine the statistically significant factors and the appropriate setting

for each. In the graphical approach, the SN ratios and average responses are plotted for each

factor against its levels. The graphs then are examined to “pick the winner,” i.e., pick the factor

levels which (1) best maximize SN and (2) bring the mean on target (or maximize or minimize

the mean, as the case may be).

There are many criticisms of Taguchi’s implementation of robust design through the

inner and outer array approach: it requires too many experiments, the analysis is statistically

questionable because of the use of orthogonal arrays, it does not accommodate constraints, and

the responses should be modeled directly instead of the SN ratios (see, e.g., Montgomery,

1991; Nair, 1992; Otto and Antonsson, 1993; Shoemaker, et al., 1991; Tribus and Szonyi,

1989; Tsui, 1992). Consequently, many variations of the Taguchi method have been proposed

and developed; a review of numerous robust design optimization methods can be found in (Otto

and Antonsson, 1993; Simpson, et al., 1997a; Simpson, et al., 1997b; Su and Renaud, 1996;

Tsui, 1992; Yu and Ishii, 1998).

To facilitate the implementation of robust design within the RCEM, second-order

response surface models are created and used to approximate the design space, replacing the

computer analysis code or simulation routine used to model the system. The major elements of

the response surface model approach for robust design applications are as follows (see, e.g.,

Myers and Montgomery, 1995; Shoemaker, et al., 1991):

• combining control and noise factors in a single array instead of using Taguchi's
inner- and outer-array approach,

• modeling the response itself rather than expected loss, and

• approximating a prediction model for loss based on the fitted-response model.

Instead of using Taguchi’s orthogonal array as the combined array for experiments, central

composite designs are employed in the RCEM to fit second-order response surface models for

integration with Taguchi's robust design. The response surface model postulates a single, formal

model of the type:

\hat{y} = f(x, z) [2.8]

where ŷ is the estimated response and x and z represent the settings of the control and noise variables, respectively. In Equation 2.8, it is assumed that the noise variables are independent.

From the response surface model, it is possible to estimate the mean and variance of the

response. For Type I applications in which the deviations of noise factors are the source of

variation:

• Mean of response: \hat{\mu}_y = f(x, \mu_z) [2.9]

• Variance of response: \hat{\sigma}_y^2 = \sum_{i=1}^{m}\left(\frac{\partial f}{\partial z_i}\right)^2 \sigma_{z_i}^2 [2.10]

where μ_z represents the mean values of the noise factors, m is the number of noise factors in the response model, and σ_{z_i} is the standard deviation associated with each noise factor. In Type II robust design, i.e., when the deviations of control factors are the source of variation, μ_z and σ_{z_i} in Equations 2.9-2.10 are replaced by the mean and deviation of the variable control factors.

approach, robust design can be achieved by having separate goals for “bringing the mean on

target” and “minimizing the deviation” within a compromise DSP (cf., Chen, et al., 1996b).
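The estimates of Equations 2.9 and 2.10 can be sketched as follows. The fitted response surface f(x, z), the noise means, and the noise standard deviations are all assumed for illustration, and the partial derivatives are taken by central finite differences:

```python
import numpy as np

def f(x, z):
    """Hypothetical fitted response surface in one control factor x
    and two noise factors z = (z1, z2)."""
    return 5.0 + 2.0 * x + 0.8 * z[0] - 0.5 * z[1] + 0.3 * x * z[0]

x = 1.5                          # candidate control factor setting
mu_z = np.array([0.0, 0.0])      # noise factor means
sigma_z = np.array([0.2, 0.1])   # noise factor standard deviations

mu_y = f(x, mu_z)                # Equation 2.9: mean of the response

# Equation 2.10: first-order Taylor series variance, with df/dz_i
# estimated by central finite differences.
h = 1e-6
var_y = 0.0
for i in range(len(mu_z)):
    zp, zm = mu_z.copy(), mu_z.copy()
    zp[i] += h
    zm[i] -= h
    dfdz = (f(x, zp) - f(x, zm)) / (2 * h)
    var_y += dfdz**2 * sigma_z[i]**2

print(mu_y, var_y)
```

For this model the variance estimate depends on x through the interaction term, which is exactly what gives a robust design formulation room to trade "bringing the mean on target" against "minimizing the deviation."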

When satisfying the design requirements and reducing the variation of system

performance are equally important, it is effective to model the two aspects of robust design

as separate goals in the compromise DSP. For instance, when designing a power plant, it may

be required to bring the power output as close as possible to its target value while at the same

time, reduce the variation of the system performance so that the power output remains constant

during operation. Moreover, setting an overall design requirement at a specific value during the

early stages of design may sometimes be crucial because a small variation may require significant

changes in other design requirements or incur substantial costs in order to compensate for it.

However, modeling the two aspects of robust design as two separate goals may not be an

effective approach when satisfying a range of design requirements is the major concern,

see Figure 2.13.

In Figure 2.13, the quality distributions of two different designs (I and II) are illustrated.

Both designs have the same mean value but different deviations. If the two aspects of robust

design are modeled as separate goals, obviously the design with the least deviation (Design I)

would be chosen because both designs have the same performance mean. However, in this

particular situation, where the mean of the quality performance lies outside the range of requirements, a smaller fraction of the performance of the thinner bell-shaped distribution falls inside the upper and lower requirement limits (URL and LRL, respectively), i.e., the shaded area enclosed by A, B, and C is smaller than the area enclosed by A', B', and C. This is acceptable

in manufacturing when the process itself can be manually shifted to bring the mean back on

target, but when designing a system to accommodate noise, this option is not always

available.

[Figure 2.13: the quality distributions of Design I (narrow) and Design II (wide) are plotted against the requirement range from LRL to URL about the target; both distributions have the same mean, which lies outside the requirement range, and the areas of the distributions meeting the requirements are bounded by the points A, A', B, B', and C.]

Figure 2.13 A Motivating Example for Design Capability Indices

Design capability indices have been developed with exactly this in mind. They are

based on process capability indices from statistical process control and apply in the same

manner to a design with variation as to a manufacturing process with variation. Specifically, a

design capability index (see Figure 2.14) is computed to assess the capability of a family of

designs to satisfy a ranged set of design requirements (Chen, et al., 1996c; Simpson, et al.,

1997a).

[Figure 2.14: three cases for design capability indices. Smaller is better: target and all variants below the URL; desire Cdk ≥ 1 with Cdk = Cdu = (URL − μ)/(3σ). Nominal is better: target and all variants between the LRL and URL; desire Cdk ≥ 1 with Cdk = min{Cdl, Cdu}. Larger is better: target and all variants above the LRL; desire Cdk ≥ 1 with Cdk = Cdl = (μ − LRL)/(3σ).]

Figure 2.14 Implementation of Design Capability Indices for Robust Design Applications

Assume that the system performance is normally distributed with mean, μ, and standard deviation, σ. The design capability indices Cdl, Cdu, and Cdk measure the extent to which a family of designs satisfies a ranged set of design requirements as specified by upper and lower requirement limits (URL and LRL, respectively). As shown in the figure, when nominal is better, i.e., upper and lower design requirement limits are given, finding a family of designs with Cdk ≥ 1 satisfies the design requirements. In this scenario, Cdk is computed using Equation 2.11; Cdk is taken as the minimum of Cdl and Cdu.

C_{dl} = \frac{\hat{\mu} - LRL}{3\hat{\sigma}}; \quad C_{du} = \frac{URL - \hat{\mu}}{3\hat{\sigma}}; \quad C_{dk} = \min\{C_{dl}, C_{du}\} [2.11]

When smaller is better (e.g., “the motors should weigh less than 0.5 kg”), designs with Cdk ≥ 1 are capable of satisfying the requirement. In this case, Cdk = Cdu as shown in Figure 2.14, and designs with Cdu < 1 do not meet this requirement because a portion of the distribution falls outside of the URL. Similarly, when larger is better (e.g., “the efficiency of these motors should be 30% or better”), designs with Cdk ≥ 1 are capable of meeting this requirement; Cdk = Cdl.
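The indices of Equation 2.11 and their one-sided variants translate directly into code; the means and standard deviations below are illustrative values for the two motor requirements quoted above:

```python
def c_dl(mu, sigma, lrl):
    """Lower design capability index: (mu - LRL) / (3 * sigma)."""
    return (mu - lrl) / (3 * sigma)

def c_du(mu, sigma, url):
    """Upper design capability index: (URL - mu) / (3 * sigma)."""
    return (url - mu) / (3 * sigma)

def c_dk(mu, sigma, lrl=None, url=None):
    """Design capability index for smaller-is-better (URL only),
    larger-is-better (LRL only), or nominal-is-better (both limits)."""
    indices = []
    if lrl is not None:
        indices.append(c_dl(mu, sigma, lrl))
    if url is not None:
        indices.append(c_du(mu, sigma, url))
    return min(indices)

# Smaller is better: motors should weigh less than 0.5 kg.
print(c_dk(mu=0.44, sigma=0.015, url=0.5))   # >= 1, so the requirement is met

# Larger is better: efficiency should be 30% or better.
print(c_dk(mu=0.36, sigma=0.025, lrl=0.30))  # < 1, so part of the family falls below the LRL
```

For the nominal-is-better case, both limits are passed and the minimum of the two indices is returned, matching Equation 2.11.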

There are some assumptions associated with the use of Cdk. For example, Cdk = 1

implies only 99.73% of the designs conform to requirements assuming that the system

performance parameters—which are based on the range of designs—are normally distributed.

However, the type of distribution of system performance depends on the actual system

response and the statistical distribution of each design variable or uncertainty parameter. When

the system function is complex, it may be difficult to perform a judicious evaluation to determine the performance distribution. As an approximation, it is assumed that the uncertain parameters deviate by ±3σ_z (as is typical in a six sigma approach to quality) around their nominal values μ_z, and that each system response varies by ±3σ_y around its mean value, μ_y, which can be calculated by:

\mu_y = y(\mu_x) [2.12]

The standard deviation, σ_y, is approximated using a first-order Taylor series expansion (assuming Δx is small):

\hat{\sigma}_y^2 = \sum_{i=1}^{m}\left(\frac{\partial y}{\partial z_i}\right)^2 \sigma_{z_i}^2 [2.13]

Modifications to the process capability indices for different variances have been

proposed (see, e.g., Johnson, et al., 1992; Ng and Tsui, 1992; Rodriguez, 1992), and design

capability indices could be modified similarly. For example, if a uniform distribution is used for

each response instead of a normal distribution, then Cdk, Cdu, and Cdl become as follows:

C_{dl} = \frac{\hat{\mu} - LRL}{3\hat{\sigma}}; \quad C_{du} = \frac{URL - \hat{\mu}}{3\hat{\sigma}}; \quad C_{dk} = \min\{C_{dl}, C_{du}\} [2.14]

where the standard deviation is computed using the following:

\hat{\sigma}^2 = \frac{(b - a)^2}{12} [2.15]

where a and b are the lower and upper limits of the range of y.

The compromise DSP mentioned in Section 1.2.1 is modified to implement Cdk as

shown in Figure 2.15. Design capability indices can be used for constraints and/or goals,

depending on whether satisfying a range of design requirements is a wish or a demand. In all

three cases—smaller is better, nominal is better, and larger is better—if a design requirement is

a wish, then making Cdk as close to one as possible is a goal in the compromise DSP. When a requirement is a demand, then Cdk ≥ 1 is taken as a constraint. Note that when a deviation function solely includes design capability indices, the negative deviation variable, di−, is always minimized, and di+ is always zero.

Given:
• Functions y(x), including those ranged design requirements which are constraints, gi(x), and those which are objectives, Ai(x)
• Deviations of the uncontrollable variables, Δz
• Target upper and lower design requirement limits, URLi and LRLi
Find:
• Design variables, x
Satisfy:
• Constraints: Cdk-constraints ≥ 1 (or use worst-case analysis)
• Goals: Cdk-objectives + di− − di+ = 1
• Bounds on the design variables
• di−, di+ ≥ 0 ; di− · di+ = 0
Minimize:
• Deviation Function: Z = [f1(di−, ..., di+), ..., fk(di−, ..., di+)]

Figure 2.15 Compromise DSP Formulation with Design Capability Indices
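The goal formulation in Figure 2.15 can be sketched with a small hypothetical helper that computes the deviation variables for a single goal, Cdk + di− − di+ = 1; at most one of the two variables is nonzero, satisfying di− · di+ = 0:

```python
def cdk_goal_deviations(cdk, target=1.0):
    """Deviation variables for the goal Cdk + d_minus - d_plus = target,
    with d_minus, d_plus >= 0 and d_minus * d_plus = 0."""
    d_minus = max(0.0, target - cdk)  # underachievement, to be minimized
    d_plus = max(0.0, cdk - target)   # overachievement
    return d_minus, d_plus

print(cdk_goal_deviations(0.8))   # goal underachieved by 0.2
print(cdk_goal_deviations(1.25))  # goal overachieved by 0.25
```

Since only the underachievement di− degrades capability, a deviation function built solely from design capability indices minimizes di−, as noted above.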

Working with these two types of robust design implementations, it is possible to

develop an extension for product family design, specifically for scalable product platforms,

which addresses the question:

Q1.2. How can robust design principles be used to facilitate the design of a common scalable product platform?

Extensions of robust design for product family design are discussed in the next section.

2.3.2 Robust Design for Product Family Design: Scale Factors

There have been two known allusions to using robust design in product family design.

First, Lucas (1994) describes a way that the results of a robust design experiment can be used

to identify the need for product differentiation. When large effects are present in the system, product types with different features can be sent to different customers, as opposed to designing one product which is robust over the entire range of effects. He states that this is

common practice in the chemical industry where, for example, different polymer viscosities are

desired by different customers and better results often are obtained by customizing the product

for its specific environment rather than delivering a single robust product.

Second, Chang, et al. (1994) and Chang and Ward (1995) introduce the notion of

“conceptual robustness” which is pertinent to this research. The term “conceptual robustness” is

developed by Chang and his colleagues for mathematically modeling and computationally

supporting simultaneous (or concurrent) engineering in a distributed design environment. By

treating variations in the design proposed by other members of the development team as

“conceptual noise,” robust design principles can be used to make “conceptually robust”

decisions which are robust against these variations (Chang, et al., 1994). The “conceptually

robust” design of a two-axis CNC milling machine is used as an illustrative example. In (Chang

and Ward, 1995), this idea is applied to modular design which is a “function-oriented design

that can be integrated into different systems for the same functional purpose without (or with

minor) modifications.” The design of an air conditioning system for ten different automobiles is

used to demonstrate their approach.

It is this idea of a “conceptual noise factor” that enables the utilization of robust design in

the context of product family design, particularly in the design of a scalable product platform.

By identifying an appropriate “scale factor” for a scalable product platform, robust design

principles can be used to minimize the sensitivity of the product platform to variations in

a scale factor. In this regard, a “conceptually robust” product platform can be realized which

has minimum sensitivity to variations in the scale factor, realizing a robust product family. For

the work in this dissertation, then, a scale factor is defined as follows:

• Scale factor - factor around which a product platform can be “scaled” or “stretched”
to realize derivative products within a product family.

In essence, a scale factor is a noise factor within a scalable product family or, to borrow terminology from Chang, et al. (1994), a “conceptual noise factor” around which a “conceptually robust” product platform can be developed for a product family. Examples of scale factors include the stack length in a motor, as in the Black & Decker universal motor example (Lehnerd, 1987), the number of passengers on an aircraft, as in the Boeing 747 family (Rothwell and Gardiner, 1990), or the number of compressor stages in an aircraft engine, as in the Rolls Royce RTM322 example (Rothwell and Gardiner, 1990).

discrete or continuous; however, continuous scale factors are investigated primarily in this

dissertation. The specific relationship between different types of scale factors and different

platform leveraging strategies is discussed in Section 3.1.2.

Given the definition for a scale factor, a third type of robust design now can be identified

for product family design, complementing the two types of robust design discussed previously:

Type III - Robust design associated with minimizing the sensitivity of a product platform to
variations in a scale factor.

As defined, Type III robust design is nearly identical to Type I robust design as shown in Figure

2.16. Notice that the P-diagram on the left of the figure has been modified to accommodate

scale factors because essentially they are treated as noise factors in the product platform design

process.

[Figure 2.16: the Type III P-diagram, in which control factors x, signal factors M, noise factors z, and scale factor(s) S enter the objective or deviation function to produce the response y; the accompanying plot shows the response as a function of the scale factor S for two settings, x = a and x = b, of a platform design variable.]

Figure 2.16 Type III Robust Design: Scale Factors for Product Platforms

It should be noted that these scale factors are not the same “scaling/leveling factors”

shown in the P-diagram in (Taguchi and Phadke, 1986) or (Suh, 1990) which are used to scale

a response to achieve a desired value. Using the same diagram shown in Figure 2.12 for the

Type I robust design, the idea behind Type III robust design is illustrated in the right hand side

of Figure 2.16. Given two possible settings (x = a and x = b) for one of the design variables, x,

which defines the platform, the setting x = b should be selected because it minimizes the

sensitivity of the product platform to variations in the scale factor.

Consequently, if a family of products can be leveraged or scaled around an identifiable

scale factor, then robust design can be used to minimize the sensitivity of the product platform to

changes in these scaling factors. In this manner, a scalable product platform can be developed

and instantiated to realize a family of products. This raises the following question.

Q1.3. How can individual targets for product variants be aggregated and modeled for
product platform design?

Using the concept of a scale factor for a product platform, it is now possible to

aggregate the individual targets for product variants within a product family around an

appropriate mean and a standard deviation. Robust design principles then can be used to

“match” the mean and standard deviation of the product family with the desired mean and

standard deviation of the performance requirements by either of the following as discussed in

Section 2.3.1:

• creating separate goals for “bringing the mean on target” and “minimizing the deviation”
of the product platform for variations in the scale factor within a compromise DSP, or

• using design capability indices to assess the capability of a family of designs to satisfy a
ranged set of design requirements.

To demonstrate these implementations, the former approach is utilized in the universal electric

motor problem in Chapter 6; the latter is employed in the General Aviation aircraft example in

Chapter 7. The General Aviation aircraft example also makes use of metamodels to facilitate

the implementation of robust design and design capability indices and expedite the concept

exploration process as mentioned earlier. Metamodeling techniques are discussed next.

2.4 METAMODELING TECHNIQUES

To facilitate the implementation of robust design, metamodeling techniques often are

employed to create approximations of the mean and variation of a response in the presence of

noise. A metamodel is a “model of a model” (Kleijnen, 1987) which is used as a surrogate

approximation for the actual analysis (i.e., computer code) during the design process. The

general approach to metamodeling is shown in Figure 2.17. In statistical terms,

design variables are factors, and design objectives are responses; the factors and responses to

be investigated for a particular design problem provide the input for the approach of Figure

2.17, and the solutions (improved or robust) are the output. To identify these solutions, this

approach includes three sequential stages: screening, model building, and model exercising.

The first step (screening) is employed only if the problem includes a large number of

factors (usually greater than 10); screening experiments are used to reduce the set of factors to

those that are most important to the response(s) being investigated. Statistical experimentation

is used to define the appropriate design analyses which must be run to evaluate the desired

effects of the factors. Often two level fractional factorial designs or Plackett-Burman designs

are used for screening (cf., Myers and Montgomery, 1995), and only main (linear) effects of

each factor are investigated.

[Figure 2.17: flowchart of the general metamodeling approach. Given factors and responses, a screening stage (run only if there is a large number of factors) uses a screening experiment to reduce the number of factors; a model building stage runs the modeling experiment(s), builds a predictive model (ŷ), and, if noise factors are present, builds a robustness model (μ_y, σ_y); a model exercising stage searches the design space to find improved or robust solutions.]

Figure 2.17 General Approach to Metamodeling (Koch, et al., 1997)

In the second stage (model building) of the approach in Figure 2.17, response surface

models are created to replace computationally expensive analyses and facilitate fast analysis and

exploration of the design space. If little curvature appears to exist, a two-level fractional factorial experiment is designed, and a first-order polynomial of the form

y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i [2.16]

is used to approximate the response(s). If significant curvature exists, then a second-order polynomial of the form

y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i=1}^{k-1}\sum_{j=i+1}^{k} \beta_{ij} x_i x_j [2.17]

is commonly used. Among the various types of experimental design for fitting a second-order

response surface model, the central composite design (CCD) is probably the most widely used

experimental design for regularly shaped (spherical or cuboidal) design spaces (cf., Myers and

Montgomery, 1995). In the case of irregularly shaped design spaces, D-optimal designs have

been successfully employed to build second order response surface models (see, e.g., Giunta, et

al., 1994).
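Fitting the second-order model of Equation 2.17 reduces to ordinary least squares on an expanded polynomial basis. A self-contained sketch for two coded factors, sampling a stand-in analysis function (the first sample surface of Figure 2.19) over a face-centered central composite design:

```python
import numpy as np
from itertools import product

def analysis_code(x1, x2):
    """Stand-in for an expensive simulation; quadratic plus interaction."""
    return 80 + 4 * x1 + 8 * x2 - 4 * x1**2 - 12 * x2**2 - 12 * x1 * x2

# Face-centered central composite design in two coded factors:
# 4 factorial points, 4 axial points, and 1 center point.
X = np.array(list(product([-1.0, 0.0, 1.0], repeat=2)))
y = np.array([analysis_code(x1, x2) for x1, x2 in X])

# Basis for a second-order polynomial: 1, x1, x2, x1^2, x2^2, x1*x2.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least squares fit

print(beta)
```

Because the underlying function here is itself quadratic, the fit recovers its coefficients exactly; for a real analysis code the residuals would be examined to judge model adequacy.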

If noise factors are included for robust design, the mean and variance of each response

must be estimated, and predictive metamodels for both are constructed. As discussed in

(Koch, et al., 1998), there are essentially three approaches which can be employed to construct

metamodels for robust design applications:

1. Statistical expected value and Taylor series expansion approximations: A single


experimental design is created which contains both control and noise factors from which

a response surface model is built (see, e.g., Chen, et al., 1996b; Shoemaker, et al.,
1991). The mean value of a response is estimated by evaluating the response surface at
the mean of the noise factor, and the variance is estimated using a Taylor series
approximation. This is the approach currently employed in the RCEM as described
previously in Section 2.3.1.

2. DOE-based Monte Carlo simulation approach: A combined response surface and


Monte Carlo simulation approach is used to create mean and variance metamodels
(see, e.g., Mavris, et al., 1996; Mavris, et al., 1995). An initial experiment is
constructed to vary both control and noise factors and build response surface models of
each response similar to the first approach. A second experiment is created to vary
only the control factors, running a Monte Carlo simulation for each experiment to vary
the noise factors. Approximations for mean and variance are constructed based on this
second set of experiments.

3. Product array approach: Uses the inner- and outer-array approach advocated by
Taguchi (see, e.g., Montgomery, 1991; Phadke, 1989) to develop separate
approximations for the mean and variance of each response. The inner-array prescribes
settings for the control factors, and the outer-array prescribes settings for the noise
factors. This experimentation strategy leads to multiple response values for each set of
control factor settings from which a response mean and variance can be computed from
which metamodels can be constructed.

Of the three approaches, the product array approach typically yields the most accurate

approximations because the metamodels are built directly from the original analysis code rather

than from an estimate based on an approximation (cf., Koch, et al., 1998).
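A sketch of the second (DOE-based Monte Carlo) approach, with an assumed fitted response model and noise distribution: for each control factor setting, a Monte Carlo sample of the noise factor is propagated through the model, and the resulting sample means and variances become the data from which the mean and variance metamodels would then be fit.

```python
import numpy as np

rng = np.random.default_rng(0)

def response_model(x, z):
    """Assumed fitted response surface in one control factor x
    and one noise factor z."""
    return 3.0 + 2.0 * x - 1.5 * x**2 + (0.5 + 0.8 * x) * z

mu_z, sigma_z = 0.0, 0.3  # assumed noise factor distribution

# Second experiment: vary only the control factor, and run a Monte
# Carlo simulation over the noise factor at each setting.
control_settings = np.linspace(-1.0, 1.0, 5)

means, variances = [], []
for x in control_settings:
    z = rng.normal(mu_z, sigma_z, size=10_000)  # Monte Carlo noise sample
    y = response_model(x, z)
    means.append(y.mean())
    variances.append(y.var(ddof=1))

# `means` and `variances` would now each be fit with their own metamodel.
print(np.round(means, 3))
print(np.round(variances, 3))
```

In this assumed model the interaction term makes the response variance grow with x, so the variance metamodel carries information that a single predictive model of y alone would not expose.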

As shown in Figure 2.17 and as evidenced by the preceding discussion, building

approximations of computer analysis and simulation codes involves the following: (a) choosing

an experimental design to sample the computer code, (b) choosing a model to represent the

data, and (c) fitting the model to the observed data. There are a variety of options for each of

these steps as shown in Figure 2.18, and some of the more prevalent approximation techniques

have been identified. For example, response surface methodology usually employs central

composite designs, second-order polynomials, and least squares regression analysis. The

reader is referred to (Simpson, et al., 1997b) for a recent review of numerous mechanical and

aerospace engineering applications of many of the metamodeling techniques shown in Figure

2.18 with particular emphasis on response surface methodology, neural networks, inductive

learning, and kriging.

By far the most popular technique for building metamodels these days is the response

surface approach which typically employs second-order polynomial models fit using least

squares regression techniques (Myers and Montgomery, 1995). These response surface

models replace the existing analysis code while providing the following:

• an understanding of the relationship between (input) design variables x and (output)


responses y,

• easier integration of domain-dependent and/or geographically distributed computer


codes, and

• fast analysis tools for optimization and exploration of the design space.

[Figure 2.18: metamodeling options organized by sample/experimental design, model choice, model fitting, and approximation technique. Experimental designs include (fractional) factorial, central composite, Box-Behnken, D-optimal, G-optimal, orthogonal array, Plackett-Burman, hexagon, hybrid, Latin hypercube, select-by-hand, and random selection. Representative combinations: polynomial (linear, quadratic) models fit by (weighted) least squares regression yield response surface methodology; a realization of a stochastic process fit by the best linear unbiased predictor yields kriging; a network of neurons fit by backpropagation yields neural networks; a rulebase or decision tree fit by entropy (information-theoretic) measures yields inductive learning. Other model choices include splines (linear, cubic), kernel smoothing, and radial basis functions.]

Figure 2.18 Techniques for Metamodeling

An added advantage of response surfaces is that they can smooth the data in the case of

numerical noise which may hinder the performance of some gradient-based optimizers (cf.,

Giunta, et al., 1994). This “smoothing” effect is both good and bad, depending on the problem.

Su and Renaud (1996) present an example where a second-order response surface smoothes

out the variability in a response so that the robust solution is lost in the approximating function; a

“flat region” does not exist in a second-order response surface, only an inflection point. Su and

Renaud’s example is investigated in more detail in Section 4.1 wherein the kriging process is

demonstrated step-by-step as it applies to their example to familiarize the reader with kriging.

In the meantime, additional limitations of response surfaces are discussed in the next section,

providing motivation for investigating alternative metamodeling techniques for use in engineering

design.

2.4.1 Limitations of Response Surface Approaches

Response surfaces typically are second-order polynomial models which make them

easy to use and implement; however, they have limited capability to accurately model non-linear functions of arbitrary shape. Some two-variable examples of the types of surfaces that a

second-order response surface can model are illustrated in Figure 2.19. Obviously, higher-

order response surfaces can be used to model a non-linear design space; however, instabilities

may arise (cf., Barton, 1992), or it may be too difficult to take a sufficient number of sample

points in order to estimate all of the coefficients in the polynomial equation, particularly in high

dimensions. Hence, many researchers advocate the use of a sequential response surface

modeling approach using move limits (see, e.g., Toropov, et al., 1996) or a trust region

approach (see, e.g., Rodriguez, et al., 1997). More generally, the Concurrent Sub-Space

Optimization procedure uses data generated during concurrent subspace optimization to

develop response surface approximations of the design space which form the basis of the

subspace coordination procedure (Renaud and Gabriele, 1994; Renaud and Gabrielle, 1991;

Wujek, et al., 1995). The Hierarchical and Interactive Decision Refinement methodology uses

statistical regression and other metamodeling techniques to recursively decompose the design

space into subregions and fit each region with a separate model during design space refinement

(Reddy, 1996). Finally, the Model Management Framework (Booker, et al., 1995; Dennis and

Torczon, 1995) is being developed collaboratively by researchers at Boeing, IBM, and Rice to

implement mathematically rigorous techniques to manage the use of approximation models in

optimization.

Many of the previously mentioned sequential approaches are being developed for

single objective optimization applications. Since much of engineering design is multiobjective

in nature, it is often difficult to isolate a small region of good design which can be accurately

represented by a low-order polynomial response surface model. Koch, et al. (1997) discuss

the difficulties encountered when screening large variable problems with multiple objectives as

part of the response surface approach. Barton (1992) states that the response region of interest

will never be reduced to a “small neighborhood” which is good for all objectives during

multiobjective optimization. Hence, there is a need to investigate alternative metamodeling

techniques which have sufficient flexibility to build accurate global approximations of the design

space and which are suitable for modeling computer experiments which are typically

deterministic, i.e., contain no random error or variability.

[Surface plots over (x1, x2) of four quadratic models:
y = 80 + 4x1 + 8x2 - 4x1^2 - 12x2^2 - 12x1x2
y = 80 + 4x1 + 8x2 - 3x1^2 - 12x2^2 - 12x1x2
y = 80 - 4x1 + 12x2 - 3x1^2 - 12x2^2 - 12x1x2
y = 80 + 4x1 + 8x2 - 2x1^2 - 12x2^2 - 12x1x2]

Figure 2.19 Sample Two-Variable Second-Order Response Surfaces


(adapted from Box and Draper, 1987)

The approach investigated in this dissertation is called kriging, and it is introduced in the

next section. This discussion of kriging is followed by a discussion of different experimental

designs which can be used to sample the design space in Section 2.4.3. These two sections lay

the foundation for the work in Chapters 4 and 5 wherein Hypotheses 2 and 3 are tested

explicitly to determine the utility of kriging and space filling experimental designs for building

approximations of deterministic computer experiments.

2.4.2 The Kriging Approach to Metamodeling

Kriging has its roots in the field of geostatistics—a hybrid discipline of mining

engineering, geology, mathematics, and statistics (cf., Cressie, 1993)—and is useful for

predicting temporally and spatially correlated data. Kriging is named after D. G. Krige, a South

African mining engineer who, in the 1950s, developed empirical methods for determining true

ore grade distributions from distributions based on sampled ore grades (Matheron, 1963).

Several texts which describe kriging and its usefulness for predicting spatially correlated data

(see, e.g., Cressie, 1993) and mining (see, e.g., Journel and Huijbregts, 1978) exist. These

metamodels are extremely flexible due to the wide range of correlation functions which can be

chosen for building the metamodel. Furthermore, depending on the choice of the correlation

function, the metamodel can either “honor the data,” providing an exact interpolation of the data,

or “smooth the data,” providing an inexact interpolation (Cressie, 1993). In this work, as in

most applications of kriging, the concern is solely on spatial prediction; it is assumed that the

data are not correlated temporally.

These days, kriging goes by a variety of names including DACE (Design and Analysis of

Computer Experiments) modeling—the title of the inaugural paper by Sacks, et al. (1989)—and

spatial correlation metamodeling (see, e.g., Barton, 1994). There are also several types of

kriging (cf., Cressie, 1993): ordinary kriging, universal kriging, lognormal kriging, and trans-

Gaussian kriging. In this dissertation, ordinary kriging is employed, following the work in, e.g.,

(Booker, et al., 1995; Koehler and Owen, 1996), and only the term kriging is used.

Unlike response surfaces, however, kriging models have found limited use in engineering

design applications since their introduction into the literature by Sacks, et al. (1989). Consequently,

the following question is addressed in this dissertation:

Q2. Is kriging a viable metamodeling technique for building approximations of

deterministic computer analyses?

As introduced in Section 1.3.1, Hypothesis 2 is that kriging is a viable

metamodeling technique for building approximations of deterministic computer

analyses. Initial applications of kriging in engineering design include:

• Giunta (1997) and Giunta, et al. (1998) perform a preliminary investigation into the use
of kriging for the multidisciplinary design optimization of a High Speed Civil Transport
aircraft.

• Sasena (1998) compares and contrasts kriging and smoothing splines for approximating
noisy data.

• Schonlau, et al. (1997) use a global/local search algorithm based on kriging for shape
optimization of an automobile piston engine.

• Osio and Amon (1996) develop a multistage numerical optimization strategy based on
kriging which they demonstrate on the thermal design of an embedded electronic package
with 5 design variables.

• Booker (1996) and Booker, et al. (1996) use a kriging approach to study the
aeroelastic and dynamic response of a helicopter rotor during structural design.

Some researchers have also employed kriging-based strategies for numerical optimization (see,

e.g., Cox and John, 1995; Trosset and Torczon, 1997). A look at the mathematics of kriging is

offered next.

Mathematics of Kriging

Kriging postulates a combination of a polynomial model and departures of the following form:

y(x) = f(x) + Z(x) [2.18]

where y(x) is the unknown function of interest, f(x) is a known polynomial function of x, and

Z(x) is the realization of a stochastic process with mean zero, variance σ^2, and non-zero

covariance. The f(x) term in Equation 2.18 is similar to the polynomial model in a response

surface, providing a “global” model of the design space. In many cases f(x) is simply taken to

be a constant term β (cf., Koehler and Owen, 1996; Sacks, et al., 1989).

Only kriging models with constant underlying global models are investigated in this work as well.

While f(x) “globally” approximates the design space, Z(x) creates “localized” deviations

so that the kriging model interpolates the ns sampled data points. The covariance matrix of Z(x)

which dictates the local deviations is as follows:

Cov[Z(xi),Z(xj)] = σ^2 R, R = [R(xi,xj)] [2.19]

where R is the correlation matrix, and R(xi,xj) is the correlation function between any two of the

ns sampled data points xi and xj. R is an ns x ns symmetric, positive definite matrix with ones

along the diagonal. The correlation function R(xi,xj) is specified by the user.

In this work, five different correlation functions are examined for use in the kriging

model; see Table 2.1. In all of the correlation functions listed in Table 2.1, ndv is the number of
design variables, θk are the unknown correlation parameters used to fit the model, and dk =
|xki - xkj| is the distance between the kth components of sample points xi and xj. The

correlation functions of Equations 2.20 and 2.21 are from (Sacks, et al., 1989); the correlation

functions of Equations 2.22-2.24 are from (Mitchell and Morris, 1992b).

These five correlation functions are chosen primarily because of the frequency with

which they appear in the literature; the Gaussian correlation function, Equation 2.20, is the most

popular one in use. Correlation functions with multiple parameters per dimension exist;

however, correlation functions with only one parameter per dimension are considered in this

dissertation to facilitate finding the maximum likelihood estimates (MLEs) or “best guess” of the

θk used to fit the model. As mentioned in Section 1.3.2, one of the contributions in this work is

to study the effects of these five different correlation functions on the accuracy of a kriging

model; this study is performed in Chapter 5.

Table 2.1 Summary of Correlation Functions

Name              Spatial Correlation Function                                    # Deriv.  Eqn. #

Exponential       prod[k=1..ndv] exp(-θk |dk|)                                       1      [2.20]

Gaussian          prod[k=1..ndv] exp(-θk dk^2)                                       ∞      [2.21]

Cubic spline      prod[k=1..ndv] of:
                    1 - 6(θk dk)^2 + 6(θk |dk|)^3   for θk |dk| <= 1/2
                    2(1 - θk |dk|)^3                for 1/2 < θk |dk| < 1            1      [2.22]
                    0                               for θk |dk| >= 1

Matérn linear     prod[k=1..ndv] (1 + θk |dk|) exp(-θk |dk|)                         1      [2.23]
function

Matérn cubic      prod[k=1..ndv] (1 + θk |dk| + θk^2 dk^2 / 3) exp(-θk |dk|)         2      [2.24]
function
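By way of illustration, the matrix R of Equation 2.19 can be assembled directly from any of these correlation functions. The Python sketch below (an illustration only, not the code of Appendix A; the function name corr_matrix is mine) builds R for the Gaussian and exponential cases:

```python
import numpy as np

def corr_matrix(X, theta, kind="gaussian"):
    """Correlation matrix R for ns sample points X (an ns x ndv array) and
    parameters theta (length ndv), using the Gaussian (Eq. 2.21) or
    exponential (Eq. 2.20) correlation function."""
    ns = X.shape[0]
    R = np.ones((ns, ns))                    # ones along the diagonal
    for i in range(ns):
        for j in range(i + 1, ns):
            d = np.abs(X[i] - X[j])          # |d_k| for each dimension k
            if kind == "gaussian":
                r = np.exp(-np.sum(theta * d ** 2))
            else:                            # exponential
                r = np.exp(-np.sum(theta * d))
            R[i, j] = R[j, i] = r            # R is symmetric
    return R

R = corr_matrix(np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 0.0]]), np.array([2.0, 2.0]))
```

As the θk grow, the off-diagonal entries shrink toward zero, i.e., the sample points become less correlated.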

Once a correlation function has been selected, predicted estimates, ŷ(x), of the

response, y(x), at untried values of x are given by Equation 2.25:

ŷ(x) = β̂ + r^T(x) R^-1 (y - f β̂) [2.25]

where y is the column vector of length ns (number of sample points) which contains the values of

the response at each sample point, and f is a column vector of length ns which is filled with ones

when f(x) in Equation 2.18 is taken as a constant. In Equation 2.25, r^T(x) is the correlation

vector of length ns between an untried x and the sampled data points {x1, x2, ..., xns} and is

given by Equation 2.26.

r^T(x) = [R(x,x1), R(x,x2), ..., R(x,xns)]^T [2.26]

Finally, the β̂ in Equation 2.25 is estimated using Equation 2.27.

β̂ = (f^T R^-1 f)^-1 f^T R^-1 y [2.27]

When f(x) is assumed to be a constant, then β̂ is a scalar which simplifies the calculation of Equation 2.27 and all others involving β̂.

The estimate of the variance, σ̂^2, from the underlying global model (not the variance of the randomness in the observed data itself) is as follows:

σ̂^2 = (y - f β̂)^T R^-1 (y - f β̂) / ns [2.28]

where f is again a column vector of ones because f(x) is assumed to be a constant. The maximum likelihood estimates (i.e., "best guesses") for the θk used to fit the model are found by maximizing Equation 2.29 over θk > 0 (Booker, et al., 1995):

- [ns ln(σ̂^2) + ln |R|] / 2 [2.29]

Both σ̂^2 and |R| are functions of θk. While any values for the θk create an interpolative approximation model, the "best" kriging model is found by solving the k-dimensional unconstrained nonlinear optimization problem given by Equation 2.29; this process is discussed further in the next section. It is worth noting that in some cases using a single correlation parameter gives sufficiently good results (see, e.g., Booker, et al., 1995; Osio and Amon, 1996; Sacks, et al., 1989). In this work, however, a unique θ value for each dimension is always

considered based on past difficulties with scaling the design space to [0,1]k during the model

fitting process. The algorithms used in this dissertation to build and predict with a kriging model

are included in Appendix A.
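The fitting and prediction equations above (Equations 2.25 and 2.27-2.29) fit in a few lines of code. The sketch below is a minimal illustration, not the Appendix A implementation: it assumes the Gaussian correlation function and replaces the nonlinear MLE search with a coarse grid over candidate θ vectors; the function names are mine.

```python
import numpy as np

def fit_krige(X, y, theta_grid):
    """Fit an ordinary kriging model with constant f(x) = beta and Gaussian
    correlation (Eq. 2.21) by maximizing the log-likelihood of Eq. 2.29 over a
    user-supplied grid of candidate theta vectors."""
    ns = len(y)
    f = np.ones(ns)
    best = None
    for theta in theta_grid:
        D2 = (X[:, None, :] - X[None, :, :]) ** 2          # squared distances per dimension
        R = np.exp(-np.einsum("ijk,k->ij", D2, theta))     # correlation matrix
        Ri = np.linalg.inv(R)
        beta = (f @ Ri @ y) / (f @ Ri @ f)                 # Eq. 2.27
        resid = y - f * beta
        sigma2 = (resid @ Ri @ resid) / ns                 # Eq. 2.28
        loglik = -(ns * np.log(sigma2) + np.log(np.linalg.det(R))) / 2.0  # Eq. 2.29
        if best is None or loglik > best[0]:
            best = (loglik, theta, beta, Ri, resid)
    return best[1:]

def predict(x, X, theta, beta, Ri, resid):
    """Predict y at an untried x via Eq. 2.25: beta + r'(x) R^-1 (y - f beta)."""
    r = np.exp(-np.sum(theta * (x - X) ** 2, axis=1))      # Eq. 2.26
    return beta + r @ Ri @ resid

# Tiny one-dimensional demonstration.
X = np.array([[0.0], [0.3], [0.6], [1.0]])
y = (X[:, 0] - 0.4) ** 2
theta, beta, Ri, resid = fit_krige(X, y, [np.array([0.5]), np.array([2.0]), np.array([10.0])])
```

Because r(x) evaluated at a sample point reproduces a row of R, the predictor returns the sampled y values exactly there; that is, the model "honors the data."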

Once the MLEs for each theta have been found, the final step is to validate the model.

Since a kriging model interpolates the data, residual plots and R2 values—the usual model

assessments for response surfaces (cf., Myers and Montgomery, 1995)—are meaningless

because there are no residuals. Therefore, validating the model using additional data points is

essential if they can be afforded. If additional validation points are available, then the

maximum absolute error, average absolute error, and root mean square error (RMSE) for the

additional validation points can be calculated to assess model accuracy. These measures are

summarized in Table 2.2. In the table, nerror is the number of random test points used, yi is
the actual value from the computer code/simulation, and ŷi is the predicted value from the

approximation model.

Table 2.2 Error Measures for Kriging Metamodels

Name              Error Measure                                      Eqn. #

max. abs. error   max |yi - ŷi|,  i = 1, ..., nerror                 [2.30]

avg. abs. error   (1/nerror) Σ[i=1..nerror] |yi - ŷi|                [2.31]

RMSE              sqrt( Σ[i=1..nerror] (yi - ŷi)^2 / nerror )        [2.32]
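These three measures are straightforward to compute; a small Python sketch (the function name is illustrative):

```python
import numpy as np

def error_measures(y, yhat):
    """Maximum absolute error, average absolute error, and RMSE (Eqs. 2.30-2.32)
    over a set of validation points."""
    e = np.abs(np.asarray(y, dtype=float) - np.asarray(yhat, dtype=float))
    return float(e.max()), float(e.mean()), float(np.sqrt(np.mean(e ** 2)))

mx, avg, rmse = error_measures([1.0, 2.0, 3.0], [1.1, 1.8, 3.0])
```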

However, sometimes taking additional validation points is not possible due to the added

expense of running additional experiments on the computer code/simulation; thus, an alternative

model assessment which requires no additional points is needed. One such approach is the

leave-one-out cross validation (Mitchell and Morris, 1992a). In this approach, each sample

point used to fit the model is removed one at a time, the model is rebuilt without that sample

point, and the difference between the model without the sample point and actual value at the

sample point is computed for all of the sample points. The cross validation root mean square

error (cvrmse) is computed using Equation 2.33:

cvrmse = sqrt( Σ[i=1..ns] (yi - ŷi)^2 / ns ) [2.33]

The MLEs for the θk are not re-computed for each model; the initial θk MLEs based on the full

sample set are used. Mitchell and Morris (1992a) describe an approach which facilitates cross

validation since it can be time consuming depending on the sample size.
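Leave-one-out cross validation can be sketched as follows (again an illustration, assuming the Gaussian correlation function; the θk are those from the full-sample fit, as described above):

```python
import numpy as np

def cvrmse(X, y, theta):
    """Leave-one-out cross validation root mean square error (Eq. 2.33) for an
    ordinary kriging model with Gaussian correlation; theta is held fixed at the
    full-sample MLEs rather than re-estimated for each left-out point."""
    ns = len(y)
    errs = []
    for i in range(ns):
        keep = np.arange(ns) != i
        Xi, yi = X[keep], y[keep]
        f = np.ones(ns - 1)
        D2 = (Xi[:, None, :] - Xi[None, :, :]) ** 2
        Ri = np.linalg.inv(np.exp(-np.einsum("jkl,l->jk", D2, theta)))
        beta = (f @ Ri @ yi) / (f @ Ri @ f)
        r = np.exp(-np.sum(theta * (X[i] - Xi) ** 2, axis=1))
        errs.append(y[i] - (beta + r @ Ri @ (yi - f * beta)))
    return float(np.sqrt(np.mean(np.square(errs))))

X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
flat = cvrmse(X, np.ones(5), np.array([5.0]))   # constant data -> zero error
quad = cvrmse(X, X[:, 0] ** 2, np.array([5.0]))
```

Each iteration rebuilds and inverts an (ns-1) x (ns-1) correlation matrix, which is why the shortcut of Mitchell and Morris (1992a) is attractive for larger samples.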

Before a kriging metamodel (or any metamodel for that matter) can be created, the

design space must be sampled in order to obtain data to fit the model. Hence, an important

step in any metamodeling approach is the selection of an appropriate sampling strategy, i.e., an

experimental design by which the computer analysis or simulation code is queried. In the next

section, space filling and classical experimental designs are discussed.

2.4.3 Classical and Space Filling Experimental Designs

Many researchers (see, e.g., Currin, et al., 1991; Sacks and Schiller, 1988) argue that

classical experimental designs, such as the central composite designs and Box-Behnken designs,

are not well-suited for sampling deterministic computer experiments. Sacks, et al. (1989) state

that the “classical notions of experimental blocking, replication and randomization are irrelevant”

when it comes to deterministic computer experiments that have no random error; hence, designs

for deterministic computer experiments should “fill the space” as opposed to possess properties

for estimating the variability in the data.

Booker (1996) summarizes the difference between classical experimental designs and

new space filling designs well. In the classical design and analysis of physical experiments,

random variation is accounted for by spreading the sample points out in the design space and by

taking multiple data points (replicates), see Figure 2.20a. In deterministic computer

experiments, replication at a sample point is meaningless; therefore, the points should be chosen

to fill the design space. One approach is to minimize the integrated mean square error over the

design region (cf., Sacks, et al., 1989); the space filling design illustrated in Figure 2.20b is an

example of such a design.

As part of the research in this dissertation, the following question is addressed.

Q3. Are space filling designs better suited for building approximations of deterministic

computer analyses than classical experimental designs?

As stated in Section 1.3.1, Hypothesis 3 is that space filling designs are better

suited for building approximations of deterministic computer analyses than classical

experimental designs . In an effort to test this hypothesis, an investigation into the utility of

several classical and space filling experimental designs is conducted in Chapter 5. Eleven

different types of experimental designs are investigated in this dissertation: two classical experimental

designs and nine space filling experimental designs. The different designs are described next;

detailed descriptions of each design are also given in Appendices B and C.


(a) Classical design w/replicates (b) Space filling design w/o replicates

Figure 2.20 Example Classical and Space Filling Experimental Designs

Classical Experimental Designs

Classical experimental designs are so named because they have been developed for what are

considered to be the more “classical” applications of response surface metamodeling: physical

experiments which are plagued by variability and random error (see, e.g., Box and Draper,

1987; Myers, et al., 1989; Myers and Montgomery, 1995). Among these designs, the central

composite and Box-Behnken designs are well known and easily generated; hence, they are

employed in this work to serve as a basis for comparison against the sampling capability of

space filling designs. A brief description of these two types of designs follows.

Figure 2.21 Central Composite Design

A central composite design (CCD) is a combination of 2^k factorial points, 2k star points, and a center point for k factors, as shown in Figure 2.21. CCDs are the most widely used experimental design for fitting second-order response surfaces (Myers and Montgomery, 1995). Different CCDs are formed by varying the distance from the center of the design space to the star points; in this work, three types of CCDs are considered:

• ordinary central composite design (CCD) - star points are placed a distance of ±α (α > 1) from the center with the cube points placed at ±1 from the center,

• face centered central composite (CCF) design - star points are positioned on the
faces of the cube, and

• inscribed central composite (CCI) design - star points are positioned at ±1/α from the center with the cube points placed at ±1.

In addition, combinations of the CCD and CCF, and CCI and CCF are investigated based on

the suggestions and observations discussed in (Koch, 1997).
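The construction of CCD points is purely combinatorial; a sketch (the function name is illustrative, and the α = 1 comment reflects the CCF description above):

```python
import itertools
import numpy as np

def central_composite(k, alpha):
    """Points of a central composite design for k factors (Figure 2.21): 2^k
    factorial (cube) points at +/-1, 2k star points at +/-alpha along each
    axis, and one center point; alpha = 1 places the star points on the cube
    faces, i.e., the face centered CCF."""
    cube = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    star = np.zeros((2 * k, k))
    for i in range(k):
        star[2 * i, i] = alpha
        star[2 * i + 1, i] = -alpha
    center = np.zeros((1, k))
    return np.vstack([cube, star, center])

pts = central_composite(3, alpha=3 ** 0.5)   # 8 cube + 6 star + 1 center = 15 points
```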

A Box-Behnken design is formed by combining 2k factorial and incomplete block designs and is an important alternative to central composite designs because these designs require only three levels of each factor (Box and Behnken, 1960). However, Myers and Montgomery (1995) warn that these designs should not be used when accurate predictions at the extremes (i.e., the corners) are important. An example 13 point Box-Behnken design for three factors is shown in Figure 2.22.

Figure 2.22 Box-Behnken Design

Space Filling Experimental Designs

Numerous space filling experimental designs have been developed in an effort to provide more

efficient and effective means for sampling deterministic computer experiments. For instance,

Koehler and Owen (1996) describe several Bayesian and Frequentist types of space filling

experimental designs, including maximin and minimax designs, maximum entropy designs,

integrated mean squared error (IMSE) designs, orthogonal arrays, Latin hypercubes, scrambled

nets and randomized grids. Latin hypercube designs were introduced in (McKay, et al., 1979)

for use with computer codes and compared to random sampling and stratified sampling.

Minimax and maximin designs were developed by Johnson, et al. (1990) specifically for use

with computer experiments. Shewry and Wynn (1987; 1988) and Currin, et al. (1991) use the

maximum entropy principle to develop designs for computer experiments. Similarly, Sacks et

al. (1989) discuss entropy designs in addition to IMSE designs and maximum mean squared

error designs for use with deterministic computer experiments. Finally, a review of several

Bayesian experimental designs for linear and nonlinear regression is given in (Chaloner and

Verdinelli, 1995).

Comparisons of the different types of space filling experimental designs are few; often

the novel space filling design being described is compared against Latin hypercube designs and

random sampling (see, e.g., Kalagnanam and Diwekar, 1997; Park, 1994; Salagame and

Barton, 1997), but rarely is it compared against other space filling designs. An exception is

the maximin Latin hypercubes (Morris and Mitchell, 1995) which are compared against maximin

designs (Johnson, et al., 1990) and Latin hypercubes; the authors conclude by means of an

example that maximin Latin hypercube designs are better than either maximin or Latin

hypercube designs alone. In this dissertation, one of the contributions is to compare and

contrast a wide variety of space filling designs against themselves and classical experimental

designs. Toward this end, nine space filling experimental designs are investigated: Latin

hypercubes, orthogonal arrays, orthogonal Latin hypercubes, orthogonal array-based Latin

hypercubes, maximin Latin hypercubes, minimax Latin hypercubes, optimal Latin hypercubes,

Hammersley point designs, and uniform designs. An overview of each of these designs follows;

complete descriptions can be found in Appendices B and C.

A Latin hypercube is a matrix of n rows and k columns where n is the number of levels being examined and k is the number of design variables. Each column contains the levels 1, 2, ..., n, randomly permuted, and the k columns are matched at random to form the Latin hypercube (McKay, et al., 1979). These designs are the earliest space filling experimental designs intended for use with computer experiments. An example two factor, nine point Latin hypercube is shown in Figure 2.23.

Figure 2.23 Latin Hypercube Design
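The construction is simple to sketch (using a modern NumPy generator; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, k):
    """Random n-point Latin hypercube in k dimensions: each column is an
    independent random permutation of the levels 1, ..., n, and the columns
    are matched at random (McKay, et al., 1979)."""
    return np.column_stack([rng.permutation(n) + 1 for _ in range(k)])

lh = latin_hypercube(9, 2)
```

By construction, projecting the points onto any single axis recovers every level exactly once, which is the stratification property exploited by the hybrid designs below.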

An orthogonal array (OA) is a matrix of n rows and k columns with every element being one of q symbols: 0, ..., q-1 (Owen, 1992). An orthogonal array has an associated strength t depending on the number of combinations of levels appearing in any t columns of the OA. The orthogonal arrays used in this work are limited to q^2 runs where q is a prime power. An example nine point OA in three dimensions is shown in Figure 2.24.

Figure 2.24 Orthogonal Array Design

Orthogonal array-based Latin hypercube designs combine the orthogonality properties of an orthogonal array with the good stratification capabilities of a Latin hypercube (Tang, 1993). Tang provided an algorithm to generate these designs. An example of a six point OA-based Latin hypercube is shown in Figure 2.25.

Figure 2.25 OA-Based Latin Hypercube

An orthogonal Latin hypercube is a Latin hypercube which has orthogonal columns. These designs retain the orthogonality of traditional experimental designs (e.g., CCDs) while attempting to maintain a good spread of points throughout the design space. These designs are constructed by purely algebraic means using the process described in (Ye, 1997). An example nine point orthogonal Latin hypercube for two factors is shown in Figure 2.26.

Figure 2.26 Orthogonal Latin Hypercube

A maximin Latin hypercube is a Latin hypercube design which provides a good compromise between the maximin criterion (which maximizes the minimum distance between any two sample points (Johnson, et al., 1990)) and the good projection properties of Latin hypercube designs. The simulated annealing algorithm in (Morris and Mitchell, 1995) is used to construct these designs for varying sample sizes. An example seven point design is shown in Figure 2.27.

Figure 2.27 Maximin Latin Hypercube
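As an illustration of the criterion (not the simulated annealing algorithm of Morris and Mitchell), a crude random search over Latin hypercubes can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

def min_dist(D):
    """Minimum pairwise Euclidean distance of a design; the maximin criterion of
    Johnson, et al. (1990) seeks designs that maximize this quantity."""
    d = np.sqrt(((D[:, None, :] - D[None, :, :]) ** 2).sum(axis=-1))
    return d[np.triu_indices(len(D), k=1)].min()

def maximin_lh(n, k, tries=200):
    """Keep the best of `tries` random Latin hypercubes (levels scaled to
    [0, 1]) under the maximin criterion -- a crude stand-in for a proper
    optimization over the space of Latin hypercubes."""
    best, best_d = None, -1.0
    for _ in range(tries):
        D = np.column_stack([rng.permutation(n) for _ in range(k)]) / (n - 1.0)
        d = min_dist(D)
        if d > best_d:
            best, best_d = D, d
    return best

D = maximin_lh(7, 2)
```

The result keeps the one-dimensional stratification of a Latin hypercube while spreading the points apart in k dimensions.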

An optimal Latin hypercube is found by imposing a specific criterion on a Latin hypercube in the same fashion as the maximin Latin hypercubes. In this work, integrated mean square error (IMSE) optimal Latin hypercubes (Park, 1994) are employed. Park has provided his algorithm to generate these designs. An example eight point IMSE optimal Latin hypercube design for two factors is shown in Figure 2.28.

Figure 2.28 IMSE Optimal Latin Hypercube

A Hammersley sampling sequence design is a good design for placing n points in a k-dimensional hypercube (Kalagnanam and Diwekar, 1997), providing better uniformity properties over a k-dimensional space than Latin hypercubes. An example nine point Hammersley design for two factors is shown in Figure 2.29. Diwekar (1995) has provided an algorithm to generate HSS designs for use in this dissertation.

Figure 2.29 Hammersley Design
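A common construction for Hammersley points (a sketch; Diwekar's actual algorithm may differ in its details) pairs the sequence i/n with radical inverses in successive prime bases:

```python
import numpy as np

def radical_inverse(i, base):
    """Reflect the base-`base` digits of i about the radix point: 6 = 110 in
    base 2 becomes 0.011 in base 2, i.e. 0.375."""
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += f * (i % base)
        i //= base
        f /= base
    return result

def hammersley(n, k, primes=(2, 3, 5, 7, 11, 13)):
    """n-point Hammersley set in the k-dimensional unit hypercube: the first
    coordinate is i/n and the remaining k-1 coordinates are radical inverses
    of i in successive prime bases."""
    pts = np.empty((n, k))
    for i in range(n):
        pts[i, 0] = i / n
        for j in range(1, k):
            pts[i, j] = radical_inverse(i, primes[j - 1])
    return pts

H = hammersley(9, 2)
```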

A uniform design is a design based strictly on number-theoretic methods; these designs are generated from vectors of good lattice points under the mean square error criterion (Fang and Wang, 1994). All of the uniform designs considered in this work have q levels, where q equals the number of sample points n and n is strictly an odd number. An example nine point uniform design for two factors is shown in Figure 2.30.

Figure 2.30 Uniform Design

In addition to the maximin Latin hypercube designs from (Morris and Mitchell, 1995), a

minimax Latin hypercube design is introduced in this dissertation. Only a brief description of this

unique design is given here; a detailed description of the design and a discussion of how it is

generated are included in Appendix C. From an intuitive standpoint, because prediction with

kriging relies on the spatial correlation between data points, a design which minimizes the

maximum distance between the sample points and any point in the design space should yield an

accurate predictor. Such a design is referred to as a minimax design (Johnson, et al., 1990).

While the minimax criterion ensures good coverage of the design space by minimizing the

maximum distance between points, it does not ensure good stratification of the design space

(i.e., when the sample points are projected into 1-dimension, many of the points may overlap

(cf., Johnson, et al., 1990)). Meanwhile, because a Latin hypercube ensures good stratification

of the design space, combining it with the minimax criterion provides a good compromise

between the two much as the maximin Latin hypercubes developed by Morris and Mitchell

(1995) do. Example 9, 11, and 14 point minimax Latin hypercube designs are shown in

Figure 2.31. The specifics of the genetic algorithm used to generate these minimax Latin

hypercube designs are detailed in Appendix C.


Figure 2.31 Example 9, 11, and 14 Point Minimax Latin Hypercubes

In Chapter 5, the minimax Latin hypercube designs are compared against the other

classical and space filling experimental designs discussed in this section. A look ahead to that

and the next chapters is offered in the next section.

2.5 A LOOK BACK AND A LOOK AHEAD

Through the review of the literature which is presented in this chapter, the necessary

elements for a method to model and design a scalable product platform for a product family

have been identified by elucidating the research questions (and hypotheses) introduced in

Section 1.3.1. In the next chapter, these constitutive elements are integrated to create the

Product Platform Concept Exploration Method (PPCEM), providing a Method which facilitates

the synthesis and Exploration of a common Product Platform Concept which can be scaled

into an appropriate family of products. The relationship between the individual sections in this

chapter and the PPCEM developed in the next chapter are illustrated in Figure 2.32. In

particular, the market segmentation grid is revisited in Section 3.1.1 as it applies to the PPCEM.

The concept of a “conceptual noise factor” is formalized into a scale factor in Section 3.1.2

which is fundamental to the utilization of the PPCEM. Metamodeling techniques within the

PPCEM are discussed in Section 3.1.3. Aggregation of the individual product specifications
into an appropriate compromise DSP formulation for the product family is described in Section

3.1.4, and development of the product platform portfolio for the product family is explained in

Section 3.1.5.

In addition to the presentation of the PPCEM, the research hypotheses are revisited in

Section 3.2 in the next chapter. Supporting posits are stated for each hypothesis, and the

verification strategy for testing the hypotheses is elaborated in Section 3.3. The discussion in

Section 3.3 sets the stage for the example problems which are presented in Chapters 4 through

7 to verify the hypotheses.

[Figure: the Product Platform Concept Exploration Method of Chapter 3 is assembled from space filling DoE and kriging modeling (§3.1.3), modeling mean and variance (§3.1.4), conceptual noise factors (§3.1.2), the scalable product platform (§3.1.5), and the market segmentation grid (§3.1.1); these elements build on metamodeling (§2.3), robust design principles (§2.2), and product family design (§2.1), all resting on the foundations of Decision-Based Design and the Robust Concept Exploration Method.]

Figure 2.32 Pictorial Review of Chapter 2 and Preview of Chapter 3

CHAPTER 3

THE PRODUCT PLATFORM CONCEPT


EXPLORATION METHOD


In this chapter, the elements of the previous chapters are synthesized to meet the

principal objective in this dissertation, namely, to develop the Product Platform Concept

Exploration Method (PPCEM) for designing a common scalable product platform for a product

family. An overview of the PPCEM and its associated steps and tools is given in Section 3.1

with each step of the PPCEM and its constituent elements elaborated in Sections 3.1.1 through

3.1.5; the resulting infrastructure of the PPCEM is presented in Section 3.1.6. In Section 3.2,

the research hypotheses are revisited from Section 1.3.1 and supporting posits are identified.

Section 3.3 follows with an outline of the strategy for verification and testing of the research

hypotheses. Section 3.4 concludes the chapter with a recap of what has been presented and a

look ahead to the metamodeling studies in Chapters 4 and 5 and the example problems in

Chapters 6 and 7 which are used to test the research hypotheses and demonstrate the

application of the PPCEM.

3.1 OVERVIEW OF THE PPCEM AND RESEARCH HYPOTHESES

As stated in Section 1.3.2, the principal contribution in this dissertation is the Product

Platform Concept Exploration Method (PPCEM) for designing a common scalable product

platform for a product family. As the name implies, the PPCEM is a Method which facilitates

the synthesis and Exploration of a common Product Platform Concept which can be scaled

into an appropriate family of products. The steps and associated tools (with relevant sections

noted in parentheses) in the PPCEM are illustrated in Figure 3.1.

PPCEM Steps (with associated Tools):

Overall Design Requirements
Step 1 - Create Market Segmentation Grid [Market Segmentation Grid (§2.2)]
Step 2 - Classify Factors and Ranges [Robust Design Principles (§2.3)]
Step 3 - Build and Validate Metamodels [Metamodeling Techniques (§2.4)]
Step 4 - Aggregate Product Platform Specifications [Compromise Decision Support Problem (§1.2)]
Step 5 - Develop Product Platform Portfolio
Product Platform Portfolio

Figure 3.1 Steps and Tools of the PPCEM

There are five steps to the PPCEM as illustrated in Figure 3.1. The input to the

PPCEM are the overall design requirements, and the output of the PPCEM is the product

platform portfolio which is described in Section 3.1.5. The tools utilized in each step of the

PPCEM are shown on the right hand side of Figure 3.1; their involvement in the various steps of

the PPCEM is elaborated further in Sections 3.1.1 through 3.1.5 wherein the implementation of

each step of the PPCEM is described. These steps prescribe how to formulate the problem

and describe how to solve it; the actual implementation of each step is liable to vary from

problem to problem.

3.1.1 Step 1 - Create the Market Segmentation Grid

Given the overall design requirements, Step 1 in the PPCEM is to create the market

segmentation grid as shown in Figure 3.2. As discussed in Section 2.2.1, the market

segmentation grid provides a link between management, marketing, and engineering design to

help identify and map which type of leveraging can be used to meet the overall design

requirements and realize a suitable product platform and product family. In the PPCEM, the

market segmentation grid serves as an attention directing tool to help identify potential

opportunities for horizontal leveraging, vertical leveraging, or a beachhead approach to product

platform design. Examples of this step are given in Sections 6.2 and 7.1.3.

[Figure: from the Overall Design Requirements, a market segmentation grid is created to identify leveraging opportunities: 1. vertical, 2. horizontal, 3. beachhead.]

Figure 3.2 Step 1 - Create the Market Segmentation Grid

3.1.2 Step 2 - Classify Factors and Ranges

Once the market segmentation grid has been created, Step 2 of the PPCEM is to

classify factors as illustrated in Figure 3.3. Factors are classified in the following manner:

[Figure: the market segmentation grid of Step 1 is augmented with factor classification: control factors (x), noise factors (z), and scale factors (s) are inputs to the product platform, which yields responses (y).]

Figure 3.3 Step 2 - Factor Classification

• Responses are performance parameters of the system; in the problem formulation, they
may be constraints or goals or both and are identified from the overall design
requirements and the market segmentation grid.

• Control factors are variables that can be freely specified by a designer; settings of the
control factors are chosen to minimize the effects of variations in the system while
achieving desired performance targets and meeting the necessary constraints. Signal
factors are also lumped with the control factors because it is often difficult to know, a
priori, which design variables are control factors that can be used to minimize the
sensitivity of the design to noise variations and which are signal factors that have
no influence on the robustness of the system.

• Noise factors are parameters over which a designer has no control or which are too
difficult or expensive to control.

• Scale factors are factors around which a product platform is leveraged, whether through
vertical scaling, horizontal scaling, or a combination of the two.

Appropriate ranges for the control and noise factors are identified during this step, and

constraints and goal targets for the responses are also identified.
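The outcome of this classification step can be recorded in a simple bookkeeping structure, as sketched below; the motor-inspired factor names, ranges, and targets are illustrative assumptions, not values taken from the examples in Chapters 6 and 7.

```python
# Step 2 bookkeeping: factors, ranges, and response targets/constraints.
# All names and numbers here are hypothetical placeholders.
factors = {
    "control": {"wire_turns": (100, 1500), "current_A": (0.1, 6.0)},
    "noise": {"supply_voltage_V": (110.0, 120.0)},
    "scale": {"stack_length_cm": (1.0, 10.0)},  # platform leveraged around this
}
responses = {
    "torque_Nm": {"targets": [0.05, 0.125, 0.30]},  # one target per variant
    "efficiency": {"goal": "larger is better"},
    "mass_kg": {"constraint_upper": 2.0},
}
# The ranges define the design space that is sampled in Step 3.
assert all(lo < hi for group in factors.values() for lo, hi in group.values())
```

Keeping the scale factor in its own group makes it easy to exclude from, e.g., commonality computations later in the method.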

The relationship between different leveraging capabilities and types of scale factors

considered in this dissertation is illustrated in Figure 3.4. As discussed in Section 2.2.1, three

types of leveraging can be mapped using the market segmentation grid: (1) vertical leveraging,

(2) horizontal leveraging, and (3) a beachhead approach which is a combination of vertical and

horizontal leveraging. Correspondingly, two types of scale factors can be identified—

parametric and conceptual/configurational—related to the type of scaling which is identified

in conjunction with the market segmentation grid as shown in Figure 3.4.

As shown in Figure 3.4, the relationship between each type of scale factor and the three

types of leveraging is as follows:

• Vertical leveraging: parametric scale factors, such as the length of a motor to


provide varying torque as demonstrated in Chapter 6 or the number of compressor
stages in an aircraft engine as in the Rolls Royce RTM322 engine example (see
Rothwell and Gardiner, 1990) mentioned in Section 1.1.1.

• Horizontal leveraging: conceptual/configurational scale factors, such as the size of

the evaporator in a family of air conditioners (see Chang and Ward, 1995) or the number of
passengers carried by the Boeing 747 family of aircraft (Rothwell and Gardiner, 1990),
as demonstrated in the General Aviation aircraft example in Chapter 7.

• Beachhead approach: combination of parametric, conceptual, and/or


configurational scale factors as needed.

[Figure: three market segmentation grids. (a) Vertical leveraging: a platform is scaled up or down across price/performance tiers within a market segment; scale factors are parametric, e.g., the length of a motor to provide varying torque or the number of compressor stages in an engine. (b) Horizontal leveraging: a high-end or low-end platform is leveraged across market segments; scale factors are conceptual and/or configurational, e.g., the number of passengers on an aircraft or the size of the evaporator in a family of air conditioners. (c) Beachhead approach: a mid-range platform is leveraged both vertically and horizontally; scale factors are a combination of parametric, conceptual, and/or configurational scaling factors.]

Figure 3.4 Relationship of Scale Factors to the Market Segmentation Grid

If known, an appropriate range—upper and lower limit—is identified for each scale factor

during this step of the PPCEM; otherwise, finding this range becomes part of the design

process. Examples of Step 2 are offered in Sections 6.2 and 7.3. Once the responses, control,

noise, and scale factors and corresponding constraints/targets and ranges have been identified,

the next step is to build metamodels for each response.

3.1.3 Step 3 - Build and Validate Metamodels

Step 3 in the PPCEM is to build and validate metamodels relating the control, noise,

and scale factors to the responses using the elements of the PPCEM shown in Figure 3.5. The

goal in Step 3 is to build inexpensive-to-run surrogate approximations of the computer analyses

or simulation routines. Because robust design principles are being used, these

metamodels are either functions of control, noise, and scale factors as discussed in Sections

2.3.1 and 2.4, or approximate the mean and standard deviation of each response for known

variations in the noise and scale factors. If the analytic equations or simulation routine are not

sufficiently expensive to warrant metamodeling, this step can be skipped provided that the mean

and standard deviation of each response (as a result of variation in the scale factor and any

relevant noise factors) can be computed easily. The universal motor example in Chapter 6

forgoes metamodel construction because the analyses permit the mean and standard deviation

of each response to be easily computed; however, such is not the case in Chapter 7, the design

of a family of General Aviation aircraft. Chapter 7 makes extensive use of metamodels to

facilitate the implementation of robust design and the search for a good aircraft platform.

[Figure: Processors C, D, and E. Processor C, design of experiments, offers classical designs (Plackett-Burman, fractional factorial, central composite, etc.) and space filling designs (minimax and maximin Latin hypercubes, randomized orthogonal arrays, uniform designs, Hammersley points, etc.) for sampling the factors and ranges from Processor B. Processor D is the analysis or simulation routine. Processor E, metamodeling, comprises model choice (E1: response surface or kriging) and model fitting and analysis (E2: analysis of variance, eliminating unimportant factors, reducing the design space to the region of interest, and planning additional experiments), after which metamodels are built, validated, and used.]

Figure 3.5 Step 3 - Metamodel Development

The steps for building and validating the metamodels follow the traditional metamodeling

approach described in Section 2.4.1. An experimental design is selected by which the

computer analysis or simulation program used to model the system is queried to obtain data.

The experimental design is used to sample the design space identified by the ranges (i.e.,

bounds) on the control, noise, and scale factors. The resulting sample data then is used to build

a metamodel (e.g., a kriging model, Section 2.4.2) for each response. Model accuracy then is

assessed through additional validation points or other appropriate procedures for the

specific type of metamodel employed.
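To make the build-and-validate loop concrete, the sketch below fits a simple kriging model (zero mean, Gaussian correlation function) to samples of a stand-in one-dimensional "simulation" and checks the interpolation property; the sampling plan, correlation parameter theta, and test function are illustrative assumptions, and a practical implementation would estimate theta by maximum likelihood as discussed in Section 2.4.2.

```python
import math

def krige_fit(X, y, theta=10.0, nugget=1e-10):
    """Fit a simple (zero-mean) kriging model with Gaussian correlation
    R(h) = exp(-theta * h**2); returns weights c solving R c = y."""
    n = len(X)
    A = [[math.exp(-theta * (X[i] - X[j]) ** 2) + (nugget if i == j else 0.0)
          for j in range(n)] + [y[i]] for i in range(n)]
    for col in range(n):  # Gaussian elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        c[r] = (A[r][n] - sum(A[r][k] * c[k] for k in range(r + 1, n))) / A[r][r]
    return c

def krige_predict(X, c, x, theta=10.0):
    return sum(ci * math.exp(-theta * (x - xi) ** 2) for ci, xi in zip(c, X))

f = lambda x: math.sin(2 * math.pi * x)   # stand-in "simulation routine"
X = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]        # experimental design (Processor C)
y = [f(x) for x in X]
c = krige_fit(X, y)
# Kriging interpolates: predictions at the sample sites reproduce the data.
assert all(abs(krige_predict(X, c, xi) - yi) < 1e-6 for xi, yi in zip(X, y))
validation_error = abs(krige_predict(X, c, 0.3) - f(0.3))  # held-out check
```

The interpolation property shown in the final assertion is revisited in Posit 2.1: no random-error assumption is needed to build the model.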

If identifying an appropriate region of interest for constructing the metamodels is difficult,

a sequential approximation strategy with screening experiments may be employed. Description

of such an approach can be found in, e.g., (Chen, et al., 1997; Koch, et al., 1997; Myers and

Montgomery, 1995). The difficulty, then, lies in defining an appropriate design space. In this

dissertation, the design space for the General Aviation aircraft example problem is known,

making identification of a good design space appear trivial. In reality it is not, and there is often

great difficulty finding an appropriately good design space. Identifying and quantifying a “good”

design space is not addressed in this dissertation; it is a possibility for future work (see Section

8.3).

3.1.4 Step 4 - Aggregate Product Platform Specifications

Once the necessary metamodels have been built and validated, Step 4 in the PPCEM is

to aggregate the product platform specifications. This involves formulating an appropriate

compromise DSP to model the necessary constraints and goals for the product family and

product platform based on the overall design requirements, the market segmentation developed

in Step 1, and the factor classification and ranges developed in Step 2, see Figure 3.6. It is

imperative that product constraints or goals given in the overall design requirements that are not

captured within the desired platform leveraging strategy be included in the compromise DSP.

[Figure: the compromise DSP (Processor F) is formulated from the overall design requirements, the market segmentation grid (Processor A), and the factors and ranges (Processor B): find the control variables; satisfy the constraints, the goals (either "mean on target" and "minimize deviation" or Cdk values), and the bounds; and minimize the deviation function.]

Figure 3.6 Step 4 - Aggregating the Product Platform Specifications

Two approaches for aggregating the product platform specifications are demonstrated

in this dissertation.

1. Separate goals for “bringing the mean on target” and “minimizing the variation” are
created (see Section 2.3.2) for the product family. This follows the implementation of
robust design principles traditionally used in the RCEM, except that here they are
applied to a product family as opposed to a single product. The procedure is as
follows:

a. identify targets from the market segmentation grid and overall design requirements
for each derivative product;

b. compute target means for the platform by averaging the individual targets;

c. compute standard deviations for the platform from the individual targets by
dividing the range of each target by six, assuming a normal distribution with ±3σ
variations, or by √12, assuming a uniform distribution; and

d. create separate goals for “bringing the mean on target” and “minimizing the
variation” as necessary.

2. Design capability indices (see Section 2.3.2) are used to assess the capability of a family
of designs to satisfy a ranged set of design requirements. The procedure is as follows:

a. identify upper and lower requirement limits (URL and LRL, respectively) from the
market segmentation grid and overall design requirements for each derivative
product;

b. compute the mean and standard deviation as the average and standard deviation of
the individual instantiations of the product family for a given set of design variables;
and

c. formulate Cdk targets and constraints depending on whether “smaller is better,”


“nominal is best,” or “larger is better” (refer to Section 2.3.1).
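The two aggregation approaches above can be sketched in a few lines; the torque targets and requirement limits below are hypothetical, not values from the examples in Chapters 6 and 7.

```python
import math
import statistics

def aggregate_targets(targets, dist="normal"):
    """Approach 1: aggregate individual variant targets into a platform
    mean and standard deviation (range/6 for a normal distribution with
    +/-3 sigma variation; range/sqrt(12) for a uniform distribution)."""
    mu = statistics.mean(targets)
    rng = max(targets) - min(targets)
    sigma = rng / 6.0 if dist == "normal" else rng / math.sqrt(12.0)
    return mu, sigma

def cdk(mu, sigma, lrl=None, url=None):
    """Approach 2: design capability index against a ranged set of
    requirements; uses URL alone for "smaller is better", LRL alone for
    "larger is better", and both limits for "nominal is best"."""
    terms = []
    if url is not None:
        terms.append((url - mu) / (3.0 * sigma))
    if lrl is not None:
        terms.append((mu - lrl) / (3.0 * sigma))
    return min(terms)

# Hypothetical torque targets (N*m) for three derivative products:
mu, sigma = aggregate_targets([0.25, 0.35, 0.50])
# Hypothetical family response mean/std. dev. against requirement limits:
capability = cdk(0.40, 0.02, lrl=0.34, url=0.46)  # Cdk = 1: just capable
```

A Cdk of one indicates the family of responses just satisfies the ranged requirements; values above one indicate margin, which is why the compromise DSP in Step 5 targets Cdk = 1.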

The first approach is utilized in the universal motor example in Chapter 6 (see Section 6.3 in

particular for more details on its implementation). Meanwhile, design capability indices are

employed in the General Aviation aircraft example in Chapter 7 (see Section 7.4).

The compromise DSP is used to determine the values for the control (design) variables

which best satisfy the product family goals (“bringing the mean on target” and “minimizing the

variation” in the first; making Cdk = 1 in the second) while satisfying the necessary constraints.

Constraints can be either worst-case, evaluated on an individual basis, or aggregated in

a similar manner as the goals, constraining the mean and deviation of the responses or the

appropriate Cdk. The compromise DSP is exercised in Step 5 of the PPCEM to obtain the

product platform portfolio as described next.

3.1.5 Step 5 - Develop Product Platform Portfolio

Step 5 of the PPCEM is to solve the compromise DSP using the aggregate product

platform specifications to develop the product platform portfolio. This step makes use of the

metamodels created in Step 3 in conjunction with the compromise DSP and the aggregate

product specifications formulated in Step 4; design scenarios for exercising the compromise

DSP are abstracted from the overall design requirements, see Figure 3.7. The resulting “pool”

of solutions obtained by exercising the compromise DSP as such is referred to as a solution

portfolio. This portfolio of solutions can provide a wealth of information about the appropriate

settings for the design variables for the product platform based on different design scenarios or

robustness considerations; hence, the solution portfolio is called the product platform

portfolio.

[Figure: the compromise DSP (Processor F), exercised in conjunction with the metamodels from Processor E, transforms the overall design requirements into the product platform portfolio.]

Figure 3.7 Step 5 - Developing the Product Platform Portfolio

The concept of a solution portfolio is not new to this research; it is simply a more

appropriate name for what has been previously referred to as a ranged set of specifications

(see, e.g., Lewis, et al., 1994; Simpson, et al., 1996; Simpson, et al., 1997a; Smith and

Mistree, 1994). The objective when using the PPCEM is to generate a variety of options

for product platforms; it is not necessarily used to evaluate these options or select one

from them. It facilitates generating these options with the end result being the product

platform portfolio, namely, the “pool” of solutions (i.e., design variable settings) which should

be maintained in order for the product platform to have sufficient flexibility to meet the desired

design scenarios in the event that one scenario is preferred to the next.

In addition to developing the product platform portfolio for the product family, product

variety tradeoff studies also can be performed by making use of two measures—the Non-
Commonality Index (NCI) and the Performance Deviation Index (PDI)—which are described

as follows:

Non-Commonality Index (NCI): NCI is used to assess the amount of variation between
parameter settings of each product within a product family; the smaller the variation, the
smaller NCI, and the more common the products within the product family. Computing
NCI is perhaps best illustrated through example; consider the three products shown in
Figure 3.8. Assume that each product is described by three design variables: x1, x2, and
x3 (if these three hypothetical products were electric motors, then x1, x2, and x3 might be
the outer radius of the motor, the length of the motor, and the number of windings in the
motor). First, the dissimilarity of the design variable settings for each product within
the family is computed as follows:

1. Compute the mean µj of each design variable xj within the product family and the
absolute value of the difference between µj and xi,j for each of the i products.

2. Normalize each difference by the range of that particular design variable: [upper
bound (ubj) - lower bound (lbj)]; this measures the variation in the values of the
design variables relative to the total range for that design variable.

3. Compute the average of the resulting normalized differences; this value is denoted
DIj in the figure and is the dissimilarity of the settings of xj for the group of products.

The scale factor around which the product family is derived is not included in this
computation. NCI is taken as a weighted sum of the individual DIj, where the weights
reflect the relative difficulty or cost associated with allowing each parameter to vary.
For an electric motor, for instance, it may be easier or cheaper to allow the number of
windings (x3) to vary between different motor models than to allow the outer radius
(x1) to vary. In this case, w1 would be much larger than w3 to reflect this within
the product family.

[Figure: a worked example for three products described by design variables x1 = (2.5, 2.5, 2.8) with bounds [2, 3], x2 = (12.1, 10.0, 10.9) with bounds [10, 20], and x3 = (13.0, 18.0, 35.0) with bounds [0, 100]. The dissimilarity index of design variable j across the n products in the family is

DIj = (1/n) Σi |µj - xi,j| / (ubj - lbj),   i = 1, ..., # products in family; j = 1, ..., # design variables,

which gives DI1 = 0.133, DI2 = 0.0733, and DI3 = 0.0867. With weights (0.55, 0.30, 0.15),

NCI = Σj wj DIj = 0.55(0.133) + 0.30(0.0733) + 0.15(0.0867) = 0.108.]

Figure 3.8 Estimating the Dissimilarity Index for a Product Family
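The dissimilarity computation of Figure 3.8 can be sketched directly; the products, bounds, and weights below reproduce the hypothetical values used in the figure.

```python
def nci(products, bounds, weights):
    """Non-Commonality Index: weighted sum of the dissimilarity index
    DI_j of each design variable across the product family."""
    n = len(products)                      # number of products in family
    di = []
    for j, (lb, ub) in enumerate(bounds):  # one DI per design variable
        settings = [p[j] for p in products]
        mean_j = sum(settings) / n
        di.append(sum(abs(mean_j - x) for x in settings) / (n * (ub - lb)))
    return sum(w * d for w, d in zip(weights, di)), di

# Three hypothetical products described by (x1, x2, x3), as in Figure 3.8:
products = [(2.5, 12.1, 13.0), (2.5, 10.0, 18.0), (2.8, 10.9, 35.0)]
bounds = [(2.0, 3.0), (10.0, 20.0), (0.0, 100.0)]
value, di = nci(products, bounds, weights=[0.55, 0.30, 0.15])
# di is approximately [0.133, 0.073, 0.087] and value approximately 0.108,
# matching the figure; note the scale factor is excluded from the inputs.
```

Because each DIj is normalized by its variable's range, design variables with very different units and magnitudes contribute to NCI on a common scale.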

Performance Deviation Index: Assuming that a market niche is defined by a set of goal
targets and constraints and that the necessary constraints are met for each individual
derivative product, then the deviation variables in the compromise DSP are a direct
measure of how well each derivative product meets its targets. The Performance
Deviation Index (PDI) for a product family thus is taken as a linear combination
(possibly weighted) of the deviation variables for each derivative product within the
product family as given by Equation 3.1:

PDI = Σ wi Zi ,   i = 1, ..., n          [3.1]

where i = {1, ..., # products in family}, and Zi is the corresponding deviation function
for each derivative product within the product family. Weightings may be used to bias
the measure for certain products within the family.
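Equation 3.1 reduces to a few lines of code; the deviation-function values below are hypothetical.

```python
def pdi(deviations, weights=None):
    """Performance Deviation Index (Equation 3.1): a (possibly weighted)
    sum of the deviation-function value Z_i of each derivative product."""
    if weights is None:
        weights = [1.0] * len(deviations)  # unweighted by default
    return sum(w * z for w, z in zip(weights, deviations))

# Hypothetical deviation-function values for three derivative products:
total = pdi([0.05, 0.12, 0.08])  # sums to 0.25 (within floating-point error)
```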

Example computations of the NCI and PDI for a family of products are demonstrated in the

General Aviation aircraft example (see Section 7.6.2) and are explained in more detail in the

context of that example.

NCI and PDI are, in and of themselves, ad hoc measures for a product family, similar to

the commonality indices and platform efficiency and effectiveness measures discussed in Section

2.2.2. However, having these measures for non-commonality and performance deviation for a

family of products allows product variety tradeoff studies to be performed, see Figure 3.9 and

Figure 3.10.

[Figure: PDI plotted against NCI. The worst designs have high NCI and high PDI; the best designs have low NCI and low PDI. Designs based on a common product platform fall toward low NCI, while individually optimized designs fall toward low PDI.]

Figure 3.9 Product Non-Commonality Versus Performance

By plotting NCI and PDI for a family of designs as illustrated in Figure 3.9, regions of

good and bad product family designs can be identified; the worst designs have high NCI and

PDI, while the best have low NCI and PDI. Individually optimized designs within a product

family, where commonality is not important, are liable to have a low PDI but a high NCI for the

resulting group of products. On the other hand, a product family based on a common product

platform is liable to have a low NCI; ideally, a low PDI is desirable but may be difficult to

achieve depending on the amount of commonality desired between products within the resulting

product family.

[Figure: PDI versus NCI tradeoff curve. Benchmark (individually designed) products anchor the low-PDI end; PPCEM platform designs anchor the low-NCI end. ΔNCIgain and ΔPDIlost denote the gain in NCI and loss in PDI from having a common product platform; ΔNCIi and ΔPDIi denote the change in NCI and PDI from allowing i variables to vary between designs.]

Figure 3.10 Sample Product Variety Tradeoff Study

NCI vs. PDI curves of the form shown in Figure 3.10 can be generated by trading off

product commonality for product performance and vice versa. By designing each product

individually, benchmark designs can be created which have a low PDI. Meanwhile, the

platform designs obtained by implementing the PPCEM have a low NCI. What is of interest

is the resulting ΔPDIlost and ΔNCIgain, which assess the tradeoff between commonality and

performance. If ΔPDIlost is too large, the non-commonality of the designs can be increased

(ΔNCIi) in hopes of obtaining a large decrease in performance deviation (ΔPDIi). Ideally,

traversing the front of the envelope provides the largest ΔPDIi with the smallest ΔNCIi. This

curve is generated in the General Aviation aircraft example (Section 7.6.3), and additional

insights are offered in the context of that example.
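The "front of the envelope" in such a study is simply the set of non-dominated (NCI, PDI) pairs, which can be filtered as sketched below; the candidate values are hypothetical.

```python
def pareto_front(points):
    """Keep the non-dominated (NCI, PDI) pairs: a design survives if no
    other design has both lower-or-equal NCI and lower-or-equal PDI."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

# Hypothetical (NCI, PDI) pairs for candidate product-family designs:
designs = [(0.05, 0.40), (0.10, 0.25), (0.20, 0.15),
           (0.30, 0.14), (0.25, 0.30)]
front = pareto_front(designs)  # (0.25, 0.30) is dominated by (0.20, 0.15)
```

Traversing the surviving pairs from low NCI to low PDI traces exactly the commonality-versus-performance tradeoff discussed above.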

3.1.6 Infrastructure of the PPCEM

By assembling the various elements of the PPCEM, the complete infrastructure of the

PPCEM is shown in Figure 3.11. As illustrated in the figure, the PPCEM consists of

“Processors” A-F which are employed as the overall design specifications are transformed into

the product platform portfolio. As described in the previous sections, each step employs one or

more “processors” within this infrastructure.

[Figure: the complete PPCEM infrastructure. The overall design requirements enter Processor A (market segmentation grid: identify vertical, horizontal, or beachhead leveraging) and Processor B (factors and ranges: control factors x, noise factors z, scale factors s, responses y); Processor C (design of experiments: classical and space filling designs) samples the factors for Processor D (analysis or simulation routine); Processor E (metamodeling: model choice between response surface and kriging, then model fitting and analysis) builds, validates, and uses the metamodels; and Processor F (the compromise DSP) is exercised to produce the product platform portfolio.]

Figure 3.11 Infrastructure of the PPCEM

Step 1 - Create Market Segmentation Grid relies on human judgment and
engineering “know-how” as Processor A in the PPCEM to map the overall
design requirements into an appropriate market segmentation grid and identify
leveraging opportunities.

Step 2 - Classify Factors and Ranges relies on human judgment and Processor
B in the PPCEM to map the overall design requirements and market
segmentation grid into appropriate control, noise, and scale factors and identify
corresponding ranges for each. The responses being investigated also need to
be identified in this step of the process.

Step 3 - Build and Validate Metamodels relies on human judgment and Processors
C, D, and E for construction and validation of the necessary metamodels.

Step 4 - Aggregate Product Platform Specifications relies on human judgment to
formulate a compromise DSP, Processor F, using information from Processors
A and B and the overall design requirements.

Step 5 - Develop the Product Platform Portfolio involves exercising Processor F in


combination with the metamodels in Processor E to obtain the product platform
portfolio which best satisfies the given overall design requirements.

Referring back to Figure 1.5, the structure of the PPCEM is very similar to the RCEM;

this is not a coincidence. In essence, the PPCEM is derived from the RCEM through a series

of modifications based on the research questions and hypotheses presented in Section 1.3.1.

The research hypotheses are revisited next.

3.2 RESEARCH HYPOTHESES AND POSITS FOR THE PPCEM

As stated in Section 1.3, there are three main hypotheses in this dissertation:

Hypothesis 1: The Product Platform Concept Exploration Method provides a method

for developing a scalable product platform for a product family.

Hypothesis 2: Kriging is a viable alternative for building metamodels of deterministic

computer analyses.

Hypothesis 3: Space filling experimental designs are better suited for building

metamodels of deterministic computer experiments than classical experimental

designs.

As discussed in Section 1.3.1, Hypotheses 2 and 3 help to support Hypothesis 1 by increasing

the efficiency of the PPCEM but have ramifications beyond the PPCEM itself. To facilitate

verification of Hypothesis 1, three sub-hypotheses have been formulated; these sub-hypotheses

are as follows:

Sub-Hypothesis 1.1: The market segmentation grid can be utilized to help identify

scale factors for a product platform.

Sub-Hypothesis 1.2: Robust design principles can be used to facilitate the design of a

common scalable product platform by minimizing the sensitivity of a product

platform to variations in scale factors.

Sub-Hypothesis 1.3: Individual targets for product variants can be aggregated into an

appropriate mean and variance and used in conjunction with robust design

principles to effect a common product platform for a product family .

It is upon these hypotheses that the PPCEM and the work in this dissertation are

grounded. The relationship between these hypotheses and the modifications to RCEM which

form the PPCEM are presented in the next section. This is followed in Section 3.2.2 with

supporting posits for the research hypotheses.

3.2.1 Relationship of the Research Hypotheses to the RCEM

As the research hypotheses are addressed, modifications to the RCEM are made and

the PPCEM thus is realized. The relationships between the research hypotheses, designated as

H1, H1.1, ..., H3, and the specific elements of the RCEM are illustrated in Figure 3.12.

Addressing the first hypothesis provides an interface with marketing, enabling the identification

of scalable product platforms; this is accomplished through the addition of a new module to the

RCEM, namely, the market segmentation grid. Scale factors (Sub-Hypothesis 1.1) are then

identified through the use of the market segmentation grid; it is around these scale factors

that a scalable product platform is designed. Addressing this hypothesis results in a modification to the factor

classification step of the RCEM.

[Figure: the RCEM annotated with the research hypotheses. H1.1: the market segmentation grid is added to classify scale factors within the factors and ranges (Processor A). H1.2 and H1.3: the compromise DSP (Processor F), with goals for "mean on target" and "minimize deviation," supports robust design of scalable product platforms, yielding the platform portfolio. H2: kriging augments the response surface models (Processor E). H3: space filling designs of experiments augment the point generator (Processor B); Processor C holds the simulation programs and Processor D the experiments analyzer.]

Figure 3.12 Relationships Between Hypotheses and RCEM Modifications

Sub-Hypothesis 1.2 relates to designing the actual product platform; robust design

principles are abstracted for use in product family design by aggregating product family targets

and constraints into appropriate means and variances. The resulting formulation allows robust

design principles, already embodied in the RCEM in the form of separate goals for “bringing the

mean on target” and “minimizing the variation,” to be utilized when solving the compromise

DSP. Notice that the goal to “maximize the independence” is not included in the compromise

DSP formulation because the intent is to rely solely on robust design principles to design a

suitable product platform which can be scaled into a product family. As the research

hypotheses are addressed and the RCEM is correspondingly modified, the PPCEM is realized;

compare Figure 3.12 to Figure 3.11.

Hypotheses 2 and 3 relate specifically to the metamodeling capabilities of the RCEM

with Hypothesis 2 affecting Processor E and Hypothesis 3 affecting Processor B as shown in

Figure 3.12. The intent is not to replace the current response surface and design of experiments

capabilities of the RCEM; rather, it is to augment the current capabilities with kriging and novel

space filling experimental designs, enabling better approximations of deterministic computer

experiments. The specific posits which support these claims and the research hypotheses are

detailed in the next section.

3.2.2 Supporting Posits for the Research Hypotheses

Several posits supporting the research hypotheses have been revealed during the

literature review in Chapter 2 and in the discussion in Section 1.1. Six

posits support Hypothesis 1 and Sub-Hypotheses 1.1-1.3; they are the following:

Posit 1.1: The RCEM provides an efficient and effective means for developing robust

top-level design specifications for complex systems design.

Posit 1.2: Metamodeling techniques, specifically, design of experiments and the

response surface methodology, can be used to facilitate concept exploration and

optimization, thus increasing a designer’s efficiency.

Posit 1.3: Robust design principles can be used to minimize the sensitivity of a design

to variations in uncontrollable (i.e., noise) factors and/or variations in design

parameters (i.e., control factors).

Posit 1.4: Robust design principles can be used effectively in the early stages of the

design process by modeling the response itself with separate goals for “bringing the

mean on target” and “minimizing the variation.”

Posit 1.5: The compromise DSP is capable of effecting robust design solutions through

separate goals for “bringing the mean on target” and “minimizing variation” of noise

factors and/or variations in the design variables.

Posit 1.6: The market segmentation grid can be used to identify opportunities for

platform leveraging in product family design.

These posits are founded on and substantiated by the following references.

• Posit 1.1 is substantiated in (Chen, 1995) by explicitly testing and verifying the
efficiency and effectiveness of the RCEM for developing robust top-level design
specifications for complex systems design.

• Posit 1.2 is substantiated through the numerous applications of design of experiments


and response surface models that have been used to illustrate the usefulness of
metamodeling techniques in design, facilitating concept exploration and optimization and
increasing a designer’s efficiency; over twenty-five such applications are reviewed in
(Simpson, et al., 1997b). Furthermore, Chen (1995) explicitly tests and verifies this
posit as part of the development of the RCEM.

• Posit 1.3, Posit 1.4, and Posit 1.5 are substantiated by the work in (Chen, et al.,
1996b); Chen and her coauthors describe a general robust design procedure which can
minimize the sensitivity of a design to variations in noise factors and/or design
parameters (Posit 1.3) by having separate goals for “bringing the mean on target” and
“minimizing the variation” (Posit 1.4) of each response in the compromise DSP (Posit
1.5). These posits are further substantiated in (Chen, 1995) as part of the development
of the RCEM.

• Posit 1.6 is substantiated by the discussion in Section 2.2.1, i.e., the market
segmentation grid can be used as an attention directing tool to identify leveraging
opportunities in product platform design (cf., Meyer, 1997; Meyer and Lehnerd, 1997);
identifying these leveraging opportunities provided the initial impetus for developing the
market segmentation grid.

These six posits help to support Hypothesis 1 and Sub-Hypotheses 1.1-1.3. The strategy for

testing and verifying these hypotheses is outlined in Section 3.3. Before the verification strategy

is outlined, however, posits for Hypotheses 2 and 3 are stated.

The following two posits have been identified in support of Hypothesis 2.

Posit 2.1: Building an interpolative kriging model is not predicated on the assumption

of underlying random error in the data.

Posit 2.2: Kriging provides very flexible modeling capabilities based on the wide

variety of spatial correlation functions which can be selected to model the data.

• Posit 2.1 is more fact than assumption; it is substantiated by, e.g., Sacks, et al. (1989);
Koehler and Owen (1996); and Cressie (1993).

• Posit 2.2 is substantiated by many researchers, most notably Sacks, et al. (1989);
Welch, et al. (1992); Cressie (1993); and Barton (1992; 1994).

Posits 2.1 and 2.2 both help to verify Hypothesis 2; the strategy for testing Hypothesis 2 is

outlined in Section 3.3.

Finally, the following two posits support Hypothesis 3:

Posit 3.1: The experimental design conditions of replication, blockability, and rotatability

have no significant meaning when sampling deterministic computer experiments.

• Posit 3.1 is taken from Sacks, et al. (1989) who state that the “classical notions of
experimental blocking, replication, and randomization are irrelevant” for deterministic
computer experiments which contain no random error. Moreover, any experimental
design text (see, e.g., Montgomery, 1991) can verify that replication, blockability, and
rotatability are developed explicitly to handle and account for random (measurement)
error in a physical experiment for which classical experimental designs have been
developed.

Since kriging (using an underlying constant model) is being advocated in this dissertation

for metamodeling deterministic computer experiments, an additional posit in support of

Hypothesis 3 is the following.

Posit 3.2: Since kriging models (with an underlying constant model) rely on the spatial

correlation between the data, confounding and aliasing of main effects and two-factor

interactions have no significant meaning when predicting a response.

• Posit 3.2 is substantiated by Sacks, et al. (1989); Currin, et al. (1991); Welch, et al.
(1990); and Barton (1992; 1994). In physical experimentation, great care is taken to
ensure that aliasing and confounding of main effects and two-factor interactions do not
occur to ensure accurate estimation of coefficients of the polynomial response surface
model (see, e.g., Montgomery, 1991).

The experimental procedure for testing Hypothesis 3 is introduced in the next section along with

the specific strategy for verification and testing of all of the other hypotheses.

3.3 STRATEGY FOR VERIFICATION AND TESTING OF THE RESEARCH


HYPOTHESES

The question of whether testing the proposed hypotheses really answers the research

questions is a difficult one, and it raises the issues of verification and validation as discussed by

Peplinski (1997). According to the Concise Oxford English Dictionary (1982), to validate is to

make valid, to ratify or confirm. The root, valid, is then defined as:

• (of reason, objection, argument, etc.) sound, defensible, well-grounded;

• (law) sound and sufficient, executed with proper formalities (valid contract);

• legally acceptable (valid passport).

With respect to engineering design research, the intent of the validation process is to show the

research and its products to be sound, well grounded on principles of evidence, able to

withstand criticism or objection, powerful, convincing and conclusive, and provable.

In the Merriam-Webster hypertext dictionary under a discussion of the synonyms for

“confirm”, “verify” is used in the context of establishing the correspondence of actual facts or

details with those proposed or guessed at, while “validate” is used in the context of establishing

validity by authoritative affirmation or by factual proof. The boundary between verification and

validation is thus shifting and often open to interpretation; in many cases the two words are used

interchangeably.

In this research, definitions for “verification” and “validation” are applied that, while not

inconsistent with the general uses above, are more specific and tailored for efforts in engineering

design research. In practice, the verification and validation of design methods is much more

than a debugging process. Three primary phases can be identified: firstly, problem justification;

secondly, completeness and consistency checks of the methodology; and thirdly, validation of

performance. (This classification is based on a discussion of the validation of expert systems by

Ignizio (1990).) Verification then refers to the second phase of the process and is focused

primarily on internal consistency and completeness, while validation as the third phase of the

process is focused on consistency with external evidence, ideally through testing the design

method on actual case studies. This validation of performance is perhaps the area most open to

interpretation by peers and experts in the field alike.

If what is to be validated is a closed form mathematical expression or algorithm, it can

be proven, or validated, in a traditional and formal mathematical sense. For example, the case

of showing a solution vector, x, belongs to the set of feasible solutions for a given mathematical

model is a closed problem. Alternatively, if the problem is open, if the subject is dealing with

some “heuristic,” non-precise scheme, the issue of validation becomes one of “correctness

beyond reasonable doubt.” The validation of design methods falls into this category. In

this case it is achieved ultimately by results and usefulness and through a convincing

demonstration to (and an acceptance and ratification by) one’s peers in the field. An analogy

with mathematics and the concept of “necessary” and “sufficient” conditions can be drawn with

respect to the validation of heuristics. Heuristics are aimed toward satisfying the necessary

conditions only; it is not possible to develop an absolute proof for an open problem by

definition.

As anticipated, the operations research literature provides some useful insight into the

validation of heuristics, in the context of heuristic programming. In discussing the nature of

problem solving by heuristic programming Lin (1975) makes the following remarks:

We therefore define a valid heuristic algorithm (to solve a given problem) as any
procedure which will produce a feasible solution acceptable to the design engineer,
within limits of computing time, and consider the problem solved if we can construct
a valid heuristic procedure to solve it. We see that in the domain where a heuristic
algorithm operates, there are elements of technique, experimentation, judgment and
persuasion, as well as compromise.

The issue of justification is addressed by Ignizio, et al. (1972):

Specific heuristic programs are justified, not because they attain an analytically
verifiable optimum solution, but rather because experimentation has proven that they
are useful in practice.

In summary, while noting that judgment is subjective and based on faith, the validation of a

heuristic, and therefore the validation of design methods, can be established if (Smith, 1992):

• the solutions are feasible and acceptable to the design engineer,

• the time and consumed resources are within reasonable limits, and

• the solutions are, above all, useful.

It is against these three issues that a verification and validation strategy is developed.

Meanwhile, verification and testing of the hypotheses have already begun by stating and

substantiating posits in support of each hypothesis. What is tested in the remainder of the

dissertation is the “intellectual leap of faith” required to jump from the posits to the hypotheses.

The relationships between the next four chapters and the individual research hypotheses are

summarized in Table 3.1.

Table 3.1 Relationship Between Hypothesis Testing and Chapters

Hypothesis                                            Chp 4  Chp 5  Chp 6  Chp 7
H1    Product Platform Concept Exploration Method                     X      X
SH1.1 Usefulness of market segmentation grid                          X      X
SH1.2 Robust design of scalable product platform                      X      X
SH1.3 Aggregating product family specifications                       X      X
H2    Utility of kriging for metamodeling
      deterministic computer experiments               X      X             X
H3    Utility of space filling experimental designs    X      X

The relationships listed in Table 3.1 are elaborated further in the next two sections. The

strategy for testing Hypothesis 1 and the related sub-hypotheses is outlined in Section 3.3.1.

The strategy for testing Hypotheses 2 and 3 is outlined in Section 3.3.2.

3.3.1 Testing Hypothesis 1 and Sub-Hypotheses 1.1-1.3

The PPCEM is hypothesized to provide a method for designing a scalable product

platform for a product family. To verify this, two example problems are utilized to demonstrate

the effectiveness of the PPCEM: the design of a family of universal electric motors (Chapter 6)

and the design of a family of General Aviation aircraft (Chapter 7). These two examples have

been chosen to demonstrate different capabilities of the PPCEM.

• Design of a family of universal electric motors is used to demonstrate the following:

- vertical scaling of a product platform (see Section 6.2),

- a parametric scale factor: stack length (see Section 6.2), and

- product family aggregated around the mean and standard deviation of stack length, with
separate goals for “bringing the mean on target” and “minimizing the variation”
employed to design the product platform (see Section 6.3).

Note that metamodels are not employed in this first example because mean and standard

deviation of the responses can be estimated directly from the relevant analysis equations (see

Section 6.3).

• Design of a family of General Aviation aircraft is used to demonstrate the following:

- horizontal scaling of a product platform (see Section 7.1.3),

- a configurational scale factor: number of passengers (see Section 7.2),

- metamodels for mean and standard deviation of the GAA family to facilitate
implementation of robust design and development of the aircraft platform (see
Section 7.3), and

- design capability indices to assess quickly the capability of the family of aircraft to
satisfy the range of requirements (see Section 7.4).

The first example parallels Black & Decker’s vertical scaling strategy for its universal motors

(Lehnerd, 1987) discussed in Section 1.1.1 and is used to provide “proof of concept” that the

PPCEM works. The second example is based on a previous application of the RCEM to

develop a “common and good” set of top-level design specifications for a family of General

Aviation aircraft (see, e.g., Simpson, 1995; Simpson, et al., 1996). The General Aviation

aircraft problem is employed in this work to demonstrate further the effectiveness of the

PPCEM while exploring the problem itself in greater depth.

In each example, the product platform obtained using the PPCEM is compared to (a)

the initial baseline design to show improvement over the starting design, and (b) individually

designed, benchmark products which are aggregated into a product family to provide a

reference to compare against the PPCEM product family (i.e., design the family of products

with the PPCEM and without the PPCEM and discuss the differences in product performance,

computational expense, and usefulness). Product variety tradeoff studies are also performed for

the family of General Aviation aircraft, examining the tradeoff between commonality of the

aircraft and their corresponding performance for the PPCEM family and the individually

designed benchmark group of aircraft.

Testing of the sub-hypotheses related to Hypothesis 1 entails the following:

Testing Sub-Hypothesis 1.1 - The procedure for using the market segmentation grid to
identify scale factors for a product platform is shown in Figure 3.4 and described in
Section 3.1.2. Further verification of this sub-hypothesis requires demonstrating that
this procedure can be used to identify scale factors for a product platform. In the
universal motor example in Chapter 6, the market segmentation grid is used to identify a
vertical leveraging strategy and parametric scaling factor (stack length); in the General
Aviation aircraft example, a horizontal leveraging strategy and configurational scale
factor (number of passengers) are used.

Testing Sub-Hypothesis 1.2 - If appropriate scale factors can be identified for a product
platform (i.e., if Sub-Hypothesis 1.1 is true), then the principles of robust design can be
employed to develop a product platform which has minimum sensitivity to variations in
the scale factor and is thus robust for the product family. Verification of this sub-
hypothesis requires implementation of the approach, and the two examples provide such
a demonstration.

Testing Sub-Hypothesis 1.3 - The procedure for aggregating the individual targets of the
product variants is outlined in Section 3.1.4. As with Sub-Hypothesis 1.1, further
verification of this sub-hypothesis requires demonstrating that this procedure can be
used to model and design a family of products; the approaches outlined in Section 3.1.4
are used in the two examples to illustrate both methods for aggregating product family
specifications.

Verification of these sub-hypotheses also helps to support Hypothesis 1.

3.3.2 Testing Hypotheses 2 and 3

The testing of Hypotheses 2 and 3 occurs predominantly in Chapter 5; however, an

initial feasibility study of the utility of kriging is presented in Chapter 4 to familiarize the reader

with kriging. In Chapter 4, a simple yet realistic engineering example—the design of an

aerospike nozzle—is used to compare the predictive capability of a kriging model against that of

second-order response surfaces. The specific aspect of Hypothesis 2 being tested in Section

4.2 is whether or not kriging, using an underlying constant global model in combination with a

Gaussian correlation function (one of the five being investigated in this dissertation, see Section

2.4.2), is as accurate a predictor of the response values as a full second-order response

surface.

Chapter 5 continues from where Chapter 4 leaves off. To test the utility of kriging and

space filling designs (and thus Hypotheses 2 and 3) a testbed of six engineering test problems is

created to:

• test the effect of different correlation functions on the accuracy of the kriging model for a
wide variety of engineering analysis equations (linear, quadratic, cubic, reciprocal,
exponential, etc.);

• correlate the types of functions (analysis equations) which kriging models can and
cannot approximate accurately; and

• test the effect of eleven different experimental designs on the accuracy of the resulting
kriging model.

Of the eleven experimental designs mentioned in the last bullet, two are classical designs—

central composite and Box-Behnken—and the remaining nine are space filling (see Section

5.1.3 and Appendices B and C for a description of each). In this manner, Hypothesis 3 is

explicitly tested by comparing the accuracy of the kriging model built from a space filling

experimental design against that of a classical experimental design. The first two bullets relate to

testing Hypothesis 2, and the particulars of that portion of the study are described in Section

5.2. A detailed overview of the whole study is offered in Section 5.1.

3.4 A LOOK BACK AND A LOOK AHEAD

The elements of the previous chapters are synthesized in this chapter to meet the

principal objective in this dissertation, namely, to develop the Product Platform Concept

Exploration Method (PPCEM) for designing common scalable product platforms for a product

family, see Figure 3.13. There are five steps to the PPCEM which prescribe how to formulate

the problem and describe how to solve it. As such, the PPCEM provides a Method which

facilitates the synthesis and Exploration of a common Product Platform Concept which can

be scaled into an appropriate family of products.

Testing and verification of the PPCEM is outlined in the previous section and takes

place predominantly in Chapters 6 and 7 of the dissertation. In the meantime, testing of

Hypothesis 2 (which has implications for Step 3 of the PPCEM) commences in the next chapter

wherein an initial feasibility study of the utility of kriging is given. At the end of Chapter 4,

several questions are posed which preface the kriging/DOE study in Chapter 5. The

implications of the results of the study on metamodeling within the PPCEM (Step 3) are

discussed at the end of Chapter 5.

[Figure: block diagram linking the Chapter 4 elements (space filling DoE, kriging, aerospike nozzle design) and the Chapter 3 elements (modeling mean and variance, conceptual noise factors, scalable product platform, market segmentation grid) under the themes of metamodeling, robust design principles, and product family design, resting on the foundations of Decision-Based Design and the Robust Concept Exploration Method.]

Figure 3.13 Pictorial Review of Chapter 3 and Preview of Chapter 4

CHAPTER 4

INITIAL KRIGING FEASIBILITY STUDY:


DESIGN OF AN AEROSPIKE NOZZLE


In this chapter, the process of testing Hypothesis 2 and establishing kriging as a viable

alternative for approximating deterministic computer experiments commences. In particular, the

accuracy of simple kriging models is compared to that of second-order response surface

models through two examples. The first is a simple one-dimensional example in Section 4.1

which is used to familiarize the reader with (a) the process of creating a kriging model and (b)

some of the differences between a kriging model and a second-order response surface. The

second example is a simple, yet realistic, engineering problem—the design of an aerospike

nozzle—which is given in Section 4.2. In the aerospike nozzle example, a comparison of

second-order response surface models and kriging models is conducted by means of error

analysis (Section 4.2.2), visualization (Section 4.2.3), and optimization (Section 4.2.4). These

examples establish that a simple kriging model can compete with a second-order response

surface, thereby setting the stage for an extensive investigation into the utility of kriging and

space filling experimental designs in Chapter 5 to test Hypotheses 2 and 3.

4.1 OVERVIEW OF KRIGING MODELING AND A 1-D EXAMPLE

With the mathematics behind kriging presented in Section 2.4.2, a simple one variable

example best illustrates the difference between the approximation capabilities of a second-order

response surface model and a kriging model. This example comes from Su and Renaud (1996)

who fabricated this example to demonstrate some of the limitations of using second-order

response surface models, see Figure 4.1. The function is an eighth-order polynomial given by

Equation 4.1.

f(x) = Σ_{i=1}^{9} a_i (x − 900)^(i−1)    [4.1]

where the coefficients are:

a1 = -659.23
a2 = 190.22
a3 = -17.802
a4 = 0.82691
a5 = -0.021885
a6 = 0.0003463
a7 = -3.2446 x 10^-6
a8 = 1.6606 x 10^-8
a9 = -3.5757 x 10^-11

A second-order response surface model is fit to five sample points within the region of

the optimum (x ≈ 932) using least squares regression. The five sample points are given in Table

4.1. The original function, the location of the five sample points, and the resulting second-order

response surface are shown in Figure 4.1.

Table 4.1 Sample Points for 1-D Example

No. x x (scaled) y
1 922 0.00 43.976
2 927 0.25 20.143
3 932 0.50 13.963
4 937 0.75 17.330
5 942 1.00 22.698
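Equation 4.1 and the tabulated responses can be checked against one another; a minimal sketch in plain Python:

```python
# Equation 4.1: the eighth-order polynomial of Su and Renaud (1996),
# with the nine coefficients a1..a9 as listed in the text.
coeffs = [-659.23, 190.22, -17.802, 0.82691, -0.021885,
          0.0003463, -3.2446e-6, 1.6606e-8, -3.5757e-11]

def f(x):
    # f(x) = sum over i = 1..9 of a_i * (x - 900)^(i - 1)
    return sum(a * (x - 900) ** i for i, a in enumerate(coeffs))

# Sample locations of Table 4.1; the rounded published coefficients
# reproduce the tabulated responses to within rounding error.
xs = [922, 927, 932, 937, 942]
ys = [f(x) for x in xs]
```

Evaluating f at the five locations recovers the y values of Table 4.1 up to the precision lost in rounding the coefficients.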

[Figure: plot over 915 ≤ x ≤ 965 of the Su and Renaud (1996) function, the five sample points, and the second-order response surface model.]

Figure 4.1 One Variable Example Problem

A kriging model using a constant for the global model and the Gaussian correlation

function of Equation 2.21 is fit to the same five points in order to compare a kriging model

against a second-order response surface model. The process of fitting a kriging model is

described step-by-step to foster a better understanding of what is involved in building a kriging

model.
In order to fit a kriging model to the five sample points, the x values are scaled to [0,1]

as shown in Table 4.1, and the response values are written as a column vector, y^T = {43.976,

20.143, 13.963, 17.330, 22.698}. Because a constant underlying global model is selected for

the kriging model, f is simply a column vector of ones: f^T = {1, 1, 1, 1, 1}. Using a Gaussian

correlation function for the localized portion of the model, Equation 2.21 is particularized for

this example as:

R(xi, xj) = exp(−θ |xi − xj|²)   for i, j = 1, 2, 3, 4, 5; i ≠ j
R(xi, xj) = 1                    for i = j                        [4.2]

The correlation function for each sample point is then computed as follows:

i = 1, j = 1,  R(x1,x1) = 1

i = 1, j = 2,  R(x1,x2) = exp(−θ |0.00 − 0.25|²) = exp(−0.0625θ)

i = 1, j = 3,  R(x1,x3) = exp(−θ |0.00 − 0.50|²) = exp(−0.25θ)

i = 1, j = 4,  R(x1,x4) = exp(−θ |0.00 − 0.75|²) = exp(−0.5625θ)

i = 1, j = 5,  R(x1,x5) = exp(−θ |0.00 − 1.00|²) = exp(−θ)

...

i = 5, j = 5,  R(x5,x5) = 1

The resulting correlation matrix is thus:

        | 1     e^(−0.0625θ)   e^(−0.25θ)     e^(−0.5625θ)   e^(−θ)         |
        |       1              e^(−0.0625θ)   e^(−0.25θ)     e^(−0.5625θ)   |
R =     |                      1              e^(−0.0625θ)   e^(−0.25θ)     |
        | sym                                 1              e^(−0.0625θ)   |
        |                                                    1              |

where θ is the unknown parameter which is used to fit the kriging model to the data.

The constant portion of the global model is now estimated using Equation 2.27 which is

repeated here as Equation 4.3:

β̂ = (f^T R⁻¹ f)⁻¹ f^T R⁻¹ y    [4.3]

and is a function of θ. The value for β̂, once the maximum likelihood estimate for θ is known, is

essentially a weighted average of the sample points based on intersite distances.

In order to find the maximum likelihood estimate for θ, the variance of the sample data from

the underlying constant global model must be estimated from Equation 2.28 which is repeated

here as Equation 4.4:

σ̂² = (y − fβ̂)^T R⁻¹ (y − fβ̂) / n_s    [4.4]

where n_s = 5. The MLE for θ is then found by maximizing Equation 4.5 which is the same as

Equation 2.29 given previously in Section 2.4.2.

Φ(θ) = −[n_s ln(σ̂²) + ln |R|] / 2    [4.5]

A plot of Φ(θ) is given in Figure 4.2. The MLE, or “best” guess, for θ is the point which

maximizes Φ(θ) from Equation 4.5.

[Figure: plot of the MLE objective Φ(θ) over 0 ≤ θ ≤ 20; the maximum occurs at θ* = 6.924.]

Figure 4.2 MLE Objective Function for 1-D Example

In this example, the MLE for θ is 6.924; hence, the “best” kriging model to fit these five

sample points when using a constant underlying global model and the Gaussian correlation

function is when θ = 6.924. Substituting this value into Equation 4.2, the resulting correlation

matrix is thus:

        | 1      0.649    0.177    0.020    0.001 |
        |        1        0.649    0.177    0.020 |
R =     |                 1        0.649    0.177 |
        | sym                      1        0.649 |
        |                                   1     |
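The numeric entries follow directly from the Gaussian correlation function evaluated at θ = 6.924; a quick check:

```python
import math

# Gaussian correlation exp(-theta * d^2) at the four distinct intersite
# distances of the scaled sample points, evaluated at theta = 6.924
theta = 6.924
distances = [0.25, 0.50, 0.75, 1.00]
entries = [math.exp(-theta * d ** 2) for d in distances]
# entries is approximately [0.649, 0.177, 0.020, 0.001]
```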

Now, new points are predicted using the scalar form of Equation 2.25 which is

repeated here as Equation 4.6:


ŷ = β̂ + r^T(x) R⁻¹ (y − fβ̂)    [4.6]

where r^T(x) is the correlation vector of length 5 between an untried value of x and the sampled

data points {0.00, 0.25, 0.50, 0.75, 1.00}. The general form of r^T(x) is given by Equation

2.26 which is particularized for this example as follows:

r^T(x) = { R(x, 0.00), R(x, 0.25), R(x, 0.50), R(x, 0.75), R(x, 1.00) }

where R is the Gaussian correlation function. Notice that the x values for which a new y is to be

predicted are scaled to [0,1]; however, the predicted values of y are the actual values.
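The fitting procedure above (Equations 4.2-4.6) can be collected into a short NumPy sketch; the simple grid search below stands in for any one-dimensional maximizer of the MLE objective:

```python
import numpy as np

# Five scaled sample points and responses from Table 4.1
x = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
y = np.array([43.976, 20.143, 13.963, 17.330, 22.698])
f = np.ones(len(x))  # constant underlying global model
n_s = len(x)

def R_matrix(theta):
    # Gaussian correlation: R_ij = exp(-theta * |x_i - x_j|^2)  (Eq. 4.2)
    d = x[:, None] - x[None, :]
    return np.exp(-theta * d ** 2)

def mle_objective(theta):
    # Concentrated log-likelihood of Equation 4.5
    R = R_matrix(theta)
    Rinv = np.linalg.inv(R)
    beta = (f @ Rinv @ y) / (f @ Rinv @ f)                   # Equation 4.3
    sigma2 = ((y - beta * f) @ Rinv @ (y - beta * f)) / n_s  # Equation 4.4
    return -0.5 * (n_s * np.log(sigma2) + np.log(np.linalg.det(R)))

# Grid search for the MLE of theta (the text reports 6.924)
thetas = np.linspace(0.5, 20.0, 4000)
theta_hat = thetas[np.argmax([mle_objective(t) for t in thetas])]

def predict(x_new):
    # Kriging predictor of Equation 4.6
    R = R_matrix(theta_hat)
    Rinv = np.linalg.inv(R)
    beta = (f @ Rinv @ y) / (f @ Rinv @ f)
    r = np.exp(-theta_hat * (x_new - x) ** 2)  # correlation vector r(x)
    return beta + r @ Rinv @ (y - beta * f)
```

By construction the predictor interpolates: at any of the five sample locations, r(x) equals the corresponding row of R, so the prediction returns the sampled response exactly (to machine precision).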

A plot of the resulting kriging model—using a Gaussian correlation function and an

underlying constant global model—is shown in Figure 4.3 along with the original function, the

second-order response surface, and the five sample points. Immediately evident from the figure

is the fact that the kriging model interpolates the data points, approximating the original function

better than the second-order response surface model which represents a least squares fit. In

this example, the interpolating capability of the kriging model allows it to predict an optimum

which is much closer to the actual optimum.

[Figure: plot over 915 ≤ x ≤ 965 of the Su and Renaud (1996) function, the five sample points, the second-order response surface model, and the kriging model with Gaussian correlation function.]

Figure 4.3 One Variable Example of Response Surface and Kriging Models

It is also important to notice that outside of the design space defined by the sample

points (920 ≤ x ≤ 945), neither model predicts as well as expected. The kriging model returns

to the underlying global model which is a constant in this example. This is typical behavior for a

kriging model; far from the design points, the kriging model returns to the underlying global

model because the influence of the sample points has “exponentially decayed away” outside of

the design space.

Sixteen evenly spaced points (not including the sample points) are taken from within the

sample range (920 ≤ x ≤ 945) to assess the accuracy of the two approximations. The

maximum absolute error, the average absolute error, and the root mean square error (root MSE),

Equations 2.30-2.32, for the 16 validation points are listed in Table 4.2. Both raw values and

percentages of actual values are listed in the table for ease of comparison.

Table 4.2 Error Analysis of One Variable Example

                      Raw Values               As a % of Actual Value
Error               2nd Order    Kriging       2nd Order    Kriging
Measures            RS Model     Model         RS Model     Model
Max ABS(error)        3.134       2.507         18.34%       7.52%
Avg ABS(error)        1.911       0.776         10.32%       3.59%
root MSE              2.155       1.004         11.83%       4.16%
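The three error measures used here and in later comparisons (Equations 2.30-2.32) can be computed with a small helper; the numbers in the usage line are illustrative only, not values from the table:

```python
import math

def error_measures(y_actual, y_pred):
    # Maximum absolute error, average absolute error, and root mean
    # square error over a set of validation points (Equations 2.30-2.32)
    errs = [abs(a - p) for a, p in zip(y_actual, y_pred)]
    n = len(errs)
    return {
        "max_abs": max(errs),
        "avg_abs": sum(errs) / n,
        "root_mse": math.sqrt(sum(e ** 2 for e in errs) / n),
    }

# Illustrative values only
m = error_measures([10.0, 20.0, 30.0], [11.0, 19.0, 30.5])
```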

Based on this error analysis, the kriging model approximates the original function better

because it has a lower root MSE, average absolute error, and maximum absolute error. A

more involved example to compare further the predictive capability of second-order response

surface models and kriging models is presented in the next section.

4.2 AEROSPIKE NOZZLE DESIGN PROBLEM

The design of an aerospike nozzle has been selected as the preliminary test problem for

comparing the predictive capability of response surface and kriging models. The linear

aerospike rocket engine is the propulsion system proposed for the VentureStar reusable launch

vehicle (RLV) which is illustrated in Figure 4.4. The VentureStar RLV is one of the concepts

for the next generation space shuttles (Sweetman, 1996).

Figure 4.4 VentureStar RLV with Aerospike Nozzle (Korte, et al., 1997)

The aerospike rocket engine consists of a rocket thruster, cowl, aerospike nozzle, and

plug base regions as shown in Figure 4.5. The aerospike nozzle is a truncated spike or plug

nozzle that adjusts to the ambient pressure and integrates well with launch vehicles (Rao, 1961).

The flow field structure changes dramatically from low altitude to high altitude on the spike

surface and in the base region (Hagemann, et al., 1996; Mueller and Sule, 1972; Rommel, et

al., 1995). Additional flow is injected in the base region to create an aerodynamic spike

(Iacobellis, et al., 1967) which gives the aerospike nozzle its name and increases the base

pressure and contribution of the base region to the aerospike thrust.

Figure 4.5 Aerospike Components and Flow Field Characteristics
(Korte, et al., 1997)

The analysis of the nozzle involves two disciplines: aerodynamics and structures; there is

an interaction between the structural displacements of the nozzle surface and the pressures

caused by the varying aerodynamic effects. Thrust and nozzle wall pressure calculations are

made using computational fluid dynamics (CFD) analysis and are linked to a structural finite

element analysis model for determining nozzle weight and structural integrity. A mission average

engine specific impulse and engine thrust/weight ratio are calculated and used to estimate vehicle

gross-lift-off-weight (GLOW). The multidisciplinary domain decomposition is illustrated in

Figure 4.6. Korte, et al. (1997) provide additional details on the aerodynamic and structural

analyses for the aerospike nozzle.

Figure 4.6 Multidisciplinary Domain Decomposition for Aerospike Nozzle (Korte, et
al., 1997)

For this study, three design variables are considered: starting (thruster) angle, exit (base)

height, and (base) length as shown in Figure 4.7. The thruster angle (a) is the entrance angle of

the gas from the combustion chamber onto the nozzle surface; the base height (h) and length (l)

refer to the solid portion of the nozzle itself. A quadratic curve defines the aerospike nozzle

surface profile based on the values of thruster angle, height, and length.

Figure 4.7 Nozzle Geometry Design Variables (Korte, et al., 1997)

Bounds for the design variables are set to produce viable nozzle profiles from the

quadratic model based on all combinations of thruster angle, height, and length within the design

space. Second-order response surface models and kriging models are developed and validated

for each response (thrust, weight, and GLOW) in the next section; optimization of the aerospike

nozzle using the response surface and kriging models for different objective functions is

performed in Section 4.2.4.

4.2.1 Metamodeling of the Aerospike Nozzle Problem

The data used to fit the response surface and kriging models is obtained from a 25 point

random orthogonal array (Owen, 1992). The use of these orthogonal arrays in this preliminary

example is based, in part, on the success of the work by Booker, et al. (1995) and the

recommendations of Barton (1994). The actual sample points are illustrated in Figure 4.8 and

are scaled to fit the three dimensional design space defined by the bounds on the thruster angle

(a), base height (h), and length (l).
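Mapping a design from the unit cube onto the variable bounds is mechanical; the sketch below uses hypothetical placeholder bounds, with a random Latin hypercube standing in for Owen's randomized orthogonal array:

```python
import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n, k):
    # n stratified points in [0,1]^k: exactly one point per 1/n-wide
    # stratum in each dimension (a stand-in for the 25-point array)
    cols = [(rng.permutation(n) + rng.uniform(size=n)) / n for _ in range(k)]
    return np.column_stack(cols)

# Hypothetical bounds on thruster angle a, base height h, and length l;
# the actual aerospike bounds are not published
bounds = np.array([[20.0, 30.0],   # a (placeholder)
                   [ 2.0,  4.0],   # h (placeholder)
                   [ 6.0, 10.0]])  # l (placeholder)

unit = latin_hypercube(25, 3)
design = bounds[:, 0] + unit * (bounds[:, 1] - bounds[:, 0])  # scale to bounds
```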

[Figure: 3-D scatter of the 25 sample points along the angle, height, and length axes.]

Figure 4.8 Sample Points of 25 Point Orthogonal Array

Response Surface Models for the Aerospike Nozzle Problem

The response surface models for weight, thrust, and GLOW are fit to the 25 sample points

using ordinary least squares regression techniques and the software package JMP® (SAS,

1995). The resulting second-order response surface models are given in Equations 4.7-4.9.

The equations are scaled against the baseline design to protect the proprietary nature of some of

the data.

Weight = 0.810 − 0.116a + 0.121h + 0.152l + 0.065a² − 0.025ah +
         0.0013h² − 0.0539al − 0.0131hl + 0.0301l²                    [4.7]

Thrust = 0.997 + 0.00031a + 0.0019h + 0.0060l − 0.00175a² +
         0.00125ah − 0.0011h² + 0.00125al − 0.00198hl − 0.00165l²     [4.8]

GLOW = 0.9930 − 0.0270a + 0.0065h − 0.0265l + 0.0307a² −
       0.0163ah + 0.0100h² − 0.0226al + 0.0151hl + 0.0195l²           [4.9]

The R², R²adj, and root MSE values for each of these second-order response surface

models are summarized in Table 4.3. As evidenced by the high R² and R²adj values and low

root MSE values, the second-order polynomial models appear to capture a large portion of the

observed variance.

Table 4.3 Model Diagnostics of Response Surface Models

                    Response
Measure       Weight    Thrust    GLOW
R²            0.986     0.998     0.971
R²adj         0.977     0.996     0.953
root MSE      1.12%     0.01%     0.25%
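Fitting a full second-order model such as Equations 4.7-4.9 is ordinary least squares over a ten-term quadratic basis. A minimal sketch with synthetic placeholder data (the actual nozzle data are proprietary; the dissertation used JMP):

```python
import numpy as np

def quadratic_basis(X):
    # Ten terms of a full second-order model in (a, h, l): intercept,
    # linear, pure quadratic, and two-factor interaction terms
    a, h, l = X.T
    return np.column_stack([np.ones(len(X)), a, h, l,
                            a * a, a * h, h * h, a * l, h * l, l * l])

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(25, 3))                       # 25 scaled points
y = 0.81 - 0.116 * X[:, 0] + 0.121 * X[:, 1] + 0.152 * X[:, 2]  # synthetic

B = quadratic_basis(X)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)               # least squares fit

# R^2 diagnostic of the kind reported in Table 4.3
y_hat = B @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the synthetic response lies exactly in the span of the basis, the fit here is essentially perfect; real data such as the nozzle weight give the R² values of Table 4.3.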

Kriging Models for the Aerospike Nozzle Problem

The kriging models are built from the same 25 sample points used to fit the response surface

models. In this preliminary example, a constant term for the global model and a Gaussian

correlation function, Equation 2.21, for the local departures are chosen.

Initial investigations revealed that a single θ parameter was insufficient to model the data

accurately due to scaling of the design variables (a similar problem is encountered by Giunta,

et al. (1998)). Therefore, a simple 3-D exhaustive grid search with a refinable step size is used to

find the maximum likelihood estimates for the three θ parameters needed to obtain the “best”

kriging model. The resulting maximum likelihood estimates for the three θ parameters for the

weight, thrust, and GLOW models are summarized in Table 4.4; note that these values are for

the scaled sample points.

Table 4.4 Theta Parameters for Kriging Models of Aerospike Nozzle

MLE Values     Weight    Thrust    GLOW
θ_angle        0.548     0.30      3.362
θ_height       1.323     0.50      2.437
θ_length       2.718     0.65      0.537
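With one θ per design variable, the Gaussian correlation becomes a product over the variables and the MLE search runs over a 3-D grid. A rough sketch of such a search, using synthetic stand-in data (the nozzle data are proprietary) and a tiny diagonal nugget added purely for numerical stability:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(11)
X = rng.uniform(0.0, 1.0, size=(15, 3))                  # scaled sample sites
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2]   # synthetic response

def corr_matrix(X, thetas):
    # R_ij = exp(-sum_k theta_k |x_ik - x_jk|^2): one theta per variable
    d2 = (X[:, None, :] - X[None, :, :]) ** 2
    R = np.exp(-d2 @ thetas)
    return R + 1e-8 * np.eye(len(X))  # tiny nugget for numerical stability

def log_likelihood(thetas):
    n = len(y)
    R = corr_matrix(X, np.asarray(thetas))
    Rinv = np.linalg.inv(R)
    one = np.ones(n)
    beta = (one @ Rinv @ y) / (one @ Rinv @ one)   # constant global model
    sigma2 = ((y - beta) @ Rinv @ (y - beta)) / n
    _, logdet = np.linalg.slogdet(R)
    return -0.5 * (n * np.log(sigma2) + logdet)

# Coarse exhaustive grid over (theta_angle, theta_height, theta_length);
# in practice the step size is refined around the best point found
grid = np.linspace(0.5, 8.0, 6)
best = max(product(grid, repeat=3), key=log_likelihood)
```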

With these parameters for the Gaussian correlation function, the kriging models are now

fully specified. A new point is predicted using these θ values and the 25 sample points

in combination with Equations 2.25-2.27. The accuracy of the response surface and kriging

models is examined in the next two sections.

4.2.2 Error Analysis of Response Surface and Kriging Models

An additional 25 randomly selected validation points are used to verify the accuracy of

the response surface and kriging models. Error is defined as the difference between the actual

response from the computer analysis, y(x), and the predicted value, ŷ(x), from the response

surface or kriging model. The maximum absolute error, the average absolute error, and the root

MSE, see Equations 2.30-2.32, for the 25 randomly selected validation points are summarized

in Table 4.5.

Table 4.5 Error Analysis of Aerospike Nozzle Approximation Models

2nd Order Response Surface Models


Error Measure Weight Thrust GLOW
Max. ABS(error) 19.57% 0.032% 3.68%
Avg. ABS(error) 2.44% 0.012% 0.53%
root MSE 4.54% 0.015% 0.90%

Kriging Models (with Constant Term)


Error Measure Weight Thrust GLOW
Max. ABS(error) 17.23% 0.048% 3.43%
Avg. ABS(error) 2.51% 0.012% 0.59%
root MSE 4.37% 0.018% 0.89%

For the weight and GLOW responses, the kriging models have lower maximum

absolute errors and lower root MSEs than the response surface models; however, the average

absolute error is slightly larger for the kriging models. For thrust, the response surface models
are slightly better than the kriging models according to the values in the table; the maximum

absolute error and root MSE are slightly less while the average absolute errors are essentially

the same. It is not surprising that the response surface model predicts thrust better; it has a very

high R² value, 0.998, and low root MSE, 0.01%. It is reassuring to note, however, that the

kriging model, despite using only a constant term for the underlying global model, is only slightly

less accurate than the corresponding response surface model. In summary, it appears that both

models predict each response reasonably well, with the kriging models having a slight advantage

in overall accuracy because of the lower root MSE values. A graphical comparison is

presented in the next section to examine the accuracy of the response surface and kriging

models further.

4.2.3 Graphical Comparison of Response Surface and Kriging Models

In addition to the numerical error analysis of the previous section, a graphical

comparison of the response surface and kriging models is performed to visualize differences in

the two approximation models. In Figure 4.9-4.10, three dimensional contour plots of thrust,

weight, and GLOW as a function of thruster angle, length, and height are given. In each figure,

the same contour levels are used for the response surface and kriging models so that the shapes

of the contours can be compared.

(a) Thrust (b) Weight

Figure 4.9 Response Surface and Kriging Models for Thrust and Weight

In Figure 4.9a, the contours of thrust for the response surface and kriging models are

very similar. As evidenced by the high R² and low root MSE values, the response surface

models should fit the data quite well, and it is reassuring to note that the kriging models resemble

the response surface models even though the underlying global model for the kriging models is

just a constant term. This demonstrates the power and flexibility of the “local” deviations of the

kriging model in general, and of the Gaussian correlation function in particular.

The contours of the response surface and kriging models in Figure 4.9b are also very

similar, but the influence of the localized perturbations caused by the Gaussian correlation

function can be seen in the kriging model for weight. The error analysis from the previous

section indicated that the kriging model for weight is slightly more accurate than the second-

order response surface model which may result from the small non-linear localized variations in

the kriging model.

The general shape of the GLOW contours is the same in Figure 4.10; however, the size

and shape of the different contours, particularly along the length axis, are quite different. The

end view along the length axis in Figure 4.10b further highlights the differences between the two

models. Notice also in Figure 4.10b that the kriging model predicts a minimum GLOW located

within the design space centered around Height = -0.8, Angle = 0, along the axis defined by

0.2 ≤ Length ≤ 0.8; this minimum was verified through additional experiments and is assumed to be

the minimum value for GLOW.

(a) GLOW - Isometric View (b) GLOW - End View

Figure 4.10 Response Surface and Kriging Models for GLOW

From the graphical and error analyses of the response surface and kriging models, it

appears that both models fit the data quite well. In the next section the accuracy of both

metamodels is put to the test. Four optimization problems are formulated and solved using each

of the metamodels and the efficiency and accuracy of the results are compared as a final test of

model adequacy.

4.2.4 Optimization using the Response Surface and Kriging Metamodels

The true test of the accuracy of the response surface and kriging models comes when

the approximations are used during optimization. It is paramount that any approximations used

in optimization prove reasonably accurate, lest they lead the optimization algorithm into regions

of bad designs. Trust Region approaches (see e.g., Lewis, 1996; Rodriguez, et al., 1997) and

the Model Management framework (see e.g., Alexandrov, et al., 1997; Booker, et al., 1995)

have been developed to ensure that optimization algorithms are not led astray by inaccurate

approximations. In this work, however, the focus has been on developing the approximation

models, particularly the kriging models, and not on the optimization itself.

Four different optimization problems are formulated and solved to compare the

accuracy of the response surface and kriging models, see Table 4.6: (1) maximize thrust, (2)

minimize weight, (3) minimize GLOW, and (4) maximize thrust/weight ratio. The first two

objective functions in Table 4.6 represent traditional single objective, single discipline

optimization problems. The second two objective functions are more characteristic of

multidisciplinary optimization; minimizing GLOW or maximizing the thrust/weight ratio requires

tradeoffs between the aerodynamics and structures disciplines. As seen in the table, for each

objective function, constraint limits are placed on the remaining responses; for instance,

constraints are placed on the maximum allowable weight and GLOW and the minimum

allowable thrust/weight ratio when maximizing thrust. However, none of the constraints are

active in any of the final results.

Table 4.6 Aerospike Nozzle Optimization Problem Formulations

Problem #1: Maximize Thrust
Find:
-1 ≤ a ≤ 1
-1 ≤ h ≤ 1
-1 ≤ l ≤ 1
Satisfy:
Weight ≤ Weightmax
GLOW ≤ GLOWmax
Thr/Wt ≥ (Thr/Wt)min
Maximize:
Thrust = f(a,h,l)

Problem #2: Minimize Weight
Find:
-1 ≤ a ≤ 1
-1 ≤ h ≤ 1
-1 ≤ l ≤ 1
Satisfy:
Thrust ≥ Thrustmin
GLOW ≤ GLOWmax
Thr/Wt ≥ (Thr/Wt)min
Minimize:
Weight = f(a,h,l)

Problem #3: Minimize GLOW
Find:
-1 ≤ a ≤ 1
-1 ≤ h ≤ 1
-1 ≤ l ≤ 1
Satisfy:
Thrust ≥ Thrustmin
Weight ≤ Weightmax
Thr/Wt ≥ (Thr/Wt)min
Minimize:
GLOW = f(a,h,l)

Problem #4: Maximize Thr/Wt Ratio
Find:
-1 ≤ a ≤ 1
-1 ≤ h ≤ 1
-1 ≤ l ≤ 1
Satisfy:
Thrust ≥ Thrustmin
Weight ≤ Weightmax
GLOW ≤ GLOWmax
Maximize:
Thr/Wt = f(a,h,l)

Each optimization problem is solved using: (a) the second-order response surface

models and (b) the kriging model approximations for thrust, weight, and GLOW. The

optimization is performed using the Generalized Reduced Gradient (GRG) algorithm in OptdesX

(Parkinson, et al., 1998). Three different starting points are used for each objective function

(the lower, middle, and upper bounds of the design variables) to assess the average number of

analysis and gradient calls necessary to obtain the optimum design within the given design space.

The same parameters (i.e., step size, tolerance, constraint violation, etc.) are used within the

GRG algorithm for each optimization. The optimization results are summarized in Table 4.7.

Design variable and response values have been scaled as a percentage of the baseline design

due to the proprietary nature of some of the data.
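The multistart procedure (three starting points at the lower, middle, and upper bounds, retaining the best feasible result) can be sketched as below. The GRG implementation in OptdesX is not reproduced here; a crude penalized coordinate pattern search with a refinable step stands in for the gradient-based optimizer, and the quadratic "surrogates" are invented stand-ins, since the actual thrust and weight models are proprietary.

```python
import numpy as np

def penalized(obj, cons, x):
    """Objective to maximize, with a quadratic penalty when a constraint
    (written as g(x) >= 0) is violated."""
    p = sum(max(0.0, -g(x)) ** 2 for g in cons)
    return obj(x) - 1e4 * p

def pattern_search_max(obj, cons, bounds, x0, step=0.25, tol=1e-4):
    """Crude coordinate pattern search with step refinement (stand-in for GRG)."""
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    x = np.clip(np.asarray(x0, float), lo, hi)
    fx = penalized(obj, cons, x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                xt = x.copy()
                xt[i] = np.clip(xt[i] + d, lo[i], hi[i])
                ft = penalized(obj, cons, xt)
                if ft > fx:
                    x, fx, improved = xt, ft, True
        if not improved:
            step *= 0.5          # refine the step size
    return x, fx

def multistart(obj, cons, bounds, starts):
    """Run from several starting points and keep the best final design."""
    return max((pattern_search_max(obj, cons, bounds, s) for s in starts),
               key=lambda r: r[1])

# Invented quadratic stand-ins for the (proprietary) thrust/weight surrogates:
thrust = lambda x: 1.0 + 0.01 * x[2] - 0.005 * (x[0] - 0.1) ** 2
weight = lambda x: 0.90 + 0.05 * x[0] ** 2 + 0.04 * x[1]
cons = [lambda x: 0.95 - weight(x)]          # Weight <= Weight_max
bounds = [(-1.0, 1.0)] * 3                   # scaled a, h, l
starts = [[-1.0] * 3, [0.0] * 3, [1.0] * 3]  # lower, middle, upper bounds
x_best, f_best = multistart(thrust, cons, bounds, starts)
```

Counting the calls to `penalized` inside each run gives the analogue of the "average number of analysis calls" reported below.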

Table 4.7 Aerospike Nozzle Optimization Results Using Metamodels

         Avg. # of   Avg. # of
         Analysis    Gradient    Optimum Design   Predicted Optimum   Verified Optimum¹   % Error²
         Calls       Calls
Maximize Thrust
Angle 0.096 Thrust 1.0016 1.0013 0.02%
RS 27 4 Height -0.433 Weight 0.9450 0.9476 -0.27%
Models Length 1.000 Thr/Wt 1.0141 1.0134 0.07%
GLOW 0.9724 0.9759 -0.36%
Angle 0.656 Thrust 1.0016 1.0014 0.02%
Kriging 62 5 Height -0.627 Weight 0.9385 0.9155 2.51%
Models Length 1.000 Thr/Wt 1.0157 1.0210 -0.51%
GLOW 0.9690 0.9683 0.08%
Minimize Weight
Angle 0.800 Thrust 0.9957 0.9957 -0.01%
RS 29 3 Height -1.000 Weight 0.7584 0.7496 1.18%
Models Length -1.000 Thr/Wt 1.0533 1.0555 -0.21%
GLOW 0.9936 0.9906 0.30%
Angle 1.000 Thrust 0.9965 0.9956 0.08%
Kriging 43 4.67 Height -0.873 Weight 0.7725 0.7443 3.79%
Models Length -1.000 Thr/Wt 1.0506 1.0568 -0.59%
GLOW 0.9824 0.9914 -0.90%
Minimize GLOW
Angle 0.616 Thrust 1.0013 0.9957 0.56%
RS 30.67 3.33 Height -1.000 Weight 0.8969 0.8617 4.09%
Models Length 1.000 Thr/Wt 1.0251 1.0286 -0.34%
GLOW 0.9660 1.0146 -4.79%
Angle 0.764 Thrust 1.0009 1.0006 0.04%
Kriging 57.67 6.33 Height -0.833 Weight 0.9060 0.8732 3.75%
Models Length 0.676 Thr/Wt 1.0228 1.0302 -0.72%
GLOW 0.9675 0.9680 -0.05%
Maximize Thrust/Weight Ratio
Angle 0.096 Thrust 1.0016 0.9959 0.57%
RS 27 4 Height -0.433 Weight 0.9450 0.9073 4.16%
Models Length 1.000 Thr/Wt 1.0141 1.0173 -0.31%
GLOW 0.9724 1.0228 -4.93%
Angle 0.656 Thrust 1.0016 1.0014 0.02%
Kriging 62 5 Height -0.627 Weight 0.9385 0.9063 3.56%
Models Length 1.000 Thr/Wt 1.0157 1.0231 -0.73%
GLOW 0.9690 0.9666 0.25%
¹ The predicted optimum value is obtained by using the values of angle, height, and length (from the
optimum design) in the actual analysis code.
² A (+) error term indicates that the model is over-predicting; a (-) indicates that it is under-predicting.

The following observations are made based on the data in Table 4.7.

• Average number of analysis and gradient calls: In general, the response surface
models require fewer analysis and gradient calls to achieve the optimum than the kriging
models do. This can be attributed, in part, to the fact that the response surface models
are simple second-order polynomials; the kriging models are more complex, non-linear
functions as evidenced in Figure 4.9 and Figure 4.10.

• Convergence rates: Although not shown in the table, optimization using the response
surface models tends to converge more quickly than when using kriging models. This
can be inferred from the number of gradient calls which is one to three calls fewer for
the response surface models than the kriging models.

• Optimum designs: The optimum designs obtained from the response surface and
kriging models are essentially the same for each objective function, indicating that both
approximations send the optimization algorithm in the same general direction. The
largest discrepancy is the length for the minimize GLOW optimization; response surface
models predict the optimum GLOW occurs at the upper bound on length (+1) while the
kriging models yield 0.676. This difference is evident from Figure 4.10. Furthermore, it
has been verified through additional experiments that the GLOW value obtained using
the kriging models is the actual minimum.

• Predicted optima and prediction errors: To check the accuracy of the predicted
optima, the optimum design values for angle, height, and length are used as inputs into
the original analysis codes and the percentage difference between the actual and
predicted values is computed. The prediction error is less than 5% for all cases and is
0.5% or less in three quarters of the results, indicating close agreement between the
metamodels and the actual analyses.

4.2.5 Lessons Learned from the Aerospike Nozzle Example

In summary, the response surface and kriging approximations yield comparable results

with minimal difference in predictive capability. It is worth noting that the kriging models

perform as well as the second-order response surface models even though the global

portion of the kriging model is only a constant. This helps to verify Hypothesis 2 which

states that kriging models are a viable metamodeling technique for building

approximations of deterministic computer analyses; however, many questions remain

unanswered.

• Correlation function: A Gaussian correlation function is utilized in this example to fit
the data, but is this the best correlation function of the five being considered in this
dissertation?

• Experimental design: A 25 point random orthogonal array is used in this example to


sample the design space and provide data to fit both the kriging and response surface
models, but is this the best type of experimental design for sampling deterministic
computer codes such as the ones used in this example? Would an alternative
experimental design yield a more accurate predictor?

• Model validation: Because kriging models interpolate the data, R2 values and residual
plots cannot be used to assess model accuracy. In this example an additional 25
validation points are employed to assess accuracy; however, other validation
approaches exist. One such approach which does not require additional validation
points is leave-one-out cross validation (Mitchell and Morris, 1992) mentioned in
Section 2.4.2. Does cross validation provide a sufficient assessment of model
accuracy?
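Leave-one-out cross validation can be sketched generically: the model is re-fit n times, each time holding out one sample point and predicting the response there, so no additional validation points are needed. The `fit` and `predict` arguments below are placeholders for any modeling procedure; a trivial "predict the sample mean" model is used for illustration.

```python
import numpy as np

def loo_cv_rmse(X, y, fit, predict):
    """Leave-one-out cross validation RMSE: re-fit the model n times,
    each time omitting one sample point and predicting it."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    errs = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        model = fit(X[keep], y[keep])
        errs.append(y[i] - predict(model, X[i]))
    return float(np.sqrt(np.mean(np.square(errs))))

# Illustrative use with a trivial "predict the sample mean" model:
fit_mean = lambda X, y: float(np.mean(y))
predict_mean = lambda model, x: model
```

The cost of this check is n extra model fits, which is cheap relative to n extra runs of an expensive analysis code.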

A study of six engineering test problems is set up and performed in the next chapter to answer

these questions. In closing this chapter, a brief look ahead to that study is offered in the next

section.

4.3 A LOOK BACK AND A LOOK AHEAD

In an attempt to determine the types of applications for which kriging is useful, several

engineering examples are introduced in the next chapter to serve as test problems to establish

the utility of kriging and verify Hypothesis 2. In addition to testing Hypothesis 2, several

classical and space filling experimental designs are compared and contrasted in an effort to test

Hypothesis 3 to determine if space filling experimental designs are better suited for building

metamodels of deterministic computer experiments.

CHAPTER 5

THE UTILITY OF KRIGING AND SPACE FILLING


EXPERIMENTAL DESIGNS


In this chapter, Hypotheses 2 and 3 are tested explicitly, verifying the utility of kriging

and space filling experimental designs for building metamodels of deterministic computer

analyses. A pictorial overview and specific details of the study are given in Section 5.1. Six

engineering examples, introduced in Section 5.1.1, provide a testbed of problems to

benchmark kriging and space filling designs and verify Hypotheses 2 and 3. In Sections

5.1.2, 5.1.3, and 5.1.4, the factors, experimental designs, and responses in the study are

explained. Analysis of variance of the data and response correlation are presented in the

precursory data analysis in Section 5.2. Section 5.3 contains the results of testing Hypothesis 2

and a discussion of the ramifications of the results; the results and discussion regarding

Hypothesis 3 follow in Section 5.4. A summary of the study and its relevance to the

development of the PPCEM is offered in Section 5.5.

5.1 OVERVIEW OF KRIGING/DOE STUDY AND PROBLEM TESTBED

Consider the following scenario. Assume there is a “black-box” simulation which is

expensive to run and you desire to replace it with a metamodel, a kriging one in particular.

Assume that there are k design variables which you wish to include in the metamodel. What is

the best type of experimental design to use to query the simulation to generate data to build

an accurate kriging metamodel? How many sample points should you use? What type of

correlation function should you use to obtain the best predictor? Lastly, how can you best

validate the metamodel once you have constructed it?

The objective in this study is to answer precisely these questions. Given a series of test

problems (i.e., analyses), determine the best experimental design, sample size, and correlation

function to generate the most accurate model and determine how best to validate it. Toward

this end, a testbed of six engineering examples—the design of a three-bar truss, a two-bar truss,

a spring, a two-member frame, a welded beam, and a pressure vessel as introduced in Section

5.1.1 and Appendix D—has been created to test the utility of kriging and space filling

experimental designs. A pictorial overview of the kriging/DOE study is given in Figure 5.1; the

figure is viewed from top to bottom.

Contained in these six engineering examples are a total of 26 different types of equations

which are used to test the utility of kriging at metamodeling deterministic computer analyses. If a

kriging metamodel yields an accurate approximation of all, or a majority, of these equations,

then Hypothesis 2 is considered to be verified. Moreover, for each example, five correlation

functions are used to construct five different kriging metamodels in an effort to determine which

is the best correlation on average.

Meanwhile, for each example several classical and space filling experimental designs are

used to construct each kriging metamodel. By analyzing the accuracy of the resulting kriging

metamodel, the experimental design which yields the most accurate predictor, on average, can

be determined. In this regard, Hypothesis 3 is tested explicitly to verify that space filling

experimental designs yield more accurate kriging metamodels than do classical experimental

designs. And while Hypothesis 2 and 3 are being tested, the usefulness of cross validation root

mean square error as a measure of accuracy of a kriging metamodel is investigated.

[Figure 5.1 is a flowchart, read from top to bottom: the six testbed problems (§5.1.1)—the three-bar truss, two-bar truss, spring, two-member frame, welded beam, and pressure vessel—supply Equations 1-26 (EQN). The 15 classical and space filling experimental designs (DOE, §5.1.2) are crossed with sample sizes (NSAMP, §5.1.3) of 7-14 points for the two variable problems, 13-25 for the three variable problems, and 20-41 for the four variable problems, and with the five correlation functions (CORFCN): exponential, Gaussian, cubic, linear Matérn, and quadratic Matérn. For each combination, the error measures (§5.1.4)—max. abs. error, RMSE, and CVRMSE, each normalized by the sample range—are computed. After the precursory data analysis and analysis of variance (§5.2), Hypothesis 2 is tested by isolating CORFCN and EQN (§5.3), and Hypothesis 3 by isolating DOE and NSAMP (§5.4).]

Figure 5.1 Pictorial Overview of Kriging/DOE Study

In total, 7905 kriging models are constructed: one for each correlation function

(CORFCN) for each experimental design (DOE) for each sample size (NSAMP) for each

equation (EQN) in each problem. As an example, the arrows in Figure 5.1 trace Equation 7 in

the two-bar truss problem. For EQN 7, there are 15 possible experimental design (DOE)

choices; in this case, the minimax Latin hypercube design (mnmxl) is being considered. For this

design, there are several possible choices for NSAMP, ranging from 7-14, because this is a two

variable problem. Using 10 sample points as an example, at the next level there are five

correlation functions (CORFCN) which can be used to build a kriging model; the Gaussian

correlation function is highlighted in this example. Finally, three measures of model accuracy are

computed for the kriging model resulting from this particular combination of EQN, DOE,

NSAMP, and CORFCN: max. abs. error, root mean square error (RMSE), and cross

validation root mean square error (CVRMSE) which are “normalized” by the corresponding

sample range so that responses of different magnitude can be compared directly.
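The crossing of factor levels can be enumerated directly, as sketched below for the two-variable problems only. The level sets here are illustrative; the real study is unbalanced (not every DOE admits every NSAMP), which is why the overall total is 7905 models rather than a clean cross product.

```python
from itertools import product

CORFCN = ["exponential", "gaussian", "cubic", "matern_linear", "matern_quadratic"]
DOE = ["ccdes", "ccfac", "ccins", "ccdaf", "cciaf", "mnmxl", "mxmnl",
       "rnlhd", "oplhd", "oalhd", "yelhd", "hamss", "unifd"]  # illustrative subset
NSAMP = range(7, 15)      # 7-14 sample points for the two variable problems
EQN = range(1, 8)         # Equations 1-7 come from the two truss problems

# One kriging model is built per (EQN, DOE, NSAMP, CORFCN) combination:
runs = list(product(EQN, DOE, NSAMP, CORFCN))

def normalize(error, y_sample):
    """Divide an error measure by the sample range so that responses of
    different magnitude can be compared directly."""
    return error / (max(y_sample) - min(y_sample))
```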

After a precursory analysis of the data in Section 5.2, these three error measures of

model accuracy are used to test Hypotheses 2 and 3 explicitly. As shown in Figure 5.1:

Hypothesis 2 is tested in Section 5.3 by isolating the effects of correlation function


(CORFCN) and equation (EQN) on the accuracy of the resulting kriging model as
assessed through the error measures.

Hypothesis 3 is tested in Section 5.4 by isolating the effects of experimental design (DOE)
and sample size (NSAMP) on the error measures of accuracy of the resulting kriging
model.

The test problems used in this study are introduced next.

5.1.1 Overview of Testbed Problems

Six test problems were selected from the literature to provide a testbed for assessing the

utility of kriging and several different space filling experimental designs. These problems are not

meant to be all inclusive; rather, they are taken as representative of typical analyses encountered

in mechanical design. The analysis of these problems is simple enough not to warrant building

kriging models of the responses; however, these problems have been selected because:

a. they have been well studied and the behavior of the system and the underlying analysis
equations are known,

b. the corresponding “region of interest” is known in each problem, and

c. they have been used by other researchers to test their own metamodeling strategies and
algorithms.

Furthermore, the optimum solution for each problem is also known; however, a more extensive

error analysis is employed to assess the accuracy of the kriging models (see Section 5.1.4).

In the following sections, each example is described along with its pertinent constraints,

design variable bounds, and the objective function; note that a kriging model is constructed for

each constraint and objective function in each problem. The values of the parameters in the

equations (i.e., all of the letters and symbols which are not explicitly stated as being design

variables) are given in the referenced sections of Appendix D which contain the complete

description of each problem.

Two Variable Problems

The two variable problems investigated are the design of a two-bar truss (Figure 5.2) and of a

symmetric three-bar truss (Figure 5.3). The problem formulations (objective functions,

constraints, and bounds) follow each figure. A complete description of the two-bar and three-

bar examples is given in Appendix D, Sections D.1 and D.2, respectively.

Figure 5.2 Two-Bar Truss

Figure 5.3 Three-Bar Truss

Two-Bar Truss (Figure 5.2):

Find:
• Tube diameter, D
• Height of the truss, H
Satisfy:
• Constraints:
g1(x) = π²E(D² + T²)/[8(B² + H²)] - P(B² + H²)^(1/2)/(πTDH) ≥ 0
g2(x) = σy - P(B² + H²)^(1/2)/(πTDH) ≥ 0
• Bounds:
0.5 in. ≤ D ≤ 5.0 in.
5.0 in. ≤ H ≤ 50 in.
Minimize:
Weight, W(x) = 2ρπDT(B² + H²)^(1/2)
For more information:
• see, e.g., (Schmit, 1981)
• see Appendix D, Section D.1

Three-Bar Truss (Figure 5.3):

Find:
• Cross section area, A1 = A3
• Cross section area, A2
Satisfy:
• Constraints:
g1(x) = 20,000 - 20,000(A2 + √2·A1)/(2A1A2 + √2·A1²) ≥ 0
g2(x) = 20,000 - 20,000·√2·A1/(2A1A2 + √2·A1²) ≥ 0
g3(x) = 15,000 - 20,000·A2/(2A1A2 + √2·A1²) ≥ 0
• Bounds:
0.5 in² ≤ A1 (= A3) ≤ 1.2 in²
0.0 in² ≤ A2 ≤ 4.0 in²
Minimize:
Weight, W(x) = ρN(2√2·A1 + A2)
For more information:
• see, e.g., (Schmit, 1981)
• see Appendix D, Section D.2

Three Variable Problems

The three variable problems are the design of a compression spring (Figure 5.4) and two-

member frame (Figure 5.5). Complete descriptions of these problems are given in Appendix D,

Sections D.3 and D.4, respectively.

Figure 5.4 Compression Spring

Figure 5.5 Two-Member Frame

Compression Spring (Figure 5.4):

Find:
• Number of active coils, N
• Mean coil diameter, D
• Wire diameter, d
Satisfy:
• Constraints:
g1(x) = S - 8CfFmaxD/(πd³) ≥ 0
g2(x) = lmax - lf ≥ 0
g3(x) = δpm - δp ≥ 0
g4(x) = (Fmax - Fload)/K - δw ≥ 0
g5(x) = Dmax - D - d ≥ 0
g6(x) = C - 3 ≥ 0
• Bounds:
3 ≤ N ≤ 30
1.0 in. ≤ D ≤ 6.0 in.
0.2 in. ≤ d ≤ 0.5 in.
Minimize:
Volume, V(x) = π²Dd²(N + 2)/4
For more information:
• see, e.g., (Siddall, 1982)
• see Appendix D, Section D.3

Two-Member Frame (Figure 5.5):

Find:
• Frame width, d
• Frame height, h
• Frame wall thickness, t
Satisfy:
• Constraints:
g1(x) = (σ1² + 3τ²)^(1/2) ≤ 40,000
g2(x) = (σ2² + 3τ²)^(1/2) ≤ 40,000
• Bounds:
2.5 in. ≤ d ≤ 10 in.
2.5 in. ≤ h ≤ 10 in.
0.1 in. ≤ t ≤ 1.0 in.
Minimize:
Volume, V(x) = 2L(2dt + 2ht - 4t²)
For more information:
• see (Arora, 1989)
• see Appendix D, Section D.4

Four Variable Problems

The four variable problems being investigated are the design of a welded beam, Figure 5.6, and

design of a pressure vessel, Figure 5.7. The problem formulations follow each figure.

Complete descriptions are given in Appendix D, Sections D.5 and D.6, respectively.

Figure 5.6 Welded Beam

Figure 5.7 Pressure Vessel

Welded Beam (Figure 5.6):

Find:
• Weld height, h
• Weld length, l
• Bar thickness, t
• Bar width, b
Satisfy:
• Constraints:
g1(x) = [(τ')² + 2τ'τ''cosθ + (τ'')²]^(1/2) ≤ τd
g2(x) = 6FL/(bt²) ≤ 30,000
g3(x) = [4.013(EIα)^(1/2)/L²]·[1 - (t/(2L))(EI/α)^(1/2)] ≥ 6000
g4(x) = 4FL³/(Et³b) ≤ 0.25
• Bounds:
0.125 in. ≤ h ≤ 2.0 in.
2.0 in. ≤ l ≤ 10.0 in.
2.0 in. ≤ t ≤ 10.0 in.
0.125 in. ≤ b ≤ 2.0 in.
Minimize:
F(x) = (1 + c3)h²l + c4tb(L + l)
For more information:
• see (Ragsdell and Phillips, 1976)
• see Appendix D, Section D.5

Pressure Vessel (Figure 5.7):

Find:
• Cylinder radius, R
• Cylinder length, L
• Shell thickness, Ts
• Spherical head thickness, Th
Satisfy:
• Constraints:
g1(x) = Ts - 0.0193R ≥ 0
g2(x) = Th - 0.00954R ≥ 0
g3(x) = πR²L + (4/3)πR³ - 1.296E6 ≥ 0
• Bounds:
25 in. ≤ R ≤ 150 in.
25 in. ≤ L ≤ 240 in.
1.0 in. ≤ Ts ≤ 1.375 in.
0.625 in. ≤ Th ≤ 1.0 in.
Minimize:
F(x) = 0.6224TsRL + 1.7781ThR² + 3.1661Ts²L + 19.84Ts²R
For more information:
• see, e.g., (Sandgren, 1990)
• see Appendix D, Section D.6

Taken together, these six problems provide a wide variety of functions to approximate

since a kriging model is built for each objective function and constraint for each problem. In

total, there are 26 different equations contained in these six problems, ranging from simple linear

functions to reciprocal square roots; some equations even require the inversion of a finite

element matrix (see Section D.4 for the analysis of the two-member frame). With these six

problems as the testbed for verifying Hypotheses 2 and 3, the factors (and corresponding levels

of interest) being studied are explained next.

5.1.2 Factors and Levels for Kriging/DOE Experiment

The three basic factors considered in this experiment are listed in Table 5.1: CORFCN

refers to the correlation function used in the kriging model, EQN refers to the equation being

approximated, and DOE refers to the type of experimental design being utilized to sample the

equation to provide data to fit the model. The corresponding levels for each factor also are

listed in the table and are explained as follows.

• CORFCN has 5 levels of interest based on the correlation functions being studied (refer
to Table 2.1, Equations 2.20-2.24); the correlation function associated with each level
is given in the first two columns of Table 5.1.

• EQN has 26 levels based on the total number of equations (i.e., objective functions and
constraints) in the six test problems; when showing the levels for EQN, the objective
function for each problem is singled out from the constraints, see the middle two
columns of Table 5.1.

• DOE has 15 levels based on all of the classical and space filling experimental designs
introduced in Section 2.4.3 for investigation; the acronym and corresponding name of
each design are listed in the last two columns of Table 5.1.

Every effort is made to ensure that the observations of each factor level in the

experiment are properly balanced; however, some factors (and levels) are beyond control.

Each level of CORFCN given in Table 5.1 occurs an equal number of times in each problem;

hence, it is easy to examine the effect of the different correlation functions on the overall

accuracy of the kriging model (see Section 5.3.1). The factor EQN is used to isolate the

functions being considered and is utilized in Section 5.3.2 when the accuracy of the kriging

model is examined for each pair of problems. As such, both of these factors are relatively well-

balanced in the design. The levels of DOE, however, are not well-balanced because the fifteen

levels for DOE do not appear equally in each problem; for example, there is no Box-Behnken

experimental design for two variable problems.

Table 5.1 Factors and Levels for Kriging/DOE Study

CORFCN (correlation function):
1 - Exponential (Eqn. 2.20)
2 - Gaussian (Eqn. 2.21)
3 - Cubic (Eqn. 2.22)
4 - Linear Matérn (Eqn. 2.23)
5 - Quadratic Matérn (Eqn. 2.24)

EQN (equation):
Two Variable Problems
  Three-Bar Truss: 1 - W(x); 2-4 - g1(x)-g3(x)
  Two-Bar Truss: 5 - W(x); 6-7 - g1(x)-g2(x)
Three Variable Problems
  Spring: 8 - V(x); 9-14 - g1(x)-g6(x)
  Two-Member Frame: 15 - V(x); 16-17 - g1(x)-g2(x)
Four Variable Problems
  Welded Beam: 18 - F(x); 19-22 - g1(x)-g4(x)
  Pressure Vessel: 23 - F(x); 24-26 - g1(x)-g3(x)

DOE (experimental design):
Classical DOE
  bkbnk - Box-Behnken; ccdes - CCD; ccfac - CCF; ccins - CCI; ccdaf - CCD + CCF; cciaf - CCI + CCF
Space Filling DOE
  hamss - Hammersley Sequence; mnmxl - Minimax Latin hypercube; mxmnl - Maximin Latin hypercube; oalhd - Orthogonal Array-Based Latin hypercube; oarry - Orthogonal Array; oplhd - Optimal Latin hypercube; rnlhd - Random Latin hypercube; unifd - Uniform Design; yelhd - Orthogonal Latin hypercube

To make things even more complicated, the number of sample points within each design

depends on the type of DOE considered and the number of variables in the problem. For

instance, a CCD for the two variable problems has 22 + 2•2 + 1 = 9 points while a random

Latin hypercube can have any number of sample points. Hence, great care must be taken when

analyzing the effects of DOE because of the biasing which occurs due to unbalanced sample

sizes in the experiment. This is discussed in more detail in the next section which contains a

complete listing of which experimental designs (and corresponding sample sizes) are used in

each of the two, three, and four variable problems.

5.1.3 Experimental Design Choices for Test Problems

A very important factor in the selection of an experimental design is the number of

points used. How is the number of points to be determined for a given design? For the

two types of classical designs utilized in this dissertation—CCDs and Box-Behnken designs—

the number of points essentially is fixed once the number of factors is specified. Fractional

factorial designs within a CCD are not considered for these problems because they contain so

few variables. Unlike the CCDs and Box-Behnken designs, for most space filling designs the

number of points is not dictated by the number of factors and can be any number within reason.
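For instance, a random Latin hypercube, one of the space filling designs in the study, accommodates any number of points n: each variable's range is cut into n equal bins, and each bin is sampled exactly once. A minimal sketch on the unit cube (illustrative only, not the generator used in this work):

```python
import random

def random_latin_hypercube(n, k, seed=None):
    """n-point random Latin hypercube in [0, 1]^k: each of the k variables is
    stratified into n equal bins, and each bin is sampled exactly once."""
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        bins = list(range(n))
        rng.shuffle(bins)                 # random pairing of bins across columns
        cols.append([(b + rng.random()) / n for b in bins])
    return [list(pt) for pt in zip(*cols)]
```

Because the pairing of bins across columns is random, repeated instantiations of the same design differ, which is why the randomized designs in this study are instantiated several times and their error measures averaged.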

Therefore, in order to determine the number of points used in a space filling design, a

CCD with the same number of factors is used to determine the baseline number of points, e.g.,

for three factors, a CCD requires 15 points, and the number of points used in all space filling

designs for three factors would be selected to be as close to 15 as possible. However, because

some space filling designs can have a variable number of sample points, a variety of sample sizes

for each design are considered in order to see if fewer or slightly more points provides an

improved fit. As a guideline, an upper bound on the number of points of about 1.5 times the

number prescribed by the baseline CCD is employed. This factor of 1.5 is primarily based on

the recommendations of Giunta, et al. (1994) who found that for small problems (i.e., fewer

than about five factors) the variance of a second-order response surface model leveled off when

the number of sample points was about 1.5 times the number of terms in the polynomial model.

This number serves as a guideline in this work despite the fact that kriging models do not

necessarily use a second-order polynomial.
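The guideline translates into a simple rule of thumb, sketched below; note that the actual NSAMP ranges used in the study (see Figure 5.1) extend somewhat beyond this rule.

```python
def ccd_points(k):
    """Full-factorial central composite design size for k factors:
    2^k corner points + 2k axial points + 1 center point."""
    return 2 ** k + 2 * k + 1

def nsamp_upper_bound(k):
    """Upper bound on the sample size: about 1.5x the baseline CCD size,
    after the recommendation of Giunta, et al. (1994)."""
    return int(1.5 * ccd_points(k))
```

For example, a three-factor CCD has 15 points, so the space filling designs for three factors would use roughly 15 points, up to a cap of about 22.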

How important is the number of design points when picking an experimental

design? The answer is very important. In order to compare the utility of different experimental

designs properly, it is important to use the same number of sample points because a design with

more sample points is expected to provide more information, possibly resulting in a more

accurate model. Therefore, when designs do not have the same number of points, it is

impossible to determine if an improvement in model accuracy is from the design itself (i.e.,

spacing of the points in the design space) or from the number of sample points. However, in

some cases it is extremely difficult, if not impossible, to have two different designs which have

the same number of points. For instance, a three factor CCD has 15 points, a three factor Box-

Behnken has 13 (since replicates are not used), and a strength 2 randomized OA has either 9,

16, or 25 points since it is restricted to q² points where q is the number of levels and is

restricted to be a prime power. Despite these difficulties, every effort is made to make the

sample sizes overlap as much as possible from one design to the next. The experimental designs

and corresponding sample sizes for each pair of problems are described in the following

sections.

Two Variable Problems

For the two variable problems (the two-bar and three-bar trusses), nine types of experimental

designs are considered, see Table 5.2. Of these nine types of designs, there are 51 unique

designs because each design which has a different number of points is considered a unique

design. For instance, a seven point Latin hypercube and an eight point Latin hypercube are

unique designs because they have different sample sizes even though they are both Latin

hypercube designs.

Table 5.2 Experimental Designs for Two Factor Test Problems

    Type of Design                                    # Points
    CCD, CCF, CCI                                     9
    CCI + CCF                                         13
    CCD + CCF                                         13
    Box-Behnken (bxbnk)                               NA
    Randomized Orthogonal Array (oarry)†              NA
    Minimax Latin Hypercube (mnmxl)                   7-14
    Maximin Latin Hypercube (mxmnl)                   7-14
    Optimal Latin Hypercube (oplhd)                   7-14
    Random Latin Hypercubes (rnlhd)†                  7-14
    Orthogonal Array-Based Latin Hypercubes (oalhd)†  9
    Orthogonal Latin Hypercubes (yelhd)†              9
    Hammersley Sampling Sequence (hamss)              7-14
    Uniform Designs (unifd)                           7-14

    † Each design based on a random permutation is instantiated three times, and the
    resulting error measures are averaged over all three randomizations for that design.

Some designs are based on random permutations of levels as indicated by the

superscript (†) in the table. To minimize the effects of this randomness, each of these designs is

randomized three times, and the resulting error measures are averaged over all three

randomizations for that specific design to prevent a design from yielding a poor model because

of its randomly chosen levels. As a result, there are a total of 71 designs fit for each of

the two variable problems. Finally, notice that neither Box-Behnken designs nor orthogonal

arrays are included in these problems; there is no Box-Behnken design for two factors, and a

nine point orthogonal array for two factors is a 3 x 3 grid, the same as a face-centered central

composite design (CCF).
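Several of the designs above are Latin hypercubes. The following is a minimal sketch of how a random Latin hypercube (rnlhd) can be generated; it is not the implementation used in this work, and the function name is illustrative:

```python
import random

def random_latin_hypercube(n, k, seed=None):
    """Sketch of a random Latin hypercube: n points in k dimensions.

    Each dimension is divided into n equal bins, each bin is sampled exactly
    once, and the bin ordering is randomly permuted per dimension -- the
    randomness that is averaged over three instantiations in this study.
    """
    rng = random.Random(seed)
    columns = []
    for _ in range(k):
        perm = list(range(n))
        rng.shuffle(perm)
        # Center each point in its bin on [0, 1].
        columns.append([(p + 0.5) / n for p in perm])
    return list(zip(*columns))  # n points, each a k-tuple
```

The minimax, maximin, and optimal Latin hypercubes differ only in how the permutations are chosen (by optimizing a distance or uniformity criterion rather than at random).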

Three Variable Problems

Eleven types of experimental designs for a total of 63 unique designs are considered (as shown

in Table 5.3) for the three variable spring and two-member frame test problems. In all, there

are 92 total designs constructed for each three variable problem once the three randomizations

of the Latin hypercube, orthogonal Latin hypercube, orthogonal array, and orthogonal array-

based Latin hypercube designs are added to the study.

Table 5.3 Experimental Designs for Three Variable Test Problems

    Type of Design                                    # Points
    CCD, CCF, CCI                                     15
    CCI + CCF                                         21
    CCD + CCF                                         23
    Box-Behnken (bxbnk)                               13
    Randomized Orthogonal Array (oarry)†              16, 25
    Minimax Latin Hypercube (mnmxl)                   13-19, 21, 23, 25
    Maximin Latin Hypercube (mxmnl)                   13-15, 17, 19
    Optimal Latin Hypercube (oplhd)                   13-19, 21, 23, 25
    Random Latin Hypercubes (rnlhd)†                  13-19, 21, 23, 25
    Orthogonal Array-Based Latin Hypercubes (oalhd)†  16, 25
    Orthogonal Latin Hypercubes (yelhd)†              17
    Hammersley Sampling Sequence (hamss)              13-19, 21, 23, 25
    Uniform Designs (unifd)                           13, 15, 17, 19, 21, 23, 25

    † Each design based on a random permutation is instantiated three times, and the
    resulting error measures are averaged over all three randomizations for that design.

Notice that a 13 point Box-Behnken design is included in the set of designs for the three

variable problems along with two randomized orthogonal array designs: a 16 point OA and a 25

point OA. One thing to note about these designs (and the orthogonal array-based Latin

hypercubes as well) is that the number of points in the design is limited to q² sample points,

where q is a power of a prime number. Thus, only the q = 4 (16 point) and q = 5 (25 point) OAs are considered

for the three variable problems in order to maintain a fairly consistent number of points between

the different designs.
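The q² restriction on strength 2 orthogonal array run sizes can be enumerated directly. This is an illustrative sketch (the function names are not from the dissertation):

```python
def is_prime_power(q):
    """True if q = p^m for some prime p and integer m >= 1."""
    if q < 2:
        return False
    for p in range(2, q + 1):
        if q % p == 0:
            # p is the smallest factor; q must be a pure power of it.
            while q % p == 0:
                q //= p
            return q == 1
    return False

def strength2_oa_sizes(max_points):
    """Admissible strength-2 OA run sizes q**2 with q a prime power."""
    return [q * q for q in range(2, int(max_points ** 0.5) + 1)
            if is_prime_power(q)]
```

Up to 25 points this yields run sizes 4, 9, 16, and 25, which is why only the 9, 16, and 25 point OAs appear in the two and three variable studies.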

Four Variable Problems

For the two four variable problems (the pressure vessel and the welded beam), 66 unique designs from eleven types of experimental

designs are employed (see Table 5.4). Including the repetitions of the designs with random

permutations, a total of 102 designs are examined for each of these problems.

Table 5.4 Experimental Designs for Four Variable Test Problems

    Type of Design                                    # Points
    CCD, CCF, CCI                                     25
    CCI + CCF                                         33
    CCD + CCF                                         41
    Box-Behnken (bxbnk)                               25
    Randomized Orthogonal Array (oarry)†              16, 25, 32
    Minimax Latin Hypercube (mnmxl)                   20-29, 31, 33
    Maximin Latin Hypercube (mxmnl)                   22, 25, 26, 28
    Optimal Latin Hypercube (oplhd)                   20, 22, 25, 26, 28, 29, 31, 33
    Random Latin Hypercubes (rnlhd)†                  20-29, 31, 33
    Orthogonal Array-Based Latin Hypercubes (oalhd)†  16, 25
    Orthogonal Latin Hypercubes (yelhd)†              33
    Hammersley Sampling Sequence (hamss)              20-29, 31, 33
    Uniform Designs (unifd)                           21, 23, 25, 27, 29, 31

    † Each design based on a random permutation is instantiated three times, and the
    resulting error measures are averaged over all three randomizations for that design.

Notice in Table 5.4 that only four maximin Latin hypercubes are considered: 22, 25, 26,

and 28 point designs. This is because the simulated annealing (Morris and Mitchell, 1995) used

to create these designs is not very robust in generating large four factor designs, and large four

factor designs are not listed in (Morris and Mitchell, 1992). In addition, three orthogonal arrays

are employed: 16, 25, and 32 point designs. The 16 and 25 point designs are strength 2

designs; the 32 point OA design is a strength 3 design with 2q³ points and levels 0, ..., q-1. So

while there are more points with the 32 point OA design than in the 25 point OA design, the

number of unique factor levels being considered in the 32 point OA design is actually less than in

the 25 point OA design.

In summary, the experimental designs and corresponding levels listed in Table 5.2

through Table 5.4 are used to generate data to build kriging models for each equation in each

problem. Each kriging model is cross validated, and its accuracy is further assessed using a

set of validation points which is independent of the

design and number of samples. The end result is three measures of model accuracy which

provide the responses for this study as explained in the next section.

5.1.4 Responses for the Kriging/DOE Experiment

As shown in Figure 5.1, there are three responses in the kriging/DOE study:

1. cross validation root mean square error (CVRMSE), see Equation 2.33 in Section
2.4.2, of the kriging model;

2. maximum absolute error (MAX), see Equation 2.30 in Section 2.4.2, of the kriging
model; and

3. root mean square error (RMSE), see Equation 2.32 in Section 2.4.2 of the kriging
model.

The CVRMSE of the kriging model is based on the leave-one-out cross validation procedure

described in Section 2.4.2; it utilizes the sample data to validate the model and does not require

additional data for validation. As such, it is uncertain whether CVRMSE alone provides a

sufficient assessment of model adequacy; therefore, three sets of validation points are used to compute

MAX and RMSE. The average absolute error measure, Equation 2.27, is not included in this

study since it correlates well with RMSE and provides little additional information beyond that

obtained from analysis of RMSE. The number of validation points used in each problem is

listed in Table 5.5: 1000, 1500, and 2000 validation points for the two, three, and four variable

problems, respectively.

Table 5.5 Additional Random Points Used to Assess Model Accuracy

    Test Problem   Name of Problem                       # Points
    2 variables    Two-bar truss & Sym. three-bar truss  1000
    3 variables    Two-member frame & Helical spring     1500
    4 variables    Pressure vessel & Welded beam         2000
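The leave-one-out cross validation underlying CVRMSE can be sketched generically: refit the model n times, each time predicting the single held-out sample. This sketch is not the dissertation's code; `fit` stands in for any model-fitting routine (kriging in this study), and the trivial mean predictor below is only for illustration:

```python
import math

def loo_cvrmse(xs, ys, fit):
    """Leave-one-out cross validation root mean square error.

    fit(train_xs, train_ys) must return a callable predictor; the model is
    refit once per sample, predicting the held-out point each time.
    """
    errors = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i+1:]
        train_y = ys[:i] + ys[i+1:]
        predictor = fit(train_x, train_y)
        errors.append((predictor(xs[i]) - ys[i]) ** 2)
    return math.sqrt(sum(errors) / len(errors))

# Hypothetical stand-in predictor for illustration only:
def mean_fit(train_x, train_y):
    mean = sum(train_y) / len(train_y)
    return lambda x: mean
```

Note that this procedure reuses only the sample data, which is precisely why, as shown later in this chapter, it need not agree with error measures computed from independent validation points.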

Rather than randomly pick these validation points, the points are obtained from a

random Latin hypercube to ensure uniformity within the design space. The predicted values

from each kriging model are compared against the actual values from the set of validation points,

and the error measures MAX and RMSE are computed. These measures are then

“normalized” as a percentage of the sample range, for the particular design under investigation,

in order to compare responses with different magnitudes. A precursory analysis of the data is

given in the next section.
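The validation-point error measures and the range normalization described above can be sketched as follows (an illustrative sketch with hypothetical names; the equation numbers for MAX and RMSE are given in Section 2.4.2):

```python
import math

def error_measures(predicted, actual, sample_range):
    """MAX and RMSE over a set of validation points, each normalized by
    the range of the sample data for the design under investigation so
    that responses with different magnitudes can be compared."""
    residuals = [abs(p - a) for p, a in zip(predicted, actual)]
    max_err = max(residuals)                                          # MAX
    rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))  # RMSE
    return {"MAX.RANGE": max_err / sample_range,
            "RMSE.RANGE": rmse / sample_range}
```

Lower values of either normalized measure indicate a more accurate model.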

5.2 PRECURSORY KRIGING/DOE DATA ANALYSIS AND ANOVA

In total, there are 11535 kriging models constructed as shown in Table 5.6 for the six

test problems—one kriging model for each equation for each design for each test problem. For

each of these models, there are three measures of model accuracy: MAX, RMSE, and

CVRMSE; hence, there are 34605 data points in the resulting data set.

Table 5.6 Kriging Test Problem Model Summary

    Problem  No. of     No. of     No. of      Total No.  No. of  No. of  Total No.
    Name     Variables  Responses  Unique DOE  of DOE     CORFCN  Models  of Models
    2bar     2          3          51          71         5       765     1005
    3bar     2          4          51          71         5       1020    1340
    2mem     3          3          63          92         5       945     1380
    spring   3          7          63          92         5       2205    3220
    press    4          4          66          102        5       1320    2040
    weld     4          5          66          102        5       1650    2550
    Grand Totals                                                  7905    11535

To facilitate analysis of the data set, the error measures of the designs which are

replicated—the orthogonal arrays, random Latin hypercubes, OA-based Latin hypercubes, and

orthogonal Latin hypercubes—are averaged to reduce the data set to 7905 models. However,

not all of these 7905 models are good, i.e., many contain outliers which bias the results, and

potential outliers must be removed. The cause of the outliers can be attributed to incomplete

convergence of the numerical optimization used to fit the model or singularities in the data set

which occur during model fitting, numerical round-off error, or bad data resulting from

transferring data from file to file, program to program, and computer to computer.

Hence, the data set is culled to remove any potential outliers. Rather than first fit the

model and remove potential outliers based on the residuals, the data is culled based on (a)

potential RMSE outliers, (b) potential MAX outliers, and (c) potential CVRMSE outliers since

it is known that many outliers exist due to singularities in the data set which occur during model

fitting. The process is described in detail in Appendix E; density plots are included in Appendix

E to show the distribution of the resulting data for the two, three, and four variable problems.

In this manner, the data set is reduced from 7905 models to 7578. This constitutes a

reduction of about 4% which is considered reasonable given the magnitude of the study and the

potential for errors. From this point forward, any reference to “the data set” refers to the final

culled data set with all of the potential outliers removed and not to the original data set unless

explicitly specified.
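The actual culling procedure is described in Appendix E; as a generic illustration of this kind of one-sided screen (error measures are bounded below by zero, so only large values are suspect), an interquartile-range rule might look like the following sketch, which is an assumption for illustration and not the rule used in this work:

```python
def cull_outliers(values, k=1.5):
    """Illustrative one-sided interquartile-range culling rule: drop any
    value above Q3 + k*(Q3 - Q1).  (The dissertation's actual culling
    procedure is given in Appendix E.)"""
    xs = sorted(values)
    def quantile(q):
        pos = q * (len(xs) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(xs) - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])
    q1, q3 = quantile(0.25), quantile(0.75)
    cutoff = q3 + k * (q3 - q1)
    return [v for v in values if v <= cutoff]
```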

Analysis of variance is performed in the next section to determine which factors have a

significant effect on the accuracy of the resulting kriging model. This is followed in Section 5.2.2

with an examination of the correlation of the resulting error measures.

5.2.1 Analysis of Variance of Kriging/DOE Study

Before Hypotheses 2 and 3 are tested, analysis of variance (ANOVA) is performed to

determine which factors have a significant effect on the accuracy of the resulting kriging model

(see, e.g., (Chambers, et al., 1992; Montgomery, 1991) for more on ANOVA). The software

package S-Plus4 (MathSoft, 1997) is used to analyze the data. The ANOVA is performed

separately for each pair of two, three, and four variable problems for all three error measures.

Furthermore, because of the size of the data set, only main effects and two-factor interactions

can be studied. The ANOVA results are given in Section E.2, and a summary of the ANOVA

results is given in Table 5.7. In the table, the factor main effects and two-factor interaction

effects are listed in the first column of the table; a colon between factors (e.g.,

CORFCN:NSAMP) indicates a two-factor interaction. The abbreviations “sig” and “not sig”

are used to indicate whether or not the effect is significant. For instance, all of the main effects

and two-factor interactions except CORFCN:NSAMP are significant for RMSE.RANGE and

MAX.RANGE in the two and three variable problems.

Table 5.7 Summary of ANOVA Results for Kriging/DOE Study

                    2 Variable Problems          3 Variable Problems          4 Variable Problems
                    RMSE.   MAX.    CVRMSE.      RMSE.   MAX.    CVRMSE.      RMSE.   MAX.    CVRMSE.
    Effect          RANGE   RANGE   RANGE        RANGE   RANGE   RANGE        RANGE   RANGE   RANGE
    DOE             sig     sig     sig          sig     sig     sig          sig     sig     sig
    CORFCN          sig     sig     sig          sig     sig     sig          sig     sig     sig
    NSAMP           sig     sig     sig          sig     sig     sig          sig     sig     sig
    EQN             sig     sig     sig          sig     sig     sig          sig     sig     sig
    DOE:CORFCN      sig     sig     sig          sig     sig     sig          sig     not sig sig
    DOE:NSAMP       sig     sig     not sig      sig     sig     sig          sig     sig     not sig
    DOE:EQN         sig     sig     sig          sig     sig     sig          sig     sig     sig
    CORFCN:NSAMP    not sig not sig sig          not sig not sig sig          not sig not sig not sig
    CORFCN:EQN      sig     sig     sig          sig     sig     sig          sig     not sig sig
    NSAMP:EQN       sig     sig     sig          sig     sig     not sig      sig     sig     not sig

As can be seen in Table 5.7, the majority of the effects are significant on all of the error

measures. It is not surprising to see that the main effects of the factors DOE, CORFCN,

NSAMP, and EQN are significant for all responses for all of the problems. Likewise, the

interaction between DOE and NSAMP is significant for all RMSE.RANGE and

MAX.RANGE values. The interaction between CORFCN and NSAMP is not significant in

the majority of cases since it is unlikely that these two factors would interact to provide a more

accurate model. The interaction between DOE and CORFCN is significant in all but one case

which is interesting to note. In summary, it appears that there are many significant interactions

and main effects to examine. Observations regarding many of these interactions can be inferred

from the appropriate graphs; however, the commentary in Sections 5.3 and 5.4 focuses

primarily on main effects.
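The significance test behind each main-effect entry in Table 5.7 is a standard ANOVA F test. As a minimal sketch (the study itself was run in S-Plus and also fit two-factor interactions), the one-way F statistic for a single factor can be computed as:

```python
def anova_f(groups):
    """One-way ANOVA F statistic for one factor's main effect.

    groups: list of lists, one list of response values per factor level.
    F = (between-level mean square) / (within-level mean square); large F
    indicates the factor level has a significant effect on the response.
    """
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The F statistic is then compared against an F distribution with (k-1, n-k) degrees of freedom to obtain the "sig"/"not sig" entries.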

5.2.2 Correlation of Error Measures

The two most important measures of model accuracy in this study are considered to be

RMSE and MAX. Why are these two particular measures the most important? RMSE is

used to gauge the overall accuracy of the model, and MAX is used to gauge the local accuracy

of the model. Ideally, RMSE and MAX would be zero, indicating that the metamodel predicts

the underlying analysis or model exactly; however, this is rarely the case. Therefore, the lower

the value of either error measure, the more accurate the model.

Both measures are important when using an approximation in an optimization or robust

design application because high values of RMSE can lead an optimization algorithm into a region

of bad design and high values of MAX prevent the optimization algorithm from finding the true

optimum solution. To see if the two measures are correlated, a plot of RMSE.RANGE versus

MAX.RANGE for the data set is given in Figure 5.8. Here and henceforth, the acronyms

RMSE.RANGE and MAX.RANGE are used to refer to the values of RMSE and MAX when

normalized against the sample range.

[Figure: scatter plot of max.range versus rmse.range]

Figure 5.8 Correlation Between RMSE.RANGE and MAX.RANGE

Since the data is widely scattered in Figure 5.8, the two error measures do not correlate

well. Models with low RMSE.RANGE values tend to have low MAX.RANGE values, but

models with moderate RMSE.RANGE values have any of a variety of MAX.RANGE values.

As such, it is important to analyze both RMSE.RANGE and MAX.RANGE when drawing

conclusions.
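The visual scatter in these plots can be quantified with a correlation coefficient; this sketch (an illustrative addition, not part of the original analysis) computes the Pearson correlation between two error-measure vectors:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length vectors;
    values near +/-1 indicate strong linear correlation, values near 0
    indicate the wide scatter seen in Figures 5.8 through 5.10."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```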

A plot of RMSE.RANGE versus CVRMSE.RANGE is given in Figure 5.9. Based on

the wide scattering of the data, RMSE.RANGE and CVRMSE.RANGE are not correlated

either. This means that the cross validation root mean square error is not a sufficient

measure of model accuracy because root mean square error provides the best possible
assessment of overall model accuracy. If CVRMSE.RANGE and RMSE.RANGE had been

correlated, then CVRMSE alone could be computed to assess model accuracy without having

to take any additional points to compute RMSE to validate the model.

[Figure: scatter plot of cvrmse.range versus rmse.range]

Figure 5.9 Correlation of RMSE.RANGE and CVRMSE.RANGE

A plot of MAX.RANGE versus CVRMSE.RANGE is given in Figure 5.10. As with

Figure 5.9, there is a wide scattering of the data, and it appears that MAX.RANGE and

CVRMSE.RANGE are not well correlated either. Hence, there is no need to examine

CVRMSE.RANGE further because it does not provide a good assessment of model accuracy

since it does not correlate well with either MAX.RANGE or RMSE.RANGE. As stated
earlier, this finding is unfortunate because it means that additional validation points must be

taken in order to assess the accuracy of a kriging model properly; cross validating the

model using the sample data is not a sufficient measure of accuracy.

[Figure: scatter plot of cvrmse.range versus max.range]

Figure 5.10 Correlation of MAX.RANGE and CVRMSE.RANGE

Using only RMSE.RANGE and MAX.RANGE, the error of the resulting kriging

models can now be assessed by isolating a single factor (or pair of factors). The process for

analyzing the data in order to interpret specific results is identified at the beginning of each

section when Hypotheses 2 and 3 are tested. Hypothesis 2 is tested first in the next section,

and Hypothesis 3 is tested second in Section 5.4.


5.3 TESTING HYPOTHESIS 2: THE UTILITY OF KRIGING

Recall Hypothesis 2 from Section 1.3.1 and Section 3.3.2:

Hypothesis 2: Kriging is a viable metamodeling technique for building approximations

of deterministic computer analyses.

In order to test this hypothesis, two factors are isolated to analyze the results further, namely,

CORFCN and EQN. Both factors were found to have a significant effect on the accuracy of

the resulting kriging models in the ANOVA in Section 5.2.1. The effect of CORFCN on

RMSE.RANGE and MAX.RANGE is investigated in the next section. The effect of EQN on

RMSE.RANGE and MAX.RANGE is discussed in Section 5.3.2. Keep in mind that all of

these results are based strictly on averages of the data at a given level of a particular variable; it

is assumed that biasing due to unbalanced numbers of observations at each level is negligible

since such a large data set is being used.

5.3.1 Effect of Correlation Function on Kriging Model Accuracy

The effect of correlation function on model accuracy was found to be significant in the

ANOVA in Section 5.2.1, but it is uncertain which correlation function yields the best results on

average. Therefore, the effect of CORFCN on RMSE.RANGE aggregated over all the

problems and for each pair of problems is shown in Figure 5.11. The average (mean) of

RMSE.RANGE for each factor level is plotted on the vertical axis in the figure. Meanwhile, the

vertical bars within the figure are used for grouping purposes, showing the range of effects of the

different levels of the factor being considered (in this case, CORFCN) for each problem group

as indicated on the x-axis. The numbers 1, 2, 3, 4, and 5 in the figure indicate the level of

correlation function as described in the key in the figure. The horizontal dashed lines which

cross each vertical bar indicate the group average of RMSE.RANGE for that particular

grouping; for instance, the mean RMSE.RANGE for all of the problems is about 0.062. The

arrows are used to indicate the effect a particular level of CORFCN has on RMSE.RANGE;

the same holds true regardless of the factor being considered. For example, in Figure 5.11 the

average effect of CORFCN = 1 in the two variable problems is slightly less than 0.08 while the

average effect of CORFCN = 4 in the same problems is slightly greater than 0.06. Finally,

lower values of RMSE.RANGE (and MAX.RANGE) are better; so, the lower the arrow of a

particular level on the vertical line, the more accurate is the resulting kriging model.

[Figure: mean of rmse.range for each correlation function level, grouped by problem set. Key: 1 = Exponential, 2 = Gaussian, 3 = Cubic, 4 = Linear Matérn, 5 = Quadratic Matérn]

Figure 5.11 Effect of Correlation Function Type on RMSE.RANGE

Some observations regarding Figure 5.11 are as follows. The exponential correlation

function (CORFCN = 1) is consistently the worst. Overall, the Gaussian correlation function

(CORFCN = 2) provides the lowest RMSE.RANGE on average and for the three and

four variable problems as well. The linear Matérn (CORFCN = 4) yields the lowest average

RMSE.RANGE for the two variable problems but yields comparable results to the piece-wise

cubic correlation function (CORFCN = 3) and the quadratic Matérn (CORFCN = 5)

correlation function otherwise. The piece-wise cubic (CORFCN = 3) and quadratic Matérn

(CORFCN = 5) correlation functions generally yield worse results than the Gaussian correlation

function (CORFCN = 2).

The effect of CORFCN on MAX.RANGE is shown in Figure 5.12. As in Figure 5.11,

the exponential correlation function (CORFCN = 1) is again generally the worst but does

surprisingly well in the four variable case. Overall, the Gaussian correlation function (CORFCN

= 2) provides the lowest MAX.RANGE on average and for the three and four variable

problems as well.

[Figure: mean of max.range for each correlation function level, grouped by problem set. Key: 1 = Exponential, 2 = Gaussian, 3 = Cubic, 4 = Linear Matérn, 5 = Quadratic Matérn]

Figure 5.12 Effect of Correlation Function Type on MAX.RANGE

As seen in Figure 5.12, the linear Matérn correlation function (CORFCN = 4) yields

the best MAX.RANGE for the two variable problems and the worst for the four variable

problems, with average results otherwise. The piece-wise cubic (CORFCN = 3) and

quadratic Matérn (CORFCN = 5) correlation functions yield comparable results, falling

somewhere in the middle of the spectrum in each problem and performing slightly better than

average overall.

In summary, the Gaussian correlation function (CORFCN = 2) is the best

correlation function to use for building kriging models. On average, it provides the lowest

RMSE.RANGE and MAX.RANGE, yielding the most accurate kriging models. Furthermore,

it also yields the best results in the three and four variable problems when averaged over all

designs, sample sizes, and equations; in the two variable problems its performance is average

but not far behind the linear Matérn correlation function (CORFCN = 4).
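The difference between the worst and best performers above comes down to the shape of the correlation function near zero distance. The two extremes can be written down directly (a sketch; the cubic and Matérn forms are omitted here, and theta is a fixed illustrative parameter):

```python
import math

def exponential_corr(d, theta=1.0):
    """CORFCN = 1: exponential correlation, exp(-theta * |d|); it has a
    kink at d = 0, producing a non-smooth predictor."""
    return math.exp(-theta * abs(d))

def gaussian_corr(d, theta=1.0):
    """CORFCN = 2: Gaussian correlation, exp(-theta * d**2); infinitely
    differentiable at d = 0, producing a smooth predictor."""
    return math.exp(-theta * d * d)
```

Near d = 0 the Gaussian correlation stays closer to one than the exponential, so nearby points are weighted more strongly and the resulting surface is smoother, which is consistent with the Gaussian function yielding the most accurate models in this study.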

5.3.2 Effects of Equation Type on Kriging Model Accuracy

In order to determine which types of equations are fit best, the factor EQN is used to

isolate which equations are well fit by the kriging models, thus explicitly testing Hypothesis 2. A

plot of the resulting RMSE.RANGE of the two, three, and four variable problems for each level

of the factor EQN is shown in Figure 5.13; the effect of each level of EQN on the mean of

RMSE.RANGE is averaged over all DOE, NSAMP, and CORFCN for each problem. For

clarity, dashed lines are used to indicate the 5% and 10% RMSE.RANGE values. If a 5% level

of model accuracy is used as a cut-off point, then 14 out of the 26 equations in this study are

accurately modeled by kriging. If that cut-off is raised to 10%, then 20 out of the 26 equations

are accurately modeled by kriging.

[Figure: mean of rmse.range for each equation (EQN = 1-26), grouped by problem set, with dashed lines marking the 5% and 10% error levels]

Figure 5.13 Effect of EQN on RMSE.RANGE for Testbed Problems

Looking at the effect of equation type on MAX.RANGE in Figure 5.14, however,

fewer kriging models meet a 10% cut-off point. In Figure 5.14, only nine of the 26 equations

fall within the 10% level of accuracy. If the level of accuracy is allowed to drop to 20%, which

is quite high, then five more equations may be considered modeled accurately by kriging (EQN

= 7, 8, 9, 20, and 22). The mean values for MAX.RANGE for Equations 18, 19, and 21 are

beyond the scale of the chart. For convenience, the equations and corresponding level of EQN

are listed in Table 5.8 which summarizes which equations are fit well and which are not based

on the 5% and 10% error levels used.

[Figure: mean of max.range for each equation (EQN = 1-26), grouped by problem set, with a dashed line at the 10% error level; EQN = 18, 19, and 21 exceed the scale of the chart]

Figure 5.14 Effect of Equation Type on MAX.RANGE

The types of equations which are well fit by kriging are noted in Table 5.8. At the 5%

level of RMSE.RANGE the linear combinations of the design variables (EQN = 1, 5, 8, 13, 15,

24, and 25) and most reciprocal equations (EQN = 7, 9, 20, and 22) are modeled well. Some

higher-order equations are also modeled well (EQN = 23 and 26). At the 10% level, all of the

equations in the three variable problems are modeled well (EQN = 8-17). At this level, the

equations based on the finite element model of the two-member frame (EQN = 16 and 17) also

are accurately represented by the kriging models. Looking at MAX.RANGE, however, the

majority of the equations which meet the 10% cut-off are linear combinations of the design

variables (e.g., EQN = 1, 5, 13, 24, and 25) which may involve higher-order terms or

reciprocals (e.g., EQN = 14, 23, and 26).

Table 5.8 Summary of Equations Accurately Modeled by Kriging

    EQN  Equation and Design Variables, x,                     RMSE.RANGE   RMSE.RANGE    MAX.RANGE
    #    for Each Problem                                      %error < 5%  %error < 10%  %error < 10%

    Three-bar truss: x = {A1, A2}
    1    W(x) = ρN(2√2·A1 + A2)                                yes          yes           yes
    2    g1(x) = 20,000 − 20,000(√2·A1 + A2)/(2A1A2 + √2·A1²)  no           no            no
    3    g2(x) = 20,000 − 20,000·√2·A1/(2A1A2 + √2·A1²)        no           no            no
    4    g3(x) = 15,000 − 20,000·A2/(2A1A2 + √2·A1²)           no           no            no
    Two-bar truss: x = {D, H}
    5    W(x) = 2πρDT(B² + H²)^(1/2)                           yes          yes           yes
    6    g1(x) = π²E(D² + T²)/[8(B² + H²)]
               − P(B² + H²)^(1/2)/(πTDH)                       no           yes           no
    7    g2(x) = σy − P(B² + H²)^(1/2)/(πTDH)                  yes          yes           no
    Spring: x = {N, D, d}
    8    V(x) = π²Dd²(N + 2)/4                                 yes          yes           no
    9    g1(x) = S − 8CfFmaxD/(πd³)                            yes          yes           no
    10   g2(x) = lmax − lf                                     no           yes           no
    11   g3(x) = δpm − δ                                       no           yes           no
    12   g4(x) = (Fmax − Fload)/K − δw                         no           yes           no
    13   g5(x) = Dmax − D − d                                  yes          yes           yes
    14   g6(x) = D/d − 3                                       yes          yes           yes
    Two-member frame: x = {d, h, t}
    15   V(x) = 2L(2dt + 2ht − 4t²)                            yes          yes           yes
    16   g1(x) = (σ1² + 3τ²)^(1/2)                             no           yes           no
    17   g2(x) = (σ2² + 3τ²)^(1/2)                             no           yes           no
    Welded beam: x = {h, l, t, b}
    18   F(x) = (1 + c3)h²l + c4tb(L + l)                      no           no            no
    19   g1(x) = [(τ′)² + 2τ′τ″cosθ + (τ″)²]^(1/2)             no           no            no
    20   g2(x) = 6FL/(bt²)                                     yes          yes           no
    21   g3(x) = [4.013√(EIα)/L²]·[1 − (t/(2L))√(EI/α)]        no           no            no
    22   g4(x) = 4FL³/(Et³b)                                   yes          yes           no
    Pressure vessel: x = {R, L, Ts, Th}
    23   F(x) = 0.6224TsRL + 1.7781ThR²
              + 3.1661Ts²L + 19.84Ts²R                         yes          yes           yes
    24   g1(x) = Ts − 0.0193R                                  yes          yes           yes
    25   g2(x) = Th − 0.00954R                                 yes          yes           yes
    26   g3(x) = πR²L + (4/3)πR³ − 1.296E6                     yes          yes           yes
Which types of functions are not modeled well by kriging? Based on the data in

Table 5.8, the equations which are not modeled well by kriging are the equations involving

reciprocals of combinations of the design variables in the three-bar truss problem (EQN = 2, 3,

and 4) and two-bar truss problems (EQN = 6 and 7), and the majority of the welded beam

equations which include shear stress calculations, a cosine term which is a function of the design

variables, and a variety of reciprocals and square roots of terms which are functions of the

design variables. In addition, the finite element equations for the two-member frame (EQN =

16 and 17) are not modeled well at the 5% level accuracy or at the 10% level of accuracy of

MAX.RANGE. It is also interesting to note that the objective function of the welded beam

problem (EQN = 18) is one of the equations approximated worst by the kriging. This is rather

surprising considering it is very similar to the objective function of the pressure vessel problem

(EQN = 23) which is modeled well in all cases. Perhaps these differences are due to the size of

the design space as opposed to the equations themselves; very few approximation methods will

work well if the points are sparsely scattered throughout the design space, which may be the case in the welded beam and three-bar truss problems.

In summary, it appears that kriging does provide an accurate approximation of a variety

of equations. Using a 5% level of accuracy, over half (14 out of the 26) of the equations

studied are accurately modeled over the entire design space as measured by RMSE.RANGE; if

a 10% level of accuracy is used instead, then over 3/4 of the equations (20 out of 26) are

accurately modeled. Unfortunately, only nine of the 26 meet the 10% level of accuracy in

MAX.RANGE; however, of the two measures, RMSE.RANGE is considered to be more

important from a design standpoint since accuracy over the entire design space is more

important during design space search than the maximum discrepancy at any one given point. As

such, Hypothesis 2 is verified. In the next section, Hypothesis 3 is tested.

5.4 TESTING HYPOTHESIS 3: THE UTILITY OF SPACE FILLING


EXPERIMENTAL DESIGNS

Recall Hypothesis 3 from Section 1.3.1 and Section 3.3.2:

Hypothesis 3: Space filling experimental designs are better suited for building

metamodels of deterministic computer experiments than classical designs.

In order to test this hypothesis, the data set is analyzed by isolating the factor DOE which was

found to have a significant effect on the accuracy of the resulting kriging models in the ANOVA in

Section 5.2.1. However, since the same sample sizes are not used in each design, as

discussed in Section 5.1.3, the results also must be conditioned on sample size (NSAMP) for a

fair comparison between designs. Consider, for instance, the combined CCD + CCF (ccdaf)

design in the two variable problems which has 13 sample points. It cannot be concluded that

the combined CCD + CCF is the best by averaging over all designs because its effect is biased

by the fact that it has 13 sample points which is at the upper end of the number of points in the

two variable problems and is therefore expected to yield good results because of the large

number of sample points. Meanwhile, the effects of all of the other designs with variable

numbers of points (i.e., unifd, hamss, mxmnl, mnmxl, and oplhd) are averaged over all sample

sizes where the smaller the sample size, the less accurate the model, and the worse the effect of

these designs. The results for the two, three, and four variable problems are discussed in

Sections 5.4.1, 5.4.2, and 5.4.3, respectively, by conditioning on both design type and sample

size. As stated previously, keep in mind that the results are based on averaging the data at a

given level of a particular variable; it is assumed that biasing due to unbalanced numbers of

observations at each level is negligible since such a large data set is being used.

5.4.1 Comparison of Designs for Two Variable Problems

For the two variable problems, the classical designs (CCI, CCF, and CCD) each utilize

nine sample points, and the combined CCF + CCI and CCF + CCD each have 13 points.

Hence, the average effect of each design which has nine points and each design which has 13

points on RMSE.RANGE is shown in Figure 5.15. The average effect on MAX.RANGE of all

the designs with these sample sizes is shown in Figure 5.16.

Looking first at the nine point designs in Figure 5.15, it is surprising to note that both the

CCD and CCI designs perform well with the minimax Latin hypercube (mnmxl) design yielding

the best RMSE.RANGE values; recall that the minimax Latin hypercube designs are unique to

this research (see Appendix C). The orthogonal Latin hypercubes (yelhd), maximin Latin

hypercubes (mxmnl), and the uniform designs (unifd) also perform well. The worst designs are

the Hammersley sampling sequence design, the CCF design and the OA-based Latin

hypercube. The random Latin hypercubes yield average results.

In the 13 point designs, the uniform design (unifd) yields the best results with the

maximin and minimax Latin hypercube designs giving equally good results which are only slightly

worse than those of the uniform design. The random Latin hypercubes (rnlhd) continue to give

average results with the combined CCD + CCF (ccdaf) and CCI + CCF (cciaf) designs giving

slightly better results, which are still about 1% worse than those of the maximin and minimax

Latin hypercubes. The optimal Latin hypercube designs are the worst with the Hammersley

sampling sequence designs showing good improvement with the extra four sample points.
[Plot omitted: mean of rmse.range by experimental design (Factor - DOE), paneled by NSAMP = 9 and NSAMP = 13]

Figure 5.15 Effect of 9 and 13 Point DOE on RMSE.RANGE

Turning to the results of MAX.RANGE in Figure 5.16, the classical designs perform

quite well with the CCI design (ccins) being the best in the nine point designs and the combined

CCD + CCF (ccdaf) being the best in the 13 point designs. The nine point minimax Latin

hypercubes (mnmxl) and OA-based Latin hypercubes (oalhd) yield comparable results to the

CCD and CCF designs. The remaining space filling designs all fare worse than the CCD with

the Hammersley sampling sequence giving the worst MAX.RANGE. In the 13 point designs,

the combined CCI + CCF (cciaf) is the second best design with the maximin Latin hypercube

(mxmnl) coming in a close third. The uniform, optimal Latin hypercube, minimax Latin

hypercube, and random Latin hypercube designs are slightly worse than the average, and the

Hammersley sampling sequence yields the worst results.

[Plot omitted: mean of max.range by experimental design (Factor - DOE), paneled by NSAMP = 9 and NSAMP = 13]

Figure 5.16 Effect of 9 and 13 Point DOE on MAX.RANGE

Based on these results, the space filling designs do best in terms of RMSE.RANGE

while the classical designs yield the lowest MAX.RANGE. If RMSE.RANGE is taken as the

more important of the two measures of error, then the space filling DOE are better than the

classical DOE for the two variable problems considered in this dissertation. The results for the

three variable problems are discussed next.

5.4.2 Comparison of Designs for Three Variable Problems

In the three variable problems, there are four values of NSAMP which must be

considered due to differences in prescribed sample sizes. The Box-Behnken design has 13

points; the classic CCD, CCF, and CCI designs have 15; the CCI + CCF has 21; and the

CCD + CCF has 23. The effects of these designs on RMSE.RANGE is shown in Figure 5.17;

the effects on MAX.RANGE are plotted in Figure 5.18.


[Plot omitted: mean of rmse.range by experimental design (Factor - DOE), paneled by NSAMP = 13, 15, 21, and 23]

Figure 5.17 Effect of 13, 15, 21, and 23 Point DOE on RMSE.RANGE

In Figure 5.17, the Box-Behnken (bxbnk) design dominates the 13 point designs with a

RMSE.RANGE of about 5%. All of the space filling designs, in fact, perform quite poorly, with

RMSE.RANGE values of about 8% or worse; it appears that these designs do not fare well

when relatively few sample points are taken in the design space. A similar observation can be

made regarding the 15 point designs also. The maximin Latin hypercube (mxmnl) yields the best

result, but the CCI and CCF designs are both almost as good. The minimax Latin hypercube

(mnmxl) design yields average results with the uniform and optimal Latin hypercube design

(oplhd) faring slightly better but not as well as the CCI and CCF. The CCD is the worst

design with the random Latin hypercube (rnlhd) and the Hammersley sampling sequence

(hamss) yielding comparably poor results.

Among the 21 point designs in Figure 5.17, the optimal Latin hypercube design

(oplhd) yields the lowest RMSE.RANGE with the random Latin hypercube design (rnlhd)

yielding the worst. The combined CCI + CCF (cciaf) is the second best 21 point design,

followed closely by the minimax Latin hypercube (mnmxl). The uniform design (unifd) gives an

average result in the 21 point case but is the second best design in the 23 point case; the best

design is the combined CCD + CCF (ccdaf). The optimal Latin hypercube design (oplhd) is

the worst of the 23 point designs with the minimax Latin hypercube (mnmxl) and random Latin

hypercube designs (rnlhd) yielding results which are worse than the average.

In Figure 5.18, the effects of these different designs on MAX.RANGE are plotted. The

classical experimental designs consistently provide the lowest MAX.RANGE when averaging

over all other factors. The space filling designs do not perform well in any case and yield

particularly poor results in the 13 and 21 point designs. The minimax Latin hypercube (mnmxl)

design is no exception, giving near-average results in the 15, 21, and 23 point designs and the

next-to-worst result among the 13 point designs.

As with the two variable problems, the space filling designs offer better results if

RMSE.RANGE is considered while the classical designs are better when it comes to

MAX.RANGE for the problems considered in this dissertation. The four variable problems are

examined in the next section to see if the same holds true.


[Plot omitted: mean of max.range by experimental design (Factor - DOE), paneled by NSAMP = 13, 15, 21, and 23]

Figure 5.18 Effect of 13, 15, 21, and 23 Point DOE on MAX.RANGE

5.4.3 Comparison of Designs for Four Variable Problems

For the four variable problems, there are two sample sizes to examine: NSAMP = 25

points and NSAMP = 33 points. Figure 5.19 contains the effects of DOE on RMSE.RANGE

for these sample sizes, and Figure 5.20 contains the effects of DOE on MAX.RANGE for

these sample sizes. As seen in the figures, there are twelve designs which use 25 sample points

and only six with 33. The combined CCD + CCF is not considered because it is the only

design which uses 41 sample points.

Looking first at Figure 5.19, the minimax Latin hypercube (mnmxl) design introduced in

this dissertation yields the best results on average. The uniform design (unifd) is a close second

in the 25 point case, and the random Latin hypercube design (rnlhd) is a close second in the 33

point case. Of the classical 25 point designs, the Box-Behnken (bxbnk) design performs slightly

better than average while the CCD (ccdes), CCF (ccfac), and CCI (ccins) designs all do worse

than average. Finally, the Hammersley sampling sequence (hamss) designs perform poorly at

both sample sizes.

[Plot omitted: mean of rmse.range by experimental design (Factor - DOE), paneled by NSAMP = 25 and NSAMP = 33; the 25 point ccdes value of 0.14 lies off the plotted scale]

Figure 5.19 Effect of 25 and 33 Point DOE on RMSE.RANGE

In Figure 5.20, the effects of the 25 and 33 point DOE on MAX.RANGE are plotted.

Unlike the two and three variable problems, the space filling designs yield the best

MAX.RANGE for the four variable problems. The randomized 25 point orthogonal array (oarry)

produces the lowest MAX.RANGE with the 25 point minimax Latin hypercube (mnmxl) a close

second. The classical Box-Behnken (bxbnk), CCF (ccfac), and CCI (ccins) designs and the

space filling uniform design (unifd) and optimal Latin hypercube design (oplhd) all yield

comparable results which are only slightly worse than either the minimax Latin hypercube or the

randomized orthogonal array. The 25 point orthogonal array-based Latin hypercube (oalhd)

and the maximin Latin hypercube (mxmnl) produce results which are close to the average effect.

The Hammersley sampling sequence (hamss) designs yield the worst MAX.RANGE in both the

25 and 33 point designs; the 25 point CCD (ccdes) does not fare much better than the

Hammersley sampling sequence design, however.

[Plot omitted: mean of max.range by experimental design (Factor - DOE), paneled by NSAMP = 25 and NSAMP = 33]

Figure 5.20 Effect of 25 and 33 Point DOE on MAX.RANGE

In the 33 point designs, the minimax Latin hypercube (mnmxl) yields the lowest

MAX.RANGE on average. The combined CCI + CCF (cciaf) and random Latin hypercube

designs (rnlhd) give comparable results which are slightly worse than the minimax Latin

hypercube. Finally, the 33 point orthogonal Latin hypercubes (yelhd) and optimal Latin

hypercubes (oplhd) are both slightly worse than the average.

5.4.4 Lessons Learned from Experimental Design Study

The space filling designs yield lower RMSE.RANGE values for all of the two, three,

and four variable problems considered in this dissertation. The classical experimental designs

yield lower MAX.RANGE values for the two and three variable problems but do not perform

as well as the space filling designs in the four variable problems. In small dimensions, i.e., two

and three variables, the classical designs spread the points out equally well in the design space

regardless of whether they are “space filling” designs or not. However, based on the observed trends in

the data, it appears that as the number of design variables increases, the space filling designs

perform increasingly well in terms of the two error measures used in this study. As their

name implies, the space filling designs do a better job at spreading out points in the design space

and thus filling the space as the number of variables increases. Hence, Hypothesis 3 is verified

because the space filling designs do perform better than the classical designs in terms of

RMSE.RANGE, which provides the best assessment of the overall accuracy of a metamodel.

Furthermore, the larger the number of design variables and the more sample points, the better

the accuracy of the resulting kriging model from a space filling design.

Some additional comments about particular space filling designs are as follows.

• The minimax Latin hypercube designs introduced in this dissertation perform quite well
in these problems. With the exception of the 13 point minimax Latin hypercube design
for the three variable problems, these designs are consistently among the best in terms
of their effects on RMSE.RANGE and MAX.RANGE.

• The Hammersley sampling sequence designs perform poorly in all of these problems.
The impetus for the Hammersley designs, though, is to provide good stratification of a
k-dimensional space (Kalagnanam and Diwekar, 1997); as such, they are designed to
perform well in large design spaces, which may explain why they perform so poorly in
these relatively small problems.

• The random Latin hypercube designs provide average results as might be expected
because these designs simply rely on a random scattering of points in the design space.
By imposing additional considerations on these designs to “control” the randomization of
the points, the performance of these designs can be improved. For instance, the
orthogonal Latin hypercubes (Ye, 1997), the maximin Latin hypercubes (Morris and
Mitchell, 1995), the (IMSE) optimal Latin hypercubes (Park, 1994), and the
orthogonal-array based Latin hypercubes (Tang, 1993) typically yield a more accurate
kriging model than does the basic random Latin hypercube. This observation is not
new; rather, it supports the claims made by the creators of these designs when they
introduced them.

• The uniform designs perform surprisingly well, considering they are based solely on
number-theoretic reasoning (Fang and Wang, 1994). Regardless, the importance of
these designs lies in the fact that uniformly spreading the points out in the design space
yields an accurate kriging model; this observation seems obvious but is not well
documented in the literature.
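The difference between a purely random Latin hypercube and a “controlled” one can be illustrated with a short sketch; this is not the construction used for the designs in this study, but a simple maximin-style selection in which, out of many random Latin hypercubes, the candidate whose smallest pairwise point distance is largest is kept:

```python
import numpy as np

def random_lhs(n, k, rng):
    # one point per equal-width stratum in each dimension, strata in random order
    strata = np.array([rng.permutation(n) for _ in range(k)]).T
    return (strata + rng.random((n, k))) / n

def min_pairwise_distance(X):
    # smallest Euclidean distance between any two points of the design
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    return d[np.triu_indices(len(X), k=1)].min()

def maximin_lhs(n, k, tries=200, seed=0):
    # keep the random Latin hypercube whose worst-case point spacing is best
    rng = np.random.default_rng(seed)
    return max((random_lhs(n, k, rng) for _ in range(tries)),
               key=min_pairwise_distance)
```

Selecting on a distance criterion spreads the points more evenly than a single random draw, which is the sense in which the controlled Latin hypercubes above tend to outperform the basic random version.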

Hypotheses 2 and 3 have now been verified as a result of this study. In the next

section, the relevance of these results with regard to the PPCEM are discussed.

5.5 A LOOK BACK AND A LOOK AHEAD

In this chapter, 7905 kriging metamodels of six engineering test problems have been

constructed and validated using a variety of correlation functions and experimental designs to

test and verify Hypotheses 2 and 3. In closing this chapter, recall the questions posed at the

beginning of this study:

• What is the best type of experimental design to use to query the simulation to
generate data to build an accurate kriging metamodel? For problems containing
only two variables, either classical or space filling designs yield good results; however,
as the size of the design space increases (i.e., the number of variables increases), space
filling experimental designs tend to yield more accurate kriging metamodels on average
since they tend to spread the points out well in the design space. In particular, the
minimax Latin hypercube design, uniform designs, and orthogonal arrays yield good
results. Random Latin hypercubes also provide good results, provided orthogonality or
optimality (e.g., IMSE) are imposed to control the randomization. Finally, of the
designs considered, Hammersley point designs are not recommended unless numerous
sample points can be afforded.

• How many sample points should you use? The interaction between sample size and
experimental design type is examined in Section E.3 because this interaction does not
directly impact testing Hypotheses 2 or 3 since the analysis is conditioned on sample
size. In general, the more sample points which can be afforded, the more accurate the
resulting model. However, as discussed in Section E.3, a recommendation on the
number of sample points cannot be made at this time because a wide enough spread of
points was not investigated.

• What type of correlation function should you use to obtain the best predictor?
Based on the results in Section 5.3.1, the Gaussian correlation function yields the most
accurate predictor on average of the five studied.

• Lastly, how can you best validate the metamodel once you have constructed it?
As discussed in Section 5.2.2, cross validation root mean square error is not a sufficient
measure of model accuracy since it does not correlate well with either root mean square
error or max. error. One possible explanation of this is that because the sample sizes
are relatively small, an insufficient number of points is available to cross-validate the
model properly. If more points were available, then cross validation error may yield a
reasonable assessment of model accuracy; however, this has not been tested. In light of
this result, then, it is imperative that additional sample points be taken to validate a
kriging model.
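As a concrete illustration of the first and third answers, a kriging predictor built on the Gaussian correlation function can be sketched as follows; this is a minimal version with a constant underlying trend and user-supplied (rather than fitted) correlation parameters theta, not the implementation used in this study:

```python
import numpy as np

def gaussian_corr(A, B, theta):
    # R_ij = exp(-sum_k theta_k * (a_ik - b_jk)^2): the Gaussian correlation function
    diff = A[:, None, :] - B[None, :, :]
    return np.exp(-np.einsum('ijk,k->ij', diff ** 2, theta))

def krige_predict(x_new, X, y, theta):
    """Kriging predictor with a constant trend term estimated from the data."""
    R = gaussian_corr(X, X, theta)
    ones = np.ones(len(X))
    beta = ones @ np.linalg.solve(R, y) / (ones @ np.linalg.solve(R, ones))
    r = gaussian_corr(np.atleast_2d(x_new), X, theta)[0]
    return beta + r @ np.linalg.solve(R, y - beta * ones)
```

Because the predictor interpolates, it reproduces the observed response exactly at every sample point; this is why accuracy must be assessed with additional validation points (or cross validation, with the limitations noted above) rather than with errors at the sample points themselves.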

These results have a direct bearing on the metamodeling capabilities within the Product Platform

Concept Exploration Method (PPCEM) as depicted in Figure 5.21. In the event that

metamodels need to be constructed for a deterministic computer code or simulation routine

within the context of product platform design, then the best correlation function to select—if

kriging metamodels are to be utilized—is the Gaussian correlation function, and the best

design to use is a space filling experimental design if the problem has more than two variables (if

the problem only has two variables, then a classical experimental design will suffice). Also, it is

recommended to take as many sample points as possible, but keep in mind that additional

sample points are needed to validate the model since cross validation does not appear to

provide a sufficient assessment of model accuracy.

[Diagram omitted: the Kriging/DOE Testbed (§5.2, §5.3) and the Family of Universal Electric Motors example (Chp 6) are built on the Product Platform Concept Exploration Method (Chp 3), which combines metamodeling (space filling DOE, kriging), robust design principles (modeling mean and variance, noise factors), and product family design (conceptual scalable product platform, market segmentation grid), all resting on the foundations of Decision-Based Design and the Robust Concept Exploration Method]

Figure 5.21 Pictorial Review of Chapter 5 and Preview of Chapter 6

In the next chapter, the focus returns to the PPCEM, and the process of testing and

verifying Hypothesis 1 resumes. Specifically, in Chapter 6 the design of a family of universal

electric motors is offered as “proof of concept” that the PPCEM works and that it is effective at

facilitating the design and development of a scalable product platform for a product family.

CHAPTER 6

DESIGN OF A FAMILY OF UNIVERSAL ELECTRIC


MOTORS

In this chapter, the Product Platform Concept Exploration Method (PPCEM) is

implemented to verify its use for designing a family of universal motors around a common

scalable product platform. An overview of the universal motor problem is presented in Section

6.1; a schematic of a typical universal motor is given in Section 6.1.1, and a practical

mathematical model for universal motors is derived in Section 6.1.2. Section 6.2 contains the

implementation of Steps 1 and 2 of the PPCEM. A market segmentation grid is created for the

problem and relevant factors and responses for the universal motor platform are identified.

Section 6.3 follows with the implementation of Step 4 of the PPCEM by aggregating the

universal motor specifications and formulating a compromise DSP; Step 3 of the PPCEM—

building metamodels—is not utilized in this example because analytical expressions for mean and

standard deviation of the responses are derived separately. Section 6.4 contains the

development of the actual universal motor platform; ramifications of the resulting universal motor

platform and product family are analyzed in Section 6.5. Through this example problem,

Hypothesis 1 and Sub-Hypotheses 1.1-1.3 are tested and verified as follows:

Hypothesis 1 - All but Step 3 of the PPCEM are employed in this chapter to design a family
of universal motors based on a scalable product platform. The success of the method
as discussed in Section 6.5 provides an initial “proof of concept” for the method and
hence Hypothesis 1.

Sub-Hypothesis 1.1 - The market segmentation grid is utilized in Section 6.2 to help
identify the stack length as the scale factor around which the motor family is vertically
scaled to achieve the desired platform leveraging strategy; this supports Sub-Hypothesis
1.1.

Sub-Hypothesis 1.2 - The scale factor for the family of universal motors is taken as the
stack length, following the footsteps of the example by Black & Decker (Lehnerd,
1987). Robust design principles are used in this example to develop a universal motor
platform—defined by seven design variables—which is insensitive to variations in the
scale factor and is thus good for a family of motors based on different instantiations of
the stack length (the scale factor). The success of this implementation helps to support
Sub-Hypothesis 1.2.

Sub-Hypothesis 1.3 - Robust design principles of “bringing the mean on target” and
“minimizing the deviation” are utilized in this example to aggregate individual targets and
constraints and to facilitate the design of the family of motors. Combining this
formulation with the compromise DSP allows a family of motors to be designed around
a common, scalable product platform, verifying Sub-Hypothesis 1.3.

Despite all of the work in the previous two chapters, Hypotheses 2 and 3 are not tested

in this example. Analytic expressions for mean and standard deviation of the response are

derived from the analysis equations themselves and used directly in the compromise DSP for the

PPCEM. In concluding the chapter, a brief look ahead to the General Aviation aircraft example

in Chapter 7, in which the PPCEM is tested in full, is offered in Section 6.6.

6.1 OVERVIEW OF THE UNIVERSAL MOTOR PROBLEM

Universal electric motors are so named for their capability to function on both direct

current (DC) and alternating current (AC). Universal motors also deliver more torque for a

given current than any other kind of AC motor (Chapman, 1991). The high performance

characteristics and flexibility of universal motors understandably have led to a wide range of

applications, especially in household use where they are found in electric drills, saws, blenders,

vacuum cleaners, and sewing machines, to name a few examples (Martin, 1986).

In addition, many companies manufacture several products which use universal motors;

for example several companies offer a complete line of power tools, whereas several others

offer a line of kitchen appliances or yard care tools (cf., Lehnerd, 1987). For these companies,

it has already become common practice to utilize a family of universal motors of similar physical

dimensions to meet a range of performance requirements for a group of products (Nasar,

1987). The advantages of this approach include increased modularity and decreased

manufacturing time and inventory costs. For example, Black & Decker developed a family of

universal motors for its power tools in the 1970s in response to a need to redesign their tools as

discussed in Section 1.1.1.

In this chapter the task is to identify a set of common physical dimensions for a

hypothetical family of universal motors to satisfy a range of performance needs, providing initial

“proof of concept” for the PPCEM. To begin, a physical description and schematic of the

universal motor is offered in the next section. In Section 6.1.2, relevant analyses for modeling

the performance of a universal motor are introduced.

6.1.1 Physical Description, Schematic, and Nomenclature for the Universal Motor
Problem

A universal motor is composed of an armature and a field which are also referred to as

the rotor and stator, respectively, see Figure 6.1. The motor depicted in the figure has two field

poles, an attached cooling fan, and laminations in both the armature and the field. Laminating

the metal in both the armature and field greatly reduces certain kinds of power losses (cf.,

Nasar, 1987).

The armature consists of a metal shaft about which wire is wrapped longitudinally around

two or more metal slats, or armature poles, as many as thousands of times. The field consists of

a hollow metal cylinder within which the armature rotates. The field also has wire wrapped

longitudinally around interior metal slats, or field poles, as many as hundreds of times.

Figure 6.1 Schematic of a Universal Motor
(adapted from G.S.Electric, 1997)

For a universal motor, the wraps of wire around the armature and the field are wired in

series, which means that the same current is applied to both sets of wire. As current passes

through the field windings, a large magnetic field is generated, which passes through the metal of

the field, across an air gap between the field and the armature, then through the armature

windings, through the shaft of the armature, across another air gap, and back into the metal of

the field, thus completing a magnetic circuit.

However, when the magnetic field passes through the armature windings, which are

themselves carrying current, the magnetic field exerts a force on the current carrying wires,

which is in the direction of the cross product of the vector direction of the current in the

armature windings and the vector direction of the magnetic field. Because of the geometry of

the windings, current on one side of the armature always is passing in the opposite direction to

the current on the other side of the armature. Thus, the force exerted by the magnetic field on

one side of the armature is opposite to the force exerted on the other side of the armature.

Thereby a net torque is exerted on the armature, causing the armature to spin within the field.

The reader is referred to (Chapman, 1991) or any physics textbook (e.g., Tipler, 1991) to

learn more about how an electric motor operates. The nomenclature for the universal electric

motor is listed in Table 6.1.

Table 6.1 Nomenclature for Universal Motors

a          Number of current paths on the armature
Aa         Area between a pole and the armature [mm²]
Awa        Cross-sectional area of the wires on the armature [mm²]
Awf        Cross-sectional area of the wires on the field [mm²]
B          Magnetic field strength generated by the current in the field windings [Tesla, T]
H          Magnetizing intensity [Ampere·turns/m]
I          Electric current [Amperes]
K          Motor constant [n.m.u.]
lr         Diameter of the armature [m]
lgap       Length of the air gap [m]
lc         Mean path length within the stator [m]
L          Stack length [m]
m          Plex of the armature winding [n.m.u.]
M          Mass [kg]
Nc         Number of turns of wire on the armature
Ns         Number of turns of wire on the field, per pole
parmature  Number of poles on the armature
pfield     Number of poles on the field
P          Gross power output [W]
ro         Outer radius of the stator [m]
Ra         Resistance of the armature windings [Ohms]
Rs         Resistance of the field windings [Ohms]
t          Thickness of the stator [m]
T          Torque [N·m]
Vt         Terminal voltage [Volts, V]
Z          Number of conductors on the armature
η          Efficiency [n.m.u.]
μsteel     Relative permeability of steel [n.m.u.]
μo         Permeability of free space [Henrys/m]
μair       Relative permeability of air [n.m.u.]
ρ          Resistivity of copper [Ohm·m]
ρcopper    Density of copper [kg/m³]
ρsteel     Density of steel [kg/m³]
φ          Magnetic flux [Webers, Wb]
ℱ          Magnetomotive force [Ampere·turns]
ω          Rotational speed [rad/sec]
ℛ          Total reluctance of the magnetic circuit [Ampere·turns/Wb]
ℛs         Reluctance of the stator [Ampere·turns/Wb]
ℛa         Reluctance of one air gap [Ampere·turns/Wb]
ℛr         Reluctance of the armature [Ampere·turns/Wb]

6.1.2 Relevant Analyses for Universal Motor Problem

A universal motor is the same as a direct current (DC) series motor; however, in order

to minimize certain kinds of power losses within the core of the motor when operating on AC

power, a universal motor is constructed with slightly thinner laminations in both the field and the

armature and fewer field windings. Nevertheless, the governing electromagnetic equations for the

operation of a series DC motor and a universal motor running on DC current are identical

(Chapman, 1991). The performance at full-load torque of a universal motor running on AC

current is only slightly less than the performance of the same motor running on DC current, see

Figure 6.2. This discrepancy in performance is due to losses caused by the inherent oscillation

in alternating current (AC); for an overview of the extra losses associated with AC operation,

see, e.g., (Unnewehr, 1983).

Figure 6.2 Comparison of the Torque-Speed Characteristics of a Universal Motor
Rated at 1/4 Hp and 8000 rpm when Operating on AC and DC Power Supplies (Martin,
1986)

These extra losses incurred in AC operation of a universal motor are difficult, if not

impossible, to model analytically; thus, complicated finite element analyses are becoming more

popular for modeling motor behavior under AC current. Since such a detailed analysis is

beyond the scope of this work, the derived model for the performance of the universal motor is

for DC operation for which simple analytical expressions are known or can be derived.

Moreover, several texts indicate that the performance of universal motors under AC and DC

conditions is quite comparable and include diagrams such as the one reproduced in Figure 6.2

(see, e.g., Chapman, 1991; Martin, 1986; Shultz, 1992; Unnewehr, 1983); Shultz (1992)

states that “Universal motors...will operate either on DC or AC up to 60 Hz. Their

performance will be essentially the same when operated on DC or AC at 60 Hz.” The sample

torque-speed curves in Figure 6.2 graphically illustrate this, showing that for one specific motor,

the performance characteristics between AC and DC operation do not deviate significantly until

well past the full-load torque of the motor. For this work, all motors are designed for operation

at full-load torque. Thus, it is assumed that designing a universal motor under DC conditions

yields satisfactory performance under AC conditions as well.

The model takes as input the design variables {Nc, Ns, Awa, Awf, ro, t, lgap, I, Vt, L} and

returns as output the power (P), torque (T), mass (M), and efficiency (η) of the motor. To

formulate the model, it is necessary to derive equations for P, T, M, and η as functions of the

design variables. The equations are based primarily on those given in (Chapman, 1991) and

(Cogdell, 1996) for DC electric motors unless otherwise noted.

Power

The basic equation for power output of a motor is the input power minus losses:

P = Pin - Plosses [6.1]

where the input power is the product of the voltage and the current,

Pin = VtI [6.2]

and, for a universal motor, power is lost:

• in the copper wires as they heat-up (copper losses),

• at the interface between the brushes and the armature (brush losses),

• in the core due to hysteresis and eddy currents (core losses),

• in mechanical friction in the bearings supporting the rotor (mechanical losses),

• in heating up the core and copper wires which adversely affects the magnetic
properties of the core and the current carrying ability of the wires (thermal losses),
and

• due to stray losses (stray losses).

Simple analytic expressions only exist for the copper losses and the brush losses. Stray losses

usually are assumed to be no more than one percent, and thus can be neglected. Mechanical

losses can be minimized by an appropriate choice of bearing and housing arrangement;

however, these variables are beyond the scope of the motor model itself. Hence mechanical

losses are neglected. Core losses, especially those incurred by eddy currents, can be minimized

by the use of thin laminations in the stator and rotor; assuming this is done, the core losses can

be assumed to be small and thus can be neglected. Thermal losses are in general non-negligible,

but are highly dependent upon the external cooling scheme (e.g., cooling fan and fins on the

housing) applied to the motor. Because an effective cooling scheme can keep the motor from

running too hot, and as the setup of the cooling configuration is beyond the scope of this model,

thermal losses are neglected. The combined effects of all the aforementioned neglected losses

will, however, decrease the actual output power and efficiency below the values predicted by the

model. Nevertheless, the following equations serve as a sufficiently accurate model for the DC

operation of a universal motor. Consequently, the general equation for power losses reduces

from,

Plosses = Pcopper + Pbrush + Pthermal + Pcore + Pmechanical + Pstray [6.3]

to a more manageable:

Plosses = Pcopper + Pbrush [6.4]

where

Pcopper = I²(Ra + Rs) [6.5]

and

Pbrush = Vbd I [6.6]

where Vbd, the voltage drop across the brushes, is typically 2 volts. Substituting these expressions into the power equation yields:

P = VtI - I²(Ra + Rs) - 2I [6.7]

However, Ra and Rs, the resistances of the armature and field windings, can be specified further

as functions of the design variables. The resistances Ra and Rs can be computed directly from

the general equation that the resistance of any wire is given by:

Resistance = (Resistivity)(Length) / (Cross-sectional Area) [6.8]

Assuming that each wrap (i.e., turn) of wire on the armature is approximately the shape of a

rectangle with length L (the stack length of the motor) and width lr (the diameter of the armature)

then in terms of the physical dimensions of the motor, lr can be expressed as two times the

radius of the armature, which is just the outer radius of the stator minus the thickness of the

stator minus the air gap length, or,

lr = 2(ro - t - lgap) [6.9]

so that the length of one wrap of wire on the armature is:

Lengthone wrap = 2L + 2lr = 2L + 4(ro - t - lgap) [6.10]

The total length of wire on the armature is the length of one wrap times the total number of wraps

on the armature, Nc, so that the resistance of the armature, Ra, is

Ra = ρ(2L + 4(ro − t − lgap))Nc/Aarmature_wire [6.11]

Similarly, assuming that each wrap of wire on the field is approximately the shape of a rectangle

with length L (the stack length of the motor) and width double the inner radius of the stator (ro-

t), then the resistance of the stator, Rs, is:

Rs = ρ(pfield)(2L + 4(ro − t))Ns/Afield_wire [6.12]

However, the purpose of the field windings is to create a magnetic field across the armature, thus

requiring two field poles, one for the “North” end of the magnetic field and one for the “South”

end. Thus, pfield is 2, and Equation 6.12 becomes Equation 6.13 which is:

Rs = ρ(2)(2L + 4(ro − t))Ns/Afield_wire [6.13]

Now that Ra and Rs are expressed in terms of the design variables in Equations 6.11 and 6.13,

the power equation is complete.
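As a concreteness check on the algebra, the loss and resistance expressions can be transcribed directly into code. The sketch below is ours, not the dissertation's: it assumes SI units, an illustrative copper resistivity, and a 2 V brush drop as stated above.

```python
RHO_CU = 1.69e-8  # assumed resistivity of copper wire [ohm*m]

def winding_resistances(L, ro, t, l_gap, Nc, Ns, A_wa, A_wf):
    """Armature and field winding resistances, Equations 6.11 and 6.13 (SI units)."""
    Ra = RHO_CU * (2*L + 4*(ro - t - l_gap)) * Nc / A_wa   # Eq. 6.11
    Rs = RHO_CU * 2 * (2*L + 4*(ro - t)) * Ns / A_wf       # Eq. 6.13, p_field = 2
    return Ra, Rs

def output_power(Vt, I, Ra, Rs):
    """Output power, Equation 6.7: input power minus copper and brush losses."""
    return Vt*I - I**2*(Ra + Rs) - 2.0*I  # brush drop taken as 2 V
```

Efficiency (Equation 6.14, below) then follows as output_power(...)/(Vt*I).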

Efficiency

The equation for efficiency can be computed directly from the equation for power. The basic

equation for efficiency, expressed as a decimal and not a percentage, is given by:

η = P/Pin [6.14]

where P and Pin are given by Equations 6.2 and 6.7.

Mass

For the purpose of estimating the mass of the motor, it is modeled as a solid steel cylinder with

length L and radius lr/2 for the armature and a hollow steel cylinder with length L, outer radius ro

and inner radius (ro-t) for the stator. The mass of the windings on both the armature and the

field are also included, where the length of each winding is the same as those assumed for the

derivation of the power equation, see Equation 6.10. Thus the equation for mass is of the form:

Mass = Mstator + Marmature + Mwindings [6.15]

where:

Mstator = π(ro² − (ro − t)²)(L)(ρsteel) [6.16]

Marmature = π(ro − t − lgap)²(L)(ρsteel) [6.17]

Mwindings = (Nc(2L + 4(ro − t − lgap))Awa + 2Ns(2L + 4(ro − t))Awf)ρcopper [6.18]

Using Equations 6.16-6.18 for Mstator, Marmature, and Mwindings, the mass of the motor, Equation

6.15, can be estimated from the design variables.
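The mass model can be sketched the same way. The steel and copper densities below are typical handbook values, not values given in the text, and the function name is ours.

```python
import math

RHO_STEEL = 7850.0   # assumed density of steel [kg/m^3]
RHO_COPPER = 8960.0  # assumed density of copper [kg/m^3]

def motor_mass(L, ro, t, l_gap, Nc, Ns, A_wa, A_wf):
    """Total motor mass, Equations 6.15-6.18 (SI units)."""
    m_stator = math.pi*(ro**2 - (ro - t)**2)*L*RHO_STEEL        # Eq. 6.16
    m_armature = math.pi*(ro - t - l_gap)**2*L*RHO_STEEL        # Eq. 6.17
    m_windings = (Nc*(2*L + 4*(ro - t - l_gap))*A_wa
                  + 2*Ns*(2*L + 4*(ro - t))*A_wf)*RHO_COPPER    # Eq. 6.18
    return m_stator + m_armature + m_windings
```

With these assumed densities and the platform values reported later in Table 6.4, the sketch returns a mass close to the 0.751 kg listed there.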

Torque

The last equation to derive is an equation for torque. In general, the torque of a DC motor is

given by:

T = KφI [6.19]

where K is a motor constant, φ is the magnetic flux, and I is the current. For a DC motor, K is

computed as:

K = (Z)(parmature)/(2πa) [6.20]

where Z, the number of conductors on the armature, is:

Z = 2Nc [6.21]

and a, the number of current paths on the armature, is:


a = 2m = 2 [6.22]

assuming a simplex (m = 1) wave winding on the armature. Since the number of armature poles

on a universal motor is almost invariably two (cf., Martin, 1986), or,

parmature = 2 [6.23]

K can be reduced to:

K = (2Nc)(2)/(2π(2)) = Nc/π [6.24]

The derivation of the flux term, φ, is significantly more complicated. To begin, consider

the idealized DC motor shown in Figure 6.3a with its corresponding magnetic circuit shown in

Figure 6.3b. As shown in the figure, N is the number of turns on the stator (which is equal to

2Ns for the model being derived), I is the current, A is the cross-sectional area of the stator, lr is

the diameter of the armature, lg is the gap length, and lc is the mean magnetic path length in the

stator.

(a) Physical model (b) Magnetic circuit

Figure 6.3 An Idealized DC Motor (Chapman, 1991)

In general the equation for flux through a magnetic circuit is simply the magnetomotive

force, ℱ, divided by the total reluctance of the circuit, ℛ:

φ = ℱ/ℛ [6.25]

where the magnetomotive force, ℱ, is simply the number of turns around one pole of the field

times the current:

ℱ = NsI [6.26]

The total reluctance, ℛ, is calculated from the magnetic circuit shown in Figure 6.3b.

For a magnetic circuit, reluctances in series add just like resistors in series in an electric circuit;

therefore, the total reluctance in the idealized DC motor is the sum of the reluctances of the

stator, rotor, and two air gaps:

ℛ = ℛs + ℛr + 2ℛa [6.27]

where, in general, reluctance is calculated as:

ℛ = Length/((Permeability)(Areacross-section)) [6.28]

When permeability, μ, is expressed as the relative permeability of the material times the

permeability of free space, μo, the reluctances of the stator, rotor, and air gap are:

ℛs = lc/(μsteelμoAs),  ℛr = lr/(μsteelμoAr),  ℛa = la/(μairμoAa) [6.29]

In order to approximate more closely a universal motor for this example, the idealized

DC motor geometry shown in Figure 6.3 is modified to be more representative of a real

universal motor. The resulting model geometry is shown in Figure 6.4a and is described by the

outer radius of the stator, ro, the thickness of the stator, t, the diameter of the armature, lr, the

length of the air gap, lgap, and the stack length, L. The resulting magnetic circuit is shown in

Figure 6.4b; notice that the magnetic circuit for the idealized DC motor and the magnetic circuit

for a universal motor are different, because in a universal motor there are two paths which the

magnetic flux can take around the stator, i.e., clockwise and counter-clockwise. These two

paths are in parallel and thus are included in the magnetic circuit as two parallel flux paths.

Reluctances in parallel in a magnetic circuit act like resistors in parallel in an electric circuit, so

that the combined reluctance of two identical reluctances in parallel is simply one half the

reluctance of either path. Therefore, for a universal motor:

ℛs = lc/(2μsteelμoAs),  ℛr = lr/(μsteelμoAr),  ℛa = la/(μairμoAa) [6.30]

so that Equation 6.27 for the total reluctance, ℛ, still holds.

(a) Physical model (b) Magnetic circuit

Figure 6.4 Model Geometry for a Universal Motor

In Equation 6.30, the mean magnetic path length in the stator, lc, is taken to be one half

the mean circumference of the hollow stator cylinder, or,

lc = π(2ro + t)/2 [6.31]

The cross-sectional area of the stator, As, is taken to be the thickness of the stator times the

stack length, or,

As = (t)(L) [6.32]

The cross-sectional area of the armature is taken to be approximately the diameter of the

armature times the stack length:

Ar = (lr)(L) [6.33]

The cross-sectional area of the air gap is the length of the air gap times the stack length:

Aa = (lgap)(L) [6.34]

The last expression needed for the calculation of reluctance is the relative permeability

of the stator and the armature. For the purposes of this model, both the stator and the armature

are assumed to be made of steel; the relative permeability versus magnetizing intensity curve

for a typical piece of steel is shown in Figure 6.5.

Figure 6.5 Relative Permeability Versus Magnetizing Intensity for a Typical Piece of
Steel (Chapman, 1991)

The curve is divided into three regions, and each section is fit with an appropriate

numerical expression in order to include the curve shown in Figure 6.5 in the model. The curve

fits used are as follows:

μr = −0.2279H² + 52.411H + 3115.8    H ≤ 220

μr = 11633.5 − 1486.33 ln(H)    220 < H ≤ 1000

μr = 1000    H > 1000 [6.35]

where, from Ampere's Law, the magnetizing intensity, H, is given by,

H = NcI/(lc + lr + 2lgap) [6.36]

The relative permeability of air, μair, is taken as unity, and the permeability of free space is a

constant, μo = 4π × 10⁻⁷. Now with expressions for K, φ, ℱ, ℛs, ℛr, ℛa, lc, lr, As, Ar, Aa, and

μsteel in terms of the design variables, the torque equation is complete.
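The torque chain of Equations 6.19 through 6.36 can be collected into one routine. The sketch below is a literal transcription of the equations as written (SI units), useful for checking trends such as torque growing with stack length; it is not a validated motor code, and the function names are ours.

```python
import math

MU0 = 4*math.pi*1e-7  # permeability of free space [H/m]

def mu_r_steel(H):
    """Relative permeability of steel, piecewise curve fit of Equation 6.35."""
    if H <= 220:
        return -0.2279*H**2 + 52.411*H + 3115.8
    if H <= 1000:
        return 11633.5 - 1486.33*math.log(H)
    return 1000.0

def torque(Nc, Ns, I, ro, t, l_gap, L):
    """Torque T = K*phi*I (Eq. 6.19), assembled from Eqs. 6.24-6.36 (SI units)."""
    lr = 2*(ro - t - l_gap)              # armature diameter, Eq. 6.9
    lc = math.pi*(2*ro + t)/2            # mean stator path length, Eq. 6.31
    As, Ar, Aa = t*L, lr*L, l_gap*L      # cross-sections, Eqs. 6.32-6.34 as written
    H = Nc*I/(lc + lr + 2*l_gap)         # magnetizing intensity, Eq. 6.36
    mu_r = mu_r_steel(H)
    R_stator = lc/(2*mu_r*MU0*As)        # two parallel stator paths, Eq. 6.30
    R_rotor = lr/(mu_r*MU0*Ar)
    R_gap = l_gap/(MU0*Aa)               # mu_air taken as unity
    flux = Ns*I/(R_stator + R_rotor + 2*R_gap)  # Eqs. 6.25-6.27
    K = Nc/math.pi                       # motor constant, Eq. 6.24
    return K*flux*I
```

Because every flux-path area scales with L, the total reluctance scales as 1/L and the torque is directly proportional to stack length, which is the property exploited by the scaling strategy in the next section.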

This completes the mathematical model for the universal motor, and the PPCEM now

can be implemented to design a family of universal motors around a common product platform.

The initial steps of the PPCEM, Steps 1 and 2, are outlined in the next section.

6.2 STEPS 1 AND 2: CREATE MARKET SEGMENTATION GRID AND


CLASSIFY FACTORS FOR UNIVERSAL MOTOR PLATFORM

With a given set of performance requirements and the model derived in Section 6.1.3,

the first step in implementing the PPCEM is to create the market segmentation grid to identify

and map which type of leveraging can be used to meet the overall design requirements and

realize the desired product platform and product family. The market segmentation grid shown in

Figure 6.6 depicts the desired leveraging strategy for this universal motor example. The goal is

to design a motor platform which can be leveraged vertically for different market segments

which are defined by the torque needs of each market, following in the footsteps of the Black &

Decker universal motor example (Lehnerd, 1987) discussed in Section 1.1.1.

In this specific example, ten instantiations of the motor are to be considered; moreover, in order

to reduce cost, size, and weight, it is supposed the best motor is the one that satisfies its

performance requirements with the least overall mass and greatest efficiency. Standardized

interfaces will ensure horizontal leveraging across market segments; however, only vertical

leveraging is considered in this example.

[Figure: grid with Low End, Mid-Range, and High End segments on the vertical axis and the Lawn & Garden, Power Tools, and Kitchen Appliances markets on the horizontal axis; the universal motor platform spans the markets and is scaled vertically using the functional parametric scale factor torque = f(length).]

Figure 6.6 Universal Motor Market Segmentation Grid

Having created the market segmentation grid and identified an appropriate leveraging

strategy and scale factor, Step 2 in the PPCEM is to classify the factors of interest within the

universal motor problem. The design variables (i.e., control factors) and corresponding ranges

of interest in this study are as follows:

1. Number of turns of wire on the armature (100 ≤ Nc ≤ 1500 turns)

2. Number of turns of wire on each field pole (1 ≤ Ns ≤ 500 turns)

3. Cross-sectional area of the wire used on the armature (0.01 ≤ Awa ≤ 1.0 mm²)

4. Cross-sectional area of the wire used on the field poles (0.01 ≤ Awf ≤ 1.0 mm²)

5. Radius of the motor (0.01 ≤ ro ≤ 0.10 m)

6. Thickness of the stator (0.0005 ≤ t ≤ 0.10 m)

7. Current drawn by the motor (0.1 ≤ I ≤ 6.0 Amp)

The terminal voltage, Vt, is fixed at 115 volts to correspond to standard household

voltage, and the length of the air gap, lgap, is set to 0.7 mm which is taken to be the minimum

possible air gap length. The minimum air gap length is fixed because the performance equations

derived in Section 6.1.3 indicate that minimizing the air gap length maximizes torque and

minimizes mass with no effect on the other performance measures.

Following in the footsteps of the Black & Decker example, the stack length, L, is the

scale factor for the product family primarily because of its importance in the torque equation,

Equation 6.19, derived in Section 6.1.3, i.e., torque is directly proportional to stack length. To

increase torque across the platform, the stack length of the motors is increased while keeping the

other physical parameters (e.g., the outer radius and the thickness) unchanged. Furthermore, it

is assumed that the greatest manufacturing cost savings can be achieved by exploiting the fact

that only the stack length of the motors varies while still providing a variety of torque and power

ratings. The initial range of interest for stack length is taken to be 1 to 20 centimeters; specific

instantiations are computed in Step 5 so as to meet the desired torque requirements for each

platform derivative.

There are a total of six responses (i.e., goals and constraints) which are of interest for

each motor. The constraint values in Table 6.2 and the goal targets in Table 6.3 are

assumed to define the market niche for each motor.

Table 6.2 Constraints for Universal Motor Product Family

Constraints Value
Magnetizing intensity, H H < 5000
Feasible geometry ro > t
Power of each motor, P P = 300 W
Efficiency of each motor, η η ≥ 0.15
Mass of each motor, M M ≤ 2.0 kg

The constraint on magnetizing intensity ensures that the magnetic flux within the motor

does not exceed the physical flux carrying capacity of the steel (Chapman, 1991). The

constraint on feasible geometry ensures that the thickness of the stator does not exceed the

radius of the stator, since the thickness is measured from the outside of the motor inward, as

indicated in Figure 6.4a. The desired power for each motor is 300 W which is treated as an

equality constraint to ensure that design variable settings are selected to match this requirement

exactly. A minimum allowable efficiency of 15% and a maximum allowable mass of 2.0 kg are

assumed to define a feasible motor. The efficiency and mass goal targets for each motor are

listed in Table 6.3 along with the desired torque requirement for each motor.

Table 6.3 Goal Targets for Universal Motor Platform

Goal
Motor Torque [Nm] Mass [kg] Efficiency
1 0.05 0.50 0.70
2 0.10 0.50 0.70
3 0.125 0.50 0.70
4 0.15 0.50 0.70
5 0.20 0.50 0.70
6 0.25 0.50 0.70
7 0.30 0.50 0.70
8 0.35 0.50 0.70
9 0.40 0.50 0.70
10 0.50 0.50 0.70

For the purpose of illustration, the relationship between the design variables, the scale

factor, and the responses is shown in the P-Diagram in Figure 6.7.

[Figure: P-Diagram with control factors X (number of wire turns on the armature, number of wire turns on each field pole, armature wire cross-sectional area, field wire cross-sectional area, motor radius, stator thickness, and current drawn) and scale factor S (stack length) feeding the universal motor model, which produces the responses Y (power, torque, mass, efficiency, feasible geometry, and magnetizing intensity).]

Figure 6.7 P-Diagram for the Universal Motor Example

This concludes Steps 1 and 2 of the PPCEM. Step 3 is not utilized in this particular

example since it is possible to derive expressions for the mean and variance of each response due to

scaling the stack length as described in the next section. The next step, then, is Step 4 which is

to formulate an appropriate compromise DSP for the family of universal motors.

6.3 STEP 4: AGGREGATE PRODUCT SPECIFICATIONS AND FORMULATE


UNIVERSAL MOTOR PLATFORM COMPROMISE DSP

The corresponding compromise DSP formulation for the universal motor product

platform is listed in Figure 6.8. In summary, there are nine design variables, seven constraints,

and two objectives. The two objectives (minimize mass to its target and maximize efficiency to

its target) are assumed to have equal importance to the design, and are thus weighted equally in

the problem formulation.

Given:
• Parametric (vertical) scale factor: stack length
• Universal motor model analysis equations, Section 6.1.2

Find:
• The system variables, x:
  – Number of turns on the armature, Nc
  – Number of turns on each pole on the field, Ns
  – Cross-sectional area of the wire on the armature, Awa
  – Cross-sectional area of the wire on the field, Awf
  – Thickness of the stator, t
  – Current drawn by the motor, I
  – Radius of the motor, r
  – Mean of stack length, µL
  – Standard deviation of stack length, σL

Satisfy:
• The system constraints:
  – Magnetizing intensity, H: Hmax ≤ 5000
  – Feasible geometry: t < ro
  – Power output, P: P = 300 W
  – Motor efficiency, η: η ≥ 0.15
  – Mass, M: M ≤ 2.0 kg
• Aggregated torque requirements:
  – Mean torque, µT: µT = 0.2425 Nm
  – Standard deviation of torque, σT: σT = 0.13675 Nm
• The bounds on the system variables:
  100 ≤ Nc ≤ 1500 turns    0.5 ≤ t ≤ 10.0 mm
  1 ≤ Ns ≤ 500 turns    0.1 ≤ I ≤ 6.0 A
  0.01 ≤ Awa ≤ 1.0 mm²    1.0 ≤ µL ≤ 10.0 cm
  0.01 ≤ Awf ≤ 1.0 mm²    0.0 ≤ σL ≤ 10.0 cm
  1.0 ≤ ro ≤ 10.0 cm

Minimize:
• Mean mass, target: M = 0.50 kg

Maximize:
• Mean efficiency, target: η = 0.70

Figure 6.8 Universal Motor Product Platform Compromise DSP Formulation for Use
with OptdesX

The aggregated mean torque, µT, and standard deviation, σT, are calculated as the

sample mean and standard deviation of the set of torque requirements {0.05, 0.1, 0.125, 0.15,

0.2, 0.25, 0.3, 0.35, 0.4, 0.5} Nm, assuming a uniform distribution. Power, efficiency, and

mass for the family are likewise described by their means and standard deviations, because the

demand for the motors is assumed to be uniformly distributed across the family.

The mean power, mean efficiency, and mean mass are calculated as the power,

efficiency, and mass, respectively, for the mean length. The standard deviation of torque is

approximated using a first-order Taylor series expansion, assuming that the standard deviation is

small (Phadke, 1989):

σT ≈ (∂T/∂µL)σL [6.37]
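In practice Equation 6.37 can be applied with a finite-difference derivative. The helper below is a generic sketch (the name propagate_std is ours, not from the text):

```python
def propagate_std(f, mu_x, sigma_x, h=1e-6):
    """First-order Taylor propagation (Eq. 6.37):
    sigma_f is approximately |df/dx| evaluated at mu_x, times sigma_x."""
    dfdx = (f(mu_x + h) - f(mu_x - h)) / (2.0*h)  # central finite difference
    return abs(dfdx) * sigma_x
```

Since torque is directly proportional to stack length in this model, the derivative reduces to T/µL, and the approximation is exact for that linear case.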

Now that the compromise DSP for the family of universal motors is formulated, Step 5 of the

PPCEM is executed to develop the universal motor platform.

6.4 STEP 5: DEVELOP THE UNIVERSAL MOTOR PLATFORM

For this example problem, the compromise DSP formulated in Section 6.3 is solved

using the Generalized Reduced Gradient (GRG) algorithm in OptdesX. For a thorough

explanation of OptdesX and the GRG algorithm, see, e.g., (Parkinson, et al., 1998). The

OptdesX software package is used instead of DSIDES in this example since implementation of

the PPCEM is not algorithm dependent.

Note that in order to develop the product portfolio, the compromise DSP is formulated

with goals for mean torque, µT, and standard deviation of torque, σT, which ensures that the

product portfolio can be instantiated for all ten values of torque within the range of the

scale factor specified by the mean and standard deviation, µL and σL, of the stack length. Also

note that the constraint on magnetizing intensity is formulated so that every member of the product

family meets the constraint individually. This is accomplished by

computing a maximum magnetizing intensity which represents the magnetizing intensity for the

largest instantiation of the product family and is simply evaluated at the upper bound of current.

The compromise DSP in Figure 6.8 is solved using three different starting points in OptdesX:

the lower, middle, and upper bounds of the design variables. The best design variable settings

and responses for the motor platform are listed in Table 6.4. The values for the number of

armature turns and field turns have been rounded to the nearest integer.

Table 6.4 Universal Motor Product Platform Solution

Design Variable Value Response Value


Number of armature turns Nc 1062 Torque, mean [Nm] 0.2425
Number of field turns Ns 54 Torque, std. dev. [Nm] 0.137
Wire x-sect area, field [mm²] Awf 0.376 Power, mean [W] 300
Wire x-sect area, armature [mm²] Awa 0.241 Efficiency, mean [%] 60.8
Motor radius [cm] r 2.59 Mass, mean [kg] 0.751
Stator thickness [mm] t 6.66
Current drawn [Amp] I 4.29
Stack length, mean [cm] µL 2.62
Stack length, std dev [cm] σL 1.48

For the purpose of verifying the solution itself, convergence plots for mean mass and

mean efficiency are presented in Figure 6.9 for the high, middle, and low starting points used in

OptdesX. Excellent convergence is shown in both plots.

(a) Mass, mean (b) Efficiency, mean

Figure 6.9 Convergence Plots for the Universal Motor Product Platform

To develop the individual motors within the scaled product family using the product platform

specifications from Table 6.4, the compromise DSP given in Figure 6.8 is modified such that Nc,

Ns, Awa, Awf, r, and t are held constant at the values listed in Table 6.4, and only the current, I,

and stack length, L, are allowed to vary to meet the original set of torque requirements.

Because the mean and standard deviation for stack length have been found for the product

platform, the initial range of interest for stack length now can be discarded in favor of the range

for the product platform. Using the assumption that length is uniformly distributed, the minimum

and maximum bounds on length can be estimated as:

µL − √3 σL ≤ L ≤ µL + √3 σL [6.38]

Substituting the values for mean and standard deviation of stack length shown in Table 6.4, the

new lower and upper bounds of interest for stack length are as follows:

0.057 ≤ L ≤ 5.18 cm
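Both the aggregation of the torque requirements and the recovery of the stack-length range can be reproduced in a few lines. The √3 factor is the standard relation between the half-width and standard deviation of a uniform distribution; the population standard deviation below comes out within 0.2% of the 0.13675 Nm used in the formulation.

```python
import math
import statistics

# Torque requirements for the ten market niches [Nm]
torques = [0.05, 0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5]

mu_T = statistics.mean(torques)       # aggregated mean torque, 0.2425 Nm
sigma_T = statistics.pstdev(torques)  # population standard deviation, about 0.137 Nm

def stack_length_bounds(mu_L, sigma_L):
    """Range of a uniform distribution recovered from its mean and standard
    deviation (Eq. 6.38): half-width = sqrt(3) standard deviations."""
    half = math.sqrt(3.0)*sigma_L
    return mu_L - half, mu_L + half

lo, hi = stack_length_bounds(2.62, 1.48)  # platform values from Table 6.4
```

Plugging in the platform mean and standard deviation from Table 6.4 recovers the 0.057 cm and 5.18 cm bounds quoted above.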

Note that because individual torque goals are being set, the goal for standard deviation of

torque is eliminated in the modified compromise DSP formulation. Also, the constraint on

magnetizing intensity is no longer imposed on any maximum magnetizing intensity but rather on

the individual magnetizing intensity and is evaluated at the current for each motor.

The product platform is instantiated by selecting appropriate values for the scale factor

(stack length) within the range specified by the mean and standard deviation in Table 6.4 for

each desired set of torque and power requirements. The current is also allowed to vary

since it is a dependent variable in the system, i.e., it is the amount of current which is drawn by

the motor such that the given torque and power requirements are met for a given motor

geometry. In terms of the principles of robust design, the values shown in Table 6.4 for the

product portfolio are found such that the goal for mean power is on target, while varying the

current allows the standard deviation of power across the instantiated product family to be zero.

The modified compromise DSP for the product family is shown in Figure 6.10 and is again

solved using OptdesX from three different starting points.

Given:
• Configuration scale factor: stack length
• Universal motor model equations
• Platform settings for Nc, Ns, Awa, Awf, r, and t (Table 6.4)

Find:
• The system variables, x:
  – Stack length, L
  – Current drawn by the motor, I

Satisfy:
• The system constraints:
  – Magnetizing intensity, H: H ≤ 5000
  – Feasible geometry: t < ro
  – Power output, P: P = 300 W
  – Motor efficiency, η: η ≥ 0.15
  – Mass, M: M ≤ 2.0 kg
• Individual torque requirements:
  – Torque, T: T = {0.05, 0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5} Nm
• The bounds on the system variables:
  0.1 ≤ I ≤ 6.0 A
  0.057 ≤ L ≤ 5.18 cm

Minimize:
• Mass, target: M = 0.50 kg

Maximize:
• Efficiency, target: η = 0.70

Figure 6.10 Compromise DSP Formulation for Instantiating the PPCEM Platform for
Use with OptdesX

The resulting values for current and stack length of each motor (PPCEM platform

instantiation) are listed in Table 6.5 along with the corresponding response values. Notice that

the stack length varies from 0.865 cm to 2.99 cm in order to meet the desired torque and

power requirements. The resulting current drawn by the system ranges from 3.39 Amps to

5.82 Amps which is slightly high but acceptable for a motor with such a large torque. Finally,

notice that only three motors meet the desired efficiency target of 70%, all at the

low end, and only one motor achieves the mass target of 0.5 kg.

Table 6.5 Universal Motor Product Family PPCEM Instantiations

Product Specifications (Design Variables) Responses


Awf Awa I r t L T P η M
Motor Nc Ns [mm²] [mm²] [Amp] [cm] [mm] [cm] [Nm] [W] [%] [kg]
1 1062 54 0.376 0.241 3.39 2.59 6.66 0.865 0.05 300 76.8 0.380
2 ″ ″ ″ ″ 3.62 ″ ″ 1.53 0.10 ″ 72.2 0.520
3 ″ ″ ″ ″ 3.73 ″ ″ 1.79 0.125 ″ 70.0 0.576
4 ″ ″ ″ ″ 3.85 ″ ″ 2.02 0.15 ″ 67.9 0.625
5 ″ ″ ″ ″ 4.08 ″ ″ 2.39 0.20 ″ 63.9 0.703
6 ″ ″ ″ ″ 4.33 ″ ″ 2.66 0.25 ″ 60.2 0.759
7 ″ ″ ″ ″ 4.59 ″ ″ 2.83 0.30 ″ 56.8 0.797
8 ″ ″ ″ ″ 4.87 ″ ″ 2.94 0.35 ″ 53.6 0.820
9 ″ ″ ″ ″ 5.16 ″ ″ 2.99 0.40 ″ 50.5 0.830
10 ″ ″ ″ ″ 5.82 ″ ″ 2.95 0.50 ″ 44.8 0.820
(″ denotes the same value as Motor 1, i.e., the common platform setting)

It is uncertain whether the failure to achieve the desired mass and efficiency targets is a

property of the system itself or a result of using the PPCEM. Therefore, a family of individually

designed universal motors is developed in the next section to provide a benchmark for

comparison. The differences between this family of benchmark motors and the PPCEM

instantiations are then compared to verify the PPCEM solutions.

6.5 RAMIFICATIONS OF THE RESULTS OF THE ELECTRIC MOTOR
EXAMPLE PROBLEM

6.5.1 Development of a Benchmark Universal Motor Family

In order to generate a family of benchmark motors to compare with the PPCEM family

of motors, the compromise DSP presented in Figure 6.10 is modified such that Nc, Ns, Awa,

Awf, r, and t are all design variables in addition to I and L. The resulting compromise

DSP is shown in Figure 6.11. This compromise DSP is solved using OptdesX for each of the

ten power and torque ratings. Three different starting points—lower, middle, and upper

bounds—are used to solve the compromise DSP for each motor.

Given:
• Universal motor model analysis equations, Section 6.1.2

Find:
• The system variables, x:
  – Number of turns on the armature, Nc
  – Number of turns on each pole on the field, Ns
  – Cross-sectional area of the wire on the armature, Awa
  – Cross-sectional area of the wire on the field, Awf
  – Thickness of the stator, t
  – Radius of the motor, r
  – Current drawn by the motor, I
  – Stack length, L

Satisfy:
• The system constraints:
  – Magnetizing intensity, H: H ≤ 5000
  – Feasible geometry: t < ro
  – Power output, P: P = 300 W
  – Motor efficiency, η: η ≥ 0.15
  – Mass, M: M ≤ 2.0 kg
• Individual torque requirement:
  – Torque, T: T = {0.05, 0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5} Nm
• The bounds on the system variables:
  100 ≤ Nc ≤ 1500 turns    1.0 ≤ ro ≤ 10.0 cm
  1 ≤ Ns ≤ 500 turns    0.5 ≤ t ≤ 10.0 mm
  0.01 ≤ Awa ≤ 1.0 mm²    0.1 ≤ I ≤ 6.0 A
  0.01 ≤ Awf ≤ 1.0 mm²    0.057 ≤ L ≤ 5.18 cm

Minimize:
• Mass, target: M = 0.50 kg

Maximize:
• Efficiency, target: η = 0.70

Figure 6.11 Compromise DSP Formulation for Benchmark Universal Motor Family for
Use with OptdesX

The resulting design variable settings and responses for each benchmark motor are

summarized in Table 6.6. Compared to the PPCEM solutions listed in Table 6.5, the number of

armature turns, Nc, is generally lower than the PPCEM platform specification and the number of

field turns, Ns, is slightly higher. The cross-sectional area of the field wire, Awf, is lower than

the PPCEM platform specification; however, the PPCEM platform value for Awa, armature wire

cross-sectional area, is contained within the range observed for the benchmark motors.

Similarly, the ranges for motor radius, r, and thickness, t, for the benchmark motors both span

the values of the PPCEM platform specifications. These motors draw less current—a maximum

of 4.71 Amps—compared to the PPCEM family of motors which draw as much as 5.82 Amps

for the equivalent motor. Finally, note that the range of stack lengths of the benchmark motors

are comparable to the range of stack lengths found using the PPCEM.

Table 6.6 Benchmark Universal Motor Specifications

Product Specifications (Design Variables) Responses


Awf Awa I r t L T P η M
Motor Nc Ns [mm²] [mm²] [Amp] [cm] [mm] [cm] [Nm] [W] [%] [kg]
1 730 45 0.205 0.203 3.65 3.62 9.69 0.998 0.05 300 71.4 0.500
2 750 76 0.203 0.186 3.73 3.31 11.77 1.28 0.10 ″ 70.6 0.500
3 760 89 0.203 0.190 3.73 3.12 11.20 1.41 0.125 ″ 70.0 0.500
4 785 95 0.205 0.205 3.70 2.82 8.88 1.63 0.15 ″ 70.5 0.500
5 988 74 0.217 0.241 3.84 2.26 5.75 2.38 0.20 ″ 67.9 0.558
6 1007 73 0.224 0.246 4.02 2.35 6.17 2.61 0.25 ″ 64.9 0.639
7 1030 73 0.230 0.253 4.19 2.44 6.35 2.74 0.30 ″ 62.2 0.712
8 1056 73 0.237 0.260 4.36 2.51 6.46 2.81 0.35 ″ 59.8 0.777
9 1082 72 0.243 0.267 4.53 2.58 6.67 2.87 0.40 ″ 57.7 0.837
10 1087 72 0.247 0.284 4.71 2.71 7.15 3.16 0.50 ″ 55.3 0.985
(″ denotes 300 W, the same value as Motor 1)

Regarding the performance of each motor, the desired torque and power requirements

are achieved by each motor; moreover, more of the benchmark motors achieve the mass and

efficiency targets of 0.5 kg and 70%. Unlike the PPCEM family of motors, four of the

benchmark motors achieve the efficiency target of 70%, and four of the motors achieve the

mass target of 0.5 kg. A closer comparison of the performance of the individual motors is

offered in Table 6.7.

For the purpose of validating the solutions themselves, convergence plots for mass and

efficiency for the 0.25 Nm benchmark motor are presented in Figure 6.12 for the high,

middle, and low starting points. Fairly good convergence is observed in each graph; however,

the final value of efficiency from the high starting point is slightly worse than the final values of

efficiency from the low and middle starting points. Therefore, in situations where all three

points do not converge to the same final value, only the best solution is reported.

(a) Mass (b) Efficiency

Figure 6.12 Convergence Plots for 0.25 Nm Benchmark Motor

6.5.2 Comparison between the Benchmark Universal Motor Family and the PPCEM
Motor Family

In the previous section, a family of individually designed benchmark motors is created to

compare against the performance of the PPCEM family of universal motors found in Section

6.4. As shown in Table 6.5 and Table 6.6, both families meet their goals for both power and

torque; however, their responses for efficiency and mass differ. The efficiency and mass of each

motor within the benchmark family and the PPCEM family are repeated in Table 6.7 along with

the percentage difference of each response from the benchmark to the PPCEM. For efficiency,

a positive change denotes an improvement from the benchmark to the PPCEM; for mass, a

negative change denotes an improvement. Finally, note that a motor which has

achieved its target mass (0.5 kg) and efficiency (70%) is considered to be equivalent to a motor

with a mass which is lower than the target or an efficiency which is higher than the target.

Table 6.7 Comparison of the Responses between the Benchmark Motor Family and
the PPCEM Motor Family

Benchmark Motors PPCEM Motors Percent Difference


Motor η [%] M [kg] η [%] M [kg] η M
1 71.4 0.500 76.8 0.380 equiv. equiv.
2 70.6 0.500 72.2 0.520 equiv. 4.0%
3 70.0 0.500 70.0 0.576 equiv. 15.2%
4 70.5 0.500 67.9 0.625 -3.7% 25.0%
5 67.9 0.558 63.9 0.703 -5.9% 26.0%
6 64.9 0.639 60.2 0.759 -7.2% 18.8%
7 62.2 0.712 56.8 0.797 -8.7% 11.9%
8 59.8 0.777 53.6 0.820 -10.4% 5.5%
9 57.7 0.837 50.5 0.830 -12.5% -0.8%
10 55.3 0.985 44.8 0.820 -19.0% -16.7%
Average change: -6.74% 8.89%
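The "equiv." convention in Table 6.7 can be encoded as a small scoring rule. The rule below, which treats two values that both meet the target as equivalent (0%) and otherwise reports the raw percent change, is inferred from the table entries rather than stated explicitly in the text.

```python
def pct_change(bench, ppcem, target, higher_is_better):
    """Percent change from benchmark to PPCEM for one response.

    If both motors meet the target, they are treated as equivalent (0%),
    matching the 'equiv.' entries in Table 6.7; otherwise the raw percent
    change relative to the benchmark is reported."""
    meets = (lambda v: v >= target) if higher_is_better else (lambda v: v <= target)
    if meets(bench) and meets(ppcem):
        return 0.0
    return 100.0*(ppcem - bench)/bench
```

Applied to the efficiency and mass columns of Table 6.7, this rule reproduces the tabulated differences, e.g., the 4.0% mass penalty of Motor 2 and the -5.9% efficiency change of Motor 5.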

Three of the PPCEM motors have equivalent efficiency ratings to the corresponding

benchmark motor which produces the same torque; however, only the motor with the smallest

torque (Motor #1) is considered to have a mass equivalent to its corresponding benchmark

motor. As tallied at the bottom of Table 6.7, the PPCEM motors lose, on average, about 7% in

efficiency and weigh about 9% more than the family of benchmark motors. Therefore, while the

family of PPCEM motors based on a common product platform scaled in stack length

is able to achieve the desired range of torque and power requirements, it sacrifices, on

average, roughly 7% in efficiency and 9% in mass compared to an equivalent family of

individually designed benchmark motors. At this point, it is up to the judgment of the

engineering designers and managers to decide if the savings (in inventory, manufacturing, etc.)

from having a family of motors based on a common platform and scaled only in stack length

outweighs the sacrifice in mass and efficiency. Meanwhile, an attempt to improve the

performance of the PPCEM family of motors in relation to the benchmark family of motors is

offered in the next section.

6.5.3 Improvements to the PPCEM Motor Family and Lessons Learned

While investigating this example, it was learned that Black & Decker varies more than

just stack length when it scales its universal motors to meet a variety of power ratings. In

addition to increasing the stack length of the motor, they also allow the number of turns in the

field and armature and the cross-sectional area of the wires in the field and armature to vary

from one motor to the next. Careful inspection of any two motors from one of their power tool

lines (say, corded drills) reveals that this is indeed the case.

The question then becomes: how well do the PPCEM instantiations perform if the

number of field and armature turns (Ns and Nc) and the cross-sectional area of the field

and armature wires (Awf and Awa) are allowed to vary in conjunction with varying the

stack length (and current)? The results obtained by solving a new set of compromise DSPs

for each universal motor are listed in Table 6.8. These solutions are obtained by modifying the

compromise DSP in Figure 6.10 to allow Nc, Ns, Awf, and Awa to vary from their platform

settings of 1062, 54, 0.376 mm2, and 0.241 mm2, respectively.

Table 6.8 New PPCEM Universal Motor Instantiations with Varying Numbers of
Turns, Wire Cross-Sectional Areas, and Stack Lengths

Product Specifications (Design Variables) Responses


Motor    Nc    Ns   Awf [mm2]  Awa [mm2]  I [Amp]  r [cm]  t [mm]  L [cm]  T [Nm]  P [W]  η [%]  M [kg]
1       970    41   0.306      0.221      3.49     2.59    6.66    1.18    0.05    300    74.7   0.397
2       981    66   0.306      0.224      3.62     2.59    6.66    1.37    0.10    300    72.1   0.456
3       986    74   0.306      0.225      3.67     2.59    6.66    1.44    0.125   300    71.1   0.477
4       990    82   0.306      0.227      3.72     2.59    6.66    1.51    0.15    300    70.1   0.499
5       999    84   0.307      0.230      3.86     2.59    6.66    1.81    0.20    300    67.5   0.568
6      1064    80   0.359      0.239      4.03     2.59    6.66    2.03    0.25    300    64.6   0.646
7      1135    76   0.309      0.257      4.19     2.59    6.66    2.20    0.30    300    62.2   0.712
8      1166    75   0.282      0.268      4.35     2.59    6.66    2.42    0.35    300    59.9   0.774
9      1195    72   0.280      0.277      4.51     2.59    6.66    2.60    0.40    300    57.7   0.833
10     1242    67   0.286      0.293      4.85     2.59    6.66    2.91    0.50    300    53.8   0.941

Recall that the target for efficiency is 70%, and the target for mass is 0.5 kg. So as

discussed previously, even if a particular motor weighs less than 0.5 kg or has an efficiency

greater than 70%, it is still considered to be equivalent to a motor which is exactly 0.5 kg or has

70% efficiency. With this in mind, the new family of PPCEM motors (allowing the numbers of

turns and wire cross-sectional areas to vary along with stack length and current) and the family

of benchmark motors are essentially identical in terms of performance as can be seen in

Table 6.9. In both families of motors, the necessary torque and power requirements have been

met, and the two sets of motors are compared solely on their respective efficiencies and masses.

The result is that four of the ten motors are equivalent (identical) since they achieve the targets

for mass and efficiency, and the remaining six motors vary by less than 2%. The highest torque

motor in this new PPCEM family is slightly less efficient (-2.7%) than the corresponding

benchmark motor, but it weighs less (-4.5%). This tradeoff is really negligible since more wire

can be wrapped around the field or armature to improve the efficiency with only a slight increase

in mass. Consequently, by allowing the numbers of wire turns and the wire cross-

sectional areas to vary while also scaling the stack length, the resulting family of motors

obtained using the PPCEM is equivalent to the family of individually designed

benchmark motors. This is a very important observation because it indicates that the PPCEM

can be used to obtain a family of motors which sacrifices minimal performance even though the

motors are based, for the most part, on a common platform specification.

Table 6.9 Comparison of Benchmark Designs and New PPCEM Instantiations with
Varying Numbers of Turns, Wire Cross-Sectional Areas, and Stack Lengths

Efficiency (Target = 70.0%) Mass (Target = 0.5 kg)


Benchmark PPCEM Percent Benchmark PPCEM Percent
Motor η [%] η [%] Difference M [kg] M [kg] Difference
1 71.4 74.7 equivalent 0.500 0.397 equivalent
2 70.6 72.1 equivalent 0.500 0.456 equivalent
3 70.0 71.1 equivalent 0.500 0.477 equivalent
4 70.5 70.1 equivalent 0.500 0.499 equivalent
5 67.9 67.5 -0.6% 0.558 0.568 1.8%
6 64.9 64.6 -0.5% 0.639 0.646 1.1%
7 62.2 62.2 0.0% 0.712 0.712 0.0%
8 59.8 59.9 0.2% 0.777 0.774 -0.4%
9 57.7 57.7 0.0% 0.837 0.833 -0.5%
10 55.3 53.8 -2.7% 0.985 0.941 -4.5%
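The "equivalent" bookkeeping used in Tables 6.7 through 6.9 can be sketched as a small helper; this is an illustrative reconstruction of the comparison rule, not code from this work:

```python
def compare(benchmark, ppcem, target, higher_is_better):
    """Report the PPCEM-vs-benchmark difference; any two motors that
    both meet the target are treated as equivalent."""
    meets = (lambda v: v >= target) if higher_is_better else (lambda v: v <= target)
    if meets(benchmark) and meets(ppcem):
        return "equivalent"
    return f"{(ppcem - benchmark) / benchmark:+.1%}"

# Motor 1 efficiency: both exceed the 70% target
print(compare(71.4, 74.7, 70.0, higher_is_better=True))    # equivalent
# Motor 10 efficiency and mass from Table 6.9
print(compare(55.3, 53.8, 70.0, higher_is_better=True))    # -2.7%
print(compare(0.985, 0.941, 0.5, higher_is_better=False))  # -4.5%
```

Applying this rule to each column of Table 6.9 reproduces the entries shown above.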

From an engineering standpoint, does it make sense to let Nc, Ns, Awf, and Awa

vary along with the stack length (and current)? From a manufacturing perspective, it makes

perfect sense to allow the number of turns of wire in the armature and field (Nc and Ns,

respectively) to vary since it costs little extra to wrap more (or fewer) turns when the motor is

being produced. From an inventory perspective, however, it would appear that allowing Awf and

Awa to vary is not cost effective since it requires that multiple wire types (i.e., varying cross-

sectional areas) must be kept in stock in order to produce the family of motors. The justification

for allowing Awf and Awa to vary is as follows. As the stack length of the motor increases (with

everything else being held constant), the torque on the motor increases; however, the power

output actually decreases because the copper losses, given by Equation 6.5, increase since

Ra and Rs increase (see Equation 6.8) as stack length increases.
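This behavior can be illustrated with the generic wire-resistance relation R = ρl/A (a textbook formula used here as a stand-in; Equation 6.8 itself is not reproduced, and the winding lengths and areas below are made-up values):

```python
RHO_CU = 1.72e-8  # resistivity of copper, ohm*m

def winding_resistance(n_turns, turn_length_m, wire_area_m2):
    # R = rho * (total wire length) / (wire cross-sectional area)
    return RHO_CU * n_turns * turn_length_m / wire_area_m2

base = winding_resistance(1000, 0.10, 0.241e-6)     # nominal winding
longer = winding_resistance(1000, 0.12, 0.241e-6)   # longer stack -> longer turns
thicker = winding_resistance(900, 0.12, 0.300e-6)   # fewer turns, thicker wire

# Lengthening the stack raises R (and the I^2*R copper loss at a given
# current); fewer turns of thicker wire pull it back down.
assert longer > base > thicker
```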

One way to compensate for this loss in power is to allow more current to be drawn as

is the case in this example. What is typically done, however, to compensate for this decrease in

power (as the stack length is increased) is to decrease the number of field and armature turns

while simultaneously increasing the field and armature wire cross-sectional areas. This lowers

the resistances Ra and Rs and reduces copper losses without having to draw additional current in

order to maintain the desired output power. An added benefit of this approach is that the

rotational speed of the motor will also increase. In reality, the operating speed of the motor is a

very important design consideration since power and torque are related through the equation:

P = Tω [6.39]

where P is power, T is torque, and ω is the rotational speed of the motor. The speed of the

motor has been neglected in this example since it is fixed once power and torque have been

specified. Based on Equation 6.39, for a fixed power output, as torque increases, the rotational

speed of the motor must decrease. In many cases, however, it is desirable to maintain a

consistent operating speed for the motor as power and torque increase. The additional

inventory costs incurred by stocking a wider variety of wire sizes are offset by this

combination of effects.
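At the family's common 300 W power target, Equation 6.39 fixes the operating speed for each torque level; a quick tabulation (illustrative only) shows how sharply speed falls across the family's torque range:

```python
import math

P = 300.0                             # W, common power target for the family
for T in (0.05, 0.125, 0.25, 0.50):   # Nm, spanning the family's torque range
    omega = P / T                     # rad/s, from P = T*omega
    rpm = omega * 60 / (2 * math.pi)
    print(f"T = {T:5.3f} Nm -> {rpm:6.0f} rpm")
```

The tenfold spread in torque implies a tenfold spread in speed, which is why compensating through winding changes, rather than through current alone, is attractive.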

But what if Awf and Awa were held fixed and only Nc and Ns were allowed to vary

along with stack length (and current)? The answer is summarized in Table 6.10 wherein the

compromise DSP for instantiating the PPCEM platform, Figure 6.10,

has been modified a third time to allow only Nc, Ns, L and I to vary from the platform

specifications. Comparison of the data in Table 6.10 with Table 6.8 (the PPCEM instantiations

with Awf and Awa varying also) reveals that both families of motors are nearly identical in terms

of their performance characteristics (mass and efficiency).

Table 6.10 Third PPCEM Universal Motor Family with Varying Numbers of Turns
and Stack Lengths

Product Specifications (Design Variables) Responses


Motor    Nc    Ns   Awf [mm2]  Awa [mm2]  I [Amp]  r [cm]  t [mm]  L [cm]  T [Nm]  P [W]  η [%]  M [kg]
1      1104    40   0.376      0.241      3.49     2.59    6.66    1.18    0.05    300    74.6   0.423
2      1120    68   0.376      0.241      3.61     2.59    6.66    1.37    0.10    300    72.2   0.466
3      1126    79   0.376      0.241      3.66     2.59    6.66    1.44    0.125   300    71.2   0.483
4      1131    87   0.376      0.241      3.72     2.59    6.66    1.51    0.15    300    70.2   0.500
5      1119    84   0.376      0.241      3.88     2.59    6.66    1.81    0.20    300    67.2   0.575
6      1091    81   0.376      0.241      4.03     2.59    6.66    2.03    0.25    300    64.6   0.648
7      1060    77   0.376      0.241      4.20     2.59    6.66    2.20    0.30    300    62.1   0.717
8      1025    74   0.376      0.241      4.38     2.59    6.66    2.42    0.35    300    59.6   0.785
9       987    71   0.376      0.241      4.56     2.59    6.66    2.60    0.40    300    57.2   0.851
10      909    65   0.376      0.241      4.97     2.59    6.66    2.91    0.50    300    52.4   0.979

The mass and efficiency of the four families of motors (the benchmark, the PPCEM

varying only L, the PPCEM varying L, Nc, and Ns, and the PPCEM varying L, Nc, Ns, Awf, and

Awa) are summarized in Table 6.11 to facilitate comparison. The percentage difference (%

Diff.) listed in the table is a comparison of each PPCEM instantiation against the corresponding

benchmark motor, i.e., the performance characteristics of a motor which has been individually

designed and optimized. As stated previously, motors which achieve their respective mass and

efficiency targets of 0.50 kg and 70.0% are considered equivalent solutions even if the motor

has a lower mass or higher efficiency. In this regard, some observations based on the data in

Table 6.11 are as follows:

• The more variables are allowed to vary from the platform specifications, the better the
performance of the individual motors; the tradeoff is that less and less is common
among the motors within the product family. It then becomes a decision of the

engineering designers and management to evaluate the tradeoffs between commonality
and performance to determine the best family to pursue. This reinforces the statement
that the PPCEM facilitates generating these options but is not necessarily used to select
the best one, since that selection requires information beyond the scope of this investigation.

Table 6.11 Efficiency and Mass of Benchmark Motors and PPCEM Motor Platform
Families

          Benchmark   PPCEM             PPCEM              PPCEM
          Motors      Vary L only       Vary L, Nc, Ns     Vary L, Nc, Ns, Awf, Awa

Motor     η [%]       η [%]   % Diff.   η [%]   % Diff.    η [%]   % Diff.
1 71.4 76.8 equiv. 74.6 equiv. 74.7 equiv.
2 70.6 72.2 equiv. 72.2 equiv. 72.1 equiv.
3 70.0 70.0 equiv. 71.2 equiv. 71.1 equiv.
4 70.5 67.9 -3.7% 70.2 equiv. 70.1 equiv.
5 67.9 63.9 -5.9% 67.2 1.0% 67.5 0.6%
6 64.9 60.2 -7.2% 64.6 0.5% 64.6 0.5%
7 62.2 56.8 -8.7% 62.1 0.2% 62.2 0.0%
8 59.8 53.6 -10.4% 59.6 0.3% 59.9 -0.2%
9 57.7 50.5 -12.5% 57.2 0.9% 57.7 0.0%
10 55.3 44.8 -19.0% 52.4 5.2% 53.8 2.7%

Motor M [kg] M [kg] % Diff. M [kg] % Diff. M [kg] % Diff.


1 0.500 0.380 equiv. 0.423 equiv. 0.397 equiv.
2 0.500 0.520 equiv. 0.466 equiv. 0.456 equiv.
3 0.500 0.576 equiv. 0.483 equiv. 0.477 equiv.
4 0.500 0.625 25.0% 0.500 equiv. 0.499 equiv.
5 0.558 0.703 26.0% 0.575 -3.0% 0.568 -1.8%
6 0.639 0.759 18.8% 0.648 -1.4% 0.646 -1.1%
7 0.712 0.797 11.9% 0.717 -0.7% 0.712 0.0%
8 0.777 0.820 5.5% 0.785 -1.0% 0.774 0.4%
9 0.837 0.830 -0.8% 0.851 -1.7% 0.833 0.5%
10 0.985 0.820 -16.8% 0.979 0.6% 0.941 4.5%

• In this example, the PPCEM instantiations in which Nc, Ns, Awf, and Awa are allowed to
vary in addition to the stack length yield a family of motors equivalent to the family of
individually designed benchmark motors. Varying only Nc and Ns in the PPCEM family
while holding Awf and Awa fixed at the platform specification also yields a good family of
motors with minimal sacrifice in performance (mass and efficiency) when compared to
the benchmark family of motors.

In light of these observations, are the solutions obtained from the PPCEM useful?

The answer is undoubtedly yes. The initial family of motors obtained using the PPCEM meets

the range of torque and power requirements which have been specified for the product family.

However, because these motors are based on a common product platform and vary only in

stack length (and current), the motors lose, on average, 7% in efficiency and gain 9% in mass for the

specified targets when compared to a family of individually designed benchmark motors. In an

effort to reflect a more realistic set of motors, by allowing the number of turns in the armature

and field to vary in addition to the stack length, the family of motors obtained using the PPCEM

is essentially identical to the equivalent family of benchmark motors. The necessary torque

and power requirements are met with minimal sacrifice in performance (mass and efficiency). If

the cross-sectional areas of the wire in the field and armature are allowed to vary in

addition to the number of wire turns in each and the stack length, then the family of motors

obtained using the PPCEM is identical, for all intents and purposes, to the corresponding

family of benchmark motors. Thus, the PPCEM has greatly facilitated generating a variety of

options which the engineering designers and managers can select from based on what is best for

the company.

Are the time and resources consumed within reasonable limits? In general, fewer

analysis calls are required to obtain the PPCEM family of motors than the benchmark family of

motors. To obtain the benchmark motor family, ten optimization problems must be solved

where each optimization involves finding the best settings of eight design variables. For the

PPCEM platform instantiations, the initial family of motors requires solving one optimization

problem to find the values of nine design variables (which include the mean and standard deviation

of stack length for the platform) followed by solving ten optimization problems involving as few

as two (current, I, and stack length, L) and as many as six (current, I, stack length, L, #

armature turns, Nc, # field turns, Ns, cross-sectional areas of the field wire, Awf, and armature

wire, Awa) design variables where the size of the subsequent optimization problems is dependent

on the number of design variables which are being instantiated for each motor from the platform

design.

Because a gradient based algorithm—the GRG algorithm in OptdesX—is being used to

optimize the motor platform and individual motors, each iteration of the optimization requires

one analysis call to evaluate the current iterate and two evaluation calls per design variable to

estimate the gradient to determine the next iterate. For the family of benchmark motors, the

number of analysis calls is approximately:

(10 motors) • (n iterations/motor) • [(1 analysis/iteration) + (2 analyses/(d.v.•iteration)) • (8 d.v.)] = 170n     [6.40]

where d.v. is an abbreviation for design variable, and n is the average number of iterations

required to solve each optimization. For the PPCEM family of motors, the number of analysis

calls required to obtain the family of motors is approximately:

(m iterations) • [(1 analysis/iteration) + (2 analyses/(d.v.•iteration)) • (9 d.v.)]
   + (10 motors) • (k iterations/motor) • [(1 analysis/iteration) + (2 analyses/(d.v.•iteration)) • (2 d.v.)] = 19m + 50k     [6.41]

where m is the number of iterations required to find the PPCEM platform design, and k is the

number of iterations required to find each instantiation of the PPCEM platform. On average, n

≈ 10, m ≈ 12, and k ≈ 5; therefore, the average number of analysis calls required to obtain the

benchmark motor designs is 170•10 = 1700 while the average number of analysis calls required

to find the PPCEM motor designs is 19•12 + 50•5 = 478, a difference of 1222 analyses. So

even if as many as six design variables are allowed to vary between PPCEM instantiations from

the product platform, then by replacing (2 d.v.) in Equation 6.41 with (6 d.v.), it would still only

require about 19•12 + 130•5 = 878 analysis calls which is slightly more than half the analysis

calls required to find the benchmark motor designs, and this estimate does not even take into

consideration the fact that each optimization is solved from three different starting points. So by

using the PPCEM to first find a common motor platform design and then scaling the

platform in the stack length, an equivalently good family of motors can be obtained with

fewer analysis calls than if each motor were designed individually. Plus, there is the

added benefit that the motors found using the PPCEM have more in common

with one another.
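The analysis-call bookkeeping of Equations 6.40 and 6.41 is easy to replicate; the sketch below simply restates those equations in code, using the iteration counts reported above:

```python
def calls_per_opt(n_dv, iters):
    # GRG costs 1 analysis per iteration for the iterate itself,
    # plus 2 analyses per design variable to estimate the gradient
    return iters * (1 + 2 * n_dv)

n, m, k = 10, 12, 5  # average iteration counts from the text

benchmark = 10 * calls_per_opt(8, n)                        # Eq. 6.40: 170n
ppcem = calls_per_opt(9, m) + 10 * calls_per_opt(2, k)      # Eq. 6.41: 19m + 50k
ppcem_6dv = calls_per_opt(9, m) + 10 * calls_per_opt(6, k)  # six varying d.v.

print(benchmark, ppcem, ppcem_6dv)  # 1700 478 878
```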

Is the work grounded in reality? The problem formulation has been developed from

an electric motor for a 3/8” variable speed, reversible, corded Black & Decker drill, model

#7190. The drill is rated at 288 W and 1200 rpm, draws 3.5 Amps of current, and is at the

low-end of their product line. The gear reduction on the motor is estimated to be 10:1;

therefore, the operating speed of the motor itself is 12,000 rpm. Using Equation 6.39, the

operating motor torque is computed as being 0.23 Nm. Assuming an input voltage of 115 V,

the input power is 402.5 W when drawing 3.5 Amps of current (see Equation 6.2). Since the

output power is 288 W, the efficiency of the motor is computed using Equation 6.14 and is

71.6%. The mass of the motor is 0.496 kg. Consequently, the target values of 300 W power,

70% efficiency, 0.5 kg, and 0.05 Nm to 0.5 Nm of torque are built around the performance

ratings for this motor, and the motor from this drill is taken as the mid-range motor in a family of

universal motors for this example.
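These back-of-envelope figures check out, assuming Equations 6.2 and 6.14 take their usual forms P_in = VI and η = P_out/P_in:

```python
import math

V, I = 115.0, 3.5      # assumed line voltage and rated current
P_in = V * I           # input power (Eq. 6.2): 402.5 W
P_out = 288.0          # rated output power
eta = P_out / P_in     # efficiency (Eq. 6.14): ~71.6%

rpm = 12000            # motor speed behind the 10:1 gear reduction
omega = rpm * 2 * math.pi / 60
T = P_out / omega      # operating torque via Eq. 6.39: ~0.23 Nm

print(f"P_in = {P_in} W, eta = {eta:.1%}, T = {T:.2f} Nm")
```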

The pertinent motor specifications (design variable settings) for this torque and power

rating are as follows:

• Stack length, L = 2.84 cm • Air gap length, lgap = 0.7 mm

• Number of field turns, Ns = 135 • Stator thickness, t = 4.2 mm

• Number of armature turns, Nc ≈ 700-800 • Motor radius, r = 2.85 cm

• Field wire cross-sectional area, Awf = 0.13 mm2

• Armature wire cross-sectional area, Awa ≈ 0.10 mm2

It is difficult to count the number of armature turns in the actual motor; the best guess is around

750 turns. Can the analytical model be used to predict the performance of the actual

motor given these specifications? Unfortunately, the analytical model developed in this

chapter cannot be used to predict the performance of the actual motor given these

specifications. There are two discrepancies which arise between the model and the actual

motor. First, the real motor from the drill is not a true universal motor since it appears to be

designed for AC use only (as stated on the exterior of the box). Second, the number of poles

on the armature is twelve; in a real universal motor, the number of poles in the armature is

typically two which is an important assumption used when deriving the torque equations

(Equations 6.19-6.24) for the motor. However, these specifications can still be used to gauge

the solutions obtained from the analytical model.

In general, the values for stator thickness, motor radius, stack length, and current are in

close agreement with the values obtained using the PPCEM, see Table 6.5, Table 6.6, Table

6.8, and Table 6.10. The number of field turns is on the low end while the number of armature

turns is on the high end compared to this actual motor. Finally, the wire cross-sectional areas in

the actual motor are slightly smaller than the values obtained using the analytical model for the

motor. These discrepancies are discussed in more detail in the context of the limitations and

shortcomings of the model and problem formulation.

What are the limitations of the analytical model and problem formulation

developed in this chapter? There are two noteworthy shortcomings to the analytical model

and problem formulation presented in this chapter for the family of universal electric motors.

First, the speed of the motor has not been taken into consideration in the problem formulation as

discussed previously. By specifying power and torque requirements for each motor, the

resulting rotational speed of the motor is fixed through Equation 6.39. It is important to ensure

that, as the torque of the motor increases, the power also increases so that the operating

speed of the motor does not decrease significantly. For purposes of this demonstration, this is

not a major concern; however, a more realistic representation of the motor problem formulation

would take this into consideration.

The second notable shortcoming of the analytical model relates to the large numbers of

armature turns in each motor and the related discrepancies between wire cross-sections and

number of field turns. Can 1062 turns of 0.241 mm2 wire be packed into a cylindrical volume

with a radius of ≈ 2.0 cm (the motor radius minus the thickness of the PPCEM platform listed in

Table 6.4) and a length of 2.62 cm? The answer depends on how tightly wires can be packed

around the armature and how much steel is used within the poles of armature. The complexity

of such an analysis was considered to be beyond the scope of this example; however, it is

recommended to include these space considerations in future studies in order to improve the

fidelity of the model. Furthermore, decreasing the number of armature turns is liable to increase

the required number of field turns (which are considered to be low given that the Black &

Decker motor has 135 turns) in order to maintain sufficient magnetic flux through the motor.

Placing space constraints on the amount of wire in the field and armature should also have the

effect of decreasing Awf and Awa in addition to making the number of field and armature

windings more realistic.
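A crude feasibility estimate is possible; the window radius below comes from the text, but treating the entire armature cross-section as an open winding window is an optimistic, purely illustrative assumption:

```python
import math

n_turns = 1062       # armature turns in the PPCEM platform
A_wire = 0.241e-6    # wire cross-section in m^2 (0.241 mm^2)
r_window = 0.020     # m; approximate armature radius from the text

copper_area = n_turns * A_wire          # total conductor cross-section
window_area = math.pi * r_window ** 2   # idealized available area
print(f"required fill: {copper_area / window_area:.0%}")  # roughly 20%
```

A 20% copper fill would be easy in an empty cylinder, but because the armature poles are mostly steel the true winding window is far smaller, which is consistent with the recommendation above to add space constraints to the model.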

Finally, do the benefits of the work outweigh the cost? The answer to this last

question is also affirmative. Regardless of whether the family of benchmark motors or the

PPCEM family was being designed, the analytical model would still have been constructed. The

only addition to the model required to use the PPCEM is deriving an expression for the

standard deviation of torque based on variations in the motor stack length. This is achieved by

means of a first-order Taylor series approximation, Equation 6.37, which requires taking the

derivative of the torque equation, Equation 6.19, with respect to stack length. Once this is

accomplished, using the PPCEM yields a family of motors with high commonality and negligible

performance losses in fewer analyses than an equivalent family of individually designed

benchmark motors. Thus, the PPCEM facilitates the exploration of product platform concepts

which can be scaled into an appropriate family of products, providing initial “proof of concept”

that the PPCEM does what it is intended to do. A look ahead to the next example to verify

further this observation is offered in the next section.
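The PPCEM-specific derivation mentioned above, the first-order Taylor estimate σT ≈ |∂T/∂L|·σL of Equation 6.37, can be sketched with a finite-difference derivative; the linear stand-in torque model is purely illustrative, not Equation 6.19:

```python
def sigma_torque(torque_fn, L, sigma_L, h=1e-6):
    # first-order Taylor estimate: sigma_T ~ |dT/dL| * sigma_L
    dT_dL = (torque_fn(L + h) - torque_fn(L - h)) / (2 * h)
    return abs(dT_dL) * sigma_L

torque = lambda L: 0.10 * L  # stand-in: torque linear in stack length (Nm/cm)
print(sigma_torque(torque, L=2.0, sigma_L=0.05))  # ~0.005 Nm
```

For an analytical torque expression, the symbolic derivative would replace the finite difference, exactly as done with Equation 6.19 in the text.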

6.6 A LOOK BACK AND A LOOK AHEAD

The design of a family of universal electric motors has been utilized to demonstrate how

the PPCEM can be applied to a fairly small-scale, parametrically scalable problem.

Furthermore, the application of the market segmentation grid to help identify and map vertical

leveraging of a product platform for a wide range of performance/price tiers within a given

market segment is illustrated in this example. From the simple analytical model derived in

Section 6.1.3, it has also been shown in this example that for small-scale problems such as this

one, Step 3 of the PPCEM—building metamodels—is not always necessary provided that

analytical expressions exist, or can be derived, to relate variations in the scale factor (the stack

length of the motor in this example) to variations in product performance (goals and constraints).

As evidenced by the discussion in the previous section comparing the individually designed

benchmark motors and the PPCEM platform motors, the PPCEM has been implemented to

design a family of universal motors (based on a scalable product platform) which is capable of

meeting a wide range of torque requirements with minimal compromise in efficiency and mass.

A summary of Chapter 6 and a preview of Chapter 7 is offered in Figure 6.13.

[Figure 6.13: Chapter 6 (Family of Universal Electric Motors) demonstrated vertical
leveraging, a parametric scale factor (stack length), no metamodels, a robust design
implementation with separate goals for "bringing the mean on target" and "minimizing
the variation," and the OptdesX solver, testing H1, SH1.1, SH1.2, and SH1.3.
Chapter 7 (Family of General Aviation Aircraft) will demonstrate horizontal leveraging,
a configurational scale factor (number of passengers), kriging metamodels to facilitate
robust design and concept exploration, a robust design implementation using design
capability indices, and the DSIDES solver, testing H1, SH1.1, SH1.2, SH1.3, and H2.]

Figure 6.13 Pictorial Review of Chapter 6 and Preview of Chapter 7

As shown in Figure 6.13, in Chapter 7 the PPCEM is applied to a larger, more

complex problem, namely, the design of a family of General Aviation aircraft. In the General

Aviation aircraft example, all of the steps in the PPCEM are implemented, including

metamodeling in Step 3. In addition, a product variety tradeoff study is performed to verify

further Hypothesis 1, its related sub-hypotheses, and Hypothesis 2. The details of the General

Aviation aircraft example are discussed at the beginning of the next chapter.

CHAPTER 7

DESIGN OF A FAMILY OF GENERAL AVIATION


AIRCRAFT

In this chapter, the PPCEM is applied in full to the design of a family of General

Aviation aircraft (GAA) for final verification of the method. The layout of this chapter parallels

that of the universal motor case study in the previous chapter. Motivation and background for

the GAA are given in Section 7.1, along with Step 1 of the PPCEM, namely, creation of an

appropriate market segmentation grid for the family of GAA to accommodate the problem

requirements. Based on the desired horizontal leveraging strategy, the scale factor for the

problem is taken as the number of passengers on the aircraft as explained in Section 7.2; the

control factors and responses for the family of GAA also are described in Section 7.2 as part of

Step 2 of the PPCEM. Kriging metamodels then are created for the mean and standard

deviation of each response for the family of GAA in Section 7.3. After validating the accuracy

of the metamodels, a compromise DSP for the family of aircraft is formulated in Section 7.4 and

exercised in Section 7.5 to develop the GAA platform portfolio. Ramifications of the results

and comparison of the PPCEM solutions against individually designed benchmark aircraft are

discussed in Section 7.6. In addition, a product variety tradeoff study is performed, making use

of the non-commonality indices (NCI) and performance deviation indices (PDI) to examine the

tradeoff between the two within the family of GAA.

As shown in Table 7.1, all but Hypothesis 3 are verified further in this chapter. As

mentioned in the preceding paragraph, the market segmentation grid is used to identify a

horizontal leveraging strategy for the GAA product family providing further verification of Sub-

Hypothesis 1.1. The use of design capability indices is demonstrated to aggregate the product

family specifications (SH1.3) and facilitate the development of a product platform which is

robust (SH1.2) to variations in the number of passengers on the aircraft, the scale factor.

Furthermore, kriging metamodels are exploited in this chapter to facilitate the implementation of

robust design within the PPCEM, providing further support for Hypothesis 2. Space filling

designs are utilized to create these metamodels; however, only one type of design is used—an

orthogonal array—and Hypothesis 3 is not tested.

Table 7.1 Hypotheses Tested in Chapter 7

Hypothesis                                                      Chp 6   Chp 7

H1     Product Platform Concept Exploration Method                X       X
SH1.1  Usefulness of market segmentation grid                     X       X
SH1.2  Robust design of scalable product platform                 X       X
SH1.3  Aggregating product family specifications                  X       X
H2     Utility of kriging for metamodeling deterministic                  X
       computer experiments

The findings and lessons learned in this example are summarized at the

end of the chapter in Section 7.7. The objective in the summary is to describe how Hypothesis

1 has been verified further through this example. A brief look ahead to Chapter 8 is given in this

last section.

7.1 STEP 1: DEVELOPMENT OF THE MARKET SEGMENTATION GRID

Before developing the market segmentation grid, an overview of the General Aviation

aircraft example is given in the next section. This is followed in Section 7.1.2 with a brief

overview of the phases of aircraft design. The market segmentation grid for the General

Aviation aircraft example is presented in Section 7.1.3 along with the baseline design which

provides the starting point for designing the family of GAA.

7.1.1 Overview of the General Aviation Aircraft Example Problem

What is a General Aviation aircraft? The term General Aviation encompasses all

flights except military operations and commercial carriers. Its potential buyers form a diverse

group that includes weekend and recreational pilots, training pilots and instructors, traveling

business executives and even small commercial operators. Satisfying a group with such diverse

needs and economic potential poses a constant challenge for the General Aviation industry

because it is impossible to satisfy all of the market needs with a single aircraft. The present

financial and legal pressures endured by the General Aviation sector make small production

runs of specialized models unprofitable. As a result, many General Aviation aircraft are no

longer being produced, and the few remaining models are beyond the financial capability of all

but the wealthiest buyers.

In an effort to revitalize the General Aviation sector, the National Aeronautics and

Space Administration (NASA) and the Federal Aviation Administration (FAA) recently

sponsored a General Aviation Design Competition (NASA and FAA, 1994). For this work, a

General Aviation aircraft (GAA) is defined as follows:

• a fixed-wing, single-engine, single-pilot, propeller driven aircraft,

• carries 2-6 passengers,

• cruises at 150-300 kts, and

• has a range of 800-1000 nautical miles.

One solution to the GAA crisis is to develop a family of aircraft which can be adapted

easily to satisfy distinct groups of customer demands. Therefore, the purpose in this example is

to develop the following:

a family of aircraft scaled around the two, four, and six seater configurations

using the PPCEM. This family of General Aviation aircraft must be capable of

satisfying the diverse demands of the General Aviation buyers at an affordable

price and operating cost while meeting desired technical and economic

considerations.

In order to realize this objective and demonstrate the application of the PPCEM, a brief

overview of aircraft design is given in the next section. This is followed in Section 7.1.3 with the

development of the market segmentation grid—Step 1 of the PPCEM—for the family of GAA.

7.1.2 Brief Overview of Aircraft Design

How does one go about designing an aircraft? Aircraft design traditionally is divided

into three phases, namely, conceptual, preliminary, and detailed design as illustrated in Figure

7.1. If manufacturing design is considered as a part of the design process, design for production

can be added as a fourth phase. The first two phases of aircraft design, the conceptual and

preliminary phases, are sometimes combined and called advanced design or synthesis in the

aerospace industry, while follow-on phases are called project design or analysis. More detailed

descriptions of the decisions made in each phase and the disciplines involved in aircraft design

can be found in, e.g., (Bond and Ricci, 1992). Specifically, the efforts in this example are

directed toward utilizing the computer within the traditional conceptual phase of aircraft design

as it is defined in (Nicolai, 1984).

Conceptual Design Phase: In this phase, the general size and configuration of the aircraft
is determined. Parametric trade studies are conducted using preliminary estimates of
aerodynamics and weights to determine the best wing loading, wing sweep, aspect ratio,
thickness ratio, and general wing-body-tail configuration. Different engines are
considered and the thrust loading is varied to obtain the best match of airframe and
engine. The first look at cost and manufacturing possibilities is made at this time. The
feasibility of the design to accomplish a given set of mission requirements is established,
but the details of the configuration are subject to change.

[Figure 7.1: Mission requirements feed the conceptual design phase, which produces the
conceptual baseline, represented by top-level design specifications (general arrangement
and performance, representative configurations, general internal layout). Preliminary
design produces the allocated baseline (system specifications, detailed subsystems,
internal arrangements), and detailed design produces the production baseline (process
design). Conceptual and preliminary design together constitute advanced design; the
subsequent phases constitute project design and production and support. The PPCEM is
used within conceptual design to develop a scalable aircraft platform.]

Figure 7.1 Phases of Aircraft Design (Schrage, 1992)

In general, in the early stages of aircraft design, the aircraft concept is synthesized at the

system level based on mission requirements or market opportunities. As a result, the

conceptual baseline is developed and represented by a set of top-level design specifications

as illustrated in Figure 7.1. Top-level design specifications are the descriptions of

system/subsystem concepts or the definitions of the complex system at the system/subsystem

level. Top-level design specifications are used as the starting point for the preliminary design at

the subsystem level, and form the basis for the specifications (functional properties) that are

developed during the preliminary design phase. These top-level design specifications include

variables such as aspect ratio, thickness ratio, and wing-body-tail configuration. The top-level

design specifications can be continuous (e.g., aspect ratio = 7-11) or they can be discrete

design concepts (e.g., single- or twin-engine, number of propeller blades, high or low

wing, and fixed or retractable landing gear). The settings of the top-level design specifications

define the conceptual baseline which becomes the configuration input for preliminary design,

where the system is decomposed for more sophisticated analysis by discipline, subsystem, or

component. The reader is referred to (Koch, 1997) for more discussion on the resulting

“requirements flowdown” process.

Several synthesis and analysis programs have been created to facilitate the conceptual

and preliminary design of aircraft and hence the development of these top-level design

specifications. One such program is entitled FLOPS (FLight OPtimization System (McCullers,

1993)). FLOPS is a multidisciplinary system of computer programs for conceptual and

preliminary design and evaluation of advanced aircraft concepts. Another program is called

GASP (General Aviation Synthesis Program (NASA, 1978)). GASP is a computer program

which performs tasks specifically associated with the conceptual design of General Aviation

aircraft; consequently, it has been selected as the synthesis program for use in this example.

What is GASP and how does it work? GASP is a synthesis and analysis computer

program which facilitates parametric studies of small aircraft. GASP specializes in small fixed-wing

aircraft with propulsion systems ranging from a single piston engine with a fixed-pitch

propeller to a twin turboprop/turbofan powered business or transport type aircraft.

GASP contains an overall control module and six technology submodules which perform the

various independent studies required in the design of General Aviation or small transport type

aircraft. The six technology submodules include the following:

• Geometry Module: Calculates the dimensions of the aircraft components from input
parameters such as the number of passengers, aspect ratio, taper ratio, and the sweep
angles and thicknesses of the wing and tail surfaces.

• Aerodynamics Module: Determines lift and drag coefficients, including the lift-curve
slope as a function of aspect ratio, sweep angle, and Mach number, as well as the
induced drag.

• Propulsion Module: Simulates reciprocating and rotary combustion engines along
with turboprop, turbofan, and turbojet engines. The results provide engine thrust and
fuel flow at any flight condition using performance data for the specific engine used.

• Weight and Balance Module: Uses weight trend coefficients for gross weight, payload,
and aircraft geometry to estimate the center-of-gravity travel of the aircraft, fuel
tank size, and compartment weights.

• Mission Performance Module: Analyzes all of the mission segments, such as taxi,
takeoff, climb, cruise, and landing, including the total range. The program also
calculates the best rate of climb, high-speed climb, and other characteristics.

• Economics Module: Estimates manufacturing and operating costs based on the
date of inception of the program (i.e., in 1970's dollars).

Input variables for GASP are general indicators of aircraft type, size, and performance.

The numerical output from GASP includes many aircraft design characteristics such as range,

direct operating cost, maximum cruise speed, and lift-to-drag ratio. For conceptual design of an

aircraft, GASP is used to find appropriate settings for the top-level design specifications that

satisfy the design requirements. By utilizing GASP as the simulation package within the

PPCEM, the PPCEM can be used to develop a set of top-level design specifications for a

suitable aircraft platform which is good for the entire family of aircraft as shown in Figure 7.1.

The first step in the PPCEM is to develop the market segmentation grid which is accomplished

in the next section.

7.1.3 The Market Segmentation Grid for the GAA Example Problem

As stated at the beginning of this section, the objective in this example is to design a

family of GAA around the two, four, and six seater configurations. The market segmentation

grid shown in Figure 7.2 depicts a potential leveraging strategy for the GAA example. The goal

is to design a low end aircraft platform which can be leveraged across three different market

segments which are defined by the capacity of the aircraft (i.e., two people, four people, and six

people) similar to the Boeing 747 series of aircraft (Rothwell and Gardiner, 1990). Each

aircraft could eventually be vertically scaled through the addition and removal of features to

increase its performance and attractiveness to a customer base willing to pay a higher price. At

this stage, however, only a low-end platform is to be developed.

[Figure omitted: market segmentation grid with price/performance tiers (Low End, Mid-Range, High End) on the vertical axis and aircraft capacity (2 Seater, 4 Seater, 6 Seater) on the horizontal axis. A low-end platform is leveraged horizontally across the three segments ("Low End Platform Leveraging"); the configurational scale factor is fuselage length as a function of the number of passengers.]

Figure 7.2 GAA Market Segmentation Grid

The baseline configuration is derived from an existing General Aviation aircraft, namely,

the Beechcraft Bonanza B36TC. The Bonanza is a four-to-six seat, single-engine business and

utility aircraft as illustrated in Figure 7.3 and is one of the most popular GAA sold in recent

years. It is powered by a 300 horsepower, turbocharged engine and is capable of cruising at

25,000 ft with a speed of 200 knots and a minimum range of 956 nautical miles (at 79% of

power). Furthermore, Bonanza’s mission and performance characteristics are close to those

specified in the GAA competition (NASA and FAA, 1994). Taking the Bonanza as the starting

point, several calculations can be performed to determine the GASP input data, specifically for

aerodynamics, engine performance, and stability control parameters, according to the mission

profile. The performance characteristics of the Beechcraft Bonanza B36TC as modeled in

GASP are summarized in Table 7.2, and the corresponding top-

level design specifications for this baseline aircraft are listed in Table 7.3 where dimensions in

bold are the design variables.


Figure 7.3 Pictorial Representation of Baseline Aircraft

Table 7.2 Performance of the Baseline Model in GASP

Maximum cruise speed 210 kts @ 25,000 ft


Maximum range with maximum fuel 2423 n.m.
Maximum range with maximum payload 715 n.m.
Lift off distance with maximum payload 1310 ft
All engine distance to 50 ft with maximum load 2183 ft
Landing distance from 50 ft 1120 ft

Table 7.3 Baseline Model Specifications

Group            Key Product Characteristic      Dimension
Fuselage         Length                          30.62 ft
                 Width                           4.33 ft
                 Seat Width                      20 in.
                 Tail Length to Diameter Ratio   3.09
                 Wetted Area                     325 ft²
Wing             Aspect Ratio                    7.88
                 Area                            186.5 ft²
                 Span                            38.3 ft
                 Mean Chord                      5.09 ft
                 1/4 Chord Sweep                 4.0°
                 Taper Ratio                     0.46
                 Root Thickness                  0.15
                 Wing Loading                    20.5 lb/ft²
                 Wing Fuel Volume                180 gal
Horizontal Tail  Aspect Ratio                    5.08
                 Area                            45.2 ft²
                 Span                            15.15 ft
                 Mean Chord                      3.09 ft
                 Thickness                       0.09
Vertical Tail    Aspect Ratio                    1.07
                 Area                            20.8 ft²
                 Span                            4.71 ft
                 Mean Chord                      4.61 ft
                 Thickness                       0.07
Engine           Power                           350 HP turbocharged
                 Static Thrust/Wt                0.339
                 Activity Factor                 110
                 Propeller Diameter              6.30 ft
                 # of Blades                     3

The flight mission for the family of GAA in this example is illustrated in Figure 7.4. As

specified in the GAA competition guidelines (NASA and FAA, 1994), a General Aviation

aircraft is required to fly at 150-300 kts (Mach 0.24 to 0.48) for a range of 800-1000 nautical

miles. The mission profile shown in Figure 7.4 has a (baseline) cruise speed of Mach 0.31 (≈

200 kts) and a range of 900 n.m. (nautical miles). In the diagram, FAR 23 represents Part 23

of the Federal Aviation Regulations, which designates acceptable noise levels during aircraft

takeoff and landing as determined by the FAA.

[Figure omitted: mission profile showing takeoff at sea level (FAR 23), cruise at Mach 0.31 for 900 n.m. at 7,500 ft, landing at sea level (FAR 23), and 45 min. reserve fuel.]

Figure 7.4 GAA Mission Profile

Based on the GAA market segmentation grid in Figure 7.2, the scale factor for the

GAA platform is conceptual/configurational in nature. The aircraft platform is to be “scaled”

around the number of people on the aircraft; hence, the number of passengers is the scale factor

in this problem. The effect of the number of passengers on the length of the fuselage and sizing

of the aircraft is discussed more in the next section wherein the targets and requirements for

each aircraft and the design variables for this example are described.

7.2 STEP 2: GAA FACTOR CLASSIFICATION

Having created the market segmentation grid and identified an appropriate leveraging

strategy and scale factor, the next step in the PPCEM is to classify the factors within the GAA

problem. The general configuration of each aircraft has been fixed at three propeller blades,

high wing position, and retractable landing gear based on previous work (Simpson, 1995). The

design variables (i.e., control factors) and corresponding ranges of interest in this study are as

follows:

1. Cruise speed, CSPD ∈ [Mach 0.24, Mach 0.48]; baseline is Mach 0.31

2. Aspect ratio, AR ∈ [7, 11]; baseline is 7.88

3. Propeller diameter, DPRP ∈ [5.0 ft, 5.96 ft]; baseline is 6.3 ft

4. Wing loading, WL ∈ [19.0 lb/ft², 25.0 lb/ft²]; baseline is 20.5 lb/ft²

5. Engine activity factor, AF ∈ [85, 110]; baseline is 100

6. Seat width, WS ∈ [14.0 in, 20.0 in]; baseline is 20 in.

A brief description of the importance and effects of each of these variables is included in Section

F.1.
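The ranges above can be collected into a small, self-contained sketch for drawing candidate designs. This is illustrative only: the variable names follow the text, but the dissertation covers the design space with a randomized orthogonal array (Section 7.3) rather than the simple uniform draws used here.

```python
import random

# Design-variable ranges of interest from the GAA study (names as in the text).
BOUNDS = {
    "CSPD": (0.24, 0.48),   # cruise speed [Mach]
    "AR":   (7.0, 11.0),    # wing aspect ratio
    "DPRP": (5.0, 5.96),    # propeller diameter [ft]
    "WL":   (19.0, 25.0),   # wing loading [lb/ft^2]
    "AF":   (85.0, 110.0),  # engine activity factor
    "WS":   (14.0, 20.0),   # seat width [in]
}

def sample_design(rng=random):
    """Draw one candidate design uniformly from the ranges of interest."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in BOUNDS.items()}

def in_bounds(design):
    """Check that a design lies within the stated ranges."""
    return all(BOUNDS[k][0] <= v <= BOUNDS[k][1] for k, v in design.items())
```

Any sampling plan for the study (orthogonal array, Latin hypercube, or random) must respect these same bounds.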

There are a total of nine responses (i.e., requirements and goals) which are of interest

for each aircraft: takeoff noise, direct operating cost, ride roughness, empty weight, fuel

weight, purchase price, maximum cruise speed, flight range, and lift/drag ratio. In general, it

is desired to find settings of the design variables which:

• lower direct operating cost, purchase price, empty weight and fuel weight to their
targets;

• raise maximum cruise speed, flight range, and lift/drag ratio to their targets; and

• meet constraints on the maximum takeoff noise, ride roughness, direct operating cost,
empty weight, and fuel weight, and minimum flight range.

The constraint values and target values for the goals employed in this example are listed in Table

7.4 and Table 7.5, respectively. As such, these constraints and targets define each market

niche for each of the three aircraft. As shown in Table 7.4, the constraint values for take-off

noise, direct operating cost, ride roughness, aircraft empty weight, and range are the same for

each aircraft within the family; only the fuel weight constraint varies, allowing

larger aircraft to carry more fuel.

Table 7.4 Constraints for the Two, Four, and Six Seater GAA

Constraints Acronym 2 Seater 4 Seater 6 Seater


Maximum take-off noise NOISE 75 db 75 db 75 db
Maximum direct operating cost DOC $80/hr $80/hr $80/hr
Maximum ride roughness coefficient ROUGH 2.0 2.0 2.0
Maximum aircraft empty weight WEMP 2200 lbs 2200 lbs 2200 lbs
Maximum aircraft fuel weight WFUEL 450 lbs 475 lbs 500 lbs
Minimum flight range RANGE 2000 nm 2000 nm 2000 nm

The goal targets which define each market niche are listed in Table 7.5. A compromise

DSP is used to determine the settings of the six design variables which lower fuel weight, empty

weight, direct operating cost, and purchase price to their targets or below while raising

maximum lift/drag, cruise speed, and range to their targets. The compromise DSP formulation

for the GAA problem is given in Section 7.4.

Table 7.5 Goal Targets for the Two, Four, and Six Seater GAA

Goal Targets Acronym 2 Seater 4 Seater 6 Seater


Aircraft fuel weight [lbs] WFUEL 450 400 350
Aircraft empty weight [lbs] WEMP 1900 1950 2000
Direct operating cost [$/hr] DOC 60 60 60
Purchase price [$] PURCH 41,000 42,000 43,000
Maximum lift/drag ratio LDMAX 17 17 17
Maximum cruise speed [kts] VCRMX 200 200 200
Maximum range [nm] RANGE 2500 2500 2500

Based on the leveraging strategy shown in Figure 7.2, the number of people in the

aircraft is taken as the scale factor in the design process, ranging from a minimum of 2 to a

maximum of 6. Furthermore, it is assumed that the demand for the aircraft is uniform; therefore,

the scale factor—the number of passengers—is assumed to be uniformly distributed and so are

the corresponding responses. Taking the number of passengers as a scale factor, the length of

the central portion of the fuselage of the aircraft is scaled automatically within GASP to

accommodate the necessary number of passengers (plus one pilot). Because the length of the

aircraft is fixed once the number of people is specified, the mean and variance of the

scale factor are known in this example, unlike in the universal motor example.

Based on this factor classification scheme, the P-diagram for the GAA example

problem is illustrated in Figure 7.5.

[Figure omitted: P-diagram for the GAA example. The control factors X (cruise speed, aspect ratio, propeller diameter, wing loading, engine activity factor, seat width) and the scale factor S (# passengers) enter GASP, which returns the responses Y (takeoff noise, ride roughness, empty weight, fuel weight, purchase price, direct operating cost, maximum range, maximum speed, maximum lift/drag).]

Figure 7.5 P-Diagram for GAA Example Problem

As shown in the figure, there are six control factors (design variables), one scale factor (the

number of passengers), and nine responses (constraints and goals). The process of constructing

kriging metamodels which relate the control and scale factors to each of the responses is

explained in the next section.

7.3 STEP 3: BUILD AND VALIDATE METAMODELS

The next step in the PPCEM is to build and validate metamodels of the

analysis/simulation routine, i.e., GASP. In particular, robustness models are constructed for

each of the nine responses, yielding a total of 18 metamodels: one each for the mean and

the variance of each response. Why use metamodels in the GAA example? The impetus is

two-fold. First, GASP provides a “black-box” type analysis for sizing an aircraft. The

computation time for GASP is about 45 seconds, which does not by itself warrant the use of

metamodels; however, after multiplying this number by three—the number of aircraft in the

family—and considering the number of design scenarios that are to be considered, the

computational expense adds up quickly. Moreover, it is difficult to estimate the mean and

variance of each response for the family of aircraft without the metamodels. It is much more

efficient to build metamodels for the mean and deviation of each response and use them to

search the design space than it is to use GASP directly in the search for a good platform design.

The product array approach is employed to build kriging metamodels of the mean and

deviation of each response to variations in the number of passengers in each aircraft. This

approach is illustrated in Figure 7.6. The outer array is based on a randomized orthogonal array

of 64 points (n = 64). The use of the randomized orthogonal array is based, primarily, on ease

of generation and available sample sizes; it is also based, in part, on its performance in the

kriging/DOE study in Chapter 5 even though a six variable test problem was not utilized in the

study. To compare the sample size, a half-fraction CCD for six factors would contain 45

points, and a full-fraction CCD would contain 77 points. The kriging models employ the

Gaussian correlation function, Equation 2.16, because this correlation function yielded the

lowest RMSE and max. error, on average, in the kriging/DOE study in Chapter 5 (see Section

5.3 in particular).
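The kriging predictor itself can be sketched in a few lines. The following is a minimal ordinary-kriging implementation with the Gaussian correlation function; note that the correlation parameters theta are fixed by hand here, whereas the dissertation estimates them by maximum likelihood, and the sample data at the bottom are invented for illustration.

```python
import math

def gauss_corr(x1, x2, theta):
    """Gaussian correlation between two points: exp(-sum_k theta_k * d_k^2)."""
    return math.exp(-sum(t * (a - b) ** 2 for t, a, b in zip(theta, x1, x2)))

def solve(A, b):
    """Tiny Gaussian-elimination solver with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_kriging(X, y, theta):
    """Ordinary kriging: constant trend beta plus a correlated departure."""
    R = [[gauss_corr(a, b, theta) for b in X] for a in X]
    Rinv_y = solve(R, y)
    Rinv_1 = solve(R, [1.0] * len(y))
    beta = sum(Rinv_y) / sum(Rinv_1)           # generalized least-squares mean
    resid = solve(R, [yi - beta for yi in y])  # R^{-1} (y - beta * 1)
    def predict(x):
        r = [gauss_corr(x, xi, theta) for xi in X]
        return beta + sum(ri * wi for ri, wi in zip(r, resid))
    return predict

# Invented one-dimensional sample data; the predictor interpolates it exactly.
X = [(0.0,), (0.5,), (1.0,)]
y = [1.0, 0.2, 0.9]
yhat = fit_kriging(X, y, theta=(10.0,))
```

The exact-interpolation property is what distinguishes kriging from regression metamodels: at every sampled site the predictor returns the observed response.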

[Figure omitted: product array for the GAA study. An outer array (a randomized orthogonal array of n = 64 points over cspd, ar, dprp, wl, af, and ws) samples the platform design space; for each outer-array point, an inner array varies the scale factor PAX over 1, 3, and 5. The responses y_{j,i,1}, y_{j,i,3}, and y_{j,i,5} for each response j (noise, wemp, doc, rough, wfuel, purch, range, vcrmx, ldmax) and outer-array run i are condensed into a mean μ_{j,i} and standard deviation σ_{j,i}, and kriging models ŷ_μj = f(cspd, ar, dprp, wl, af, ws) and ŷ_σj = f(cspd, ar, dprp, wl, af, ws) are fit to each.]

Figure 7.6 Product Array Approach for Constructing GAA Kriging Models

Because there is only one scale factor which has three possible settings, the inner array

shown in Figure 7.6 simply contains three runs, one for each possible value of the scale factor.

Hence, GASP is executed 3n times in order to build the kriging models for the mean and

deviation of the GAA responses. Notice that the variable PAX—the number of passengers—

varies from 1 to 5 in the figure. This is because the total number of people on the aircraft is

equal to the number of passengers plus 1 pilot; varying PAX from one to five is the same as

varying the number of people on the aircraft from two to six, allowing a family of aircraft to be

designed around the two, four, and six seater configurations.

After varying the number of passengers for each combination of the design variables as

specified by the outer array, the mean and standard deviation of each response are computed

for each run using Equations 7.1 and 7.2 which are as follows:

• Mean: μ_{y_j,i} = (y_{j,i,1} + y_{j,i,3} + y_{j,i,5}) / 3, j = {noise, wemp, ..., ldmax}, i = {1, 2, ..., n} [7.1]

• Std. Dev.: σ_{y_j,i} = (y_{j,i,5} - y_{j,i,1}) / √12, j = {noise, wemp, ..., ldmax}, i = {1, 2, ..., n} [7.2]

Computation of the standard deviation assumes a uniform distribution of the response because

the number of passengers is assumed to vary uniformly over the design space. As an example,

the mean and standard deviation of the direct operating cost, DOC, for the 3rd experimental

design is estimated as follows:

μ_{y_DOC,3} = (y_{DOC,3,1} + y_{DOC,3,3} + y_{DOC,3,5}) / 3 [7.3]

σ_{y_DOC,3} = (y_{DOC,3,5} - y_{DOC,3,1}) / √12 [7.4]

It is in this manner that the means and deviations for each response for each experimental run in

the outer array are computed for a given experimental design. Kriging metamodels then are

constructed for the mean and deviation of each response, resulting in 18 metamodels. The

kriging algorithm described in Section A.2.1 is used to fit the model; the fitted values (MLE

estimates) for the “best” kriging model for each response are listed in Section F.2.
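The inner-array computation of Equations 7.1 and 7.2 can be sketched as follows. Here `run_gasp` is a hypothetical stand-in for the GASP simulation whose returned value is invented purely so the sketch runs; in the study itself each call would invoke GASP.

```python
import math

def run_gasp(design, pax):
    """Hypothetical stand-in for a GASP run: one response value for a design
    evaluated with `pax` passengers (total seats = pax + 1 pilot).
    The formula below is invented for illustration only."""
    return 50.0 + 2.0 * pax + design.get("WL", 20.0)

def response_statistics(design, simulate=run_gasp):
    """Evaluate a design at the inner-array points PAX = 1, 3, 5 and condense
    the results into a mean (Eq. 7.1) and a uniform-distribution standard
    deviation, range / sqrt(12) (Eq. 7.2)."""
    y1, y3, y5 = (simulate(design, pax) for pax in (1, 3, 5))
    mu = (y1 + y3 + y5) / 3.0
    sigma = (y5 - y1) / math.sqrt(12.0)
    return mu, sigma
```

Applying this to every point of the 64-run outer array gives the 3n = 192 GASP evaluations noted in the text, and one (μ, σ) pair per response per run.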

A set of 1000 validation points from a random Latin hypercube is used to assess the

accuracy of the GAA kriging models. The maximum error and root mean square error (RMSE)

based on the set of validation points for the kriging models based on the 64 point orthogonal

array are summarized in Table 7.6; both raw values and percentages (of the sample range) are

listed.

Table 7.6 Error Analysis of GAA Kriging Models

                  Raw Values              As a Percent of Range
Response       Max Error     RMSE        Max Error     RMSE
μ NOISE          0.020       0.006         0.03%       0.01%
μ WEMP           4.630       0.942         0.24%       0.05%
μ DOC           16.318       2.889        16.87%       3.49%
μ ROUGH          0.015       0.004         0.76%       0.17%
μ WFUEL          7.201       1.620         2.17%       0.40%
μ PURCH        117.147      21.017         0.27%       0.05%
μ RANGE        116.924      26.094         4.14%       1.01%
μ VCRMX          0.913       0.158         0.46%       0.08%
μ LDMAX          0.103       0.030         0.64%       0.17%
σ NOISE          0.002       0.000         0.00%       0.00%
σ WEMP           1.684       0.260         0.09%       0.01%
σ DOC            2.895       0.348         3.23%       0.41%
σ ROUGH          0.002       0.001         0.11%       0.03%
σ WFUEL          1.689       0.368         0.40%       0.09%
σ PURCH         49.998       7.636         0.12%       0.02%
σ RANGE         39.702       8.797         1.37%       0.31%
σ VCRMX          0.298       0.065         0.16%       0.03%
σ LDMAX          0.024       0.002         0.16%       0.01%

With the exception of the maximum error for μ DOC, all of the kriging metamodels appear

sufficiently accurate for this study; maximum errors are about 4% or less, and RMSEs are 1%

or less. Despite the large maximum error for μ DOC, its RMSE is sufficiently low

to provide a reasonable approximation. As such, these kriging metamodels

are used throughout the rest of the GAA example. Thus, Step 3 of the PPCEM is complete,

and the compromise DSP for the family of aircraft is formulated in the next section as Step 4 in

the PPCEM.
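The validation-error measures reported in Table 7.6 can be sketched in a short routine, including the percent-of-range normalization; the numbers in the test data are illustrative, not GASP output.

```python
import math

def validation_errors(y_true, y_pred):
    """Maximum error and RMSE over a set of validation points, both raw and
    as a percentage of the range of the validation data."""
    errs = [abs(t - p) for t, p in zip(y_true, y_pred)]
    max_err = max(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    rng = max(y_true) - min(y_true)
    return {"max": max_err, "rmse": rmse,
            "max_pct": 100.0 * max_err / rng, "rmse_pct": 100.0 * rmse / rng}
```

Normalizing by the sample range is what makes the errors comparable across responses with very different scales (e.g., purchase price in dollars versus ride roughness).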

7.4 STEP 4: AGGREGATE PRODUCT SPECIFICATIONS AND FORMULATE


GAA PLATFORM COMPROMISE DSP

In the universal motor example, separate goals for “bringing the mean on target” and

“minimizing the deviation” for variations in the stack length are used. In this example, the

compromise DSP for the family of GAA employs design capability indices (Cdk) to assess the

capability of a family of designs—composed of the three General Aviation aircraft—to satisfy a

ranged set of design requirements. Design capability indices are formulated for both the

constraints and goals for the family of GAA as defined by the constraints and target values listed

in Table 7.4 and Table 7.5, respectively.

The compromise DSP for the family of GAA is derived as follows:

Cdk Constraint Formulations

o For the case where “Smaller is Better,” i.e., constraint = maximum:

• NOISE ≤ 75 = URL:   C_dk,noise = C_du,noise = (75 - μ_noise) / (3σ_noise) [7.5]
• DOC ≤ 80 = URL:     C_dk,doc = C_du,doc = (80 - μ_doc) / (3σ_doc) [7.6]
• ROUGH ≤ 2.0 = URL:  C_dk,rough = C_du,rough = (2.0 - μ_rough) / (3σ_rough) [7.7]
• WEMP ≤ 2200 = URL:  C_dk,wemp = C_du,wemp = (2200 - μ_wemp) / (3σ_wemp) [7.8]
• WFUEL ≤ 450 = URL:  C_dk,wfuel = C_du,wfuel = (450 - μ_wfuel) / (3σ_wfuel) [7.9]

o For the case where “Larger is Better,” i.e., constraint = minimum:

• RANGE ≥ 2000 = LRL: C_dk,range = C_dl,range = (μ_range - 2000) / (3σ_range) [7.10]

Cdk Goal Formulations

o For the case where “Nominal is Best” for a goal:

• WFUEL:
  C_du,wfuel = (450 - μ_wfuel) / (3σ_wfuel),  C_dl,wfuel = (μ_wfuel - 350) / (3σ_wfuel)
  C_dk,wfuel = min{C_du,wfuel, C_dl,wfuel} [7.11]

• WEMP:
  C_du,wemp = (2000 - μ_wemp) / (3σ_wemp),  C_dl,wemp = (μ_wemp - 1900) / (3σ_wemp)
  C_dk,wemp = min{C_du,wemp, C_dl,wemp} [7.12]

• PURCH:
  C_du,purch = (43000 - μ_purch) / (3σ_purch),  C_dl,purch = (μ_purch - 41000) / (3σ_purch)
  C_dk,purch = min{C_du,purch, C_dl,purch} [7.13]

o For goals where “Smaller is Better”:

• DOC:   C_dk,doc = C_du,doc = (60 - μ_doc) / (3σ_doc) [7.14]

o For the case where “Larger is Better” for a goal:

• LDMAX: C_dk,ldmax = C_dl,ldmax = (μ_ldmax - 17) / (3σ_ldmax) [7.15]

• VCRMX: C_dk,vcrmx = C_dl,vcrmx = (μ_vcrmx - 200) / (3σ_vcrmx) [7.16]

• RANGE: C_dk,range = C_dl,range = (μ_range - 2500) / (3σ_range) [7.17]
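The design capability index calculations above can be sketched as follows. The μ and σ values in the example call are invented for illustration; in the study they come from the kriging metamodels.

```python
def cdu(url, mu, sigma):
    """Upper design capability index: 'smaller is better' against an upper
    requirement limit (URL)."""
    return (url - mu) / (3.0 * sigma)

def cdl(lrl, mu, sigma):
    """Lower design capability index: 'larger is better' against a lower
    requirement limit (LRL)."""
    return (mu - lrl) / (3.0 * sigma)

def cdk_nominal(lrl, url, mu, sigma):
    """'Nominal is best': the worse (smaller) of the upper and lower indices."""
    return min(cdu(url, mu, sigma), cdl(lrl, mu, sigma))

# E.g., for empty weight against the Table 7.5 target range [1900, 2000] lbs,
# with illustrative metamodel outputs mu = 1950 lbs and sigma = 15 lbs:
cdk_wemp = cdk_nominal(1900.0, 2000.0, mu=1950.0, sigma=15.0)
```

A Cdk of 1 or more indicates that the ranged set of designs satisfies the corresponding requirement limit at the 3σ level; values below 1 signal the shortfalls discussed for Table 7.8.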

The resulting compromise DSP for the GAA product platform using these Cdk

formulations is given in Figure 7.7. There are six design variables, six constraints, and seven

goals. Of the seven goals, three are related to the economic performance of the aircraft—

empty weight (WEMP), purchase price (PURCH), and direct operating cost (DOC)—and the

remaining four are related to the technical performance of the aircraft: fuel weight (WFUEL),

maximum lift/drag (LDMAX), maximum cruise speed (VCRMX), and maximum flight range

(RANGE).

Given:
o Baseline aircraft configuration and mission profile
o Configuration scale factor = # passengers (where total # seats = # passengers + 1 pilot)
o Kriging models for mean and standard deviation of each response

Find:
o The system variables, x:
  • cruise speed, CSPD            • wing loading, WL
  • wing aspect ratio, AR         • engine activity factor, AF
  • propeller diameter, DPRP      • seat width, WS
o The values of the deviation variables associated with G(x):
  • fuel weight Cdk, d1-, d1+             • maximum lift/drag Cdk, d5-, d5+
  • empty weight Cdk, d2-, d2+            • maximum speed Cdk, d6-, d6+
  • direct operating cost Cdk, d3-, d3+   • maximum range Cdk, d7-, d7+
  • purchase price Cdk, d4-, d4+

Satisfy:
o The system constraints, C(x), based on kriging models:
  • NOISE Cdk greater than 1:  Cdk,noise(x) ≥ 1 [7.5]
  • DOC Cdk greater than 1:    Cdk,doc(x) ≥ 1 [7.6]
  • ROUGH Cdk greater than 1:  Cdk,rough(x) ≥ 1 [7.7]
  • WEMP Cdk greater than 1:   Cdk,wemp(x) ≥ 1 [7.8]
  • WFUEL Cdk greater than 1:  Cdk,wfuel(x) ≥ 1 [7.9]
  • RANGE Cdk greater than 1:  Cdk,range(x) ≥ 1 [7.10]
o The system goals, G(x), based on kriging models:
  • WFUEL Cdk greater than 1:  Cdk,wfuel(x) + d1- - d1+ = 1.0 [7.11]
  • WEMP Cdk greater than 1:   Cdk,wemp(x) + d2- - d2+ = 1.0 [7.12]
  • DOC Cdk greater than 1:    Cdk,doc(x) + d3- - d3+ = 1.0 [7.13]
  • PURCH Cdk greater than 1:  Cdk,purch(x) + d4- - d4+ = 1.0 [7.14]
  • LDMAX Cdk greater than 1:  Cdk,ldmax(x) + d5- - d5+ = 1.0 [7.15]
  • VCRMX Cdk greater than 1:  Cdk,vcrmx(x) + d6- - d6+ = 1.0 [7.16]
  • RANGE Cdk greater than 1:  Cdk,range(x) + d7- - d7+ = 1.0 [7.17]
o Constraints on deviation variables: di- · di+ = 0 and di-, di+ ≥ 0
o The bounds on the system variables:
  0.24 M ≤ CSPD ≤ 0.48 M          19 lb/ft² ≤ WL ≤ 25 lb/ft²
  7 ≤ AR ≤ 11                     85 ≤ AF ≤ 110
  5.0 ft ≤ DPRP ≤ 5.96 ft         14.0 in ≤ WS ≤ 20.0 in

Minimize:
o The sum of the deviation variables associated with:
  • fuel weight Cdk, d1-              • maximum lift/drag Cdk, d5-
  • empty weight Cdk, d2-             • maximum speed Cdk, d6-
  • direct operating cost Cdk, d3-    • maximum range Cdk, d7-
  • purchase price Cdk, d4-
  Z = { f1(d1-), f2(d2-), f3(d3-), f4(d4-), f5(d5-), f6(d6-), f7(d7-) }

Figure 7.7 GAA Product Platform Compromise DSP Formulation

Based on this GAA compromise DSP, the initial baseline design is infeasible in two

regards. First, the propeller diameter is too great, as explained in Section 7.1. At 6.3 ft, the

speed of the propeller tip is above the speed of sound, violating a tip-speed constraint which is not

explicitly modeled in the GAA compromise DSP; thus, the range for the propeller diameter is

set at 5-5.96 ft so that this constraint is always met. Second, the DOC violates the $80/hr

constraint which has been selected. The baseline design still represents a good design;

however, the GAA compromise DSP is being used to improve it as discussed in the next

section wherein the product platform portfolio is developed in Step 5 of the PPCEM.

7.5 STEP 5: DEVELOP THE GAA PLATFORM PORTFOLIO

In order to develop the GAA platform portfolio for the family of GAA to meet the

constraints and goals set forth in Section 7.2, three design scenarios are investigated (see Table

7.7).

Overall Tradeoff Study: All of the goals are weighted equally in an effort to develop a
platform that simultaneously meets both economic and performance requirements as
best as possible.
Economic Tradeoff Study: Economic related goals (C dk’s for empty weight, purchase
price, and direct operating cost) are given top priority to find a platform which meets all
of the economic requirements as best as possible; satisfying performance goals is
second priority.

Performance Tradeoff Study: Performance related goals (C dk’s for fuel weight, max.
lift/drag, max. speed, and max. range) are placed at the first priority level to develop a
platform that satisfies all of the performance requirements as best as possible;
meanwhile, economic goals are given second priority.

The corresponding deviation function formulations for each scenario are listed in Table 7.7.

Table 7.7 GAA Product Platform Compromise DSP Design Scenarios

Scenario                  PLEV1                                       PLEV2
1. Overall Tradeoff       (d1- + d2- + d3- + d4- + d5- + d6- + d7-)/7
2. Economic Tradeoff      (d2- + d3- + d4-)/3                         (d1- + d5- + d6- + d7-)/4
3. Performance Tradeoff   (d1- + d5- + d6- + d7-)/4                   (d2- + d3- + d4-)/3

Note: d1- drives Cdk-wfuel to 1; d2- drives Cdk-wemp to 1; d3- drives Cdk-purch to 1;
d4- drives Cdk-doc to 1; d5- drives Cdk-ldmax to 1; d6- drives Cdk-vcrmx to 1;
d7- drives Cdk-range to 1.
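The two-level deviation functions in Table 7.7 can be sketched as follows; the economic and performance groupings follow the note in the table (d2-, d3-, d4- economic; d1-, d5-, d6-, d7- performance), and the deviation values in the test are invented.

```python
# Goal indices grouped as in Table 7.7: economic (wemp, purch, doc) and
# performance (wfuel, ldmax, vcrmx, range) deviation variables di-.
ECONOMIC, PERFORMANCE = (2, 3, 4), (1, 5, 6, 7)

def avg(d, idx):
    """Average the underachievement deviations di- over a group of goals."""
    return sum(d[i] for i in idx) / len(idx)

def deviation_levels(d, scenario):
    """Return the (PLEV1, ...) priority-level values for a design scenario,
    where d maps goal index (1..7) to its deviation variable di-."""
    if scenario == "overall":
        return (avg(d, range(1, 8)),)
    if scenario == "economic":
        return (avg(d, ECONOMIC), avg(d, PERFORMANCE))
    if scenario == "performance":
        return (avg(d, PERFORMANCE), avg(d, ECONOMIC))
    raise ValueError(scenario)
```

In the preemptive (lexicographic) formulation used here, PLEV1 is minimized first and PLEV2 is minimized only among solutions tied on PLEV1.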

Three starting points are used when solving the GAA product platform compromise

DSP for each scenario: the lower, middle, and upper bounds of the design variables; in a situation

where all three starting points do not converge to the same solution, the design with the lowest

deviation function value is taken as the best design (the reader is referred to the convergence

studies in Section 7.6.1). The resulting product platform specifications obtained by solving the

compromise DSP in Figure 7.7 are given in the next section. The individual instantiations of the

aircraft within the family based on the kriging metamodels then are discussed in Section 7.5.2.

7.5.1 Results of the GAA Compromise DSP for the Family of Aircraft

The resulting product platform specifications for each design scenario are summarized in

Table 7.8. Recall that the target value for each Cdk is 1; values above one indicate that the

family of GAA has met the desired URL or LRL while values below one indicate that the targets

have not been met for that particular requirement. All solutions are feasible, and the values for

Cdk,rough and Cdk,noise have not been included because, as constraints, they have no bearing on the deviation

function (other than rendering a solution infeasible when violated). The Cdk values for the initial baseline

design have also been included in the table for the sake of comparison. The PPCEM-based

family has an unfair advantage because the baseline aircraft (the Beechcraft Bonanza B36TC

presented in Section 7.1.3) is a six-seater aircraft and, as such, is not expected to perform well

when scaled down to fit fewer passengers; however, it still provides a reference point

against which to compare the family of aircraft developed using the PPCEM.

Table 7.8 Summary of GAA Family Compromise DSP Results

                       Baseline            Scenario
                       Design         1         2         3
Des. Var.
CSPD [Mach]            0.31           0.244     0.242     0.291
AR                     7.88           8.00      8.09      7.62
DPRP [ft]              6.3            5.13      5.19      5.55
WL [lb/ft²]            20.5           22.45     22.63     22.48
AF                     110            89.60     89.40     85.63
WS [in]                20             18.60     18.72     18.70
Goals
Cdk-wfuel  P*          -0.640         1.164     1.236     1.156
Cdk-wemp   E           0.074          0.810     0.903     0.806
Cdk-doc    E           -670.476       -1.588    -1.312    -26.270
Cdk-purch  E           -2.557         0.733     0.449     0.070
Cdk-ldmax  P           -3.230         -4.474    -4.427    -4.964
Cdk-vcrmx  P           -4.397         -4.303    -3.702    -2.017
Cdk-range  P           -4.157         0.577     -0.672    0.429
Dev. Fcn.
PLEV1                                 2.036     0.986     2.388
PLEV2                                           2.950     9.4556

* P indicates Cdk is related to performance; E to economics. Economic
goals rank first in Scenario 2; performance goals rank first in Scenario 3.

Compared to the initial baseline design, the PPCEM designs have a lower cruise speed,

propeller diameter, engine activity factor, and seat width. Meanwhile, the wing loading is slightly

larger in general, and the aspect ratio fluctuates around the baseline value. Comparing the

design variables for Scenarios 1 and 2, there is negligible difference. This indicates that in the

overall tradeoff study, the economic goals tend to dominate the solution despite all goals being

equally weighted. In an effort to achieve better performance in Scenario 3 (at the sacrifice of

the economic goal achievement), the cruise speed is slightly higher, the propeller diameter is

slightly larger, and the aspect ratio and engine activity factor are slightly lower for this scenario

than either Scenario 1 or 2. Thus, in order to maintain sufficient flexibility to achieve all the

design considerations in all three scenarios, the resulting product platform is taken as follows:

• Cruise speed = Mach 0.242 or Mach 0.291 (if performance is first priority)

• Aspect ratio = 7.85 ± 0.24

• Propeller diameter = 5.34 ± 0.2 ft

• Wing loading = 22.54 ± 0.09 lb/ft2

• Engine activity factor = 87.61 ± 2.0

• Seat width = 18.66 ± 0.06 in
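As an illustration, the platform ranges above can be encoded and a scenario solution from Table 7.8 checked for membership. This is a hedged sketch (the dictionary keys and helper name are ours, not the dissertation's); note that the rounded tolerances do not quite cover every scenario, e.g., the Scenario 1 propeller diameter of 5.13 ft falls just outside 5.34 ± 0.2 ft.

```python
# Platform ranges from the bullets above: (midpoint, tolerance); names are illustrative
platform = {
    "AR":   (7.85, 0.24),    # aspect ratio
    "DPRP": (5.34, 0.20),    # propeller diameter [ft]
    "WL":   (22.54, 0.09),   # wing loading [lb/ft2]
    "AF":   (87.61, 2.00),   # engine activity factor
    "WS":   (18.66, 0.06),   # seat width [in]
}

def within_platform(design):
    """True if every design variable lies inside its platform range."""
    return all(abs(design[v] - mid) <= tol + 1e-9
               for v, (mid, tol) in platform.items())

# Scenario 2 solution from Table 7.8
scenario2 = {"AR": 8.09, "DPRP": 5.19, "WL": 22.63, "AF": 89.40, "WS": 18.72}
```

The Scenario 2 solution lies inside the stated ranges, while the Scenario 1 propeller diameter sits marginally outside them due to rounding of the published tolerances.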

These values comprise the ranges that cruise speed, aspect ratio, etc. should be

allowed to take in order to meet the goals as well as possible in any of the three design

scenarios. It is these values which define the GAA product platform around which the family of

aircraft are created.

Before instantiating the individual aircraft to examine how well they perform given these

specifications, notice in Table 7.8 that very few Cdk goals achieve their target of 1; only Cdk,wfuel

is consistently larger than 1, indicating that the family of GAA are capable of meeting the

specified fuel weight targets. The empty weight Cdk and purchase price Cdk are the second best

with Cdk,range performing well in Scenarios 1 and 3. All Cdk values are improved over the

baseline design except for Cdk,ldmax which has decreased slightly. In Scenario 2, the economic

Cdk’s for direct operating cost and empty weight improve slightly but at the expense of a slight

decrease in the purchase price Cdk when compared to the value obtained in Scenario 1. The

big tradeoff between the economic and performance goals in Scenarios 2 and 3 is best seen in

Cdk,doc. In all three scenarios, the family of GAA is far from achieving its target of $60/hr for the

direct operating cost as indicated by the low values for Cdk,doc; however, in Scenario 3 when

achieving performance goals is given a higher priority than economic goals, Cdk,doc is even

worse, indicating the compromise between a family of aircraft that performs well versus one

that is economical.

To study these compromises further, five more design scenarios are formulated (see

Section F.4) to determine whether it is the Cdk formulation that is performing poorly or that the

targets are difficult to achieve. The results are listed separately in Section F.4 and discussed

therein. The end result of examining all these design scenarios is learning that significant

tradeoffs are occurring in Scenarios 1, 2, and 3 where the economic and performance Cdk goals

are equally weighted at different priority levels. Only when a particular Cdk is given first priority

(i.e., placed at PLEV1) in the GAA product platform compromise DSP can the target (C dk = 1)

be achieved. Any other time, the solutions from the GAA product platform compromise DSP

represent the best possible compromise which can be obtained for a particular design scenario,

poor Cdk value or not. Furthermore, the deviation function values shown in Table 7.8 are not

of much value in and of themselves because they are based on how well the Cdk values achieve

their target of 1. Recall that the Cdk is only a means to an end, i.e., to generate a family of aircraft

which satisfies the given ranged set of requirements as well as possible. What is important,

however, is the resulting aircraft which come from instantiating the PPCEM aircraft platform to

accommodate two, four, and six passengers.

7.5.2 Instantiation of the Family of General Aviation Aircraft

Unlike in the universal motor example, instantiation of the individual aircraft within the

GAA product family only requires specifying the number of passengers on the plane, not solving

another compromise DSP to find the best stack length to meet a particular torque requirement.

The individual constraints and goals for each aircraft must be formulated first, however, based

on the specifications given in Section 7.2. Based on the requirements in Table 7.4, the

individual constraints for each aircraft are given by the following:

• noise [dBA]: NOISE(x) ≤ 75 dBA [7.18]

• direct operating cost [$/hr]: DOC(x) ≤ $80/hr [7.19]

• ride roughness: ROUGH(x) ≤ 2.0 [7.20]

• aircraft empty weight [lbs]: WEMP(x) ≤ 2200 lbs [7.21]

• aircraft fuel weight [lbs]: WFUEL(x) ≤ Ci,wfuel [7.22]

• maximum flight range [nm]: RANGE(x) ≥ 2200 nm [7.23]

where Ci,wfuel = {450 lbs, 475 lbs, 500 lbs} and i = {1, 3, 5} passengers. Meanwhile, the

individual goals based on the targets in Table 7.5 for each aircraft are given by:

• aircraft fuel weight [lbs]: WFUEL(x)/Ti,wfuel + d1- - d1+ = 1.0 [7.24]

• aircraft empty weight [lbs]: WEMP(x)/Ti,wemp + d2- - d2+ = 1.0 [7.25]

• direct operating cost [$/hr]: DOC(x)/60 + d3- - d3+ = 1.0 [7.26]

• purchase price [$]: PURCH(x)/Ti,purch + d4- - d4+ = 1.0 [7.27]

• maximum lift/drag: LDMAX(x)/17 + d5- - d5+ = 1.0 [7.28]

• maximum cruise speed [kts]: VCRMX(x)/200 + d6- - d6+ = 1.0 [7.29]

• maximum range [nm]: RANGE(x)/2500 + d7- - d7+ = 1.0 [7.30]

where Ti,wfuel = {450 lbs, 400 lbs, 350 lbs}, Ti,wemp = {1900 lbs, 1950 lbs, 2000 lbs}, Ti,purch =

{$41000, $42000, $43000} and i = {1, 3, 5} passengers. Based on these goals, the deviation

function for each aircraft is a combination of d1+, d2+, d3+, d4+, d5-, d6-, and d7- because it is

desired to lower fuel weight, empty weight, direct operating cost, and purchase price to their

targets and to raise maximum lift/drag, cruise speed, and range to theirs. The resulting deviation

function formulations for each aircraft for each scenario are listed in Table 7.9. These deviation

functions are identical to those listed in Table 7.7 except that di+ and di- apply to the individual

goals of each aircraft and not to the Cdk for the family (which only uses di- to raise Cdk to its target

of 1, as noted in Table 7.7).

Table 7.9 Deviation Functions of Individual Aircraft for Each Scenario

Scenario                  PLEV1                                        PLEV2
1. Overall tradeoff       (d1+ + d2+ + d3+ + d4+ + d5- + d6- + d7-)/7
2. Economic tradeoff      (d2+ + d3+ + d4+)/3                          (d1+ + d5- + d6- + d7-)/4
3. Performance tradeoff   (d1+ + d5- + d6- + d7-)/4                    (d2+ + d3+ + d4+)/3

Note: d1+ lowers fuel weight to its target; d2+ lowers empty weight; d3+ lowers
direct operating cost; d4+ lowers purchase price; d5- raises maximum lift/drag;
d6- raises maximum speed; d7- raises maximum range.
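The deviation variables tabulated above follow directly from the goal formulations in Equations 7.24-7.30. A minimal sketch (the function names are ours, not the dissertation's) computes d- and d+ for a goal of the form G(x)/T + d- - d+ = 1 and averages the seven relevant deviations for the Scenario 1 deviation function:

```python
def deviations(value, target):
    """Deviation variables for the goal value/target + d_minus - d_plus = 1;
    at most one of the pair is nonzero (d_minus * d_plus = 0)."""
    ratio = value / target
    return max(0.0, 1.0 - ratio), max(0.0, ratio - 1.0)

def scenario1_plev1(devs):
    """Overall tradeoff (Scenario 1): equally weighted mean of the seven
    relevant deviation variables (d1+..d4+ and d5-..d7-)."""
    return sum(devs) / 7.0

# e.g., a four seater fuel weight goal: WFUEL = 413.8 lbs against T = 400 lbs
d_minus, d_plus = deviations(413.8, 400.0)   # overachieves the target, so d_plus > 0
```

Since the fuel weight target should be approached from above, only d+ would enter the deviation function for this goal.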

The instantiations of the two, four, and six seater GAA for the family of aircraft based

on the PPCEM platform values are summarized in Table 7.10. These response values are

obtained by evaluating the kriging metamodels at the design variable values listed in Table 7.8

for each scenario. Based on the low deviation function values listed in Table 7.10, it appears

that the PPCEM based family of aircraft perform reasonably well on an individual basis. The

targets for fuel weight and empty weight are met in all cases. Despite the poor showing of

Cdk,doc, the DOC values for the individual aircraft in Scenarios 1 and 3 are within $2/hr of the

target of $60/hr; meanwhile, the DOC values are near their maximum permitted value ($80/hr)

in Scenario 3 when economics takes second priority to performance. The purchase price goals

of {$41000, $42000, $43000} are within $1000 or less of being met in all cases. The

maximum lift/drag ratio (LDMAX) and cruise speeds (VCRMX) do not meet their targets of 17

or 200 very well. The maximum range target (2500 n.m.) is met in Scenario 1 and by all but the

six seater in Scenario 3; the range values for Scenario 2 are slightly below the target.

Table 7.10 Instantiations of the PPCEM Product Platform Based on Kriging Metamodels

Design    No. of  WFUEL   WEMP     DOC     PURCH    LDMAX  VCRMX  RANGE   Dev. Fcn.
Scenario  Seats   [lbs]   [lbs]    [$/hr]  [$]             [kts]  [nm]    PLEV1   PLEV2
1         2       447.70  1892.32  61.05   42078.2  16.16  193.56 2542.7  0.018
          4       409.19  1929.50  62.31   42601.3  15.80  190.17 2517.3  0.028
          6       376.65  1959.37  62.54   43138.7  15.68  188.87 2502.8  0.036
2         2       445.06  1895.36  60.52   42221.0  16.16  194.82 2497.3  0.013   0.019
          4       405.92  1932.81  61.83   42749.0  15.81  191.51 2462.0  0.016   0.036
          6       373.48  1962.93  62.20   43296.0  15.68  190.20 2444.0  0.015   0.054
3         2       446.36  1891.95  78.16   42402.6  16.04  198.16 2543.1  0.016   0.112
          4       406.63  1929.39  79.30   42972.1  15.73  195.16 2509.0  0.029   0.115
          6       377.05  1959.14  78.28   43462.1  15.56  193.72 2482.9  0.050   0.105
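The response values in Table 7.10 come from evaluating kriging metamodels at the platform design variable settings. As a self-contained illustration only (one dimension, two sample points, an assumed Gaussian correlation, and our own naming; the dissertation's metamodels are multidimensional), an ordinary kriging predictor that interpolates its sample data exactly can be sketched as:

```python
import math

def kriging_1d(xs, ys, theta=1.0):
    """Ordinary kriging with two sample points and a Gaussian correlation
    r(h) = exp(-theta * h**2); the predictor interpolates the data exactly."""
    (x1, x2), (y1, y2) = xs, ys
    r12 = math.exp(-theta * (x1 - x2) ** 2)
    det = 1.0 - r12 ** 2                      # determinant of R = [[1, r12], [r12, 1]]
    # apply R^-1 to a 2-vector v
    rinv = lambda v: ((v[0] - r12 * v[1]) / det, (v[1] - r12 * v[0]) / det)
    ry = rinv((y1, y2))
    r1 = rinv((1.0, 1.0))
    beta = (ry[0] + ry[1]) / (r1[0] + r1[1])  # generalized least squares mean
    resid = rinv((y1 - beta, y2 - beta))      # R^-1 (y - beta * 1)

    def predict(x):
        # correlation between the new point and each sample point
        c1 = math.exp(-theta * (x - x1) ** 2)
        c2 = math.exp(-theta * (x - x2) ** 2)
        return beta + c1 * resid[0] + c2 * resid[1]

    return predict
```

Because the predictor reproduces its sample data exactly, the approximation error of interest (examined in Section 7.6.2) arises only at untried design points.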

So what does all this mean in terms of designing a scalable platform for a product

family? Considerable improvement has been made over the initial baseline design, but has a

good family of aircraft been designed? Answers to this question are offered in the next

section in which verification and the implications of the results are discussed.

7.6 VERIFICATION OF GAA PRODUCT PLATFORM RESULTS

To verify the results obtained from implementing the PPCEM, the following questions

are addressed.

• Verify compromise DSP solutions - What do the convergence histories look like for each scenario? Is the best solution being obtained?

• Verify kriging predictions - How does the predicted performance of the individual aircraft based on the kriging models compare to the actual performance in GASP?

• Verify instantiations of PPCEM platform - How do the individual aircraft based on the PPCEM platform compare to individually designed benchmark aircraft?

• Verify PPCEM family - How does the family of aircraft based on the PPCEM compare to the aggregate group of individually designed benchmark aircraft?

Each question is addressed in turn in the following sections.

7.6.1 GAA Product Platform Compromise DSP Verification

The convergence history of the GAA family compromise DSP solution for Scenario 1 is

illustrated in Figure 7.8. As seen in the figure, all three starting points converge to

approximately the same solution, indicating that the best possible solution has likely been

obtained. The initial deviation function for the high starting point is quite large (~15), while the

initial design based on the middle starting point is slightly infeasible; hence, the jump at iteration 2

and the temporary increase in PLEV1.
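The multistart strategy used throughout this verification (solve from low, middle, and high starting points and check that the final solutions agree) can be illustrated with a toy one-dimensional problem; this sketch is purely schematic, since the actual solutions come from DSIDES, not the fixed-step descent below:

```python
def minimize_1d(f, x0, step=0.01, iters=2000):
    """Fixed-step descent with a central-difference gradient (toy illustration)."""
    x = x0
    for _ in range(iters):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

# Three starting points, analogous to the low/middle/high runs of Figure 7.8
f = lambda x: (x - 2.0) ** 2 + 1.0
finals = [minimize_1d(f, x0) for x0 in (-5.0, 0.0, 10.0)]
spread = max(finals) - min(finals)   # near zero when all starts agree
```

A small spread among the final solutions is the evidence of convergence used here; a large spread would suggest multiple local optima or premature termination.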

[Figure: PLEV1 deviation function vs. iterations for the low, middle, and high starting points]

Figure 7.8 Convergence History of GAA Family C-DSP for Scenario 1

The convergence histories for the PLEV1 and PLEV2 for Scenarios 2 and 3 are

illustrated in Figure 7.9. Similar to Figure 7.8, the three starting points for Scenarios 2 and 3

yield a wide range of initial deviation function values, but the model tends to converge at the

same or nearly similar solutions. This trend holds true at both priority levels in both scenarios.

[Figure: four panels of deviation function vs. iterations from the three starting points: (a) Scenario 2 - Priority Level 1, (b) Scenario 3 - Priority Level 1, (c) Scenario 2 - Priority Level 2, (d) Scenario 3 - Priority Level 2]

Figure 7.9 Convergence History for Scenarios 2 and 3

Furthermore, it is interesting to note the parity between PLEV1 in Scenario 2 (Figure

7.9a) and PLEV2 in Scenario 3 (Figure 7.9d) and between PLEV1 in Scenario 3 (Figure 7.9b)

and PLEV2 in Scenario 2 (Figure 7.9c) since the same goals are equally weighted at different

levels in these scenarios. Comparing these graphs reveals the true nature of the tradeoffs that

occur between the economic goals and the performance goals. When the economic goals are

placed at the first priority level in Scenario 2, a much lower value of the deviation function can

be achieved than when they are placed at the second priority level as in

Scenario 3. The same holds true for the performance goals in the first priority level in Scenario

3 when compared to the second priority level of Scenario 2.

7.6.2 Comparisons of Kriging Predictions and GASP

Previously, the performance of the individual aircraft has been based on predictions

from kriging metamodels (see Table 7.10). To verify these predictions, the performance of the

individual aircraft is now evaluated directly in GASP rather than estimated from the kriging

metamodels. The results are summarized in Table 7.11 and can be compared directly to the

previous values listed in Table 7.10. The resulting approximation errors between the kriging

model predictions and the actual aircraft instantiations in GASP are summarized in Table 7.12.

Table 7.11 Performance of PPCEM Platform Instantiations in GASP

Design    No. of  WFUEL   WEMP     DOC     PURCH    LDMAX  VCRMX  RANGE   Dev. Fcn.
Scenario  Seats   [lbs]   [lbs]    [$/hr]  [$]             [kts]  [nm]    PLEV1   PLEV2
1         2       449.43  1887.15  61.98   41817.0  15.89  190.83 2491.0  0.024
          4       413.80  1921.71  63.31   42374.5  15.61  188.47 2436.0  0.038
          6       388.49  1946.59  63.85   42827.0  15.53  187.61 2420.0  0.051
2         2       447.25  1889.73  61.60   41959.4  15.91  192.24 2446.0  0.017   0.031
          4       411.34  1924.56  62.85   42502.8  15.63  189.51 2393.0  0.020   0.051
          6       385.69  1949.77  63.38   42989.1  15.55  189.07 2377.0  0.019   0.073
3         2       450.92  1886.97  77.43   42150.1  15.79  195.53 2499.0  0.024   0.106
          4       415.34  1921.70  79.13   42727.0  15.52  193.32 2451.0  0.045   0.112
          6       389.84  1946.82  79.76   43190.3  15.44  192.46 2437.0  0.067   0.111

The errors are expressed as a percent of the actual value obtained from GASP; a

positive error indicates over-prediction of the response, and a negative error indicates under-

prediction. The maximum error occurs for RANGE of the six seater GAA (= 3.42%). In

general, the kriging models over-predict WEMP, PURCH, LDMAX, VCRMX, and RANGE

and under-predict WFUEL and DOC. The average percentage error for each response also is

listed in the table; the magnitudes of these averages range from 0.45% to a high of 2.52%. In

summary, the kriging metamodel predictions appear quite accurate based on the error analysis

in Table 7.12.

Table 7.12 Approximation Errors for Individual Aircraft

Design    No. of  WFUEL   WEMP   DOC     PURCH  LDMAX  VCRMX  RANGE
Scenario  Seats
1         2       -0.39%  0.27%  -1.51%  0.62%  1.73%  1.43%  2.08%
          4       -1.12%  0.41%  -1.58%  0.54%  1.20%  0.90%  3.34%
          6       -3.05%  0.66%  -2.06%  0.73%  0.98%  0.67%  3.42%
2         2       -0.49%  0.30%  -1.74%  0.62%  1.60%  1.34%  2.10%
          4       -1.32%  0.43%  -1.62%  0.58%  1.14%  1.05%  2.88%
          6       -3.16%  0.67%  -1.85%  0.71%  0.86%  0.60%  2.82%
3         2       -1.01%  0.26%   0.94%  0.60%  1.56%  1.35%  1.76%
          4       -2.10%  0.40%   0.21%  0.57%  1.36%  0.95%  2.37%
          6       -3.28%  0.63%  -1.85%  0.63%  0.81%  0.65%  1.88%
Average %error =  -1.77%  0.45%  -1.23%  0.62%  1.25%  0.99%  2.52%
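The entries in Table 7.12 follow a simple signed percent-error convention. A minimal sketch (the helper name is ours) reproduces, for example, the 2.08% RANGE error of the two seater in Scenario 1 from Tables 7.10 and 7.11:

```python
def percent_error(predicted, actual):
    """Signed approximation error as a percent of the GASP (actual) value;
    positive means the kriging model over-predicts the response."""
    return 100.0 * (predicted - actual) / actual

# Two seater, Scenario 1: kriging RANGE = 2542.7 nm vs. GASP RANGE = 2491.0 nm
range_err = percent_error(2542.7, 2491.0)   # about +2.08%, as in Table 7.12
```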

7.6.3 Comparison of PPCEM Results to Benchmark Aircraft

For further verification of the PPCEM aircraft, individual (benchmark) aircraft are

designed using GASP and DSIDES directly for comparison with the aircraft obtained through

the implementation of the PPCEM. The compromise DSP for these benchmark aircraft is shown in

Figure 7.10 and is derived from Equations 7.18-7.30 for the individual constraints and goals

listed in Table 7.4 and Table 7.5, respectively. As in the individual instantiations of the PPCEM

platform, the deviation variables of interest are d1+, d2+, d3+, d4+, d5-, d6-, and d7- because it is

desired to lower fuel weight, empty weight, direct operating cost, and purchase price to their

targets and to raise maximum lift/drag, cruise speed, and range to theirs.

Given:
o Baseline aircraft configuration and mission profile
o General Aviation Synthesis Program (GASP)

Find:
o The system variables, x:
  • cruise speed, CSPD            • wing loading, WL
  • wing aspect ratio, AR         • engine activity factor, AF
  • propeller diameter, DPRP      • seat width, WS
o The values of the deviation variables associated with G(x):
  • fuel weight, d1-, d1+                • maximum lift/drag, d5-, d5+
  • empty weight, d2-, d2+               • maximum speed, d6-, d6+
  • direct operating cost, d3-, d3+      • maximum range, d7-, d7+
  • purchase price, d4-, d4+

Satisfy:
o The system constraints, C(x), based on kriging models:
  • noise [dBA]: NOISE(x) ≤ 75 dBA [7.18]
  • direct operating cost [$/hr]: DOC(x) ≤ $80/hr [7.19]
  • ride roughness: ROUGH(x) ≤ 2.0 [7.20]
  • aircraft empty weight [lbs]: WEMP(x) ≤ 2200 lbs [7.21]
  • aircraft fuel weight [lbs]: WFUEL(x) ≤ Ci,wfuel [7.22]
  • maximum flight range [nm]: RANGE(x) ≥ 2200 nm [7.23]
o The system goals, G(x), based on kriging models:
  • aircraft fuel weight [lbs]: WFUEL(x)/Ti,wfuel + d1- - d1+ = 1.0 [7.24]
  • aircraft empty weight [lbs]: WEMP(x)/Ti,wemp + d2- - d2+ = 1.0 [7.25]
  • direct operating cost [$/hr]: DOC(x)/60 + d3- - d3+ = 1.0 [7.26]
  • purchase price [$]: PURCH(x)/Ti,purch + d4- - d4+ = 1.0 [7.27]
  • maximum lift/drag: LDMAX(x)/17 + d5- - d5+ = 1.0 [7.28]
  • maximum cruise speed [kts]: VCRMX(x)/200 + d6- - d6+ = 1.0 [7.29]
  • maximum range [nm]: RANGE(x)/2500 + d7- - d7+ = 1.0 [7.30]
o Constraints on deviation variables: di- · di+ = 0 and di-, di+ ≥ 0.
o The bounds on the system variables:
  0.24 M ≤ CSPD ≤ 0.48 M          19 lb/ft2 ≤ WL ≤ 25 lb/ft2
  7 ≤ AR ≤ 11                     85 ≤ AF ≤ 110
  5.0 ft ≤ DPRP ≤ 5.96 ft         14.0 in ≤ WS ≤ 20.0 in

Minimize:
o The sum of the deviation variables associated with:
  • fuel weight, d1+              • maximum lift/drag ratio, d5-
  • empty weight, d2+             • maximum speed, d6-
  • direct operating cost, d3+    • maximum range, d7-
  • purchase price, d4+
  Z = { f1(d1+), f2(d2+), f3(d3+), f4(d4+), f5(d5-), f6(d6-), f7(d7-) }

Figure 7.10 GAA Compromise DSP for Individual Aircraft

To design each benchmark aircraft, the compromise DSP in Figure 7.10 is

particularized with the appropriate targets and constraints and solved: Ci,wfuel = {450 lbs, 475

lbs, 500 lbs}, Ti,wfuel = {450 lbs, 400 lbs, 350 lbs}, Ti,wemp = {1900 lbs, 1950 lbs, 2000 lbs},

Ti,purch = {$41000, $42000, $43000} and i = {1, 3, 5} passengers. The same three design

scenarios are used when designing each benchmark aircraft, see Table 7.13. All three scenarios

are tradeoff studies: Scenario 1 is an overall tradeoff with all goals weighted equally; Scenario 2

has the economic goals weighted equally at the first priority level (PLEV1) and the performance

goals weighted equally at the second priority level (PLEV2); and Scenario 3 is the reverse of

Scenario 2 with performance goals being ranked first and economics second. The deviation

function formulations for each scenario for the benchmark aircraft are listed in the table. Notice

that a combination of di+ and di- is used in the deviation function, not just di-. This is

because it is desired to lower fuel weight, empty weight, direct operating cost, and purchase

price to their targets and to raise maximum lift/drag, cruise speed, and range to theirs. (With the

Cdk formulation, the only concern is to minimize di- in order to ensure that Cdk = 1.)
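The particularization described above amounts to a lookup of constraint limits and goal targets by aircraft size. A sketch of that data follows (the structure and names are illustrative; the i = 1, 3, 5 passenger cases correspond to the two, four, and six seat aircraft):

```python
# Constraint limits C_i and goal targets T_i from the text, indexed by seats;
# keys and layout are hypothetical, chosen only for illustration
targets = {
    seats: {"C_wfuel": cw, "T_wfuel": tw, "T_wemp": te, "T_purch": tp}
    for seats, cw, tw, te, tp in zip(
        (2, 4, 6),
        (450.0, 475.0, 500.0),        # fuel weight constraint limits C_i,wfuel [lbs]
        (450.0, 400.0, 350.0),        # fuel weight goal targets T_i,wfuel [lbs]
        (1900.0, 1950.0, 2000.0),     # empty weight goal targets T_i,wemp [lbs]
        (41000.0, 42000.0, 43000.0),  # purchase price goal targets T_i,purch [$]
    )
}
# The remaining goal targets are shared by all three aircraft:
# DOC $60/hr, LDMAX 17, VCRMX 200 kts, RANGE 2500 nm
```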

Table 7.13 Design Scenarios for Designing GAA Benchmark Aircraft

Scenario                  PLEV1                                        PLEV2
1. Overall tradeoff       (d1+ + d2+ + d3+ + d4+ + d5- + d6- + d7-)/7
2. Economic tradeoff      (d2+ + d3+ + d4+)/3                          (d1+ + d5- + d6- + d7-)/4
3. Performance tradeoff   (d1+ + d5- + d6- + d7-)/4                    (d2+ + d3+ + d4+)/3

Note: d1+ lowers fuel weight to its target; d2+ lowers empty weight; d3+ lowers
direct operating cost; d4+ lowers purchase price; d5- raises maximum lift/drag;
d6- raises maximum speed; d7- raises maximum range.

As before, three starting points—lower, middle, and upper values—are used when

designing each aircraft for each scenario; the best design(s) is then taken as the one with the

lowest deviation function value. Convergence plots for each aircraft for each scenario are listed

separately in Section F.5 and are similar to those observed for the PPCEM solutions in Section 7.6.1.

The final settings of the design variables for each aircraft for each of these three design scenarios

are listed in Table 7.14 through Table 7.16 for Scenarios 1-3, respectively. Each set of results

is discussed in turn and plotted graphically with the corresponding PPCEM instantiations for a

quick comparison of the results. The results for Scenario 1 are listed in Table 7.14.

Table 7.14 Individual PPCEM and Benchmark Aircraft for Scenario 1

             PPCEM Aircraft       Benchmark Aircraft
Des. Var.    (common platform)    2 Seater  4 Seater  6 Seater
CSPD [Mach]  0.244                0.240     0.240     0.240
AR           8.00                 8.61      7.96      9.16
DPRP [ft]    5.13                 5.00      5.00      5.00
WL [lb/ft2]  22.46                22.07     21.42     21.69
AF           89.60                85.25     95.00     86.37
WS [in]      18.60                18.04     18.35     19.45

             PPCEM Aircraft                  Benchmark Aircraft
Responses    2 Seater  4 Seater  6 Seater   2 Seater  4 Seater  6 Seater
WFUEL [lbs]  449.43    413.80    388.49     449.67    408.35    349.97
WEMP [lbs]   1887.15   1921.71   1946.59    1888.23   1926.33   1986.58
DOC [$/hr]   61.98     63.31     63.85      61.60     63.90     64.02
PURCH [$]    41817.0   42374.5   42827.0    41607.8   42240.0   43262.9
LDMAX        15.89     15.61     15.53      16.54     15.80     16.40
VCRMX [kts]  190.83    188.47    187.61     187.47    184.58    181.94
RANGE [nm]   2491      2436      2420       2536      2672      2466
Dev. Fcn.
PLEV1        0.0240    0.0377    0.0506     0.0187    0.0341    0.0303

As can be seen in the table, there is little variation among the design variable settings

for the benchmark aircraft even though they have been designed individually. In particular, the

benchmark aircraft share common settings for cruise speed and propeller diameter. Aspect

ratio and wing loading vary only slightly between the aircraft,

and the difference in seat widths (WS) for each aircraft is less than 1.5 in. The engine activity

factor varies the most of the six design variables. It is interesting to note that the PPCEM design

variable values are quite close to the benchmark designs. Seat width, aspect ratio, and engine

activity factor all are contained within the range of settings for the benchmark aircraft. The

PPCEM values for cruise speed and propeller diameter are only slightly larger than the

corresponding values which are shared between all three benchmark aircraft.

Despite the similarity of the design variable settings for the two families of aircraft, only

the four seater benchmark and PPCEM aircraft have similar deviation function values. The two

and six seater aircraft from the PPCEM are both slightly worse than the benchmark designs as a

result of having a common set of design variables for all three aircraft. To see why this is and to

facilitate comparison of the performance of the two families of aircraft (the one based on the

PPCEM and the group of benchmark aircraft), plots of the individual goal achievements for

each aircraft are given in Figure 7.11 for Scenario 1. The idea of using a “spider” or

“snowflake” plot to show goal achievement comes from Sandgren (1989). In the spider plot,

goal deviation values are plotted on the axes of the web; the closer a mark is on its axis to the

origin, the better that particular goal has been achieved. In this manner, the shape of the

polygon formed by connecting the deviation values for each design can be used to compare

designs quickly. In other words, the two seater aircraft from the PPCEM platform and the

benchmark design can be quickly compared by plotting their goal achievement on the same

spider plot as is done in Figure 7.11 for all three aircraft which comprise the GAA family.
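The spider plot construction described above can be sketched as a simple coordinate mapping (the function name is ours; an actual plot would hand these points to a charting library). Each goal deviation is placed on its own evenly spaced radial axis, and the polygon is closed for drawing:

```python
import math

def spider_coords(deviations):
    """Map goal deviations onto evenly spaced radar ('spider') axes and close
    the polygon; points nearer the origin indicate better goal achievement."""
    n = len(deviations)
    pts = [(d * math.cos(2 * math.pi * k / n), d * math.sin(2 * math.pi * k / n))
           for k, d in enumerate(deviations)]
    return pts + pts[:1]   # repeat the first point to close the outline
```

Overlaying the polygons for a PPCEM aircraft and its benchmark counterpart then gives the quick visual comparison used in Figures 7.11 through 7.13.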

[Figure: spider plots of goal deviations (WEMP, DOC, PURCH, WFUEL, LDMAX, VCRMX, RANGE) for the two, four, and six seater PPCEM and benchmark aircraft in Scenario 1; PLEV1 deviation functions: benchmark = 0.0187, 0.0341, 0.0303 and PPCEM family = 0.0240, 0.0377, 0.0506 for the two, four, and six seaters, respectively]

Figure 7.11 Graphical Comparison of Benchmark Aircraft and PPCEM Family for
Scenario 1

In the overall tradeoff study (Scenario 1) shown in Figure 7.11, all seven goals are

equally weighted. Some observations based on the graphs are as follows:

• In the two seater aircraft, the achievement of WEMP, DOC, PURCH, WFUEL, and
RANGE appears virtually equal. The PPCEM aircraft exhibits slightly better
achievement of the VCRMX target; however, the benchmark design has better
LDMAX than the PPCEM design, which accounts for the difference between the deviation
functions for these aircraft.

• In the four seater aircraft, the PPCEM design performs slightly better at DOC and
VCRMX but slightly worse at RANGE, WFUEL, and LDMAX. Both aircraft
designs achieve the target for empty weight (WEMP).

• In the six seater aircraft, both designs achieve the WEMP and PURCH targets. DOC
achievement is essentially equal for both aircraft. The PPCEM design yields slightly
better VCRMX than the benchmark aircraft; however, the benchmark design
outperforms the PPCEM design in WFUEL, LDMAX, and RANGE. It appears that
the difference in achievement of LDMAX and WFUEL accounts for the large
discrepancy in the two deviation functions for the six seater aircraft because the
achievement of the other goals is comparable for both aircraft.

The results for Scenario 2 are summarized in Table 7.15. As seen in Scenario 1, the

cruise speeds for the PPCEM aircraft and the benchmark aircraft are essentially the same. The

aspect ratio for the PPCEM aircraft is contained within the range of the benchmark designs but

is on the low end. The propeller diameter for the PPCEM aircraft is slightly higher than that of

the benchmark aircraft, which again have nearly identical propeller diameters despite being

designed individually. The wing loading for the PPCEM aircraft is about 2 lb/ft2 lower than that

of the benchmark designs, whose values vary by only about 0.7 lb/ft2 across all three aircraft.

The engine activity factors for the benchmark aircraft vary from 85 to a high of 109; the

PPCEM aircraft have a value of 89.4, which falls within this range. Finally, the seat widths for

the benchmark designs are lower than for the PPCEM aircraft, with the four and six seater

values trending toward the lower bound of 14 in.

Table 7.15 Individual PPCEM and Benchmark Aircraft for Scenario 2

             PPCEM Aircraft       Benchmark Aircraft
Des. Var.    (common platform)    2 Seater  4 Seater  6 Seater
CSPD [Mach]  0.242                0.240     0.243     0.240
AR           8.09                 10.00     8.50      7.96
DPRP [ft]    5.20                 5.00      5.06      5.00
WL [lb/ft2]  22.63                24.49     24.27     24.91
AF           89.40                91.37     109.09    85.00
WS [in]      18.72                18.18     15.42     14.39

             PPCEM Aircraft                  Benchmark Aircraft
Responses    2 Seater  4 Seater  6 Seater   2 Seater  4 Seater  6 Seater
WFUEL [lbs]  447.25    411.34    385.69     450.04    474.07    494.81
WEMP [lbs]   1889.73   1924.56   1949.77    1891.95   1865.35   1843.77
DOC [$/hr]   61.60     62.85     63.38      60.34     59.99     60.21
PURCH [$]    41959.4   42502.8   42989.1    42049.5   41778.0   41106.4
LDMAX        15.91     15.63     15.55      17.11     16.23     15.85
VCRMX [kts]  192.24    189.51    189.07     193.46    196.63    194.29
RANGE [nm]   2446      2393      2377       1997      2133      2058
Dev. Fcn.
PLEV1        0.0167    0.0198    0.0188     0.0104    0.0000    0.0011
PLEV2        0.0311    0.0510    0.0728     0.0585    0.0986    0.1717

The resulting deviation functions for the two families of aircraft are comparable, with the

benchmark designs having consistently lower PLEV1 (the deviation function value at priority level

1) but slightly larger PLEV2 than the PPCEM aircraft. In the preemptive (i.e., lexicographic)

case, however, having lower PLEV2 values does not matter unless the PLEV1 values are the same.

The first level deviation function value for the four seater benchmark design is zero, indicating

that the design is capable of meeting all of its designated targets. The two and six seater benchmark

designs also both fare well at achieving their targets, having PLEV1 values of 0.01 and 0.001,

respectively. The PPCEM aircraft, on the other hand, have PLEV1 values which are slightly

worse, and the four seater PPCEM design does not achieve all of its targets as the

benchmark design did. The discrepancies between the goal achievement of the two

families of aircraft can be seen in the spider plot for Scenario 2 shown in Figure 7.12.
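The preemptive comparison invoked above is simply lexicographic ordering of the priority-level deviation functions, which Python tuple comparison provides natively; a sketch (the function name is ours) using the two seater values from Table 7.15:

```python
def preferred(a, b):
    """Preemptive (lexicographic) preference: design a beats design b only if it
    wins at the highest priority level where their deviation functions differ;
    a lower PLEV2 never compensates for a higher PLEV1."""
    return tuple(a) < tuple(b)

# Scenario 2, two seater (PLEV1, PLEV2): benchmark vs. PPCEM instantiation
benchmark = (0.0104, 0.0585)
ppcem = (0.0167, 0.0311)
```

Here the benchmark design is preferred despite its larger PLEV2, because its PLEV1 is lower.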

[Figure: spider plots of the economic goal deviations (WEMP, DOC, PURCH) for the two, four, and six seater PPCEM and benchmark aircraft in Scenario 2; PLEV1 deviation functions: benchmark = 0.0104, 0.0000, 0.0011 and PPCEM family = 0.0167, 0.0198, 0.0188 for the two, four, and six seaters, respectively]

Figure 7.12 Graphical Comparison of Benchmark Aircraft and PPCEM Family for
Design Scenario 2, Priority Level 1 Only

In Figure 7.12, the results of the economic tradeoff study (Scenario 2) are illustrated.

Only the three economic goals considered at the first priority level are shown:

empty weight (WEMP), direct operating cost (DOC), and purchase price (PURCH). Some

observations based on the graphs are as follows:

• Both sets of aircraft achieve the desired targets for empty weight.

• The purchase price (PURCH) for the two seater aircraft from the PPCEM is slightly lower
than that of the benchmark aircraft; however, the purchase price for the four seater
PPCEM design is slightly higher than its benchmark counterpart. Both six seater
aircraft do equally well at achieving the target.

• The DOC values for the PPCEM solutions are higher for all three aircraft than for
the individually designed benchmark aircraft; the inability to achieve the DOC
target is the main cause of the large discrepancy in the deviation function value
(PLEV1) for the PPCEM aircraft.

Finally, the results for Scenario 3 are summarized in Table 7.16. Notice that the

PPCEM cruise speed value is larger than those of all three benchmark designs while the aspect

ratio is smaller. The propeller diameter again is slightly larger for the PPCEM designs than for the

benchmark aircraft. The wing loadings for both families of aircraft are comparable, but the

engine activity factor for the PPCEM tends to be on the lower end of the benchmark aircraft

settings. The seat width for the PPCEM is within the range of seat widths found for the

benchmark designs and is nearly identical to that of the four seater benchmark aircraft, with the

two seater being slightly smaller and the six seater slightly larger.

Table 7.16 Individual PPCEM and Benchmark Aircraft for Scenario 3

             PPCEM Aircraft       Benchmark Aircraft
Des. Var.    (common platform)    2 Seater  4 Seater  6 Seater
CSPD [Mach]  0.291                0.257     0.270     0.240
AR           7.62                 8.30      8.32      9.19
DPRP [ft]    5.55                 5.03      5.12      5.00
WL [lb/ft2]  22.48                22.35     22.04     21.63
AF           85.63                101.11    95.86     85.00
WS [in]      18.70                18.22     18.88     19.42

             PPCEM Aircraft                  Benchmark Aircraft
Responses    2 Seater  4 Seater  6 Seater   2 Seater  4 Seater  6 Seater
WFUEL [lbs]  450.92    415.34    389.84     449.25    399.57    350.17
WEMP [lbs]   1887.00   1921.70   1946.82    1888.80   1937.72   1986.37
DOC [$/hr]   77.43     79.13     79.76      68.03     72.92     64.19
PURCH [$]    42150.0   42727.0   43190.3    41871.0   42699.8   43237.2
LDMAX        15.79     15.52     15.44      16.32     16.05     16.44
VCRMX [kts]  195.53    193.32    192.46     190.62    187.99    181.68
RANGE [nm]   2499      2451      2437       2494      2497      2478
Dev. Fcn.
PLEV1        0.0240    0.0446    0.0671     0.0223    0.0293    0.0335
PLEV2        0.1062    0.1120    0.1112     0.0517    0.0772    0.0251

The deviation function values at priority level 1 (PLEV1) for the two families of aircraft

exhibit trends similar to those seen previously, except that this time it is the two seater aircraft

which have comparable achievement of their first level goals, not the four seater aircraft as in the

previous scenario. The four seater PPCEM aircraft deviation function (PLEV1) is about 1.5 times

that of the benchmark design, while the six seater PPCEM value is about twice that of the benchmark.

Unlike in Scenario 2, however, the benchmark designs also have lower PLEV2 values compared to

the PPCEM designs. To see the discrepancy in individual goal achievement, the deviation

variables for the four performance goals considered at the first priority level are plotted in

Figure 7.13: fuel weight (WFUEL), maximum lift to drag ratio (LDMAX), maximum

cruise speed (VCRMX), and maximum flight range (RANGE).

Some observations based on these spider plots are as follows:

• The fuel weight target is met by all three benchmark aircraft while the PPCEM aircraft
do not achieve their targets. In fact, the PPCEM aircraft exhibit increasingly worse
achievement of the target as the aircraft is scaled to accommodate more passengers,
which accounts for the increase in PLEV1 for the four and six seater PPCEM
aircraft.

• Neither family of aircraft achieves the target for maximum lift/drag ratio well; however,
the benchmark aircraft consistently perform better.

• All three of the PPCEM aircraft do better at achieving the target for maximum cruise
speed than do the individually designed benchmark aircraft.

• The PPCEM aircraft have only slightly worse RANGE achievement than the benchmark
aircraft.

[Figure: spider plots of the performance goal deviations (WFUEL, LDMAX, VCRMX, RANGE) for the two, four, and six seater PPCEM and benchmark aircraft in Scenario 3; PLEV1 deviation functions: benchmark = 0.0223, 0.0293, 0.0335 and PPCEM family = 0.0240, 0.0446, 0.0671 for the two, four, and six seaters, respectively]

Figure 7.13 Graphical Comparison of Benchmark Aircraft and PPCEM Family for
Design Scenario 3, Priority Level 1 Only

In summary, in order to improve the commonality of the aircraft within the GAA product family, the overall performance of the individual aircraft within the product family decreases. This decrease in performance, however, varies from aircraft to aircraft and scenario to scenario. The question that the designers/managers are now faced with is: how much performance degradation are we willing to accept so that we can have as common a product platform as possible? Ideally, minimal performance would have to be sacrificed to increase commonality between derivative products, but it appears that a tradeoff does exist, as one might expect. In reality, however, it would not be known how much performance was being sacrificed by designing a common platform for the product family because benchmark designs would not necessarily exist (unless this was a redesign process). Toward this end, a product variety tradeoff study is performed in the next section.

7.6.4 Product Variety Tradeoff Study

To assess the tradeoff between product commonality and product performance within the family of GAA, a product variety tradeoff study is performed using the PDI and NCI measures described in Section 3.1.5. Currently, there are two points on the PDI vs. NCI graph: the family of aircraft based on the PPCEM solutions and the group (family) of benchmark aircraft which have been individually designed. What is interesting to study is the effect of allowing one or more design variables to vary in the PPCEM for each aircraft while holding the remaining variables constant at the platform values found using the PPCEM. In this manner, the PPCEM facilitates generating a variety of alternatives for the product platform and corresponding product family. By allowing one or more variables to vary between aircraft, the performance of the individual aircraft within the resulting product family can be improved such that there is minimal tradeoff between product commonality and performance. Before this tradeoff can be assessed, however, the relative importance of the design variables is needed in order to compute NCI for each family of aircraft.
The weightings in NCI used in this study are based on rank ordering the design variables with regard to the relative ease/cost with which they can be allowed to vary: the more costly it is to allow that variable to change, the more important it is to have that variable stay the same across derivative products. For this example, the weightings listed in Table 7.17 are used. Cruise speed (CSPD) is the easiest/cheapest variable to allow to vary between designs because it is easy to vary the cruise speed throughout the mission without having to make any modifications to the aircraft; meanwhile, seat width (WS) is the most expensive to allow to vary because it is costly not to have the same fuselage width (fuselage width being directly proportional to seat width) for all of the aircraft within the GAA family. These weights are derived from a pairwise comparison of the design variables; the justification for the pairwise comparison and the computation of the rank ordering and relative importance are explained in Section F.4.1.

Table 7.17 Relative Importance of Design Variables

Design     Rank     Relative
Variable   Order†   Importance
AF         3        0.1429
AR         5        0.2381
CSPD       1        0.0476
DPRP       2        0.0952
WL         4        0.1905
WS         6        0.2857

† Larger numbers indicate preference.
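The relative importance column in Table 7.17 is consistent with simply normalizing each rank order by the sum of the ranks (1 + 2 + ... + 6 = 21). A minimal sketch of that normalization follows; note the dissertation itself derives the weights from the pairwise comparison in Section F.4.1, so this is an illustrative reconstruction:

```python
# Rank orders from Table 7.17 (larger = more costly to vary = more important)
ranks = {"AF": 3, "AR": 5, "CSPD": 1, "DPRP": 2, "WL": 4, "WS": 6}

total = sum(ranks.values())  # 1 + 2 + 3 + 4 + 5 + 6 = 21
weights = {var: rank / total for var, rank in ranks.items()}

# Print least to most important; values match Table 7.17 to four decimals
for var, w in sorted(weights.items(), key=lambda kv: kv[1]):
    print(f"{var:5s} {w:.4f}")
```

The weights sum to one by construction, which is what the NCI weighting requires.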

For this product variety study, two design scenarios are considered: the economic tradeoff study (Scenario 2) and the performance tradeoff study (Scenario 3) listed in Table 7.13. For these two scenarios, the individual PPCEM and benchmark aircraft are listed in Table 7.15 and Table 7.16 for Scenarios 2 and 3, respectively. The resulting PDI and NCI for each group of aircraft based on these design variable values are computed and listed in Table 7.18 and Table 7.20 for Scenarios 2 and 3, respectively; remember that only the first priority level is used when computing PDI, and the weightings used in the NCI are the relative importances listed in Table 7.17 and do not include the variation in the scale factor.

Knowing the two extremes of the PDI vs. NCI curve, it is possible to work "backward" along the curve from the PPCEM solutions toward the benchmark designs by allowing one or multiple design variables to vary between each aircraft while holding the others fixed at the PPCEM platform values. This process proceeds as follows:

1. Starting with the individual PPCEM aircraft, vary one variable at a time for each aircraft; for instance, hold {AF, AR, CSPD, DPRP, WL} at the settings prescribed by the PPCEM platform and vary WS for each aircraft to improve the performance of that aircraft as much as possible. This entails solving a compromise DSP for each aircraft, with the PPCEM value for WS taken as the starting point in DSIDES. All six variables are allowed to vary one-at-a-time from the PPCEM platform values, solving a compromise DSP for each aircraft for each variable. NCI and PDI are then computed for each of the six resulting aircraft families, e.g., the family of aircraft which share common {AF, AR, CSPD, DPRP, WL} but varying WS.

2. Repeat Step 1, allowing any two variables to vary at a given time between aircraft from the PPCEM platform values. There are 15 possible pairs of variables which are varied two-at-a-time, and a compromise DSP is solved for each possible pair with the PPCEM value taken as the starting point. NCI and PDI are computed for each of the 15 resulting product families.

3. Repeat Step 1, allowing any three variables to vary at a given time between aircraft. In order to reduce the number of combinations that must be examined, CSPD is not varied from aircraft to aircraft because it is known not to change much between aircraft in the group of benchmark designs. Hence, only AF, AR, DPRP, WL, and WS are allowed to vary from their PPCEM platform values, resulting in 10 different combinations of the five variables taken three at a time. NCI and PDI are computed for each of the 10 resulting product families.
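The enumeration underlying these three steps can be sketched with itertools; this is an illustrative reconstruction of the bookkeeping, not the author's DSIDES implementation, and the factor of three assumes one compromise DSP per aircraft in the three-member family:

```python
from itertools import combinations

variables = ["AF", "AR", "CSPD", "DPRP", "WL", "WS"]
n_aircraft = 3  # two, four, and six seater configurations

singles = list(combinations(variables, 1))   # 6 one-at-a-time subsets (Step 1)
pairs = list(combinations(variables, 2))     # 15 two-at-a-time subsets (Step 2)

# Step 3 holds CSPD fixed, leaving C(5, 3) = 10 three-variable subsets
triples = list(combinations([v for v in variables if v != "CSPD"], 3))

# One compromise DSP per aircraft per subset of varied variables
n_dsps = (len(singles) + len(pairs) + len(triples)) * n_aircraft
print(n_dsps)  # 93
```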

In total, (6×3) + (15×3) + (10×3) = 93 compromise DSPs are solved in this product variety study for each scenario. The resulting NCI and PDI are listed in Table 7.18 for Scenario 2 and in Table 7.20 for Scenario 3. In the tables, the results are grouped by the number of varied variables, where the variables not listed are held constant. For instance, in Table 7.18 the NCI and PDI for the family of aircraft when allowing WL to vary from one aircraft to the next are 0.0171 and 0.0155, respectively, with all other design variables fixed at the PPCEM values; when AF and WL are allowed to vary from one aircraft to the next, the resulting NCI and PDI for the group of products are 0.0234 and 0.0150, respectively.

Table 7.18 Product Variety Tradeoff Study - Scenario 2

                                                        NCI      PDI
Benchmark Designs (each aircraft is optimized;
  all variables can vary)                               0.1795   0.0038
PPCEM Designs using Cdk (each aircraft is
  designed to have the same variables)                  0.0000   0.0184
Allow 1 variable to vary between aircraft from PPCEM designs:
  AF                                                    0.0178   0.0181
  AR                                                    0.0040   0.0181
  CSPD                                                  0.0000   0.0182
  DPRP                                                  0.0026   0.0179
  WL                                                    0.0171   0.0155
  WS                                                    0.0381   0.0117 *
Allow 2 variables to vary between aircraft from PPCEM designs:
  AF, AR                                                0.0147   0.0181
  AF, CSPD                                              0.0178   0.0181
  AF, DPRP                                              0.0155   0.0175
  AF, WL                                                0.0234   0.0150
  AF, WS                                                0.0509   0.0113
  AR, CSPD                                              0.0041   0.0181
  AR, DPRP                                              0.0172   0.0175
  AR, WL                                                0.0230   0.0147
  AR, WS                                                0.0559   0.0096
  CSPD, DPRP                                            0.0027   0.0179
  CSPD, WL                                              0.0233   0.0152
  CSPD, WS                                              0.0382   0.0117
  DPRP, WL                                              0.0249   0.0154
  DPRP, WS                                              0.0528   0.0106
  WL, WS                                                0.0702   0.0086 *
Allow 3 variables to vary between aircraft from PPCEM designs:
  AF, AR, DPRP                                          0.0110   0.0181
  AF, AR, WL                                            0.0370   0.0146
  AF, AR, WS                                            0.0848   0.0100
  AF, DPRP, WL                                          0.0495   0.0154
  AF, DPRP, WS                                          0.0672   0.0147
  AF, WL, WS                                            0.1068   0.0081
  AR, DPRP, WL                                          0.0405   0.0151
  AR, DPRP, WS                                          0.0572   0.0107
  AR, WL, WS                                            0.0803   0.0068 *
  DPRP, WL, WS                                          0.0701   0.0075

* Best improvement in PDI for that number of varied variables (shaded gray in the original).

The gray shaded rows in Table 7.18 indicate the best increase in PDI which can be achieved by allowing 1, 2, or 3 variables to vary at a given time. So, if only one variable is allowed to vary between aircraft, then allowing WS to vary yields the best improvement in Scenario 2; if two can vary, then WL and WS should be allowed to vary; if three can vary, then varying AR, WL, and WS yields the best improvement. The complete set of results for each scenario for each aircraft is listed in Section F.4. Plots of NCI versus PDI for each scenario follow each table and are discussed in turn. The PDI and NCI values for Scenario 2 are plotted in Figure 7.14.

[Figure 7.14 plots PDI against NCI (weighted by importance) for Scenario 2. The plotted points are the Cdk platform solution, the Cdk-Vary1, Cdk-Vary2, and Cdk-Vary3 families, and the benchmark designs; the labeled points along the front of the envelope are WL, WS, {AR, WS}, {WL, WS}, {DPRP, WL, WS}, and {AR, WL, WS}. ΔPDI_i denotes the best change in PDI obtained by allowing i variables to vary between each aircraft design; ΔPDI_lost and the ΔNCI gain are marked between the platform and benchmark extremes.]

Figure 7.14 Scenario 2 Product Variety Tradeoff Study Results

Notice that the PPCEM solution using the Cdk formulation yields the top left point in Figure 7.14; the individual benchmark designs provide the bottom right point, with all of the variations on the PPCEM Cdk solutions falling in between the two, creating an envelope of possible combinations of NCI and PDI. As highlighted in Table 7.18, varying {WS}, {WS and WL}, and {AR, WL, and WS} yields the best improvement in PDI if 1, 2, and 3 variables, respectively, are allowed to vary between each of the PPCEM aircraft; notice that these points lie on the front of the product variety envelope. In general, as more design variables are allowed to vary, greater ΔPDI can be achieved, but NCI does increase.
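The "front of the envelope" is the set of nondominated (NCI, PDI) points, where lower is better on both axes. A minimal filter for identifying such points is sketched below, exercised on a handful of rows transcribed from Table 7.18 (the row selection is illustrative, not the full table):

```python
def nondominated(points):
    """Keep (label, nci, pdi) entries not dominated by any other entry,
    i.e., no other point has both values lower-or-equal with one strictly lower."""
    front = []
    for label, nci, pdi in points:
        dominated = any(
            (n2 <= nci and p2 <= pdi) and (n2 < nci or p2 < pdi)
            for _, n2, p2 in points
        )
        if not dominated:
            front.append((label, nci, pdi))
    return front

# A few (NCI, PDI) rows from Table 7.18, Scenario 2
rows = [
    ("Cdk platform", 0.0000, 0.0184),
    ("AF",           0.0178, 0.0181),  # dominated by WL (lower NCI and PDI)
    ("WL",           0.0171, 0.0155),
    ("WS",           0.0381, 0.0117),
    ("WL,WS",        0.0702, 0.0086),
    ("AR,WL,WS",     0.0803, 0.0068),
    ("Benchmark",    0.1795, 0.0038),
]
for label, nci, pdi in nondominated(rows):
    print(label, nci, pdi)
```

Every point except AF survives the filter, mirroring how the shaded rows in Table 7.18 trace the front of the envelope in Figure 7.14.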

Is there any way to move down this curve without having to look at all possible combinations? As it turns out, the design variables that have the most impact on the performance of the aircraft are the ones that progress down the front of the curve. This information can be obtained from a statistical Analysis of Variance (ANOVA) of the data used to build the kriging metamodels in Step 3 of the PPCEM. The full ANOVA for the family of GAA is given in Section F.3, and Pareto plots based on the results of the ANOVA are illustrated in Figure 7.15. The Pareto plots provide a means of quickly identifying which variables have the most impact on a particular response; the larger the horizontal bar, the more influence a variable has on the response. In Figure 7.15, only the effects of the design variables on the response means have been plotted because they govern the average performance of the GAA family.

Based on these Pareto plots for the GAA response means, the effect of each factor on each response can be ranked by order of importance (see Table 7.19). In the table, 1 indicates most important and 6 the least. So, for example, the seat width (WS) has the largest effect on the purchase price (PURCH) and cruise speed (CSPD) has the least. The economic responses in the first priority level in Scenario 2 are shown in the top half of the table; the performance responses which are in the first priority level in Scenario 3 are shown in the bottom half of the table.

[Figure 7.15 presents Pareto plots of the orthogonal effect estimates of each design variable on the GAA response means, one panel per response; the larger the horizontal bar, the greater the influence. The estimates, ordered as in the plots, are:]

(a) Mean WEMP:   WS 38.206370,  AR 20.124813,  WL -17.199147, DPRP 4.408426,  AF 2.979447,    CSPD 2.248510
(b) Mean DOC:    CSPD 6.6543136, DPRP -0.8645803, WL -0.6941198, AF -0.5676188, AR 0.1848961,  WS 0.0400254
(c) Mean WFUEL:  WS -39.709244, WL 18.061805,  AR -17.572171, DPRP -5.366124, AF -3.216492,   CSPD -2.750743
(d) Mean PURCH:  WS 620.43621,  AR 449.97104,  DPRP 336.36582, WL -142.43496, AF 116.66169,   CSPD 44.55396
(e) Mean RANGE:  WL -349.65163, WS -96.81741,  DPRP -84.83165, AR -42.94887,  AF -30.40475,   CSPD -7.67476
(f) Mean VCRMX:  WL 3.5626436,  DPRP 2.9891508, WS -2.7846897, AR 0.6015649,  CSPD -0.3211545, AF 0.2301845
(g) Mean LDMAX:  AR 1.0556939,  WL -0.3815507, WS -0.3732309,  CSPD 0.3281411, AF -0.0144062, DPRP 0.0030519

KEY: AF = engine activity factor; AR = aspect ratio; CSPD = cruise speed; DPRP = propeller diameter; WL = wing loading; WS = seat width; DOC = direct operating cost; LDMAX = maximum lift/drag ratio; PURCH = purchase price; RANGE = maximum flight range; VCRMX = maximum cruise speed; WEMP = empty weight; WFUEL = fuel weight

Figure 7.15 Pareto Plots for GAA Response Means

Table 7.19 Rank Ordering of Effects on Means of Responses

Importance on Economic Related Goals
Response   1      2      3      4      5      6
DOC        CSPD   DPRP   WL     AR     WS     AF
WEMP       WS     AR     WL     DPRP   AF     CSPD
PURCH      WS     AR     DPRP   WL     AF     CSPD

Importance on Performance Related Goals
Response   1      2      3      4      5      6
LDMAX      AR     WL     WS     CSPD   DPRP   AF
RANGE      WL     WS     DPRP   AF     AR     CSPD
WFUEL      WS     WL     AR     DPRP   AF     CSPD
VCRMX      WL     DPRP   WS     AR     CSPD   AF
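The rankings in Table 7.19 broadly track the absolute magnitudes of the Figure 7.15 estimates (the dissertation derives them from the ANOVA, so a few low-ranked positions differ). As a sketch, the row for the empty weight (WEMP) response mean can be reproduced directly from the estimates:

```python
# Orthogonal effect estimates for the WEMP response mean (Figure 7.15a)
wemp_effects = {
    "WS": 38.206370, "AR": 20.124813, "WL": -17.199147,
    "DPRP": 4.408426, "AF": 2.979447, "CSPD": 2.248510,
}

# Rank 1 = largest absolute effect; sign only indicates direction of influence
ranking = sorted(wemp_effects, key=lambda v: abs(wemp_effects[v]), reverse=True)
print(ranking)  # matches the WEMP row of Table 7.19
```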

Returning to the tradeoff study for Scenario 2, the design variables that shape the front of the envelope when allowed to vary are as follows: WS, WL, {AR, WS}, {WS, WL}, {WS, WL, DPRP}, and {AR, WL, WS}. Looking at the rank ordering of importance in Table 7.19, it can be seen that the variables in these combinations are the ones that have the largest effect on the responses. WS has the largest effect on WEMP and PURCH, two of the three economic responses in Scenario 2; {WS, AR} are the two most important factors for both of these economic responses; and {AR, WL, WS} are among the top three variables that are most important to the three economic responses in Scenario 2. Thus, by allowing the design variables with the most impact to vary between aircraft while keeping the others fixed, substantial improvements in performance can be obtained.

To see if the same holds true in Scenario 3, the NCI and PDI values for Scenario 3 are listed in Table 7.20 and plotted in Figure 7.16. As highlighted in the table, if only one variable is allowed to vary, the best improvement in PDI can be obtained by allowing AR to vary between aircraft. Notice in Table 7.19 that AR is most important to LDMAX. Recall from Figure 7.13 that the largest discrepancy between the PPCEM family of aircraft and the benchmark group of aircraft is the achievement of LDMAX. By allowing AR to vary between aircraft within the PPCEM family, each aircraft is able to achieve a better LDMAX, resulting in a lower PDI for the PPCEM product family. Meanwhile, if two variables are allowed to vary, then AF and WL yield the best improvement in PDI because WL has a large impact on both RANGE and VCRMX. In the three variable case, varying DPRP, WL, and WS yields the best improvement; notice that all three of these variables are among the most influential variables on the performance responses as shown in Table 7.19.

Table 7.20 Product Variety Tradeoff Study - Scenario 3

                                                        NCI      PDI
Benchmark Designs (each aircraft is optimized;
  all variables can vary)                               0.0918   0.0284
PPCEM Designs using Cdk (each aircraft is
  designed to have the same variables)                  0.0000   0.0452
Allow 1 variable to vary between aircraft from Cdk designs:
  AF                                                    0.0267   0.0452
  AR                                                    0.0059   0.0434 *
  CSPD                                                  0.0000   0.0453
  DPRP                                                  0.0013   0.0452
  WL                                                    0.0010   0.0437
  WS                                                    0.0068   0.0443
Allow 2 variables to vary between aircraft from Cdk designs:
  AF, AR                                                0.0269   0.0430
  AF, CSPD                                              0.0000   0.0453
  AF, DPRP                                              0.0193   0.0450
  AF, WL                                                0.0119   0.0390 *
  AF, WS                                                0.0159   0.0440
  AR, CSPD                                              0.0017   0.0452
  AR, DPRP                                              0.0101   0.0428
  AR, WL                                                0.0082   0.0402
  AR, WS                                                0.0183   0.0404
  CSPD, DPRP                                            0.0000   0.0453
  CSPD, WL                                              0.0049   0.0397
  CSPD, WS                                              0.0093   0.0449
  DPRP, WL                                              0.0150   0.0401
  DPRP, WS                                              0.0081   0.0446
  WL, WS                                                0.0071   0.0422
Allow 3 variables to vary between aircraft from Cdk designs:
  AF, AR, DPRP                                          0.0530   0.0430
  AF, AR, WL                                            0.0085   0.0394
  AF, AR, WS                                            0.0361   0.0399
  AF, DPRP, WL                                          0.0272   0.0388
  AF, DPRP, WS                                          0.0158   0.0442
  AF, WL, WS                                            0.0349   0.0408
  AR, DPRP, WL                                          0.0135   0.0414
  AR, DPRP, WS                                          0.0203   0.0397
  AR, WL, WS                                            0.0154   0.0417
  DPRP, WL, WS                                          0.0194   0.0361 *

* Best improvement in PDI for that number of varied variables (highlighted in the original).

The PDI and NCI values for Scenario 3 are plotted in Figure 7.16. As in the previous graph for Scenario 2, the PPCEM solution using the Cdk formulation yields the top left point; the individual benchmark designs provide the bottom right point. Notice that the combinations of design variables which move the family of aircraft down the front of the product variety envelope are, in general, the ones which are rank ordered highest in Table 7.19. Notice also that more than half of ΔPDI_lost can be gained back if {DPRP, WL, WS} are allowed to vary between aircraft, with only a minimal increase in NCI.

[Figure 7.16 plots PDI against NCI (weighted by importance) for Scenario 3, with the same key as Figure 7.14: the Cdk platform solution, the Cdk-Vary1, Cdk-Vary2, and Cdk-Vary3 families, and the benchmark designs. The labeled points along the front of the envelope are AR, WL, {AF, WL}, {CSPD, WL}, and {DPRP, WL, WS}; ΔPDI_i denotes the best change in PDI obtained by allowing i variables to vary between each aircraft design, and ΔPDI_lost and the ΔNCI gain are marked between the platform and benchmark extremes.]

Figure 7.16 Scenario 3 Product Variety Tradeoff Study Results

In closing, despite the tradeoff between commonality and performance observed in comparing the benchmark and PPCEM aircraft in Section 7.1.2, considerable improvement in the performance of the PPCEM family of aircraft can be obtained by allowing one or more variables to vary between aircraft while holding the remainder of the variables at the platform setting. In the product variety study performed in this section, it has been shown how statistical analysis of variance can be used to traverse the front of the product variety tradeoff envelope, maximizing the gains in PDI with minimal loss in commonality. It is now up to the discretion of the designers/managers to evaluate the implications of this tradeoff on inventory, production, and sales to decide the appropriate compromise between commonality and performance. A closer look at some of the lessons learned from this example is offered in the next section along with a summary of the chapter.

7.7 LESSONS LEARNED: A LOOK BACK AND A LOOK AHEAD

In this chapter, the PPCEM is applied in full to the design of a family of General Aviation aircraft. The GAA family is based on a common scalable product platform which is scaled around the number of passengers in much the same way that Boeing has scaled its 747 series of aircraft around capacity and flight range (cf., Rothwell and Gardiner, 1990). The market segmentation grid has been used to help identify an appropriate leveraging strategy for the family of aircraft based on the initial problem statement, i.e., horizontally leverage the family of GAA to satisfy a variety of low-end market segments. Each aircraft eventually could be vertically scaled as well, through the addition and removal of features as technology improves, to increase its performance and attractiveness to a mid-range or high-end customer base.

Particularization of the PPCEM for this example occurs through GASP, the General Aviation Synthesis Program, which is used to model and simulate the performance of each aircraft mathematically. Kriging metamodels for response means and variances are employed within the PPCEM to facilitate the implementation of robust design based on GASP analyses. These kriging metamodels are then used in conjunction with design capability indices and a GAA compromise DSP to synthesize a robust aircraft platform which is scalable into a family of aircraft.

Three different design scenarios are used to exercise the GAA compromise DSP to create alternative product platforms and the product platform portfolio. Instantiation of the individual aircraft within the PPCEM family reveals that the PPCEM provides an effective means for designing a common scalable aircraft platform for the family of GAA. However, upon comparison with individually designed benchmark aircraft, a tradeoff is found to exist between having a common set of design variables which define the aircraft platform and the performance of the scaled derivatives based on that platform. To examine the extent to which this tradeoff occurs, a product variety tradeoff study is performed using the PPCEM to demonstrate the ease with which alternative product platforms and product families can be generated and to make use of the NCI and PDI measures proposed in Section 3.1.5. It is observed that considerable improvement can be made by allowing one or more variables to vary between each aircraft based on the original PPCEM platform; however, commonality between the aircraft is sacrificed. To determine which variables to vary, ANOVA of the data used to build the kriging metamodels in Step 3 of the PPCEM can be used to identify the variables that have the largest effect on each response, allowing the front portion of the product variety envelope to be traversed for maximum improvement in PDI with minimal loss of commonality. The implications of this tradeoff on inventory, production, and sales must be considered in order to decide upon an appropriate compromise between commonality and performance. However, the purpose of the PPCEM is to facilitate generating a variety of alternatives for the common product platform and corresponding product family, not to select one from among them.

A summary of the hypotheses tested in this example is as follows:

Hypothesis 1 - The PPCEM is employed in this chapter to design a family of aircraft based on a common scalable product platform. The success of the method in improving the baseline design and generating a variety of alternatives for the GAA platform, as discussed in Sections 7.5 and 7.6, provides further verification of Hypothesis 1.

Sub-Hypothesis 1.1 - The market segmentation grid is utilized in Section 7.1.3 to help identify an appropriate (horizontal) scale factor for the family of GAA, namely, the number of passengers, in order to achieve the desired platform leveraging based on the problem objectives; this further supports Sub-Hypothesis 1.1.

Sub-Hypothesis 1.2 - The scale factor for the GAA product family is the number of passengers (see Sections 7.1.3 and 7.2). Robust design principles are then used in this example to develop an aircraft platform, defined by six design variables, which is insensitive to variations in the scale factor and is thus good for the family of General Aviation aircraft based on the two, four, and six seater configurations. The success of this implementation helps to support Sub-Hypothesis 1.2.

Sub-Hypothesis 1.3 - Design capability indices are utilized in this example to aggregate individual targets and constraints and to facilitate the design of a family of General Aviation aircraft. Combining this formulation with the compromise DSP allows a family of GAA to be designed around a common, scalable product platform, further verifying Sub-Hypothesis 1.3.

Hypothesis 2 - Kriging metamodels are utilized to facilitate the implementation of the design capability indices and expedite the search for a suitable product platform for the family of GAA. Validation of the kriging metamodels in Sections 7.3 and 7.6.2 indicates that the kriging models are sufficiently accurate to benefit the PPCEM.

So, are the solutions obtained from the PPCEM useful? The PPCEM has been used to generate a variety of feasible options for the GAA platform and corresponding family of aircraft. While there is some tradeoff between the performance of the individual aircraft based on the PPCEM platform when compared to a family of individually designed benchmark aircraft, the increased commonality between the design specifications of each aircraft (i.e., aspect ratio, seat width, propeller diameter, etc.) should generate sufficient savings to offset the minimal loss in performance. Regardless of whether it does or not, the family of aircraft obtained using the PPCEM yields considerable improvement over the initial family of aircraft based on the baseline Beechcraft Bonanza design; see Table 7.8 and the discussion thereafter.

Are the time and resources consumed within reasonable limits? Basically, the PPCEM has been used to design a family of three aircraft almost as efficiently as a single aircraft. The initial start-up cost of using the PPCEM in this example is about one day, which is the time it takes to sample the GAA design space and construct kriging metamodels to approximate GASP. Once this is accomplished, the computational savings resulting from using the PPCEM are significant in two regards:

• computational efficiency gained by using kriging metamodels instead of GASP, and

• design efficiency gained by using the PPCEM to design a family of aircraft simultaneously around a common scalable platform.

As a result, the computational savings are comparable to, if not greater than, those obtained in the universal electric motor example in Chapter 6. Consider, for instance, that it requires approximately 45 seconds to complete one "run" of GASP on a Unix-based SparcServer 670 MP. Meanwhile, the kriging metamodels require approximately 0.25 seconds to run after about a minute of "pre-processing" which is done automatically prior to optimization. The savings from using metamodels in the PPCEM are substantial when one considers the large number of design scenarios and tradeoff studies used in this chapter and in Appendix F, not to mention the fact that multiple starting points are used in all cases. The cost savings (in terms of the number of analyses) are not as clear cut as they are in the universal motor example in Chapter 6 (see Section 6.5.3); therefore, they are not estimated. However, the discussion in (Simpson, 1995) regarding the cost savings of using approximations to replace GASP sheds some light on the magnitude of these savings.
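For a rough sense of scale, the timings quoted above can be combined as follows; the number of analyses is a purely hypothetical figure for illustration, since the dissertation deliberately does not estimate the actual run counts:

```python
gasp_run = 45.0     # seconds per GASP analysis (SparcServer 670 MP, from text)
kriging_run = 0.25  # seconds per kriging metamodel evaluation (from text)
preprocess = 60.0   # one-time kriging "pre-processing" per optimization, approx.

n_evals = 1000      # hypothetical number of analyses in one optimization study
gasp_time = gasp_run * n_evals
kriging_time = preprocess + kriging_run * n_evals

print(f"GASP:    {gasp_time / 3600:.1f} h")
print(f"Kriging: {kriging_time / 60:.1f} min")
print(f"Speedup: {gasp_time / kriging_time:.0f}x")
```

Under these illustrative assumptions, a study requiring hours of GASP runs collapses to minutes with the metamodels, which is why the 93-DSP tradeoff studies above become tractable.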

Is the work grounded in reality? The baseline design (i.e., starting point) for the GAA product family is the Beechcraft Bonanza B36TC presented in Section 7.1.3. While the Beechcraft Bonanza is only a six seater aircraft, its specifications are employed in GASP to provide a family of baseline aircraft to compare with the PPCEM family of aircraft based on a common scalable platform. Discussion of these results in Section 7.5.1 reveals that the PPCEM solutions are able to improve significantly upon both the technical and economic performance characteristics of this family of baseline aircraft. While these improvements are slightly less than the improvements obtained by individually designing each aircraft (i.e., the benchmark designs), the time savings resulting from using the PPCEM to design the family of three aircraft simultaneously can be used to "tweak" the individual designs as needed to ensure adequate performance and product quality. It still stands, however, that the PPCEM solutions, even with all six design variables held at the common product platform specifications, yield improvement over the baseline design. The results of the product variety tradeoff studies discussed in Section 7.6.4 provide several options to improve the PPCEM family of aircraft.

Finally, do the benefits of the work outweigh the cost? The true benefit of using the PPCEM in a problem like this is the wealth of information that is obtained during its implementation. The PPCEM greatly facilitates the generation of a variety of alternatives for a common product platform and its corresponding scaled derivative products. Use of the PPCEM permits the product platform and the scaled product family to be designed simultaneously, thus increasing the commonality of specifications across the products within the family. Product variety tradeoff studies can be easily performed using the PPCEM (and the NCI and PDI metrics) to evaluate the compromise between commonality and individual product performance, yielding a variety of options for the company to pursue.

This concludes the second, and final, example for testing and verifying the PPCEM, having demonstrated the full implementation of the PPCEM to design a family of products and facilitate product variety tradeoff studies. In the next and final chapter, a summary of achievements and contributions from the work is offered along with a critical review of the research and a discussion of possible avenues of future work.

CHAPTER 8

CLOSURE: ACHIEVEMENTS AND RECOMMENDATIONS

In this dissertation, a method has been developed, presented, and tested to facilitate the design of a scalable product platform for a product family. The development and presentation of this method is brought to a close in this chapter. In Section 8.1, closure is sought by returning to the research questions posed in Chapter 1 and reviewing the answers that have been offered. The resulting contributions are then summarized in Section 8.2. Limitations of the research are discussed in Section 8.3, and possible avenues of future work are described in Section 8.4. Concluding remarks are given in Section 8.5, closing this chapter and the dissertation.

8.1 CLOSURE: ANSWERING THE RESEARCH QUESTIONS

As stated in the introduction to Chapter 1, the principal objective in this dissertation is to develop the Product Platform Concept Exploration Method (PPCEM) to facilitate the design of a common product platform which can be scaled to realize a product family. In particular, the concept of platform scalability is introduced and exploited in the context of the following motivating research question.

Q1. How can a common scalable product platform be modeled and designed for a product family?

Two secondary research questions are also offered in Section 1.3.1 for investigation in this dissertation in conjunction with the primary research question.

Q2. Is kriging a viable metamodeling technique for building approximations of deterministic computer analyses?

Q3. Are space filling designs better suited for building approximations of deterministic computer analyses than classical experimental designs?

To address these questions, research hypotheses and posits are introduced and identified in support of achieving the principal objective for the dissertation. Their elaboration and verification have provided the context in which the research work has proceeded. The end result is a synthesis of engineering design, operations research, applied statistics, and strategic management methods and tools to form the Product Platform Concept Exploration Method. Its development has been portrayed pictorially using Figure 8.1, which depicts the flow of the research throughout the dissertation.

[Figure 8.1 is a pictorial roadmap of the dissertation. It rests on the foundations of Decision-Based Design and the Robust Concept Exploration Method, which support three pillars: metamodeling (§2.3: space filling DoE, kriging), robust design principles (§2.2: modeling mean and variance, noise factors), and product family design (§2.1: conceptual scalable product platform, market segmentation grid). These elements are synthesized in Chapters 3 and 4 into the Product Platform Concept Exploration Method, which is exercised through the kriging/DOE testbed and platform nozzle design (Chapter 5), the family of universal electric motors (Chapter 6), and the family of General Aviation aircraft (Chapter 7), leading to the achievements and recommendations of Chapter 8.]

Figure 8.1 Pictorial Overview of the Dissertation

Answering Question 1: Question 1 is the primary research question posed for the work in this dissertation, and its answer is embodied by the Product Platform Concept Exploration Method: a Method which facilitates the synthesis and Exploration of a common Product Platform Concept which can be scaled into an appropriate family of products. The method consists of a prescription for formulating the problem and a description for solving it. Application of the method is demonstrated by means of two examples, namely:

• the design of a universal electric motor platform which is (vertically) scaled around the stack length of the motor to realize a family of electric motors capable of satisfying a variety of torque and power requirements (Chapter 6), and

• the design of a General Aviation aircraft platform which is (horizontally) scaled into a two, a four, and a six seat configuration to realize a family of aircraft capable of satisfying a variety of performance and economic requirements (Chapter 7).

While only demonstrated for these two examples, it is asserted that the method is generally applicable to other examples in this class of problems: parametrically scalable product platforms whose performance can be mathematically modeled or simulated. Other examples which have taken advantage of this type of scaling include the design of a family of oil filters (Seshu, 1998) and the design of a family of absorption chillers for a variety of refrigeration capacities (Hernandez, et al., 1998). Both examples integrate nicely within the framework of the PPCEM.

In support of the primary research question and objective, three additional questions are also offered in Section 1.3.1. Answers to these questions are summarized as follows.

Q1.1. How can product platform scaling opportunities be identified from overall design requirements?

In this research, the market segmentation grid (Meyer, 1997) is employed to help

identify platform scaling opportunities based on overall design requirements. Its success as an

attention directing tool for mapping scaling opportunities within a product family is discussed in

Section 2.2.1 and then demonstrated in both examples. In the universal motor example in

Chapter 6, the market segmentation grid is used to identify vertical scaling opportunities within

the desired product family to realize a range of torque and power ratings for different

price/performance tiers within the market; standardization of the motor interfaces will provide

horizontal leveraging opportunities of this family of motors into other market segments in a

manner similar to Black & Decker’s response to Double Insulation in the 1970s (Lehnerd,

1987). Meanwhile, in the General Aviation aircraft example in Chapter 7, a horizontal

leveraging strategy is identified by means of the market segmentation grid, resulting in a family of

three aircraft based on a two, four, and six seater configuration leveraged about a common

product platform. Opportunities for vertical scaling of the resulting family of aircraft through

engine upgrades, add-on features, and technological advancements also are discussed;

however, none of these features are implemented in this example.

Q1.2. How can robust design principles be used to facilitate designing a common scalable product platform?

By identifying “conceptual noise” factors around which a family of products can be

scaled, robust design principles can be abstracted for use in product family and product

platform design. Consequently, the idea of a scale factor is introduced in Section 2.3.2 as a

factor around which a product platform can be “scaled” or “stretched” to realize derivative

products within a product family. Scale factors are, in essence, noise factors for a scalable

product platform, and robust design principles can be used accordingly to minimize the

sensitivity of the product platform to variations in these scale factors. Implementation of

this approach is demonstrated through the two examples. In the universal motor example in

Chapter 6, the stack length of the motor is taken as the (parametric) scale factor around which a

family of motors is created. In the General Aviation Aircraft example, the number of passengers

is the (configurational) scale factor around which a family of three aircraft is developed. In

both cases, robust design principles are employed to develop a common set of design variables

which are robust with respect to variations in the scaling factor as the product platform is scaled

and instantiated to realize the product family. A product variety tradeoff study also is performed

in the General Aviation aircraft example (see Section 7.6.2) to further verify this approach.

Q1.3. How can individual targets for derivative products be aggregated and modeled for product platform design?

Through the identification of appropriate scaling factors during the product family design

process, the individual targets for derivative products can be aggregated into a mean and

variance around which the product family can be simultaneously designed either by having

separate goals for “bringing the mean on target” and “minimizing the variation” or through the

formulation and implementation of appropriate design capability indices to measure the

capability of a family of designs to satisfy a ranged set of design requirements. The former

approach is utilized to design the universal electric motor platform in Chapter 6. Goals for

“bringing the mean on target” and “minimizing the variation” caused by variations in the scale

factor (stack length) are used within a compromise DSP to effect a platform design which

matches the target mean and variation for the aggregated product family. In Chapter 7, design

capability indices are employed to design a family of General Aviation aircraft around a common

platform which is instantiated to seat 2, 4, and 6 passengers.
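The aggregation described above can be sketched numerically. In the fragment below, the target values, the scale factor settings, and the linear response are all hypothetical stand-ins (they are not the motor model of Chapter 6); the point is simply to show individual derivative-product targets collapsing into a target mean and variation, against which the goals of "bringing the mean on target" and "minimizing the variation" can be formed:

```python
# Sketch: aggregate individual derivative-product targets into a mean and
# variance for platform design.  All numbers and the linear response are
# illustrative, not the universal motor model of Chapter 6.
import statistics

# Hypothetical performance targets for three derivative products
targets = [0.25, 0.35, 0.50]
target_mean = statistics.mean(targets)
target_std = statistics.pstdev(targets)

def performance(x, scale_factor):
    """Hypothetical response of a platform design x at a given scale factor."""
    return x * scale_factor

def platform_goals(x, scale_factors):
    """Return the two robust-design goals for a candidate platform x."""
    responses = [performance(x, s) for s in scale_factors]
    mean_dev = abs(statistics.mean(responses) - target_mean)  # mean on target
    variation = statistics.pstdev(responses)                  # minimize variation
    return mean_dev, variation

mean_dev, variation = platform_goals(x=0.02, scale_factors=[12.5, 17.5, 25.0])
```

In a compromise DSP formulation, each of the two quantities returned by `platform_goals` would be driven toward its target as a separate goal.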

Answering Question 2: Since its introduction into the literature as a useful

metamodeling tool for engineering design by Sacks, et al. (1989), kriging has received little attention from the engineering community for building surrogate models. Perhaps this is because of the added complexity of fitting and using the model, or the inability to glean useful information directly from the MLE parameters used to fit it. Whatever the reason, the research in

this dissertation has been directed at improving the ease with which kriging models can be built,

validated, and used. Moreover, the initial feasibility study and comparison of kriging models—

with a global underlying constant—with second-order response surface models in Chapter 4

and the extensive kriging/DOE investigation in Chapter 5 are aimed at familiarizing the reader with

kriging and making it a viable alternative for building surrogate metamodels of deterministic

computer experiments. Its utility was tested extensively in Chapter 5, wherein it was concluded that the Gaussian correlation function provides the most accurate kriging predictor, on average, and that kriging can accurately model a wide variety of functions typical of engineering analysis.

While the study is not all-inclusive, nor is it intended to be, it has provided valuable insight into

the utility of kriging metamodels for engineering design. Potential avenues of future work to

extend this promising metamodeling alternative are discussed in Section 8.4.1.
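To make the kriging models under discussion concrete, the following sketch implements an ordinary kriging predictor with a constant global term and the Gaussian correlation function. The one-dimensional test function and the fixed correlation parameter theta are illustrative stand-ins; in this work theta is found by maximum likelihood estimation (see Appendix A):

```python
# Sketch of an ordinary kriging predictor with a Gaussian correlation function.
# The test function and the fixed theta are illustrative; in the dissertation
# theta is found by maximum likelihood estimation (Appendix A).
import numpy as np

def gauss_corr(X1, X2, theta):
    """Gaussian spatial correlation: R_ij = exp(-theta * |x_i - x_j|^2)."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-theta * d2)

def krige(x_s, y_s, x_new, theta=10.0):
    """Ordinary kriging: yhat(x) = beta + r(x)' R^-1 (y - 1*beta)."""
    R = gauss_corr(x_s, x_s, theta)
    Rinv = np.linalg.inv(R)
    ones = np.ones_like(y_s)
    beta = (ones @ Rinv @ y_s) / (ones @ Rinv @ ones)  # GLS estimate of the constant
    r = gauss_corr(x_new, x_s, theta)
    return beta + r @ Rinv @ (y_s - beta * ones)

# Sample a smooth deterministic "analysis" at five points, then predict elsewhere
x_s = np.linspace(0.0, 1.0, 5)
y_s = np.sin(2 * np.pi * x_s)
y_hat = krige(x_s, y_s, np.array([0.1, 0.6]))
```

Because the correlation vector r(x) evaluated at a sample site reproduces a row of R, the predictor interpolates the sample data exactly, one of the properties that distinguishes kriging from least-squares response surfaces.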

Answering Question 3: As discussed in Section 2.4.3, many researchers argue that

classical experimental designs are not well suited for sampling computer experiments which are

deterministic; rather, points should be chosen to “fill the space,” providing good coverage of the

design space since replicate sample points are not needed. In an effort to verify the utility of

space filling experimental designs, a comparison of nine space filling and two classical

experimental designs is performed in Chapter 5 (see Section 5.4 in particular) to address this

third research question. The eleven experimental designs are compared on the basis of their

capability to produce accurate kriging metamodels for the testbed of six engineering problems

used in this dissertation. For the sample sizes investigated in this study, it was observed that the

space filling experimental designs yielded more accurate kriging models in the larger design

spaces (3 and 4 variables) while the classical experimental designs (CCDs) performed well in

the two dimensional design space for the reasons discussed at the end of Section 5.4.4. Prior

to this investigation, few researchers had compared their experimental designs against one

another, or to classical designs for that matter. As such, the findings in the kriging/DOE study in

Chapter 5 represent unique contributions from the research. A summary of the research

contributions is offered in the next section.
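For reference, the simplest member of the space filling family compared above, a random Latin hypercube design, can be generated in a few lines. This basic construction (one point per equal-probability stratum in each dimension) is illustrative only and does not reproduce the optimized maximin or minimax variants studied in Chapter 5:

```python
# Sketch: a basic random Latin hypercube design in [0, 1]^k.  One point falls
# in each of n equal strata of every dimension, spreading points through the
# space without the replicate runs of classical designs.  This plain version
# is illustrative; Chapter 5 studies optimized (e.g., maximin/minimax) variants.
import random

def latin_hypercube(n, k, seed=0):
    rng = random.Random(seed)
    design = []
    for _ in range(k):
        # Permute the n strata, then draw one point uniformly inside each
        strata = list(range(n))
        rng.shuffle(strata)
        design.append([(s + rng.random()) / n for s in strata])
    # Transpose so each row is one sample point in k dimensions
    return list(zip(*design))

points = latin_hypercube(n=8, k=3)
```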

8.2 ACHIEVEMENTS: REVIEW OF RESEARCH CONTRIBUTIONS

The contributions offered in this dissertation are introduced in Section 1.3.2 and realized

throughout the dissertation. As stated at the beginning of Chapter 1, the primary contribution

from this work is embodied in the Product Platform Concept Exploration Method which

provides a method to identify, model, and synthesize scalable product platforms for a product

family. The other contributions can be summarized as follows:

Contributions Related to Hypothesis 1 and Sub-Hypotheses 1.1-1.3:

• A procedure for identifying scale factors for a product platform, see Sections 3.1.1 and
3.1.2.

• An abstraction of robust design principles for realizing scalability in product family design, see Sections 3.1.2 and 3.1.4.

• Non-commonality and performance deviation indices for performing product variety tradeoff studies, see Sections 3.1.5 and 7.6.2.

Contributions Related to Hypothesis 2:

• An algorithm to build, validate, and use a kriging model, see Section 2.4.2, Chapters 4 and 5, and Appendix A.

• A preliminary comparison of the predictive capability of second-order response
surfaces and kriging models in the design of a rocket nozzle, see Section 4.2.

• An extensive investigation of the effect of five different spatial correlation functions on the accuracy of a kriging model, see Section 2.4.2 and Chapter 5.

Contributions Related to Hypothesis 3:

• An extensive investigation of the effect of eleven different sampling strategies on building an accurate kriging model, see Section 2.4.3 and Chapter 5.

• An algorithm for generating minimax Latin hypercube designs, see Section 2.4.3 and
Appendix C.

What is the value of these contributions? These contributions must be of sufficient

worth to be either an addition to the fundamental knowledge of the field or a new and better

interpretation of the facts already known. The contributions associated with kriging represent a

new interpretation of facts already known. Kriging has been around since the 1960s (see, e.g.,

Cressie, 1993; Matheron, 1963) when it was developed originally for mining and geostatistics

applications; however, it has received limited attention in the engineering design community until

recently. The kriging algorithm presented is not totally unique to this dissertation; however, the

use of a simulated annealing algorithm (see Appendix A for more details on its use in the

maximum likelihood estimation for the kriging metamodels) to find the “best” kriging model is.

Moreover, the comparison of the accuracy of different correlation functions on the resulting

kriging model had never been performed in such depth. Likewise, the comparison of space filling and classical experimental designs represents a new and better interpretation of facts already

known because such an extensive study has never been undertaken. With the exception of the

minimax Latin hypercube design, the experimental designs investigated in this dissertation are the

result of years of research work by statisticians and mathematicians. The minimax Latin

hypercube design, however, represents an addition to the fundamental knowledge of the field of

experimental design.

The contributions made in the area of product family design, specifically the method of designing scalable product platforms, represent an addition to the fundamental knowledge of

the field. While other product family design strategies and methods have been slowly evolving,

the investigation of a method for platform scaling is previously unrecorded. The incorporation of

the market segmentation grid into the engineering design process provides a new interpretation

of facts already known, demonstrating how the market segmentation grid becomes a useful attention directing tool for identifying platform leveraging strategies in product family design and, with a little engineering knowledge, appropriate scale factors for the intended scalable platform. In this

regard, the concept of scale factors in product family design and extending robust design to

product family and product platform design is unique to this dissertation as are the NCI and PDI

measures for product family non-commonality and performance deviation. The measures are not of significant value in and of themselves; however, the product variety tradeoff studies which these indices make possible provide significant insight into the tradeoffs of product family design.

Taken together, the resulting Product Platform Concept Exploration Method for designing

scalable product platforms for a product family provides an addition to the fundamental

knowledge of the nascent field of product family design. However, the PPCEM is by no means

a panacea for product platform and product family design nor is it without its limitations.

Toward this end, a critical evaluation of the work is offered in the next section followed by

recommendations for future work in Section 8.4.

8.3 CRITICAL ANALYSIS: LIMITATIONS OF THE RESEARCH

This section comprises the confessional portion of the dissertation wherein the research

itself is critically evaluated. Already the PPCEM has been critically evaluated as it pertains to

the two example problems in Chapters 6 and 7, see Sections 6.5 and 7.6.5. In this section, the

critical evaluation is applied to the work as a whole.

So what is really necessary for the PPCEM to be applied to the design of a

product family based on a common scalable product platform? There are two basic

requirements which must be met in order for the PPCEM to be applicable to the design of a

scalable product platform. First, the concept of scalability must be exploitable within the

product family; exploited in the sense that having one or more scale factors provides a means to

realize a variety of performance requirements while also facilitating the manufacturing process.

For instance, in the electric motor example in Chapter 6, the motor could have just as easily

been scaled in the radial dimension as it was in the axial direction (i.e., stack length) to achieve

the necessary torque requirements; however, the underlying assumption in the choice of stack

length as the scale factor is that it can be exploited from both a technical sense and a

manufacturing sense. As Lehnerd (1987) alludes to in his article on Black & Decker and their

universal motor platform, by varying only the stack length of the motor, all of the motors—

ranging from 60 Watts to 660 Watts—could be produced on the same machine simply by

stacking more laminations onto the field and armature. Had the radius of the motor been scaled

instead of the stack length, different machines and tooling configurations would have been

required to produce the family of motors since varying the radius of the motors is more than a

stacking operation. Consequently, it is very important that one or more scale factors be identified for the product family and that they be capable of being exploited from both a technical standpoint and a manufacturing standpoint in order for the PPCEM to yield

useful results.

The second consideration when applying the PPCEM is that the performance of the

product family must be able to be mathematically modeled, simulated, or quantified in order for the

PPCEM to be employed. It would be extremely difficult, if not impossible, for the PPCEM to

be utilized to design a common scalable automotive body platform based solely on aesthetic

considerations for instance. Consider the examples discussed in Section 1.1.1; to which of

these examples could the PPCEM be applied and why (or why not)? For the sake of

brevity, the answers are summarized in Table 8.1.

Table 8.1 Applicability of PPCEM to Product Families from Section 1.1.1

Example from §1.1.1 | Would PPCEM Apply? | Why or Why Not?
Sony: Walkman | No | Their platform strategy involves modular design and standardization of components; few, if any, scaling issues are present within the product family.
Nippondenso: Panel Meters | No | They employ a combinatoric strategy to realize the necessary product variety based on a few well-designed, standardized parts; few, if any, scaling issues are present.
Lutron: Lighting Control Systems | No | Same reasoning as Nippondenso.
Black & Decker: Universal Motor | Yes | The platform is scaled around the stack length of the motor, and an attempt was made to recreate their family of motors as the initial "proof of concept" for the PPCEM in Chapter 6.
Canon: Copiers | Not really | The majority of copier design involves modular design of components and assemblies; however, some scaling issues may arise to accommodate different print volumes, paper sizes, etc.
Rolls Royce: RTM322 Engine | Yes, in some aspects | The RTM322 was scaled to create a new product platform, but modularity of engine components facilitated vertical scaling of the platform to upgrade and derate the engine.

As stated in Section 1.1.2, the types of problems to which the PPCEM is readily

applicable (given that the previous two conditions regarding scalability and quantifiability are

met) typically involve parametric or variant design. The fact that the PPCEM is intended

primarily for parametric or variant design raises another important issue, namely, successful

implementation of the PPCEM assumes that the basic concept or configuration on which the

product platform is being based is good for the entire product family. In order for the

PPCEM to be employed, a good underlying concept or configuration must have already been

established in order to obtain the full benefit of the method. In the GAA example in Chapter 7,

for instance, if the three blade, high wing position, retractable landing gear configuration had not

been a suitable concept for the two, four, and six seater aircraft, then no matter what parameter

settings were obtained from using the PPCEM, the performance of the family of aircraft would

have been poor regardless because the underlying concept was not good for all three aircraft.

An attempt to identify a good configuration for the family of GAA is discussed in (Simpson, 1995), but in this work such a concept is assumed to exist already.

Incorporation of the conceptual and configurational design of the product family along with the

parametric scaling of the product platform is a fertile area for future work.

Furthermore, it is important to keep in mind that the PPCEM facilitates generating

options for common product platforms which can be scaled into an appropriate product family.

The PPCEM is not necessarily intended to be used to evaluate these options or select one of

them. The idea behind the product platform portfolio—the output from applying the PPCEM—

is to maintain sufficient design flexibility to accommodate a wide variety of customer

requirements for as long as possible. As the product platform design progresses into the

detailed stages of design, this design freedom is reduced; however, during the early stages of the

design process, formulating and answering a variety of "what if" type questions and examining a

wide variety of design scenarios is important to the product platform design process.

Meanwhile, the NCI and PDI measures introduced in Section 3.1.5 and employed in

Section 7.6.4 represent an attempt to provide a means to evaluate different product platforms

and their respective product families. Ultimately the non-commonality of a set of parameters

would be linked to corresponding savings in manufacturing costs and performance deviation to

losses in customer sales; however, this is extremely difficult to accomplish without sufficient

industry input. Modeling the process and manufacturing aspects of product platforms and

product families is another fertile research area which has yet to be explored.

As far as the scale factors themselves go, the concept of a scale factor, while discussed in Sections 2.3.2 and 3.1.2, is still not fully understood. In the motor example in Chapter 6, for instance, the stack length of the motor was a scale factor whose mean and standard deviation were treated much like those of a design variable. Meanwhile, in the GAA example in

Chapter 7, the scale factor was the number of passengers which was treated as a design

parameter which varied from two to six, i.e., its permissible range of values was known a priori

based on the intended leveraging strategy. In any event, when metamodels are to be utilized

within the PPCEM, an initial range for each scale factor is necessary in order to construct these

metamodels. This follows in the same manner that a permissible range of any noise factor is

expected to be known before robust design principles can be applied to a problem (cf.,

Phadke, 1989). It is important to examine the concept of scale factors further, finding more

examples of scaled product platforms to understand the manner in which they have been scaled

and, more importantly, how those scale factors are identified during the design process.

This brings to light another shortcoming of the PPCEM, namely, the use of the market

segmentation grid to “identify” scale factors around which the product platform is leveraged

within a product family. As stated in Section 2.2.1, the market segmentation grid is only an

attention directing tool and considerable engineering “know-how” and problem insight are

required before a successful platform leveraging strategy can be identified. Then, only after a

suitable platform leveraging strategy is identified, can engineers hope to find (and be able to

exploit) scaling opportunities within the product family to realize the necessary product variety.

The market segmentation grid is the end result of this process and is really only useful for

mapping the resulting platform leveraging strategy. The two examples used in this

dissertation trivialize this process when in reality it is extremely difficult, if not impossible, to

identify one or more scaling factors which can be exploited within a product family. Developing

tools and methods to facilitate the process of identifying scale factors is one potential avenue for

further investigation.

Part of understanding scale factors better involves understanding their effect on product

performance and how scale factors can be used effectively to satisfy a wide variety of customer

requirements. If scale factors induce too much variability in product performance, then it might

not be possible to apply the PPCEM to develop a common product platform which does not

significantly compromise the performance of the product family over the range of interest. In

such a case, it might be necessary to “split” the design space into two or more product

platforms and corresponding product families rather than compromise product performance and

quality by having one single product platform which is scaled over the entire range of

performance. The work in (Chang and Ward, 1995; Lucas, 1994; Rangarajan, 1998; Seshu,

1998) further investigates and discusses these types of issues. Lucas (1994) in particular

presents interesting remarks on how to resolve these types of issues using concepts from robust

design as mentioned in Section 2.3.2.

Turning to specific implementation issues within the PPCEM, it may not have been sufficiently clear that kriging, while part of the PPCEM, is not an integral part of it, since kriging is not the only metamodeling technique which can be used within the method.

Response surfaces, neural nets, radial basis functions, etc. are all viable metamodeling options

for use in engineering design and with the PPCEM. The extensive literature review of

metamodeling applications in engineering design in (Simpson, et al., 1997b) supports this. The

most important consideration when using metamodels as surrogate approximations in

engineering design is that they are sufficiently accurate for the task at hand.

The investigations into kriging in this dissertation are primarily intended to shed light on

alternative metamodeling techniques which offer some advantages to response surface models

which are typically employed. The case for investigating alternatives to response surfaces has

been made in Section 2.4.1 and is also discussed in (Simpson, et al., 1998; Simpson, et al.,

1997b). The objective in this research is not to prove that kriging metamodels are better than

response surface models; rather, it is to demonstrate that kriging metamodels are a viable

alternative for building surrogate approximations of deterministic computer analysis.

Similarly, the use of space filling experimental designs as opposed to classical designs is not mandated by this research. The investigation served to gain a better understanding of

the different sampling strategies which exist and the associated advantages and disadvantages of

each. If one experimental design type had proven superior in every example, then perhaps only

that design should be considered in the future. However, that was not the case, and the results

of this study are by no means generalizable to all types of engineering design problems. Very

few engineering problems, for instance, involve only two to four variables, and the availability of

codes to generate these space filling designs, their computational expense, and the nature

of the underlying analyses are just a few of the key factors that influence the decision of how to

sample a design space efficiently and effectively. Recommendations for future work in the areas

of experimental design and kriging are discussed in more detail in the next section.

8.4 RECOMMENDATIONS: AVENUES OF FUTURE WORK

8.4.1 Potential Avenues of Future Work in Metamodeling

While extensive in nature, the experimental design study offered in Chapter 5 is by no

means complete, nor is it intended to be. Obviously, a wider variety of problems should be

considered in order to obtain more generalizable recommendations. Additional space filling and

classical experimental designs which have not been considered include the following:

Classical Experimental Designs: fractional factorial designs and small central composite
designs (see, e.g., Box and Draper, 1987); D-optimal designs (see, e.g., Box and
Draper, 1971; Giunta, et al., 1994; Mitchell, 1974; St. John and Draper, 1975); I-, A-,
E-, and G-optimal designs (see, e.g., Hardin and Sloane, 1993; Myers and
Montgomery, 1995); minimum bias designs (see, e.g., Myers and Montgomery, 1995;
Venter and Haftka, 1997); and other hybrid designs (see, e.g., Giovannitti-Jensen and
Myers, 1989; Myers and Montgomery, 1995)

Space Filling Experimental Designs: median Latin hypercubes (see, e.g., Kalagnanam
and Diwekar, 1997; McKay, et al., 1979); minimax and maximin designs (Johnson, et
al., 1990); scrambled nets (Koehler and Owen, 1996); orthogonal arrays of different
strengths (Owen, 1992); Maximum entropy designs (Currin, et al., 1991; Shewry and
Wynn, 1987; Shewry and Wynn, 1988); and factorial hypercube designs (Salagame
and Barton, 1997).

In addition to including a wider array of experimental designs and sampling strategies in

the current testbed of problems, larger problems also should be investigated because very few

engineering problems only have 2-4 variables. However, problems with larger dimensional

design spaces (i.e., more design variables) introduce new complications. For instance, many of

the generators used to create the space filling experimental designs become computationally

expensive in and of themselves for large numbers of factors. For example, the simulated

annealing algorithm for generating maximin Latin hypercube designs (Morris and Mitchell, 1992;

1995) becomes extremely slow even for four factor designs with as few as 25 sample points, as

discussed in Section 5.1. Moreover, fractional factorial based central composite designs are

available for problems with five or more factors. Hence, larger problems require different

classes of designs altogether.

As for the minimax Latin hypercube design, which is unique to this dissertation, the

genetic algorithm which is employed to generate these designs needs further study to develop a better understanding of its workings and to learn the optimal combination of parameters for its use, namely, population size, number of permissible generations, mutation rates, and

termination criteria. Also, as it stands right now, the current design criterion—minimize the

maximum distance between sample points and prediction points—does not yield a unique

design for a given sample size and number of design variables. Developing and implementing an

optimization criterion such as that proposed by Morris and Mitchell (1995) for their maximin

Latin hypercube designs could improve the effectiveness of the minimax Latin hypercubes.
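The design criterion in question, minimizing the maximum distance from any prediction point to its nearest sample point, can be scored as follows. This fragment is only a sketch of the fitness evaluation inside such a generator, with a regular grid standing in for the set of prediction points; it is not the genetic algorithm itself:

```python
# Sketch: evaluate the minimax criterion used to rank candidate designs, i.e.,
# the largest distance from any prediction point to its nearest sample point
# (smaller is better).  The regular grid of prediction points is an
# illustrative stand-in for the prediction set used by the genetic algorithm.
import itertools, math

def minimax_score(samples, grid_per_dim=5):
    k = len(samples[0])
    levels = [i / (grid_per_dim - 1) for i in range(grid_per_dim)]
    worst = 0.0
    for p in itertools.product(levels, repeat=k):
        nearest = min(math.dist(p, s) for s in samples)
        worst = max(worst, nearest)
    return worst

# Two toy 2-D designs: clustered points should score worse than spread-out points
clustered = [(0.1, 0.1), (0.2, 0.2), (0.15, 0.1)]
spread = [(0.25, 0.25), (0.75, 0.75), (0.25, 0.75)]
```

A genetic algorithm would use such a score as the fitness of each candidate Latin hypercube in the population, favoring designs with lower scores.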

As for kriging, only kriging metamodels which employ an underlying constant for the

global portion of the model have been investigated in this work. In general, f(x) in Equation

2.14 could be taken as a linear or quadratic model instead of a constant which may permit more

accurate kriging approximations; however, the problem of having a sufficient number of samples

to estimate all of the unknown coefficients in f(x) resurfaces. A preliminary investigation of such

an approach is documented in (Giunta, et al., 1998); they find that minimal improvement in the

accuracy of the kriging approximations is obtained for their analyses.
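For reference, the model form under discussion can be written out. With a constant global term, the predictor takes the standard ordinary kriging form, consistent with the decomposition in Equation 2.14 into a global model f(x) plus a stochastic departure Z(x):

```latex
% Kriging model: global term plus a stochastic process
y(\mathbf{x}) = f(\mathbf{x}) + Z(\mathbf{x}), \qquad f(\mathbf{x}) = \beta \ \text{(constant)}

% Ordinary kriging predictor built from the n sample responses y,
% the correlation matrix R, and the correlation vector r(x)
\hat{y}(\mathbf{x}) = \hat{\beta} + \mathbf{r}^{T}(\mathbf{x})\,\mathbf{R}^{-1}\left(\mathbf{y} - \hat{\beta}\,\mathbf{1}\right),
\qquad
\hat{\beta} = \frac{\mathbf{1}^{T}\mathbf{R}^{-1}\mathbf{y}}{\mathbf{1}^{T}\mathbf{R}^{-1}\mathbf{1}}
```

Replacing the constant with a linear or quadratic f(x) simply replaces the scalar estimate with a vector of generalized least squares coefficients, at the cost of additional samples to estimate them.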

Meanwhile, the power of kriging lies in its capability to interpolate accurately a wide

range of linear and non-linear functions. An iterative or sequential strategy which takes

advantage of this may prove useful provided the kriging models can be fit and validated quickly

from one iteration to the next. Consequently, trust region based approaches which incorporate

approximations are being developed by researchers in an effort to capitalize on the potential of

kriging metamodels (see, e.g., Alexandrov, et al., 1997; Booker, et al., 1996; Booker, et al.,

1995; Cox and John, 1995; Dennis and Torczon, 1996; Osio and Amon, 1996; Schonlau, et

al., 1997).

Finally, alternative optimization algorithms for finding the “best” kriging model also must

be investigated for use with larger problems. The simulated annealing algorithm currently

employed to fit the kriging models, see Appendix A, becomes extremely inefficient for problems

with more than eight variables and approximately 180 sample points. Moreover, the matrix

inversion routines in the current prediction software do not take full advantage of the properties

of the correlation matrix, R, in kriging, which is always symmetric and positive definite. Several

matrix decomposition and inversion algorithms have been developed to take advantage of these

properties; however, they have not been exploited in this work.
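The property in question can be exploited with standard numerical routines. The sketch below contrasts an explicit inverse with a Cholesky factorization, the usual way to solve systems involving a symmetric, positive definite matrix; the small Gaussian correlation matrix is illustrative:

```python
# Sketch: because the kriging correlation matrix R is symmetric and positive
# definite, systems R a = y can be solved via a Cholesky factorization
# R = L L^T instead of forming R^-1 explicitly (cheaper and more stable).
# The small Gaussian correlation matrix below is illustrative.
import numpy as np

x = np.linspace(0.0, 1.0, 6)
theta = 8.0
R = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)  # symmetric, pos. definite
y = np.sin(2 * np.pi * x)

# General-purpose route: explicit inverse
a_inv = np.linalg.inv(R) @ y

# SPD route: factor once, then two triangular solves
L = np.linalg.cholesky(R)                 # R = L L^T
a_chol = np.linalg.solve(L.T, np.linalg.solve(L, y))
```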

8.4.2 Future Work in Product Family and Product Platform Design

The concept of scalability and scalable product platforms has provided excellent inroads into product family and product platform design, marrying current research efforts in

Decision-Based Design, the Robust Concept Exploration Method, and robust design with tools

from marketing/management science. The end result is the Product Platform Concept

Exploration Method which has been demonstrated by means of two examples: the design of a

family of universal motors and the design of a family of General Aviation aircraft. While it has

been shown that the PPCEM is effective at producing a family of products based on scaled

instantiations of a product platform, additional applications will help to further verify and improve the method.

Furthermore, much in the same way that the product platform provides a basis for

leveraging within a product family, the Product Platform Concept Exploration Method provides a

platform for leveraging future work in product family and product platform design, see Figure

8.2. The different types of systems can be classified on the vertical axis of a market

segmentation grid and different characteristics of product platform design on the horizontal axis.

The use of the PPCEM to design scalable product platforms for a variety of systems then can

be plotted on this market segmentation grid as illustrated in Figure 8.2 for the two examples in

this dissertation. Perhaps through the addition of different “Processors” to the PPCEM,

additional capabilities could be developed within the framework of the PPCEM to design

modular platforms or facilitate product family redesign around a common platform, for instance.

[Figure 8.2 shows a market segmentation grid: the vertical axis runs from simple
systems to complex systems, and the horizontal axis spans platform characteristics
from scalable to modular and beyond. The universal motor (simple system) and GAA
(complex system) examples occupy the scalable column; the modular column is marked
with question marks as open territory for the PPCEM.]

Figure 8.2 The PPCEM as a Platform for Other Platform Design Methods

Several avenues of future work have also been mentioned during the critical analysis in

Section 8.3. In addition to these potential research areas, additional verification and extensions

of the PPCEM are offered in the following sections as they tie to current research within the

Systems Realization Laboratory in the G. W. Woodruff School of Mechanical Engineering at

the Georgia Institute of Technology. These sections have been co-written with colleagues who

are planning to pursue (or are currently pursuing) the discussed research. Those

providing input for this section and their standing within the Systems Realization Laboratory are

listed in Table 8.2.

Table 8.2 Contributions to Future Work Discussion

Section                                              Written with input from:  Standing
---------------------------------------------------------------------------------------
§8.4.3 Additional Verification of the PPCEM and      Yao Lin &                 M.S. students
       Kriging Metamodels through the Concurrent     Kiran Krishnapur
       Design of an Engine Lubrication System
§8.4.4 Configuration Design of Common                Zahed Siddique            Ph.D. candidate
       Automotive Platforms
§8.4.5 Integrated Product and Process Design of      Gabriel Hernandez         Ph.D. student
       Product Families and Mass Customized Goods
§8.4.6 Product Family Mappings and                   Marc McLean               M.S. student
       "Ideality" Metrics
§8.4.7 Modeling the Value of Reuse and               Mark McIntosh             M.S. student
       Remanufacturing in a Product Family

8.4.3 Additional Verification of the PPCEM and Kriging Metamodels through the
Concurrent Design of an Engine Lubrication System

The objective in the Ford Engine Design Project is to develop and improve engine

lubrication system models to support advanced concurrent powertrain design and development

(cf., Rangarajan, 1998). As part of this work, robust design specifications are sought which are

capable of satisfying a wide variety of torque and power requirements for different automobile

engines. After developing a better understanding of the engine lubrication system and its

components, potential scaling opportunities within the engine lubrication systems components

can be identified and exploited using the PPCEM to develop a robust and common platform

design for the valves, pistons, bearings, etc. This platform then can be instantiated quickly using

minimal additional analysis for different classes of vehicles (e.g., automobiles, trucks, and vans)

in an effort to maintain better economies of scale across a wide variety of automobile makes and

models.

In addition to applying the PPCEM to the preliminary design of engine lubrication

components, the use of kriging metamodels for building surrogate approximations of the

associated complex fluid dynamics analyses also can be investigated. Currently second-order

response surfaces are used extensively during the design process; however, the complex

analyses for friction losses, power losses, etc. cannot be modeled well by response surfaces

over a large region of the design space, thus limiting the search for good solutions. Building

accurate global approximations of these analyses using kriging metamodels may yield additional

insight into the complexities of the design space, allowing better solutions to be identified. The

utility of kriging for partitioning and screening large systems also can be examined in the

context of the engine lubrication system since a large number of factors (~20) currently are

being utilized which would push the limits of the kriging metamodeling software (i.e., fitting the

model, matrix inversion, etc.). Finally, additional metamodeling techniques such as neural

networks (see, e.g., Cheng and Titterington, 1994; Hajela and Berke, 1992; Rumelhart, et al.,

1994; Widrow, et al., 1994) also can be compared to kriging given the size and complexity of

the problem.
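A minimal one-dimensional sketch of the limitation described above: a second-order polynomial fit over a large region of a multimodal response versus a kriging-style interpolator with a hand-picked Gaussian correlation parameter. The test function and all parameter values are invented for illustration; they are not the Ford lubrication analyses:

```python
import numpy as np

def f(x):
    """Stand-in for a 'complex analysis': multimodal over the design region."""
    return np.sin(3.0 * x) + 0.5 * x

xs = np.linspace(0.0, 4.0, 13)     # sample sites across the design space
ys = f(xs)

# Second-order response surface fit by least squares.
coef = np.polyfit(xs, ys, 2)

# Ordinary-kriging-style interpolator: constant trend mu plus a Gaussian-
# correlation weighted correction; theta is fixed by hand here rather than
# estimated by maximum likelihood.
theta = 9.0
R = np.exp(-theta * (xs[:, None] - xs[None, :]) ** 2) + 1e-10 * np.eye(len(xs))
mu = ys.mean()
w = np.linalg.solve(R, ys - mu)

def krig(x):
    r = np.exp(-theta * (x - xs) ** 2)   # correlation of x with each sample site
    return mu + r @ w

# Compare maximum absolute error over a dense grid.
xt = np.linspace(0.0, 4.0, 201)
err_rs = np.max(np.abs(np.polyval(coef, xt) - f(xt)))
err_kg = np.max(np.abs(np.array([krig(x) for x in xt]) - f(xt)))
```

The quadratic surface smooths through the oscillations while the kriging model interpolates the data exactly, which is precisely why a global kriging approximation can reveal structure a low-order response surface hides.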

8.4.4 Configuration Design of Common Automotive Platforms

Balancing the need to customize products for target markets while enabling the

economies of scale of a “world car” is a challenge faced by every automotive manufacturer. A

proliferation of options and model derivatives leads to increased tooling cost and production line

complexity. At first glance, it may appear that automotive platforms are prime examples for

product variety design research. However, in a recent study, Siddique, et al. (1998) identified

significant differences between the variety characteristics of automotive platforms and those of

the examples that other researchers have studied (e.g., the Sony Walkman family). For

example, the majority of product family design research is applicable to products that are

modular with respect to functions as discussed in Section 2.2.3. The automotive platform, on

the other hand, is not modular because the platform accomplishes one function as a whole. As

a result, many product family design approaches do not readily apply; however, careful

commonization of platforms can still be used to increase product variety while reducing the

number of unique components across different models and the product line complexity.

Developing a common platform requires a robust platform that can support all of the

requirements for different car models and also a common assembly process that can support

these variations. For the automotive industry, platform requirements come from packaging

constraints (underhood, passenger, etc.), safety/crash requirements, size of the vehicle, styling,

and other requirements/regulations. Cars in similar classes have similar types of requirements

(except for styling, maybe); as such, the underbodies of similar cars have the potential to be

commonized. Toward this end, a method for the configuration design of common product

platforms is to be developed, extending the parameter design capabilities of the PPCEM for

designing scalable product platforms. As discussed in (Siddique, 1998; Siddique and Rosen,

1998), this includes the following:

• identification of different product family design concepts and investigation of the
  applicability of these concepts towards automotive platform design,

• development of a representation scheme for automotive platform commonization,

• development of a scheme to measure commonality for automotive platforms, and

• establishment of configuration design methods for developing common product
  platforms.

Using configuration design methods, the underlying common core for different platforms can be

identified along with the required variations. This information then can be used to increase the

commonality of the product platform and determine how to isolate the variability in specific

modules.
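In set terms, identifying the common core and the required variations can be illustrated as follows; the part names are invented, and a real configuration design method would operate on a much richer platform representation:

```python
# Hypothetical underbody part sets for two car models (invented names).
model_X = {"floor_pan_A", "rail_front_1", "rail_rear_1", "crossmember_std"}
model_Y = {"floor_pan_A", "rail_front_1", "rail_rear_2", "crossmember_hd"}

# Candidate common platform: parts shared by every model.
common_core = model_X & model_Y

# Required variations: parts to isolate in model-specific modules.
variations = {
    "model_X": model_X - common_core,
    "model_Y": model_Y - common_core,
}
```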

In addition to having a common product platform, a commonized assembly process also

is desired so that the same assembly line can be used to produce all of the (minor) platform

derivatives. Using the same component loading sequences, tooling sequences, etc. provides

some of the requirements when developing a common assembly process (cf., Nevins and

Whitney, 1989; Whitney, 1993). Other requirements that need to be considered specifically for

automobile platforms include common locators, weld lines, transfer points, etc. Hence, it is

imperative to integrate product and process design of product families.

8.4.5 Integrated Product and Process Design of Product Families and Mass
Customized Goods

Mass customization, i.e., the manufacture of customized products with the efficiency and

speed of mass produced systems, is increasingly recognized as a source of competitive

advantage and possibly the next world-class manufacturing paradigm. Although the

marketplace is rapidly moving towards mass customization, very little work has been done on

formalizing an integrated product and process development method that would enable

companies to practice mass customization in a systematic and efficacious manner. For example,

the PPCEM provides a method to develop a common product platform which can be scaled to

provide the necessary variety for a product family; however, its focus is solely on modeling the

product itself. Meanwhile, research in flexible, agile, and/or reconfigurable manufacturing

systems (see, e.g., Abair, 1995; Anderson and Pine, 1997; Chinnaiah, et al., 1998; Dhumal, et

al., 1996; Hormozi, 1994; Richards, 1996) focuses primarily on developing cost effective

manufacturing systems to realize a wide variety of products. Integrating the two fields of

research has received little attention in the context of designing families of products.

Mass customization places an onus on companies to integrate closely their various

activities in order to respond quickly to an environment of continuous change. Thus, emphasis

should be given to the integration of product design, production system design, and organization

design. The key areas to be addressed include the following:

• Principles of product and process development for mass customized production:

  - systematic product and process evolution based on optimization techniques,
    parameterization, modularity, standardization, commonality, scalability, and
    robustness,

  - concurrent design of flexible product and process architectures, and

  - optimal design under uncertain demand and customer requirements.

• Structuring design projects for mass customized production:

  - identifying appropriate organizational structuring based on the notion of
    multi-functional design teams and the particular requirements of mass
    customized production, and

  - systems to support the required information transfer and group decision making.

• Design for dis-aggregated production, i.e., decentralized supply chains and
  production systems for the growing global economy.

An initial investigation into the concurrent modeling of product and process for design of

make-to-order customized systems is discussed in (Hernandez, et al., 1998) wherein the

integrated product and process design of a family of absorption chillers for a variety of

capacities is presented. In related work, game-theoretic models of product and process design

have been implemented (see, Hernandez, 1998) to facilitate the formulation and solution of such

an approach, providing a foundation for future integration of product and process design of

family of products.

8.4.6 Product Family Mappings and “Ideality” Metrics

One of the major obstacles encountered by many industries, particularly in the

telecommunications industry, is twofold: (1) several solution paths exist to satisfy a given set of

customer requirements using available components, and (2) when customers ask for new

functional capabilities, it is difficult to determine how this functionality can be created and

provided seamlessly given a pre-existing set of components. Therefore, metrics

must be established for the purpose of identifying the most appropriate solution strategy given

the specific design and customer requirements. The NCI and PDI measures presented in this

work provide relative assessments of product commonality and performance deviation;

however, these measures cannot be used in “real-time” by designers to guide the product

platform development process. Therefore, the objective is to survey further the existing

engineering and strategic manufacturing literature in order to do the following:

1. refine the definition of the product family,

2. establish a useful product family model,

3. define useful “real-time” metrics to guide engineering design and improve the product
family architecture,

4. map new functionality and products into the product family, and

5. assess decisions made in the context of the product family.

Establishment of a suitable product family model and corresponding metrics provides a

means of identifying non-ideal substructures within a product (family) architecture, targeting

areas of greatest improvement. Possible metrics include those relating to the system flexibility,

complexity, upgradability, etc., in addition to improving current metrics for commonality,

modularity, etc. for “real-time” use by designers. The end result will be an efficient process for

designing assemble-to-order systems, thereby replacing the expensive and time consuming

process of realizing engineer-to-order systems.
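For illustration, a Collier-style component commonality ratio can be computed directly from the bills of materials of a product family; note that this is not the NCI defined in this work, and the variants and part names below are invented:

```python
# Component sets for three hypothetical motor variants (invented data).
variants = {
    "model_A": {"housing_S", "shaft_1", "winding_lo", "brush"},
    "model_B": {"housing_S", "shaft_1", "winding_hi", "brush"},
    "model_C": {"housing_L", "shaft_2", "winding_hi", "brush"},
}

total_uses = sum(len(parts) for parts in variants.values())  # count every part usage
distinct = set().union(*variants.values())                   # unique part numbers

# Collier-style degree of commonality: average number of variants in which
# each distinct component appears (1.0 means no sharing at all).
dci = total_uses / len(distinct)

# Normalized to [0, 1]: 1 would mean every part is shared by every variant.
n = len(variants)
commonality = (dci - 1.0) / (n - 1.0)
```

Because this ratio is cheap to recompute whenever a component is added or substituted, it hints at the kind of "real-time" feedback a richer family model and metrics could give designers.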

8.4.7 Modeling the Value of Reuse and Remanufacturing in a Product Family

As discussed in Section 1.1, there are a number of benefits to developing families of

products, one of which is the ability to reuse and remanufacture components and modules from

one product to the next (cf., Alexander, 1993; Paula, 1997; Rothwell and Gardiner, 1990;

Sanderson and Uzumeri, 1995). Product reuse is the act of reclaiming products (or parts of

products) from a previous use and remanufacturing them for another use (where the second use

may or may not be the same as the original). Product reuse is both economically and

environmentally desirable due to a number of benefits including:

• previously used products are diverted from landfill or other means of disposal,

• fewer natural resources are consumed in new production, and

• all of the energy, emissions, and financial resources involved in creating the geometric
form of components are reduced.

Assessing the impact of product family development on product reuse can be

accomplished by modeling remanufacturing systems. McIntosh, et al. (1998) develop and

apply a model of an integrated remanufacturing-manufacturing organization to assess the impact

of product design characteristics, product development strategies, and external factors on the

value of reuse and remanufacture over time. The model can be used to assess the potential

value of product remanufacture for an OEM (original equipment manufacturer) which integrates

the reclaiming and reuse of products into its existing production system. The model allows

decision-makers to specify the following:

• product design characteristics of each product model (e.g., the number of components
and required disassembly sequence),

• product development decisions over time (e.g., the level of product variety, the rate of
product evolution, and the degree of component standardization across product variety
and evolution), and

• external business factors which affect reclaiming and remanufacturing (e.g., the cost of
labor and the retirement distribution of used products over time).

The model then is used to determine which products to reclaim, which components to recycle

and remanufacture, and the resulting costs and benefits of these actions over time. Thus, it

provides an analysis tool to assess the potential value of reuse and remanufacturing on the

development of product families based on common product platforms, providing additional cost

justification for developing product families.
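A heavily simplified sketch of the component-level trade-off such a model evaluates — remanufacture a reclaimed component only when doing so is cheaper than producing a new one. The costs, labor rate, and part data are invented, and this is not the McIntosh, et al. model:

```python
def reuse_value(components, labor_rate):
    """Return (savings, decisions) for one reclaimed product.

    components: list of (name, new_cost, remanufacture_hours) tuples.
    """
    savings, decisions = 0.0, {}
    for name, new_cost, reman_hours in components:
        reman_cost = reman_hours * labor_rate
        if reman_cost < new_cost:           # remanufacture only when cheaper
            decisions[name] = "remanufacture"
            savings += new_cost - reman_cost
        else:
            decisions[name] = "recycle"     # fall back to material recovery
    return savings, decisions

# Invented data: (part, cost to make new, labor hours to remanufacture).
parts = [("motor", 40.0, 0.5), ("casing", 5.0, 0.8), ("gearbox", 25.0, 1.0)]
savings, plan = reuse_value(parts, labor_rate=20.0)
```

Standardizing components across a product family raises the volume flowing through the "remanufacture" branch, which is how the full model ties product family decisions to reuse value over time.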

8.5 CONCLUDING REMARKS

In closing this dissertation, a quote by T.S. Eliot found in the introductory section of

Cressie’s book on Spatial Statistics (Cressie, 1993) is perhaps most fitting.

“We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.”
— T.S. Eliot

The PPCEM is not an end in itself; rather, it provides a stepping stone for future research work

in this nascent field of engineering design. For it is only at the end of this dissertation that the

problems and difficulties associated with product family and product platform design are truly

understood and appreciated. And now that we understand them, either for the first time or in

greater depth, new paths can be explored and new methods can be developed which continue

to advance the state-of-the-art in product family and product platform design. It is the hope of

the author that the PPCEM enjoys the same success as the RCEM, providing a foundation on

which future research can be established in the same manner that this work builds upon the

work before it.

REFERENCES

1982, The Concise Oxford Dictionary, Oxford University Press, Oxford, UK.

Abair, R., 1995, October 22-27, "Agile Manufacturing: This Is not Just Repackaging of
Material Requirements Planning and Just-In-Time," 38th American Production and
Inventory Control Society (APICS) International Conference and Exhibition,
Orlando, FL, APICS, pp. 196-198.

Alexander, B., 1993, June 14-16, "Kodak Fun Saver Camera Recycling," Society of Plastics
Engineers Recycling Conference - Survival Tactics thru the '90's, Chicago, IL, pp.
207-212.

Alexandrov, N., Dennis, J. E., Jr., Lewis, R. M. and Torczon, V., 1997, "A Trust Region
Framework for Managing the Use of Approximation Models in Optimization,"
NASA/CR-20145, ICASE Report No. 97-50, Institute for Computer Applications in
Science and Engineering (ICASE), NASA Langley Research Center, Hampton, VA.

Anderson, D. M. and Pine, B. J., II, 1997, Agile Product Development for Mass
Customization, Irwin, Chicago, IL.

Arora, J. S., 1989, Introduction to Optimum Design, McGraw-Hill, New York.

Balling, R. J. and Clark, D. T., 1992, September 21-23, "A Flexible Approximation Model for
Use with Optimization," 4th AIAA/USAF/NASA/OAI Symposium on
Multidisciplinary Analysis and Optimization, Cleveland, OH, AIAA, Vol. 2, pp.
886-894. AIAA-92-4801-CP.

Barton, R. R., 1992, December 13-16, "Metamodels for Simulation Input-Output Relations,"
Proceedings of the 1992 Winter Simulation Conference (Swain, J. J., Goldsman,
D., et al., eds.), Arlington, VA, IEEE, pp. 289-299.

Barton, R. R., 1994, December 11-14, "Metamodeling: A State of the Art Review,"
Proceedings of the 1994 Winter Simulation Conference (Tew, J. D., Manivannan,
S., et al., eds.), Lake Buena Vista, FL, IEEE, pp. 237-244.

Bond, A. H. and Ricci, R. J., 1992, "Cooperation in Aircraft Design," Research in
Engineering Design, Vol. 4, pp. 115-130.

Booker, A. J., 1996, "Case Studies in Design and Analysis of Computer Experiments,"
Proceedings of the Section on Physical and Engineering Sciences, American
Statistical Association.

Booker, A. J., Conn, A. R., Dennis, J. E., Frank, P. D., Serafini, D., Torczon, V. and Trosset,
M., 1996, "Multi-Level Design Optimization: A Boeing/IBM/Rice Collaborative
Project," 1996 Final Report, ISSTECH-96-031, The Boeing Company, Seattle, WA.

Booker, A. J., Conn, A. R., Dennis, J. E., Frank, P. D., Trosset, M. and Torczon, V., 1995,
"Global Modeling for Optimization: Boeing/IBM/Rice Collaborative Project," 1995
Final Report, ISSTECH-95-032, The Boeing Company, Seattle, WA.

Box, G. E. P. and Behnken, D. W., 1960, "Some New Three Level Designs for the Study of
Quantitative Variables," Technometrics, Vol. 2, No. 4, pp. 455-475, "Errata," Vol. 3,
No. 4, p. 576.

Box, G. E. P. and Draper, N. R., 1987, Empirical Model Building and Response Surfaces,
John Wiley & Sons, New York.

Box, M. J. and Draper, N. R., 1971, "Factorial Designs, the |X'X| Criterion, and Some Related
Matters," Technometrics, Vol. 13, No. 4 (November), pp. 731-742.

Bras, B. A. and Mistree, F., 1991, "Designing Design Processes in Decision-Based Concurrent
Engineering," SAE Transactions, Journal of Materials & Manufacturing, SAE
International, Warrendale, PA, pp. 451-458.

Byrne, D. M. and Taguchi, S., 1987, "The Taguchi Approach to Parameter Design," Quality
Progress, December, pp. 19-26.

Chaloner, K. and Verdinelli, I., 1995, "Bayesian Experimental Design: A Review," Statistical
Science, Vol. 10, No. 3, pp. 273-304.

Chambers, J. M., Freeny, A. E. and Heiberger, R. M., 1992, "Chapter 5: Analysis of Variance;
Designed Experiments," Statistical Models in S (Chambers, J. M. and Hastie, T. J.,
eds.), Wadsworth & Brooks/Cole, Pacific Grove, CA, pp. 145-193.

Chang, T.-S. and Ward, A. C., 1995, September 17-20, "Design-in-Modularity with
Conceptual Robustness," Advances in Design Automation (Azarm, S., Dutta, D., et
al., eds.), Boston, MA, ASME, Vol. 82-1, pp. 493-500.

Chang, T.-S., Ward, A. C., Lee, J. and Jacox, E. H., 1994, November 6-11, "Distributed
Design with Conceptual Robustness: A Procedure Based on Taguchi's Parameter

Design," Concurrent Product Design Conference (Gadh, R., ed.), Chicago, IL,
ASME, Vol. 74, pp. 19-29.

Chapman, S. J., 1991, Electric Machinery Fundamentals, McGraw-Hill, New York.

Chen, W., Rosen, D., Allen, J. K. and Mistree, F., 1994, "Modularity and the Independence of
Functional Requirements in Designing Complex Systems," Concurrent Product Design
(Gadh, R., ed.), ASME, Vol. 74, pp. 31-38.

Chen, W., 1995, "A Robust Concept Exploration Method for Configuring Complex Systems,"
Ph.D. Dissertation, G. W. Woodruff School of Mechanical Engineering, Georgia
Institute of Technology, Atlanta, GA.

Chen, W., Allen, J. K., Mavris, D. and Mistree, F., 1996a, "A Concept Exploration Method
for Determining Robust Top-Level Specifications," Engineering Optimization, Vol.
26, No. 2, pp. 137-158.

Chen, W., Allen, J. K., Tsui, K.-L. and Mistree, F., 1996b, "A Procedure for Robust Design:
Minimizing Variations Caused by Noise and Control Factors," Journal of Mechanical
Design, Vol. 118, No. 4, pp. 478-485.

Chen, W., Simpson, T. W., Allen, J. K. and Mistree, F., 1996c, August 18-22, "Use of Design
Capability Indices to Satisfy a Ranged Set of Design Requirements," Advances in
Design Automation (Dutta, D., ed.), Irvine, CA, ASME, Paper No. 96-
DETC/DAC-1090.

Chen, W., Allen, J. K., Schrage, D. P. and Mistree, F., 1997, "Statistical Experimentation
Methods for Achieving Affordable Concurrent Systems Design," AIAA Journal, Vol.
35, No. 5, pp. 893-900.

Chen, W., Allen, J. K. and Mistree, F., 1997, "A Robust Concept Exploration Method for
Enhancing Productivity in Concurrent Systems Design," Concurrent Engineering:
Research and Applications, Vol. 5, No. 3, pp. 203-217.

Cheng, B. and Titterington, D. M., 1994, "Neural Networks: A Review from a Statistical
Perspective," Statistical Science, Vol. 9, No. 1, pp. 2-54.

Chinnaiah, P. S. S., Kamarthi, S. V. and Cullinane, T. P., 1998, "Characterization and Analysis
of Mass-Customized Production Systems," International Journal of Agile
Manufacturing, under review.

Clark, K. B. and Wheelwright, S. C., 1993, Managing New Product and Process
Development, Free Press, New York.
Cogdell, J. R., 1996, Foundations of Electrical Engineering, Prentice Hall, Upper Saddle
River, NJ.

Collier, D. A., 1981, "The Measurement and Operating Benefits of Component Part
Commonality," Decision Sciences, Vol. 12, No. 1, pp. 85-96.

Collier, D. A., 1982, "Aggregate Safety Stock Levels and Component Part Commonality,"
Management Science, Vol. 28, No. 22, pp. 1296-1303.

Cox, D. D. and John, S., 1995, March 13-16, "SDO: A Statistical Method for Global
Optimization," Proceedings of the ICASE/NASA Langley Workshop on
Multidisciplinary Optimization (Alexandrov, N. M. and Hussaini, M. Y., eds.),
Hampton, VA, SIAM, pp. 315-329.

Cressie, N. A. C., 1993, Statistics for Spatial Data, Revised Edition, John Wiley & Sons,
New York.

Currin, C., Mitchell, T., Morris, M. and Ylvisaker, D., 1991, "Bayesian Prediction of
Deterministic Functions, With Applications to the Design and Analysis of Computer
Experiments," Journal of the American Statistical Association, Vol. 86, No. 416,
pp. 953-963.

Davis, S. M., 1987, Future Perfect, Addison-Wesley Publishing Company, Reading, MA.

Dennis, J. E. and Torczon, V., 1995, March 13-16, "Managing Approximation Models in
Optimization," Proceedings of the ICASE/NASA Langley Workshop on
Multidisciplinary Design Optimization (Alexandrov, N. M. and Hussaini, M. Y.,
eds.), Hampton, VA, SIAM, pp. 330-347.

Dennis, J. E., Jr. and Torczon, V., 1996, September 4-6, "Approximation Model Management
for Optimization," 6th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary
Analysis and Optimization, Bellevue, WA, AIAA, Vol. 2, pp. 1044-1046. AIAA-
96-4099-CP.

Dhumal, A., Dhawan, R., Kona, A. and Soni, A. H., 1996, August 18-22, "Reconfigurable
System Analysis for Agile Manufacturing," 5th ASME Flexible Assembly Conference
(Soni, A., ed.), Irvine, CA, ASME, Paper No. 96-DETC/FAS-1367.

DiCamillo, G. T., 1988, "Winning Turnaround Strategies at Black & Decker," Journal of
Business Strategy, Vol. 9, No. 2, pp. 30-33.

Diwekar, U. M., 1995, "Hammersley Sampling Sequence (HSS) Manual," Engineering &
Public Policy Department, Carnegie Mellon University, Pittsburgh, PA.
Eggert, R. J. and Mayne, R. W., 1993, "Probabilistic Optimal Design Using Successive
Surrogate Probability Density Functions," Journal of Mechanical Design, Vol. 115,
No. 3, pp. 385-391.

Erens, F. and Breuls, P., 1995, "Structuring Product Families in the Development Process,"
Proceedings of ASI'95, Lisbon, Portugal.

Erens, F. and Verhulst, K., 1997, "Architectures for Product Families," Computers in
Industry, Vol. 33, pp. 165-178.

Erens, F. J. and Hegge, H. M. H., 1994, "Manufacturing and Sales Co-ordination for Product
Variety," International Journal of Production Economics, Vol. 37, No. 1, pp. 83-
99.

Erens, F., 1997, "Synthesis of Variety: Developing Product Families," Ph.D. Dissertation,
University of Technology, Eindhoven, The Netherlands.

Fang, K.-T. and Wang, Y., 1994, Number-theoretic Methods in Statistics, Chapman & Hall,
New York.

Finger, S. and Dixon, J. R., 1989a, "A Review of Research in Mechanical Engineering Design.
Part 1: Descriptive, Prescriptive, and Computer-Based Models of Design Processes,"
Research in Engineering Design, Vol. 1, pp. 51-67.

Finger, S. and Dixon, J. R., 1989b, "A Review of Research in Mechanical Engineering Design.
Part 2: Representations, Analysis, and Design for the Life Cycle," Research in
Engineering Design, Vol. 1, pp. 121-137.

Fujita, K. and Ishii, K., 1997, September 14-17, "Task Structuring Toward Computational
Approaches to Product Variety Design," Advances in Design Automation (Dutta, D.,
ed.), Sacramento, CA, ASME, Paper No. DETC97/DAC-3766.

G.S. Electric, 1997, "Why Universal Motors Turn On the Appliance Industry,"
http://www.gselectric.com/electric/univers4.htm.

Giovannitti-Jensen, A. and Myers, R. H., 1989, "Graphical Assessment of the Prediction
Capability of Response Surface Designs," Technometrics, Vol. 31, No. 2 (May), pp.
159-171.

Giunta, A. A., 1997, "Aircraft Multidisciplinary Design Optimization Using Design of
Experiments Theory and Response Surface Modeling," Ph.D. Dissertation and MAD
Center Report No. 97-05-01, Department of Aerospace and Ocean Engineering,
Virginia Polytechnic Institute and State University, Blacksburg, VA.
Giunta, A. A., Dudley, J. M., Narducci, R., Grossman, B., Haftka, R. T., Mason, W. H. and
Watson, L. T., 1994, September 7-9, "Noisy Aerodynamic Response and Smooth
Approximations in HSCT Design," 5th AIAA/USAF/NASA/ISSMO Symposium on
Multidisciplinary Analysis and Optimization, Panama City, FL, AIAA, Vol. 2, pp.
1117-1128. AIAA-94-4376-CP.

Giunta, A., Watson, L. T. and Koehler, J., 1998, September 2-4, "A Comparison of
Approximation Modeling Techniques: Polynomial Versus Interpolating Models," 7th
AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis &
Optimization, St. Louis, MO, AIAA, AIAA-98-4758.

Goffe, W. L., Ferrier, G. D. and Rogers, J., 1994, "Global Optimization of Statistical Functions
with Simulated Annealing," Journal of Econometrics, Vol. 60, No. 1-2, pp. 65-100.
Source code is available at http://netlib2.cs.utk.edu/opt.

Goldberg, D. E., 1989, Genetic Algorithms in Search, Optimization, and Machine
Learning, Addison-Wesley Publishing Company, Inc., New York.

Hagemann, G., Schley, C.-A., Odintsov, E. and Sobatchkine, A., 1996, July, "Nozzle
Flowfield Analysis with Particular Regard to 3D-Plug-Cluster Configurations," AIAA-
96-2954.

Hajela, P. and Berke, L., 1992, "Neural Networks in Structural Analysis and Design: An
Overview," Computing Systems in Engineering, Vol. 3, No. 1-4, pp. 525-538.

Hardin, R. H. and Sloane, N. J. A., 1993, "A New Approach to the Construction of Optimal
Designs," Journal of Statistical Planning and Inference, Vol. 37, pp. 339-369.

Hernandez, G., 1998, "A Probabilistic-Based Design Approach with Game Theoretical
Representations of the Enterprise Design Process," M.S. Thesis, G. W. Woodruff
School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA.

Hernandez, G., Simpson, T. W., Allen, J. K., Bascaran, E., Avila, L. F. and Salinas, F., 1998,
September 13-16, "Robust Design of Product Families for Make-to-Order Systems,"
Advances in Design Automation Conference, Atlanta, GA, ASME, DETC98/DAC-
5595.

Hollins, B. and Pugh, S., 1990, Successful Product Design, Butterworths, Boston, MA.

Hormozi, A., 1994, October 30-November 4, "Agile Manufacturing," 37th American
Production and Inventory Control Society (APICS) International Conference and
Exhibition, San Diego, CA, APICS, pp. 216-218.

Hubka, V. and Eder, W. E., 1988, Theory of Technical Systems: A Total Concept Theory
for Engineering Design, Springer, New York.

Hubka, V. and Eder, W. E., 1996, Design Science: Introduction to the Needs, Scope and
Organization of Engineering Design Knowledge, Springer, New York.

Iacobellis, S. F., Larson, V. R. and Burry, R. V., 1967, December, "Liquid-Propellant Rocket
Engines: Their Status and Future," Journal of Spacecraft and Rockets, Vol. 4, pp.
1569-1580.

Ignizio, J. P., 1985, Introduction to Linear Goal Programming, Sage University Papers,
Beverly Hills, CA.

Ignizio, J. P., 1990, An Introduction to Expert Systems: The Methodology and its
Implementation, McGraw-Hill, New York.

Ignizio, J. P., Wyskida, R. M. and Wilhelm, M. R., 1972, "A Rationale for Heuristic Program
Selection and Evaluation," Vol. 4, No. 1, pp. 16-19.

Iman, R. J. and Shortencarier, M. J., 1984, "A FORTRAN77 Program and User's Guide for
Generation of Latin Hypercube and Random Samples for Use with Computer Models,"
NUREG/CR-3624, SAND83-2365, Sandia National Laboratories, Albuquerque, NM.

Jacobson, G. and Hillkirk, J., 1986, Xerox: American Samurai, Macmillan Publishing
Company, New York.

Johnson, M. E., Moore, L. M. and Ylvisaker, D., 1990, "Minimax and Maximin Distance
Designs," Journal of Statistical Planning and Inference, Vol. 26, No. 2, pp. 131-
148.

Johnson, N. L., Kotz, S. and Pearn, W. L., 1992, "Flexible Process Capability Indices,"
Institute of Statistics Mimeo Series, University of North Carolina, Chapel Hill, NC.

Journel, A. G. and Huijbregts, C. J., 1978, Mining Geostatistics, Academic Press, New
York.

Kalagnanam, J. R. and Diwekar, U. M., 1997, "An Efficient Sampling Technique for Off-Line
Quality Control," Technometrics, Vol. 39, No. 3, pp. 308-319.

Kannan, B. K. and Kramer, S. N., 1994, "An Augmented Lagrange Multiplier Based Method
for Mixed Integer Discrete Continuous Optimization and Its Application to Mechanical
Design," Journal of Mechanical Design, Vol. 116, No. 2, pp. 405-411.

Kleijnen, J. P. C., 1987, Statistical Tools for Simulation Practitioners, Marcel Dekker,
New York.

Kobe, G., 1997, "Platforms - GM's Seven Platform Global Strategy," Automotive Industries,
Vol. 177, p. 50.

Koch, P. N., 1997, "Hierarchical Modeling and Robust Synthesis for the Preliminary Design of
Large Scale, Complex Systems," Ph.D. Dissertation, G. W. Woodruff School of
Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA.

Koch, P. N., Allen, J. K., Mistree, F. and Mavris, D., 1997, September 14-17, "The Problem
of Size in Robust Design," Advances in Design Automation, Sacramento, CA,
ASME, Paper No. DETC97/DAC-3983.

Koch, P. N., Mavris, D., Allen, J. K. and Mistree, F., 1998, September 13-16, "Modeling
Noise in Approximation-Based Robust Design: A Comparison and Critical Discussion,"
Advances in Design Automation, Atlanta, GA, ASME, DETC98/DAC-5588.

Koehler, J. R. and Owen, A. B., 1996, "Computer Experiments," Handbook of Statistics
(Ghosh, S. and Rao, C. R., eds.), Elsevier Science, New York, pp. 261-308.

Korte, J. J., Salas, A. O., Dunn, H. J., Alexandrov, N. M., Follett, W. W., Orient, G. E. and
Hadid, A. H., 1997, "Multidisciplinary Approach to Aerospike Nozzle Design," NASA-
TM-110326, NASA Langley Research Center, Hampton, VA.

Kota, S. and Sethuraman, K., 1998, September 13-16, "Managing Variety in Product Families
Through Design for Commonality," Design Theory and Methodology - DTM'98,
Atlanta, GA, ASME, DETC98/DTM-5651.

Lee, H. L. and Billington, C., 1994, "Designing Products and Processes for Postponement,"
Management of Design: Engineering and Management Perspective (Dasu, S. and
Eastman, C., eds.), Kluwer Academic Publishers, Boston, MA, pp. 105-122.

Lee, H. L. and Tang, C. S., 1997, "Modeling the Costs and Benefits of Delayed Product
Differentiation," Management Science, Vol. 43, No. 1, pp. 40-53.

Lee, H. L., Billington, C. and Carter, B., 1993, "Hewlett-Packard Gains Control of Inventory
and Service through Design for Localization," Interfaces, Vol. 23, No. 4, pp. 1-11.

Lehnerd, A. P., 1987, "Revitalizing the Manufacture and Design of Mature Global Products,"
Technology and Global Industry: Companies and Nations in the World Economy
(Guile, B. R. and Brooks, H., eds.), National Academy Press, Washington, D.C., pp.
49-64.
Lewis, K., Lucas, T. and Mistree, F., 1994, September 7-9, "A Decision Based Approach to
Developing Ranged Top-Level Aircraft Specifications: A Conceptual Exposition," 5th
AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, Panama City, FL, Vol. 1, pp. 465-481.

Lewis, R. M., 1996, "A Trust Region Framework for Managing Approximation Models in
Engineering Optimization," 6th AIAA/USAF/NASA/ISSMO Symposium on
Multidisciplinary Analysis and Optimization, Bellevue, WA, AIAA, Vol. 2, pp.
1053-1055. AIAA-96-4101-CP.

Li, H.-L. and Chou, C.-T., 1994, "A Global Approach for Nonlinear Mixed Discrete
Programming in Design Optimization," Engineering Optimization, Vol. 22, No. 2, pp.
109-122.

Lin, S., 1975, "Heuristic Programming as an Aid to Network Design," Networks, Vol. 5, No.
1, pp. 33-43.

Lucas, J. M., 1976, "Which Response Surface Design is Best," Technometrics, Vol. 18, No.
4, pp. 411-417.

Lucas, J. M., 1994, "Using Response Surface Methodology to Achieve a Robust Process,"
Journal of Quality Technology, Vol. 26, No. 4, pp. 248-260.

Veinott, C. G. and Martin, J. E., 1986, Fractional and Subfractional Horsepower Electric
Motors, McGraw-Hill, New York.

Martin, M. and Ishii, K., 1996, August 18-22, "Design for Variety: A Methodology for
Understanding the Costs of Product Proliferation," Design Theory and Methodology -
DTM'96 (Wood, K., ed.), Irvine, CA, ASME, Paper No. 96-DETC/DTM-1610.

Martin, M. V. and Ishii, K., 1997, September 14-17, "Design for Variety: Development of
Complexity Indices and Design Charts," Advances in Design Automation (Dutta, D.,
ed.), Sacramento, CA, ASME, Paper No. DETC97/DFM-4359.

Mather, H., 1995, October 22-27, "Product Variety -- Friend or Foe?," Proceedings of the
1995 38th American Production & Inventory Control Society International
Conference and Exhibition, Orlando, FL, APICS, pp. 378-381.

Matheron, G., 1963, "Principles of Geostatistics," Economic Geology, Vol. 58, pp. 1246-
1266.

MathSoft, 1997, S-Plus User's Guide Version 4.0, Seattle, WA.

Mavris, D. N., Bandte, O. and Schrage, D. P., 1995, May, "Economic Uncertainty Assessment
of an HSCT Using a Combined Design of Experiments/Monte Carlo Simulation
Approach," 17th Annual Conference of International Society of Parametric
Analysts, San Diego, CA.

Mavris, D., Bandte, O. and Schrage, D., 1996, September 4-6, "Application of Probabilistic
Methods for the Determination of an Economically Robust HSCT Configuration," 6th
AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, Bellevue, WA, AIAA, Vol. 2, pp. 968-978. AIAA-96-4090-CP.

McCullers, L. A., 1993, "Flight Optimization System, User's Guide, Version 5.7," NASA
Langley Research Center, Hampton, VA.

McDermott, C. M. and Stock, G. N., 1994, "The Use of Common Parts and Designs in High-
Tech Industries: A Strategic Approach," Production and Inventory Management
Journal, Vol. 35, No. 3, pp. 65-68.

McGrath, M. E., 1995, Product Strategy for High-Technology Companies, Irwin
Professional Publishing, New York.

McKay, A., Erens, F. and Bloor, M. S., 1996, "Relating Product Definition and Product
Variety," Research in Engineering Design, Vol. 8, No. 2, pp. 63-80.

McKay, M. D., Beckman, R. J. and Conover, W. J., 1979, "A Comparison of Three Methods
for Selecting Values of Input Variables in t