Ph.D. Dissertation
Timothy W. Simpson
Abstract
Current design research is directed at improving the efficiency and effectiveness of designers in
the product realization process, and until recently, the focus has been predominantly on
designing a single product. However, today’s market—characterized by words such as mass
customization, rapid innovation, and make-to-order products—requires a new approach to
provide the necessary product variety to remain competitive. The answer advocated in this
dissertation is the design and development of scalable product platforms around which a family
of products can be realized to satisfy a variety of market niches. In particular, robust design
principles, statistical metamodeling techniques, and the market segmentation grid, an attention
directing tool from management science, are synthesized into the Product Platform Concept
Exploration Method (PPCEM); the PPCEM is an efficient and effective method for designing
scalable product platforms, the cornerstone of an effective product family. The efficiency and
effectiveness of the method are tested and verified through application to three example
problems: the design of a family of oil filters, the design of a family of absorption chillers, and the
design of a family of General Aviation aircraft.
Q1. How can scalability be modeled and realized in product family design?
There is a one-to-one correspondence between the hypotheses and research questions. The
Product Platform Concept Exploration Method mentioned in Hypothesis 1 is developed to
answer the first research question, providing a method to model and realize scalability in product
family design. Hypotheses 2 and 3 entail affirmative answers to Questions 2 and 3, which are
explicitly tested and verified in Chapter 4. Confirmation of Hypothesis 1 is not contingent upon
verification of Hypotheses 2 and 3; Hypotheses 2 and 3 help to support Hypothesis 1 but have
implications which extend beyond product family design. These implications are discussed more
thoroughly in the concluding chapter of this dissertation, Section 8.1.
Since Question 1 is quite broad, three supporting research questions and sub-hypotheses are
proposed to facilitate the verification of Hypothesis 1. As with the preceding research questions
and hypotheses, there is a one-to-one correspondence between each supporting question and
the correspondingly numbered sub-hypothesis. The supporting questions and sub-hypotheses
are stated as follows.
Q1.1. How can product platform scaling opportunities be identified from overall
design requirements?
Q1.2. How can robust design principles be used to design a scalable product
platform?
Q1.3. How can individual targets for product variants be aggregated and modeled for
product platform design?
Sub-Hypothesis 1.1: The market segmentation grid can be utilized to identify scale
factors for a product platform.
Sub-Hypothesis 1.2: Robust design principles can be used to design a scalable
product platform by minimizing the sensitivity of a product platform to variations in
scale factors.
Sub-Hypothesis 1.3: Individual targets for product variants can be aggregated into an
appropriate mean and variance and used in conjunction with robust design
principles to effect a product family.
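Sub-Hypothesis 1.3 can be illustrated with a minimal sketch. The three variant targets below are hypothetical, and the plain sample mean and variance stand in for whatever aggregation a particular product family warrants; the dissertation's actual procedure is developed in Chapter 3.

```python
import statistics

# Hypothetical performance targets for three product variants
# (e.g., required capacity for small, medium, and large models).
variant_targets = [10.0, 14.0, 18.0]

# Aggregate the individual targets into a single mean and variance that a
# robust-design formulation can use: bring the platform response mean to
# `target_mean` while keeping its spread near `target_var`.
target_mean = statistics.mean(variant_targets)     # 14.0
target_var = statistics.variance(variant_targets)  # sample variance, 16.0
```

The aggregated pair then feeds the "bring the mean on target" and "minimize the variation" goals directly, in place of three separate single-product targets.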
As with the main hypotheses, the sub-hypotheses are stated here to provide context for the
literature review in the next chapter and development of the PPCEM in Section 3.1.
Supporting Posits
There are several posits which support the research hypotheses. Six posits support Hypothesis
1 and Sub-Hypotheses 1.1 through 1.3; they are the following.
Posit 1.1: The RCEM provides an efficient and effective means for developing robust
top-level design specifications for complex systems design.
Posit 1.2: Metamodeling techniques, specifically, design of experiments and response
surface methodology, can be used to facilitate concept exploration and optimization,
thus increasing a designer’s efficiency.
Posit 1.3: Robust design principles can be used to minimize the sensitivity of a design
to variations in uncontrollable (i.e., noise) factors and/or variations in design
parameters (i.e., control factors).
Posit 1.4: Robust design principles can be used effectively in the early stages of the
design process by modeling the response itself with separate goals for “bringing the
mean on target” and “minimizing the variation.”
Posit 1.5: The compromise DSP is capable of effecting robust design solutions through
separate goals for “bringing the mean on target” and “minimizing the variation” caused
by noise factors and/or variations in the design variables.
Posit 1.6: The market segmentation grid can be used to identify opportunities for
platform leveraging in product family design.
Posit 2.1: Building an (interpolative) kriging model is not predicated on the assumption
of underlying random error in the data.
Posit 2.2: Kriging provides very flexible modeling capabilities based on the wide
variety of spatial correlation functions which can be selected to model the data.
• Posit 2.1 is more fact than assumption; it can be substantiated by Sacks, et al.
(1989); Koehler and Owen (1996); and Cressie (1993).
• Posit 2.2 is substantiated by many researchers, most notably: Sacks, et al. (1989);
Welch, et al. (1992); Cressie (1993); and Barton (1992; 1994).
The testing of Posit 2.2 helps to verify Hypothesis 2; the strategy for testing Hypothesis 2 (and
Posit 2.2) is outlined in Section ___.
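Posits 2.1 and 2.2 can be illustrated with a toy one-dimensional kriging predictor. The Gaussian correlation function and the fixed value of theta below are illustrative assumptions (in practice theta is fit by maximum likelihood, as in Appendix B); the point is that the predictor reproduces its training data exactly, with no random-error term.

```python
import numpy as np

# A toy 1-D kriging-style predictor with a constant underlying model and a
# Gaussian spatial correlation, R(d) = exp(-theta * d**2). theta is fixed
# here for illustration only.
theta = 10.0

def corr(a, b):
    d = a[:, None] - b[None, :]
    return np.exp(-theta * d ** 2)

x = np.array([0.0, 0.4, 0.7, 1.0])   # sample sites
y = np.sin(2 * np.pi * x)            # deterministic responses, no noise

R = corr(x, x)
Rinv = np.linalg.inv(R)
ones = np.ones(len(x))
beta = (ones @ Rinv @ y) / (ones @ Rinv @ ones)  # GLS constant term

def predict(xnew):
    r = corr(np.atleast_1d(xnew), x)  # correlations to the sample sites
    return beta + r @ Rinv @ (y - beta * ones)

# Interpolation: predictions at the sample sites equal the data exactly,
# which is why no underlying random error need be assumed (Posit 2.1).
print(np.allclose(predict(x), y))  # True
```

Swapping in a different correlation function changes how the model bends between samples, which is the flexibility Posit 2.2 refers to.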
• Posit 3.1 is taken from Sacks, et al. (1989) who state that the “classical notions of
experimental blocking, replication and randomization are irrelevant” for deterministic
computer experiments which contain no random error. Moreover, any experimental
design text (see, e.g., Montgomery, 1991) confirms that experimental design
properties such as replication, blocking, and rotatability were developed explicitly
to account for the random (measurement) error present in the physical experiments
for which classical experimental designs were created.
Furthermore, since kriging (using an underlying constant model) is being advocated in this
dissertation for metamodeling deterministic computer experiments, an additional posit in support
of Hypothesis 3 is the following.
Posit 3.2: Since kriging models (with an underlying constant model) rely on the spatial
correlation between data, confounding and aliasing of main effects and two-factor
interactions have no significant meaning when predicting a response.
• Posit 3.2 is substantiated by Sacks, et al. (1989); Currin, et al. (1991); Welch, et
al. (1990); and Barton (1992; 1994). In physical experimentation, great care is
taken to prevent aliasing and confounding of main effects and two-factor
interactions, so that the coefficients of the
polynomial response surface model (see, e.g., Montgomery, 1991).
The experimental procedure for testing Hypothesis 3 is discussed in the next section along with
the specific strategy for verification and testing of all of the hypotheses.
First and foremost, the Product Platform Concept Exploration Method (PPCEM) is
hypothesized as a method for designing scalable product platforms. In addition, the PPCEM is
hypothesized to be efficient and effective. The efficiency of the PPCEM is attained by using:
• metamodels to:
– create inexpensive approximations of computer analyses, and
– facilitate the implementation of robust design; and
• robust design principles to design simultaneously a family of products around a
scalable product platform.
The effectiveness of the PPCEM is attained by using:
• robust design principles to design a scalable product platform, and
• the lexicographic minimum concept to generate a solution portfolio that maintains design
flexibility.
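The lexicographic minimum concept can be sketched in a few lines. The candidate designs and their deviation values below are hypothetical; a deviation vector is ordered by priority level, and the lexicographic minimum is the candidate whose highest-priority deviation is smallest, with ties broken at successively lower levels.

```python
# In the compromise DSP, each candidate solution carries a vector of
# deviation values, one entry per priority level. Python tuples compare
# lexicographically, so min() over the deviation tuples finds the
# lexicographic minimum directly.
candidates = {
    "design A": (0.0, 0.30, 0.10),  # hypothetical deviations by priority level
    "design B": (0.0, 0.25, 0.90),
    "design C": (0.1, 0.00, 0.00),
}

best = min(candidates, key=candidates.get)
print(best)  # "design B": ties at level 1, then 0.25 < 0.30 at level 2
```

Note that design C is ruled out at the first level despite its perfect lower-level deviations, which is exactly the behavior that distinguishes the lexicographic minimum from a weighted sum.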
To verify the effectiveness of the PPCEM as a method to design scalable product platforms,
three example problems are utilized. In each example, the resulting product platform obtained
using the PPCEM is compared to the results obtained from designing each product in the family
separately and then aggregating the products into a common set of specifications. The three
example problems for testing the PPCEM are:
• the design of a family of oil filters,
• the design of a family of absorption chillers, and
• the design of a family of General Aviation aircraft.
Efficiency of the PPCEM is verified by comparing the time required to build, validate, and
use the necessary metamodels against the time required to perform the same procedure
without the metamodels. Similarly, the efficiency achieved through implementation of robust
design principles to design simultaneously a family of products is discussed at the end of each
example problem.
Verification and testing of the sub-hypotheses related to Hypothesis 1 entails the following.
Testing Sub-Hypothesis 1.1 - The procedure for using the market segmentation grid
to identify scale factors for a product platform is shown in Figure ___ and described
in Section ___. Further verification of this sub-hypothesis requires demonstrating
that this procedure can indeed be used to identify scale factors for a product
platform. This is demonstrated in all three examples wherein the appropriate scale
factors are identified for leveraging the product platform in the product family.
Testing Sub-Hypothesis 1.2 - If appropriate scale factors can be identified for a
product platform (i.e., Sub-Hypothesis 1.1 is true), then the principles of robust
design can be employed to develop a design which is robust with respect to these
noise factors, much in the same way that Chang, et al. (1994) use robust design
principles to develop “conceptually robust” designs with regard to appropriate
“conceptual noise” factors resulting from distributed, concurrent engineering
practices. Verification of this sub-hypothesis requires demonstration of the
approach, and the three examples provide such a demonstration.
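The idea of treating a scale factor as a noise factor can be sketched as follows. The response function and finite-difference step are illustrative assumptions, not taken from the example problems; the sketch only shows how the sensitivity of a response to variation in a scale factor might be estimated and then driven down.

```python
# A minimal sketch of Sub-Hypothesis 1.2: treat the scale factor s as a
# "noise" factor and seek platform settings x whose response is
# insensitive to variation in s.
def f(x, s):
    # Hypothetical response depending on design variable x and scale factor s.
    return (x - 2.0) ** 2 + x * s

def sensitivity_to_scale(x, s0, h=1e-6):
    # Central finite-difference estimate of df/ds at the nominal scale s0.
    return (f(x, s0 + h) - f(x, s0 - h)) / (2 * h)

# For this f, df/ds = x, so sensitivity shrinks as x approaches 0: a robust
# formulation trades some nominal performance for lower sensitivity to s.
print(sensitivity_to_scale(1.0, s0=0.5))  # approximately 1.0
```

In the PPCEM the analogue of this derivative enters the "minimize the variation" goal, while the nominal response enters the "bring the mean on target" goal.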
Testing Sub-Hypothesis 1.3 - The procedure for aggregating the individual targets of
the product variants is outlined in Section ___. As with Sub-Hypothesis 1.1, further
verification of this sub-hypothesis requires demonstrating that this procedure can
indeed be used; the three examples are used to demonstrate just that.
Verification of these sub-hypotheses helps to further verify Hypothesis 1. The strategy for
testing Hypotheses 2 and 3 is outlined in the next section.
The remaining chapters of the dissertation flow as shown in Figure ___. Having laid the
foundation in Decision-Based Design, robust
design, and the RCEM and presented the research questions for the work in this chapter,
Chapter 2 contains a literature review of related work. Based on the discussion in Section 1.1,
three research areas are reviewed: (1) product family and product platform design, (2) robust
design, and (3) metamodeling in Sections 2.1, 2.2, and 2.3, respectively.
The PPCEM is then introduced in Chapter 3 as the tools, approaches, and philosophies
from Chapters 1 and 2 are synthesized into an efficient and effective method for designing
scalable product platforms for a product family. As noted in Figure ___, the PPCEM is
overviewed in Section 3.1 with the individual
steps elaborated in Sections 3.1.1 through 3.1.5. After the PPCEM is introduced, the research
hypotheses are revisited in Section 3.2, and supporting posits are stated and substantiated.
Section 3.3 contains an outline of the strategy for verification and testing of the hypotheses. This
includes a preview of Chapter 4—wherein Hypotheses 2 and 3 are tested—and Chapters 5
through 7 wherein the PPCEM is applied to three example problems, verifying Hypotheses 1
and SubHypotheses 1.1 through 1.3.
Chapter 4 entails a brief departure from product platform design, yet it is an integral part
of the development of the PPCEM. In Section 4.1, an initial feasibility study of the usefulness of
kriging is performed, comparing the predictive capability of kriging models to that of
second-order response surfaces, the current “standard” for metamodeling. The experimental setup for
testing Hypotheses 2 and 3 is then introduced in Section 4.2. An extensive study of six
engineering test problems selected from the literature is conducted to determine the utility of
various spatial correlation functions and space filling designs; results are presented and
discussed in Section 4.3.
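For context, the "standard" metamodel that kriging is compared against can be sketched as a second-order polynomial response surface fit by ordinary least squares; the sampled function below is illustrative, not one of the dissertation's analyses.

```python
import numpy as np

# A minimal second-order response surface in one variable, fit by ordinary
# least squares. Because the sampled "analysis" really is quadratic, the
# surrogate recovers it essentially exactly.
x = np.linspace(0.0, 1.0, 9)        # design points
y = 4.0 * (x - 0.3) ** 2 + 1.0      # deterministic response to approximate

X = np.column_stack([np.ones_like(x), x, x ** 2])   # 1, x, x^2 terms
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Expanding 4(x - 0.3)^2 + 1 gives y = 1.36 - 2.4 x + 4 x^2.
print(coeffs)  # approximately [1.36, -2.4, 4.0]
```

With multiple variables the basis simply grows to include all linear, quadratic, and two-factor interaction terms; kriging's appeal in Chapter 4 is that it is not restricted to such a fixed polynomial form.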
Once the kriging and space filling design study is completed in Chapter 4, the first of the
three example problems used to demonstrate the PPCEM and verify its associated hypotheses is
given in Chapter 5: the design of a family of oil filters. The second and third example problems
are the design of a family of absorption chillers, Chapter 6, and the design of a family of General
Aviation aircraft, Chapter 7. In each chapter, an overview of the problem is given along with
pertinent analysis information. Then, the steps of the PPCEM are performed and a summary
and discussion of the results is given.
Chapter 8 is the final chapter in the dissertation. It begins in Section 8.1 with a recap of
the dissertation, emphasizing the research hypotheses and resulting contributions from the work.
A critical review of the dissertation is given in Section 8.2; limitations and shortcomings of the
work are addressed. This is followed by a discussion of possible future work to refine the
PPCEM and the associated metamodeling techniques.
Finally, there are three appendices which supplement the dissertation, specifically the
work in Chapter 4. A description of the minimax Latin hypercube design algorithm which is
unique to this work is given in Appendix A. Appendix B outlines the kriging algorithm
developed and utilized in this dissertation. In addition, a stepbystep example of building and
using a kriging model is also included. Finally, the six test problems used in Chapter 4 for
investigating the utility of different experimental design techniques and kriging models are
detailed in Appendix C.
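A plain random Latin hypercube, the starting point for the minimax-optimized designs of Appendix A, can be sketched as follows; this is the basic stratified construction, not the genetic-algorithm version developed in the appendix.

```python
import numpy as np

# A basic random Latin hypercube: each of the n points occupies its own
# row and column of an n x n grid, giving uniform one-dimensional
# projections in every variable.
def latin_hypercube(n, k, rng):
    # One stratified, independently shuffled coordinate per dimension.
    cells = np.array([rng.permutation(n) for _ in range(k)]).T
    return (cells + rng.random((n, k))) / n  # jitter within each cell

rng = np.random.default_rng(0)
X = latin_hypercube(8, 2, rng)  # 8 points in [0, 1)^2

# Each column has exactly one point in each stratum [i/8, (i+1)/8).
print(np.sort((X * 8).astype(int), axis=0))
```

A minimax variant would additionally search over such hypercubes (e.g., with a genetic algorithm, as in Appendix A) to spread the points as evenly as possible in the full space, not just in each one-dimensional projection.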
[Figure: oil filter schematic showing the filter element of diameter Di and length L, oil inflow at (Pi, Vi), oil outflow at (Po, Vo), and flow rate Q]
[Figure: roadmap relating the Product Platform Concept Exploration Method (Chapter 3), the MDO example and metamodeling study (Chapter 4), and the platform examples: oil filters, absorption chillers, and General Aviation aircraft]
CHAPTER 1
FOUNDATIONS FOR PRODUCT FAMILY AND PRODUCT PLATFORM
DESIGN
The principal objective in this dissertation is to develop the Product Platform Concept
Exploration Method (PPCEM) for efficient and effective design of scalable product platforms
for a product family. As the title of this chapter implies, the foundations for developing the
PPCEM are presented here. The heart of the chapter lies in Section 1.3 wherein the research
objectives, hypotheses, and contributions for the work are described; this section sets the stage
for the chapters that follow, culminating in the development of the PPCEM in Chapter 3.
Specifically, Sections 1.1 and 1.2 provide the motivation, foundation, and context for
investigating the proposed research and serve to establish context for the reader. More
specifically, in Section 1.1 the concepts of product family and product platform design are
introduced, and opportunities for advancing this nascent research area are identified. In Section
1.2, the foundations for the work are presented, namely, Decision-Based Design, robust design
principles, and the Robust Concept Exploration Method. Section 1.4 contains an overview of
the dissertation.
CHAPTER 2
STATE-OF-THE-ART IN PRODUCT FAMILY DESIGN, ROBUST DESIGN, AND
METAMODELING: LITERATURE REVIEW
In this chapter a survey of relevant work in product family and product platform design, robust
design, and metamodeling is presented in Sections 2.1, 2.2, and 2.3, respectively. In Section
2.1, the descriptors, tools, and current methods for designing product families and product
platforms are discussed. Section 2.2 contains a review of robust design principles, tracing the
evolution of robust design from parameter design to the early stages of product design. This
segues into a discussion of metamodeling techniques for building inexpensive approximations of
computer analyses to facilitate robust design and concept exploration. In particular, the kriging
approach to metamodeling is introduced in Section 2.3.1, and a variety of space filling
experimental designs for querying the computer code to build these models are described in
Section 2.3.2. Section 2.4 concludes the chapter with a summary of what has been presented
and a preview of what is next.
CHAPTER 3
THE PRODUCT PLATFORM CONCEPT EXPLORATION METHOD
The work in this chapter is a synthesis of the previous chapters and represents the principal
objective in this dissertation, namely, to develop the Product Platform Concept Exploration
Method (PPCEM) for efficient and effective design of scalable product platforms for a product
family. To start, an overview of the PPCEM and its associated steps and tools is given in
Section 3.1 with each step of the PPCEM and its constituent elements elaborated in Sections
3.1.1 through 3.1.5; the resulting infrastructure of the PPCEM is presented in Section 3.1.6. In
Section 3.2 the research hypotheses on which the PPCEM is founded are revisited from
Section 1.3.1. More specifically, in Section 3.2.1 the relationship between the research
hypotheses and modifications to the RCEM are detailed, and in Section 3.2.2 supporting posits
for the research hypotheses are stated. Section 3.3 follows with an outline of the strategy for
verification and testing of the research hypotheses. Section 3.4 concludes the chapter with a
recap of what has been presented and a look ahead to the metamodeling study in Chapter 4
and the example problems in Chapters 5, 6, and 7 which are used to test the research
hypotheses and demonstrate the application of the PPCEM.
CHAPTER 4
THE UTILITY OF KRIGING AND SPACE FILLING EXPERIMENTAL DESIGNS
In this chapter, Hypotheses 2 and 3 are tested, verifying the utility of kriging and space filling
experimental designs for building metamodels of deterministic computer analyses. An initial
feasibility study of kriging as a metamodeling technique is given in Section 4.1; the study involves
comparing kriging and response surface models in the multidisciplinary design of an aerospike
nozzle. Once kriging is established as a viable alternative, a detailed study is set up in Section
4.2 to test Hypotheses 2 and 3. Six problems are introduced in Section 4.2.1 to serve as the
test bed for benchmarking kriging and space filling designs. In Sections 4.2.2 and 4.2.3,
experimental design choices and error assessment measures are discussed, respectively.
Section 4.3 contains the results of the study and a discussion of the ramifications of the results.
A summary of the chapter is then given in Section 4.4 along with a discussion of the relevance of
this chapter to the development of the PPCEM.
4.1 INITIAL FEASIBILITY STUDY OF KRIGING: THE MULTIDISCIPLINARY
DESIGN OF AN AEROSPIKE NOZZLE
4.1.1 Background for the Aerospike Nozzle Problem
4.1.2 Metamodeling of the Aerospike Nozzle Problem
4.1.3 Optimization using the Response Surface and Kriging Metamodels
4.2 EXPERIMENTAL SETUP: KRIGING AND SPACE FILLING EXPERIMENTAL
DESIGN TESTBED
4.2.1 Overview of Testbed Problems
4.2.2 Experimental Design Choices for Test Problems
4.2.3 Validation Points and Error Metrics for Assessing Model Accuracy
4.2.4 Summary of Kriging Study
4.3 THE UTILITY OF KRIGING AND SPACE FILLING DESIGNS
4.3.1 Experimental Set Up
4.3.2 Which Correlation Function is Best?
4.3.3 Which Types of Experimental Designs are Best?
4.4 A LOOK BACK AND A LOOK AHEAD
CHAPTER 5
DESIGN OF A FAMILY OF OIL FILTERS
CHAPTER 6
DESIGN OF A FAMILY OF ABSORPTION CHILLERS
6.1 OVERVIEW OF THE ABSORPTION CHILLER PLATFORM PROBLEM
6.1.1 Problem Statement and Leveraging Strategy
6.1.2 Relevant Analyses for Absorption Chillers
6.2 STEPS 1 AND 2: CREATE MARKET SEGMENTATION GRID AND
CLASSIFY FACTORS FOR ABSORPTION CHILLER PLATFORM
6.3 STEP 3: BUILD AND VALIDATE METAMODELS
6.4 STEP 4: AGGREGATE PRODUCT SPECIFICATIONS AND FORMULATE
ABSORPTION CHILLER PLATFORM COMPROMISE DSP
6.5 STEP 5: DEVELOP THE ABSORPTION CHILLER PLATFORM PORTFOLIO
6.6 RAMIFICATIONS OF THE RESULTS OF THE ABSORPTION CHILLER
EXAMPLE PROBLEM
6.7 A LOOK BACK AND A LOOK AHEAD
CHAPTER 7
DESIGN OF A FAMILY OF GENERAL AVIATION AIRCRAFT
CHAPTER 8
ACHIEVEMENTS AND RECOMMENDATIONS
8.1 RESEARCH OBJECTIVES AND HYPOTHESES REVISITED
8.2 CRITICAL REVIEW OF THE DISSERTATION
8.3 FUTURE WORK
8.3.1 Future Work in Kriging
8.3.2 Future Work with Space Filling Experimental Designs
8.3.3 Future Work in Product Family and Product Platform Design
APPENDIX A
A MINIMAX LATIN HYPERCUBE DESIGN GENERATOR USING A GENETIC
ALGORITHM
APPENDIX B
KRIGING STEP-BY-STEP
This appendix is intended to supplement the brief description of kriging which is given in Section
2.4.1. In Section B.1, the question of “What is kriging?” is addressed. As part of this section,
three other questions the reader might be asking him/herself are addressed:
• Section B.1.1 - “Why use kriging?”
• Section B.1.2 - “What is a spatial correlation function?”
• Section B.1.3 - “How is a kriging model built, validated, and implemented?”
After these questions are addressed, a simple one-dimensional example is presented, going
step-by-step through the process of building, validating, and using a kriging model.
Six engineering test problems—two two-variable problems, two three-variable problems, and
two four-variable problems—have been selected from the literature to further test the utility of kriging as a
metamodeling technique. The analysis of these problems is simple enough that building kriging
models of the responses is overkill to say the least. However, these problems do serve a
purpose; they have been selected because: (a) they have been well studied, (b) the behavior of
the system and the underlying equations are known, and (c) the optimum solution is also known.
Thus, because the underlying equations and optimum solution are known, it is easy to determine
the utility of kriging on a variety of problems, hence testing Hypotheses 2 and 3 from Section
1.3.1. Each example is described in turn along with the corresponding constraints, bounds, and
objectives. The optimum solution for each problem is also given, and only the pertinent
equations have been numbered.
The principal objective in this dissertation is to develop the Product Platform Concept
Exploration Method (PPCEM) to facilitate the design of a common scalable product platform
for a product family. As the title of this chapter implies, the foundations for developing this
method are presented here. The heart of the chapter lies in Section 1.3 wherein the research
objectives, hypotheses, and contributions for the work are described; this sets the stage for the
chapters that follow, culminating in the development of the PPCEM in Chapter 3. Sections 1.1
and 1.2 contain the motivation, foundation, and context for investigating the proposed research
and serve to establish context for the reader. Specifically, in Section 1.1 the concepts of
product family and product platform design are introduced and defined, and opportunities for
advancing this nascent research area are identified. In Section 1.2, the foundations for the
work are presented, namely, Decision-Based Design, robust design principles, and the Robust
Concept Exploration Method.
1.1 FRAME OF REFERENCE: PRODUCT FAMILY AND PRODUCT
PLATFORM DESIGN
Today’s competitive and highly volatile market is redefining the way companies do
business. “Customers can no longer be lumped together in a huge homogeneous market, but
are individuals whose individual wants and needs can be ascertained and fulfilled” (Pine, 1993).
Companies are being called upon to deliver better products faster and at less cost for customers
who are more demanding in a market which is characterized by words such as mass
customization and rapid innovation. Even government agencies like NASA are reexamining the
way they operate and do business, adopting slogans such as “better, faster, cheaper.”
“The sellers’ market of the fifties and sixties was characterized by high demand and a
relative shortage of supply. Firms produced large volumes of identical products,
supported by mass production techniques. ... The buyer’s market of the eighties and
beyond is forcing companies making specific high-volume products to manufacture a
growing range of products tailored to individual customers’ needs at the cost of
standard mass-produced goods.”
So why the growing concern for satisfying the individual customer? Stan Davis,
the person who coined the term mass customization, captures it best: “The more a company can
deliver customized goods on a mass basis relative to their competition, the greater is their
competitive advantage” (Davis, 1987). Simply stated, companies which offer customized goods
at minimal extra cost have a competitive advantage over those that do not. Pine (1993)
attributes the increasing attention on product variety and customer demand to the saturation of
the marketplace:
“Today, demand for new products frequently has to be diverted from older ones. It is
therefore important for new products to meet customer needs more completely, to be of
higher quality, and simply to be different from what is already in the marketplace.”
Similar themes pervade the texts by Wortmann, et al. (1997), who examine industry’s response
in Europe to the “customer-driven” market, and Anderson (1997), who examines agile product
development for mass customization.
This increasing need to distinguish and differentiate products from competitors is further
illustrated by the following observation:
"The customer now has plenty of choice for almost every product within a price range.
With this increased choice, consumers have become more aware of the good and bad
features of a product...they select the product that most closely fulfills their opinion of
being the best value for the money. This is not just price but a wide range of non-price
factors such as quality, reliability, aesthetics..."
Chinnaiah, et al. (1998) also examine the trend toward mass customized goods, citing more
demanding customers and market saturation as impetus for the shift. Uzumeri and Sanderson
(1997) state that “The emergence of global markets has fundamentally altered competition as
many firms have known it” with the resulting market dynamics “forcing the compression of
product development times and expansion of product variety.” The study by Womack, et al.
(1990) of the automobile industry in the 1980s provides just one of numerous examples of this
trend.
Since many companies typically design new products one at a time, Meyer and Lehnerd
(1997) have found that the focus on individual customers and products results in “a failure to
embrace commonality, compatibility, standardization, or modularization among different
products or product lines.” Similarly, Erens (1997) states that “If sales engineers and designers
focus on individual customer requirements, they feel that sharing components compromises the
quality of their products.” The end result is a “mushrooming” or diversification of products and
parts with proliferating variety and costs. Mather (1995) states that “Rarely does the full
spectrum of product offerings get reviewed at one time to ensure it is optimal for the business.”
Consequently, companies are being faced with the challenge of providing as much
variety as possible for the market with as little variety as possible between products.
Toward this end, the approach advocated in this dissertation and by many strategic
management researchers is to develop a family of products with as much commonality between
products as possible with minimal
compromise in quality and performance. Several engineering examples are presented in the
next section to provide context and foster a better understanding of the product family concept
and how product families have been successfully developed and realized. Research
opportunities in product family and product platform design then are discussed in Section 1.1.2.
The following examples from Sony, Lutron, Nippondenso, Black & Decker, Canon,
and Rolls-Royce exemplify successful product families and have been studied as such.
Additional examples which might interest the reader include: Swiss army knives and Swatch
watches (Ulrich and Eppinger, 1995), Xerox copiers (Paula, 1997), Andersen windows
(Stevens, 1995), Hewlett-Packard printers (see, e.g., Lee, et al., 1993), the Boeing 747 family
of aircraft (see, e.g., Rothwell and Gardiner, 1990), and the Kodak single-use camera.
Sony - Walkman
The design of the Sony Walkman is a classic example of managing the design of a product
family (Sanderson and Uzumeri, 1997). Sony first introduced the Walkman in 1979; it has
dominated the personal portable stereo market for over a decade and has remained the leader
both technically and commercially despite fierce competition from world-class competitors, e.g.,
Matsushita, Toshiba, Sanyo and Sharp. Sony built all of their Walkman models around key
modules and platforms and used modular design and flexible manufacturing to produce a wide
variety of quality products at low cost. Incremental design changes accounted for only 20 to 30 of
the 250+ models Sony introduced in the U.S. in the 1980s. “The remaining 85% of Sony's
models were produced from minor rearrangements of existing features and cosmetic redesigns
of the external case...topological changes [such as these] can be made with little cost or risk”
(Sanderson and Uzumeri, 1995). The basic mechanisms in each platform were refined
continually while following a disciplined and creative approach of focusing its families on clear
market targets.
Lutron - Electronic Lighting Control Systems
When engineers at Lutron design a new product line, they begin with a fairly standard product
with very few options (see, e.g., Spira, 1993). They then work with individual customers to
extend the product line until they eventually have a hundred or so models which customers can
purchase. Then engineering and production work together to redesign the product line with 15
to 20 standardized components that can be configured into the same hundred models from which
customers could initially choose. Additional customization work can be performed to meet
individual customer requirements; in its electronic lighting systems line, used in conference
rooms, ballrooms, and hotel lobbies, Lutron has rarely shipped the same system twice
(Spira, 1993).
Nippondenso - Panel Meters
Nippondenso Co. Ltd. makes automotive components for Toyota, other Japanese car makers,
and car makers in other countries. They design their panel meters using a combinatoric strategy
as illustrated in Figure 1.1. A panel meter is composed of six parts (in rare cases, only five),
and in order to reduce inventory and production costs, each type of part has been redesigned
so that its mating features to its neighbors are identical across the part type. This was done by
standardizing the design (denoted by SD in the figure) in an effort to reduce the number of
variants of each part. Inventory and manufacturing costs were reduced without sacrificing the
product offering. Each zigzag line on the right hand side of Figure 1.1 represents a valid type of
meter, and as many as 288 types of meters can be assembled from 17 different components
(Whitney, 1993).
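The counting behind this combinatoric strategy can be sketched as follows. The per-part variant counts are hypothetical, chosen only so that 17 standardized components combine into 288 meter types, matching the totals Whitney (1993) reports; the actual allocation across part types is Nippondenso's.

```python
from itertools import product

# Hypothetical number of standardized variants for each of the six part
# types in a panel meter. Any valid meter picks one variant of each part.
variants_per_part = [1, 2, 3, 4, 4, 3]

total_components = sum(variants_per_part)  # distinct parts to stock: 17
total_meters = len(list(product(*[range(n) for n in variants_per_part])))

print(total_components, total_meters)  # 17 288
```

The leverage is in the ratio: inventory grows with the sum of the variant counts while the product offering grows with their product, which is exactly why standardizing the mating features pays off.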
Black & Decker - Power Tools
The most common component in all power tools is the universal motor, which Black & Decker
redesigned in the early 1970s. The redesign was in response to the threat of required double
insulation on electrical devices to protect the user from electrical shock if the main insulation
system fails. Double insulation was incorporated into 122 basic tools with hundreds of
variations, from jig saws and grinders to edgers and hedge trimmers. Through standardization
of the product line, Black & Decker was able to produce all of its power tools using a line of
motors that varied only in the stack length and the amount of copper wrapped within the motor.
As a result, all of the motors could be produced on a single machine with stack lengths varying
from 0.8 in to 1.75 in and power output ranging from 60 to 650 watts. Furthermore, new
designs were developed using standardized components such as the redesigned motor, which
allowed products to be introduced, exploited, and retired with minimal additional expense.
Canon - Copiers
Canon has successfully dominated the low-volume end of the copier market since the
mid-1980s. Canon's copiers offer a wide range of functions and market uses, including 500 to
70,000 copies in either black and white or as many as six different colors. To provide this variety,
Canon has a number of different series (base models or platforms) from which variant
derivatives are created to cover most of the customer's economic and technical requirements.
About 80 percent of the components of these copiers are standard; the remaining 20 percent
are altered and modified to produce product variants within the product family, see (Rothwell
Rolls-Royce designs its aircraft engines around a common platform and then “derates” or
“upgrades” the platform to suit specific customer needs (cf., Rothwell and Gardiner, 1990). An
example is the RTM322 engine which was designed to allow several versions to be produced to
cater to different market requirements and power outputs. As shown in Figure 1.2, the
RTM322 platform is common to multiple versions of the engine, namely, the turboshaft,
turbofan, and turboprop. When the RTM322 engine is scaled by a factor of 1.8, the engine
platform becomes the core for the RB550 series, which is produced in two versions: turboprop
and turbofan.
In light of these examples, the following definitions for product family, product platform,
and derivatives and product variants are offered to provide context for the remainder of the
dissertation.
A product family is a group of products which share common form features and
function(s), targeting one or multiple market niches. Here, form features refer
generally to the shape and characterizing features of a product; function refers generally
to the utilization intent of a product. The Sony Walkman product family is one such
example; it contains a variety of models with different features and functions, e.g.,
graphic equalizer, auto-reverse, and waterproof casing, to target specific market niches.
A product platform, in this dissertation, is the common set of design variables around
which a family of products can be developed. In general terms, a product platform is
the common technological base from which a product family is derived through
modification and instantiation of the product platform to target specific market niches
(cf., Erens, 1997; McGrath, 1995; Meyer and Lehnerd, 1997). The universal motor
platform developed by Black & Decker is an example of a successful product platform.
Product platforms are also prevalent in the automobile industry, for example, where
several car models are typically derivatives of a common platform (cf., Siddique and
Rosen, 1998); Kobe (1997) and Naughton (1997) describe GM’s and Honda’s global
platform strategies, respectively.
In light of these examples and definitions, opportunities for making contributions in product
family and product platform design are discussed in the next section.
1.1.2 Opportunities in Product Family Design and Product Platform Design
To identify opportunities for contributions in product family and product
platform design, a closer look at the previous examples is needed. The examples from Lutron,
Nippondenso, and Black & Decker illustrate one approach to
product family design: each company redesigned or consolidated a group of distinct products
to create a more “efficient and effective” product family. Here, efficient and effective refers to
the increased economies of scale each company was able to realize by standardizing
components so as to reduce manufacturing variability (i.e., the variety of parts that are produced
in a given manufacturing facility) and thereby lower inventory and production costs.
While the cost savings in manufacturing and inventory begin almost immediately from this type of
approach, the rewards are typically long-term since the capital investments and redesign costs
can be significant. Black & Decker, for example, estimated that it would take seven years to
reach the break-even point when they redesigned their universal motor platform for double
insulation. In redesign and tooling, Black & Decker spent $17M; however, by paying attention to
standardization and exploiting platform scaling around the motor stack length, all of their motors
could be produced on the same machines. As a result, material costs dropped from $0.77 to
$0.42 per motor while labor costs fell from $0.248 to $0.045 per motor, yielding savings of
$1.82M per year. The cost of Black & Decker tools decreased by as much as 62%.
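A quick consistency check of the figures quoted above (the implied production volume is an inference, not a number from the text):

```python
# Back-of-the-envelope check of the Black & Decker cost figures.
material_saving = 0.77 - 0.42   # $ per motor
labor_saving = 0.248 - 0.045    # $ per motor
per_motor = material_saving + labor_saving  # ~$0.553 per motor

annual_savings = 1.82e6                      # $ per year, as quoted
implied_volume = annual_savings / per_motor  # ~3.3 million motors per year

print(round(per_motor, 3), round(implied_volume / 1e6, 1))
```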
But must a company spend millions of dollars in costly redesign to achieve a good
product family? The answer is obviously no, and the examples from Rolls Royce, Canon, and
Sony demonstrate an alternative approach. These three companies exemplify an a priori or
top-down approach to product family design, i.e., strategically managing and developing a family of
products based on a common platform and its derivatives. McGrath (1995) states that “A clear
platform strategy leverages the resulting products, enabling them to be deployed rapidly”:
“Companies target new platforms to meet the needs of a core group of customers but
design them for easy modification into derivatives through the addition, substitution, or
removal of features. Well-designed platforms also provide a smooth migration path
between generations so neither the customer nor the distribution channel is disrupted.”
Finally, commonality and standardization across product families allow new designs to be
introduced, exploited, and retired with minimal expense related to product development
(Lehnerd, 1987).
As discussed in Section 1.1.1, Sony and Canon have been able to dominate their
respective markets despite serious local and global competition through a well-managed product
platform implementation strategy. The Sony Walkman has been the leader in the personal
stereo market for decades; Sanderson and Uzumeri (1995) studied the success of the Sony
Walkman family and observe:
“Sony's strategy employed a judicious mix of design projects, ranging from large team
efforts that produced major new model 'platforms' to minor tweaking of existing
designs. Throughout, Sony followed a disciplined and creative approach to focus its
subfamilies on clear design goals and target models to distinct market segments. Sony
supported its design efforts with continuous innovation in features and capabilities, as
well as key investments in flexible manufacturing.”
Similarly, Canon was able to capture, and subsequently dominate, the low-end copier market from
Xerox through careful development and realization of a family of products derived from
common platforms (Jacobson and Hillkirk, 1986). Companies like Xerox now are in the
process of reengineering their product development processes to facilitate the design and
development of new families of copiers in record time (Paula, 1997). Along these same lines,
Rolls Royce can boast similar success. By scaling the RTM322 engine platform to satisfy a
range of thrust and power requirements, Rolls Royce was able to (a) reduce manufacturing and
inventory costs by using similar modules and components from one engine to the next and, more
importantly, (b) facilitate the costly certification phase of its engine development process.
Good product platforms do not just come off the shelf; they must be carefully planned,
designed, and developed. This requires intimate knowledge of customer requirements and a
thorough understanding of the market. However, as discussed in the literature review in Section
2.2.1, many of the tools and methods which have been developed to facilitate the
management and development of effective product platforms and product families are
attention directing tools which offer little support for engineering
modeling and design synthesis. Meanwhile, engineering design methods and tools for
synthesizing product families and product platforms are limited or slowly evolving. Consider the
brief summary in Table 1.1 of the product family examples from Section 1.1.1 and the
availability of design support. The majority of the examples from Section 1.1.1 require modular
design to facilitate upgrading and derating product variants through the addition and removal of
modules. Several clustering approaches have been developed to reduce variability within a product family and
facilitate redesigning product families to improve component commonality, see Section 2.2.3.
Meanwhile, little to no attention has been paid to platform scaling issues for product
families.
Table 1.1 Product Family Examples: Approach and Available Support
• In many product families, scalability can be exploited from both a technical standpoint
and a manufacturing standpoint to increase the potential benefits of having a common
product platform. The Rolls Royce RTM322 engine and the Black & Decker universal
motor are excellent examples of this.
• Finally, and perhaps most importantly, the concept of scalability and scalable product
platforms provides an excellent inroad into product family and product platform design
through the synthesis of current research efforts in Decision-Based Design and the
Robust Concept Exploration Method (described in Sections 1.2.1 and 1.2.2,
respectively), robust design (described in Section 2.3), and tools from
marketing/management science (described in Section 2.2.1).
How can a common scalable product platform be modeled and designed for a
product family?
The Product Platform Concept Exploration Method (PPCEM) is developed in this dissertation
to provide a Method which facilitates the synthesis
and Exploration of a common Product Platform Concept which can be scaled into an
appropriate family of products. The PPCEM and its associated tools and steps are introduced
in Section 3.1. The underlying assumption behind the PPCEM is that a common set of
specifications (i.e., design variable settings) can be found for a product platform which can then
be scaled in one or more of its “dimensions” to realize a product family. This product family can
then satisfy a wide variety of customer requirements with minimal compromise in individual
product quality and performance even though the product family is derived from a common
platform through scaling. Although the PPCEM is predominantly a method for parametric or
scaling-based platform design, it also offers the potential to reduce
costs through better economies of scale and amortization of capital investment over a wider
variety of derivative products based on the common product platform. In special cases, such as
the Rolls Royce RTM322 engine platform mentioned earlier and the Boeing 747 series of
aircraft, an added benefit of scaling a common product platform is to expedite the testing and
certification phase of development (cf., Rothwell and Gardiner, 1990). The foundation for
developing this approach is presented in the next section. The specific research focus for the
dissertation is presented in Section 1.3.
The technology base for the dissertation is described in this section. An overview of
Decision-Based Design, the design paradigm subscribed to in this dissertation, is given in Section
1.2.1, and an overview of the Robust Concept Exploration Method (from which the Product Platform
Concept Exploration Method is developed) is given in Section 1.2.2.
1.2.1 Decision-Based Design, the Decision Support Problem Technique, and the
Compromise Decision Support Problem
DecisionBased Design (DBD) is rooted in the notion that the principal role of a
designer in the design of an artifact is to make decisions (see, e.g., Muster and Mistree, 1988).
This role is useful in providing a starting point for developing design methods based on
paradigms that spring from the perspective of decisions made by designers rather than from
computational methods (computer-aided design optimization) or methods that evolve from
specific analysis tools. One such paradigm is the
Decision Support Problem (DSP) Technique (see, e.g., Bras and Mistree, 1991), a technique
that supports human judgment in designing systems that can be manufactured and maintained.
In the DSP Technique, designing is defined as the process of converting information that
characterizes the needs and requirements for a product into knowledge about a product
(Mistree, et al., 1990). This definition is extended easily to product family design: the process
of converting information that characterizes the needs and requirements for a product family into
knowledge about a product family, or as is the case of this work, a common scalable product
platform. A complete description of the DSP Technique can be found in, e.g., (Mistree, et
al., 1990).
Among the tools available within the DSP Technique, the compromise DSP (Mistree, et
al., 1993) is a general framework for solving multiobjective, nonlinear, optimization problems.
In this dissertation, the compromise DSP is central to modeling multiple design objectives and
assessing the trade-offs pertinent to product family and product platform design. Examples of
these trade-offs are discussed in the context of the two example problems in Chapters 6 and 7.
The compromise DSP is a hybrid formulation based on Mathematical Programming and Goal Programming (Mistree, et al.,
1993), see Figure 1.3. The compromise DSP is used to determine the values of the design
variables which satisfy a set of constraints and bounds and achieve as closely as possible a set
of conflicting goals. It is solved using the Adaptive Linear Programming
(ALP) algorithm which is based on sequential linear programming and is implemented in the DSIDES
software. System goals can be weighted using an Archimedean
scheme or rank-ordered into priority levels using a preemptive approach to effect a solution on
the basis of preference. For the preemptive approach, the lexicographic minimum concept
(Ignizio, 1985) is used to evaluate different design scenarios quickly by changing the priority
levels of the goals to be achieved. The capabilities of the lexicographic minimum concept are
employed to develop the product platform portfolio as discussed in Section 3.1.4, with further
examples in Sections 6.4 and 7.5. Differences between the Archimedean and preemptive
deviation functions and a description of the ALP algorithm, design and deviation variables,
system constraints, goals, and bounds are discussed by Mistree, et al. (1993).
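The lexicographic minimum concept can be illustrated with a toy comparison of deviation values at successive priority levels (hypothetical numbers, not from the dissertation):

```python
# Lexicographic comparison of deviation functions: an alternative is
# preferred if it achieves smaller deviations at the highest priority
# level, breaking ties at successively lower levels.  Hypothetical data.
candidates = {
    "design A": (0.0, 0.4, 0.9),   # deviations at priority levels 1..3
    "design B": (0.0, 0.2, 1.5),
    "design C": (0.1, 0.0, 0.0),
}

# Python tuples compare lexicographically, which is exactly the
# lexicographic-minimum concept used with the preemptive formulation.
best = min(candidates, key=candidates.get)
print(best)  # design B: it ties A at level 1 and beats it at level 2
```

Note that design C has the smallest total deviation, yet it loses at the highest priority level; the preemptive approach never trades first-level achievement for gains at lower levels.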
Given
  An alternative to be improved. Assumptions used to model the domain of interest.
  The system parameters:
    n       number of system variables
    p + q   number of system constraints (p equality constraints, q inequality constraints)
    m       number of system goals
    gi(x)   system constraint functions
    fk(di)  function of deviation variables to be minimized at priority level k for the preemptive case
Find
  The values of the independent system variables:
    xi,  i = 1, ..., n
  The values of the deviation variables:
    di−, di+,  i = 1, ..., m
Satisfy
  System constraints (linear, nonlinear):
    gi(x) = 0,  i = 1, ..., p;   gi(x) ≥ 0,  i = p+1, ..., p+q
  System goals (linear, nonlinear):
    Ai(x) + di− − di+ = Gi,  i = 1, ..., m
  Bounds:
    ximin ≤ xi ≤ ximax,  i = 1, ..., n
    di−, di+ ≥ 0  and  di− · di+ = 0,  i = 1, ..., m
Minimize
  Preemptive deviation function (lexicographic minimum):
    Z = [ f1(di−, di+), ..., fk(di−, di+) ]
The solution obtained from the compromise DSP is called a satisficing solution
because it is a feasible point that achieves the system goals to the “best” extent that is possible.
This notion of satisficing solutions is in philosophical harmony with the notion of developing a
broad and robust set of top-level design specifications. The efficacy of the compromise DSP in
creating ranged sets of top-level design specifications has been demonstrated in both aircraft
design (Lewis, et al., 1994; Simpson, et al., 1996) and ship design (Smith and Mistree, 1994).
Developing ranged sets of top-level design specifications is generalized in this dissertation into
the notion of a product platform portfolio; by maintaining a
“portfolio” of solutions rather than a single point solution, greater design flexibility can be
maintained during the design process. Finally, the compromise DSP also provides the
cornerstone of the Robust Concept Exploration Method which is reviewed in the next section.
1.2.2 The Robust Concept Exploration Method
The Robust Concept Exploration Method (RCEM) has been developed to facilitate
quick evaluation of different design alternatives and generation of top-level design specifications
with quality considerations in the early stages of design (see, e.g., Chen, et al., 1996a). It is
primarily useful for designing complex systems whose design analyses are computationally
expensive. The RCEM is created by integrating several methods and tools—robust design
methods (see, e.g., Phadke, 1989), the Response Surface Methodology (see, e.g., Myers and
Montgomery, 1995), and Suh's Design Axioms (Suh, 1990)—within the compromise DSP
(Mistree, et al., 1993). A wide variety of applications that have successfully employed the
RCEM can be found in the literature (see, e.g., Chen, et al., 1997).
The RCEM is a four-step process as illustrated in Figure 1.4. The corresponding
computer infrastructure is illustrated in Figure 1.5. The steps are described as follows.
Step 1 - Classify Design Parameters: Given the overall design requirements, this step
involves the use of Processor A, see Figure 1.5, to (a) classify different design
parameters as either control factors, noise factors, or responses following the
terminology used in robust design, and (b) define the concept exploration space.
Step 2 - Screening Experiments: This step requires the use of the point generator
(Processor B), simulation programs (Processor C), and an experiment analyzer
(Processor D) shown in Figure 1.5 to set up and perform initial screening
experiments and analyze the results. The results of the screening experiments are used
to (a) fit loworder response surface models, (b) identify significant main effects, and (c)
reduce the design region.
Step 3 - Elaborate the Response Surface Model: This step also requires the use of the
point generator (Processor B), simulation programs (Processor C), and experiment
analyzer (Processor D) to set up and perform secondary experiments and analyze the
results. The results from the secondary experiments are used to (a) fit secondorder
response surface models (using Processor E) which replace the original computer
analyses, (b) identify key design drivers and the significance of different design factors
and their interactions, and (c) quickly evaluate different design alternatives and answer
"whatif" questions in Step 4.
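As a concrete illustration of the response surface fitting in Steps 2 and 3 (a minimal sketch with hypothetical data, not the RCEM's Processor E), a second-order model can be fit by ordinary least squares:

```python
# Fitting a second-order response surface by least squares.  The
# "simulation" here is a hypothetical quadratic, so the fitted
# coefficients should recover it almost exactly.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))   # 30 sampled design points
x1, x2 = X[:, 0], X[:, 1]
y = 3.0 + 2.0*x1 - 1.5*x2 + 0.5*x1**2 + 0.25*x1*x2 - 0.75*x2**2

# Second-order model terms: 1, x1, x2, x1^2, x1*x2, x2^2
F = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1*x2, x2**2])
beta, *_ = np.linalg.lstsq(F, y, rcond=None)

print(np.round(beta, 6))  # ~[3.0, 2.0, -1.5, 0.5, 0.25, -0.75]
```

Once fit, such a surrogate replaces the expensive analysis when exploring alternatives and answering "what-if" questions.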
[Figure 1.4 Steps, methods, and tools of the RCEM: overall design requirements feed Step 2,
conduct “screening experiments” (Response Surface Methods, DOE/ANOVA statistical
methods), and Step 3, elaborate response surface models.]
The RCEM is taken as the foundation for the research work in this dissertation for
several reasons, including its:
• demonstrated effectiveness for complex systems and robust design, see, e.g., (Chen, et
al., 1997),
The usefulness of these features of the RCEM to this research work is elaborated throughout
the dissertation, particularly in Sections 3.1 and 3.2 wherein the PPCEM is introduced. The
research objectives for the dissertation are described in the next section.
The research focus in this dissertation is characterized by:
• a set of research questions that capture motivation and specific issues to be addressed,
• a set of corresponding research hypotheses that offer a context by which the research
proceeds, defining the structure of the verification studies performed in this work, and
• a set of resulting research contributions that embody the deliverables from the research
in terms of intellectual value, a repeatable method of solution, limitations, and avenues of
further investigation.
The research questions are presented in Section 1.3.1 along with the corresponding research
hypotheses. The research hypotheses (and supporting posits) are discussed in more detail in
Section 3.2 along with issues of verification and validation. The resulting research contributions
are described in Section 1.3.2.
1.3.1 Research Questions and Hypotheses in the Dissertation
The principal goal in this dissertation is the development of a method to facilitate the
design of a scalable product platform around which a family of products can be developed. As
discussed in the previous section, DecisionBased Design and the RCEM provide the
foundation on which this work is built. Given this foundation and goal, the motivation for this
research is embodied in the primary research question identified in Section 1.1.2 which is
repeated here.
Q1. How can a common scalable product platform be modeled and designed for a
product family?
This research question is related directly to the principal goal in this research which is to
advance product family design through the development of a method to design a scalable
product platform for a product family. The following hypothesis is investigated in this
dissertation in response to Question 1.
Hypothesis 1: The Product Platform Concept Exploration Method is an efficient and effective method
for designing a common product platform which can be scaled to realize a product
family.
Since Question 1 is quite broad, three supporting research questions and sub-hypotheses
are proposed to facilitate the verification of Hypothesis 1. The supporting questions
are as follows.
Q1.1. How can product platform scaling opportunities be identified from overall
design requirements?
Q1.2. How can robust design principles be used to facilitate designing a common
scalable product platform?
Q1.3. How can individual targets for product variants be aggregated and modeled for
product platform design?
Sub-Hypothesis 1.1: The market segmentation grid can be utilized to help identify
platform scaling opportunities from overall design requirements.
Sub-Hypothesis 1.2: Robust design principles can be used to facilitate the design of a
common scalable product platform.
Sub-Hypothesis 1.3: Individual targets for product variants can be aggregated into an
appropriate mean and variance and used in conjunction with robust design
principles to design a common scalable product platform.
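As a toy illustration of the aggregation idea in Sub-Hypothesis 1.3 (hypothetical targets; not the aggregation scheme developed later in the dissertation):

```python
# One way to aggregate individual performance targets for four
# hypothetical product variants into a mean and variance for use with
# robust design (illustrative only).
import statistics

targets = [300, 450, 600, 750]  # hypothetical variant targets (e.g., watts)

mean_target = statistics.mean(targets)      # 525.0
var_target = statistics.pvariance(targets)  # population variance, 28125

# A robust platform design would then aim its mean response at
# mean_target while keeping response variation across the scaled
# variants commensurate with var_target.
print(mean_target, var_target)
```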
There is a one-to-one correspondence between each supporting question and sub-hypothesis.
The sub-hypotheses are stated here primarily to provide context for the literature
review in the next chapter and the development of the PPCEM in Section 3.1. The strategy for
verifying the hypotheses is outlined in Section 3.3.
In addition to the primary research question related to the design of scalable product
platforms, two secondary research questions are also investigated in this dissertation.
Q2. Is kriging a viable metamodeling technique for building approximations of deterministic
computer analyses?
Q3. Are space filling designs better suited for building approximations of deterministic
computer analyses than classical experimental designs?
Metamodeling techniques—specifically, second-order
response surface models—are employed in the RCEM to facilitate concept exploration and the
generation of top-level design specifications; however,
techniques such as kriging may be better suited for building approximations of deterministic
computer analyses than the response surface models currently employed in Steps 2 and 3 of the
RCEM (see Section 1.2.2). Moreover, the traditional or “classical” experimental designs which
are typically used to sample the design space by querying the computer code to generate data
to build these approximations may not be well-suited for deterministic computer analyses either;
hence, alternative “space filling” designs also are investigated as part of the research in this
dissertation. The specific hypotheses, which are investigated in response to the secondary
research questions, are as follows.
Hypothesis 2: Kriging is a viable metamodeling technique for building approximations
of deterministic computer analyses.
Hypothesis 3: Space filling experimental designs are better suited for building
approximations of deterministic computer analyses than classical experimental
designs.
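To make the space filling idea concrete, here is a naive random-search Latin hypercube generator scored by the maximin distance criterion (a sketch only; the dissertation's minimax Latin hypercube designs, detailed in Appendix C, optimize a different distance criterion):

```python
# Naive random-search maximin Latin hypercube sampler (illustrative).
import itertools
import random

def latin_hypercube(n, k, rng):
    """n points in k dimensions; each axis is a permutation of 0..n-1."""
    cols = [rng.sample(range(n), n) for _ in range(k)]
    return list(zip(*cols))

def min_pairwise_dist2(pts):
    """Squared distance between the closest pair of points."""
    return min(sum((a - b)**2 for a, b in zip(p, q))
               for p, q in itertools.combinations(pts, 2))

rng = random.Random(0)
# Keep the candidate whose closest pair is farthest apart (maximin).
best = max((latin_hypercube(5, 2, rng) for _ in range(200)),
           key=min_pairwise_dist2)
print(sorted(best))  # 5 well-spread points on a 5x5 grid
```

The Latin hypercube constraint guarantees every level of every variable is sampled once, and the maximin score pushes the points apart so the design fills the space rather than clustering.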
The motivation for these last two research questions and hypotheses is discussed in
Section 2.4 wherein the limitations of response surface modeling and design of experiments
techniques within the RCEM are discussed in greater detail. It is worth noting that Hypotheses
2 and 3 are related to Hypothesis 1 but have implications which extend beyond product family
design.
The relationship between the hypotheses and the various sections of the dissertation is
summarized in Table 1.2. The hypotheses are elaborated more in the literature review in the
next chapter in the sections listed in the table and revisited in Chapter 3 after the Product
Platform Concept Exploration Method is presented. Verification and validation issues are
discussed in Section 3.3, and testing of the individual hypotheses commences in Chapter 4,
lasting until Chapter 7. Although it is not noted in the table, Chapter 8 contains a review of the
hypotheses and their verification. The resulting contributions from these hypotheses are
described in the next section to provide context for the development of the research in the
dissertation.
Table 1.2 Relationship Between the Hypotheses and the Dissertation
Hypothesis                                                                Sections Discussed              Sections Tested
H1    Product Platform Concept Exploration Method                         Chp 3                           Chp 6 & 7
SH1.1 Usefulness of market segmentation grid                              §2.2.1, §3.1.1, §3.1.2, §3.2    §6.2, §7.1.3
SH1.2 Robust design of scalable product platform                          §2.3, §3.1.2, §3.1.4, §3.2      §6.3-6.5, §7.4-7.6
SH1.3 Aggregating product family specifications                           §2.3.3, §3.1.4, §3.2            §6.3-6.5, §7.4-7.6
H2    Utility of kriging for metamodeling deterministic computer
      experiments                                                         §2.4.1, §2.4.2, §3.1.3, §3.2    Chp 4, §5.2, §7.3
H3    Utility of space filling experimental designs                       §2.4.3, §3.1.3, §3.2            §5.3
The hypotheses and sub-hypotheses, taken together, define the research presented in
this dissertation and hence the contributions from the research. As evidenced by the principal
goal in the dissertation and Hypothesis 1, the PPCEM is the primary contribution in the
dissertation. Additional contributions include:
• The notion of scale factors in product platform design and a means of identifying them
for a product platform: Sections 2.3, 3.1.1, 3.1.2, 6.2, and 7.1-7.2.
• An abstraction of robust design principles for realizing scalable product platforms for
product family design: Sections 2.3, 3.1.2, 3.1.4, 6.3-6.5, and 7.4-7.6.
• An algorithm to build, validate, and use a kriging model: Section 2.4.2, Chapters 4, 5,
and 7, and Appendix A.
• An algorithm for generating minimax Latin hypercube designs: Section 2.4.3 and
Appendix C.
This being the first chapter of the dissertation, these contributions cannot be substantiated;
therefore, they are revisited in Section 8.1 after all of the research findings have been
presented. The organization of the dissertation is illustrated in
Figure 1.6. Having laid the foundation by introducing the research questions and hypotheses for
the work in this chapter, the next chapter contains a literature review of related research,
elucidating the problems and opportunities in product family and product platform design.
Three research areas are reviewed: (1) product family and product platform design with
particular emphasis on scalability and sizing, (2) robust design and its application in engineering
design, and (3) statistical metamodeling and its role in engineering design, see Sections 2.2, 2.3,
and 2.4, respectively. A discussion of how these disparate research areas relate to one another
is also included; in Chapter 3, they are
synthesized into a method for designing a scalable product platform for a product family. The
PPCEM and its associated steps are presented in Section 3.1. After the PPCEM is presented,
the research hypotheses are revisited in Section 3.2, and supporting posits are stated and
substantiated. Section 3.3 contains an outline of the strategy for verification and testing of the
hypotheses which includes a preview of Chapters 4 and 5—wherein Hypotheses 2 and 3 are
tested—and Chapters 6 and 7 wherein the PPCEM is applied to two example problems.
Testing of the hypotheses begins in Chapter 4, but Chapters 4 and 5 entail a brief
departure from product platform design; they are nonetheless an integral part of the development of the
dissertation. Chapter 4 contains an introduction to kriging to
familiarize the reader with the method and to begin to verify Hypothesis 2 by comparing
accuracy of kriging models to secondorder response surface models, the current standard in
metamodeling. In Chapter 5, an extensive study of six engineering test problems selected from
the literature is conducted to determine the utility of kriging metamodels and various
experimental designs.
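The kriging models compared in these chapters can be sketched minimally as follows, assuming a one-dimensional input and a Gaussian correlation function with a fixed correlation parameter (the formulation developed in Chapter 4 is more general):

```python
# Bare-bones ordinary-kriging sketch (Gaussian correlation, fixed
# theta) in one dimension -- an assumption-laden illustration, not the
# fitting procedure used in the dissertation.
import numpy as np

def kriging_predict(x_new, X, y, theta=10.0):
    """Ordinary kriging predictor with Gaussian correlation."""
    R = np.exp(-theta * (X[:, None] - X[None, :])**2)  # sample correlations
    r = np.exp(-theta * (x_new - X)**2)                # corr. to new point
    one = np.ones_like(y)
    Rinv = np.linalg.inv(R + 1e-10 * np.eye(len(X)))   # jitter for stability
    mu = (one @ Rinv @ y) / (one @ Rinv @ one)         # GLS estimate of mean
    return mu + r @ Rinv @ (y - mu * one)

X = np.linspace(0.0, 1.0, 6)
y = np.sin(2 * np.pi * X)        # deterministic "computer analysis"

# Kriging interpolates: the prediction at a sample site reproduces y.
print(abs(kriging_predict(X[0], X, y) - y[0]) < 1e-6)  # True
```

Interpolation is the property that makes kriging attractive for deterministic computer analyses: unlike a least-squares response surface, it passes exactly through every sampled response.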
Once the kriging/DOE study is completed in Chapter 5, the first of two examples used
to demonstrate the PPCEM and verify its associated hypotheses is given in Chapter 6: the
design of a family of universal electric motors. This first example employs the PPCEM without
any metamodeling, providing “proof of concept” that the method works. Then, in Chapter 7 the
PPCEM is applied to the design of a family of General Aviation aircraft, making full use of the
kriging metamodels and robust design capabilities. In each chapter, an overview of the problem
is given along with pertinent analysis information, the steps of the PPCEM are performed, and
the results are discussed.
[Figure 1.6 Dissertation road map (relevance and hypotheses by chapter): Chapter 1,
introduction, motivation, and problem identification; Chapter 3, introduce the PPCEM and its
steps; Chapter 6, design of a family of universal motors, demonstrating implementation of the
PPCEM without metamodels to provide proof of concept and initial verification of the method
(verifies H1, SH1.1, SH1.2, and SH1.3); Chapter 8, closing remarks, summarizing
contributions and limitations and identifying avenues of future work.]
Chapter 8 is the final chapter in the dissertation and contains a summary of the
dissertation, emphasizing answers to the research questions and resulting research contributions
in Sections 8.1 and 8.2, respectively. Possible avenues of future work are discussed in Section
8.3.
There are six appendices which supplement the dissertation. Appendix A contains a
description of the kriging algorithm used in this dissertation, and Appendix B
contains detailed descriptions of the experimental designs investigated in the kriging/DOE study
in Chapter 5; the minimax Latin hypercube design, which is introduced in Section 2.4.3 and
developed as part of this dissertation, is detailed in Appendix C.
Appendix D contains descriptions of the six engineering test problems used in the
kriging/DOE study, and supplemental information for the kriging/DOE study is given in
Appendix E. Supplemental information for the General Aviation aircraft problem in Chapter 7 is
given in Appendix F.
The dissertation road map figure is read
from bottom to top, beginning with the foundation provided in this chapter: Decision-Based
Design and the Robust Concept Exploration Method. This figure provides a road map for the
dissertation, and it is referred to at the end of each chapter to help guide the reader through the
dissertation.
[Dissertation road map figure, read bottom to top: Chapters 3 and 4 (Product Platform
Concept Exploration Method), Chapter 5 (including nozzle design), Chapters 6 and 7 (platform
design examples), and Chapter 8 (Achievements and Recommendations).]
CHAPTER 2
Given the research focus identified in Section 1.3, a survey of relevant work in product
family and product platform design, robust design, and metamodeling is presented in this chapter
in Sections 2.2, 2.3, and 2.4, respectively. A thorough description of what is in this chapter and
how these disparate fields of research relate to each other is offered in Section 2.1. In Section
2.2, the tools and methods for designing product families and product platforms introduced in
Section 1.1.2 are discussed in more detail. Section 2.3 then contains a review of robust design
principles, focusing on robust design opportunities in product family and product platform
design. This segues into a discussion of metamodeling and approximation techniques in Section
2.4 to facilitate the implementation of robust design. In particular, the kriging approach to
building metamodels of deterministic computer experiments is described in Section 2.4.2,
and a variety of space filling experimental designs for
querying a computer code to build kriging models are described in Section 2.4.3. Section 2.5
concludes the chapter with a summary of what has been presented and a preview of what is
next.
2.1 WHAT IS PRESENTED IN THIS CHAPTER
In the preceding chapter, product families and product platforms were introduced along
with several illustrative examples. In this chapter, a literature review of tools and methods which
facilitate the development of product families and product platforms is presented; the focus is on
three areas: (1) approaches for product family and product platform design, (2) robust design
principles and their implementation, and (3) metamodeling, in Sections 2.2, 2.3, and 2.4,
respectively. At first glance, these three research areas appear unrelated; however, transitional
elements presented at the end of each section preface the discussion in the section that follows
as the literature review moves from the general area of product family design to the specific area
of metamodeling, see Figure 2.1. The relevant hypotheses covered in each section are noted in
Figure 2.1.
[Figure 2.1 Overview of the literature review: the market segmentation grid and the scalable
product platform (H1, SH1.1) connect through robust design principles, conceptual noise
factors, and mean-and-variance modeling (SH1.2, SH1.3) to metamodeling with kriging (H2)
and space filling DoE (H3).]
As shown in Figure 2.1, the discussion in Section 2.2 explores in greater depth some of
the tools and approaches for product family and product platform design including: product
family maps and the market segmentation grid (Meyer, 1997); approaches to product family
and product platform design; and finally, the notion of a scalable product platform (Rothwell and
Gardiner, 1990). The work by Rothwell and Gardiner then is used to provide a transition to a
discussion of robust design principles in Section 2.3 by relating Rothwell and Gardiner’s
concept of “robust design” for product families to the idea of a “conceptual noise factor” in a
distributed design environment as introduced in (Chang and Ward, 1995; Chang, et al., 1994).
This notion of a “conceptual noise factor” then is extended to scale factors within a scalable
product platform, providing a means to abstract robust design principles for application in product family design.
In Section 2.3, the focus also shifts from extending Taguchi’s robust design to its
implementation within the Robust Concept Exploration Method (RCEM), i.e., through the use
of metamodels. The RCEM is revisited at the beginning of Section 2.4, providing a transition
from robust design to utilizing metamodels to
facilitate its implementation, as alluded to in Figure 2.1. The general approach to metamodeling
also is discussed in the beginning of Section 2.4, followed by a closer look at some of the
limitations of second-order response surface models in engineering design in Section 2.4.1. This
discussion provides the impetus for a closer look at two specific aspects of metamodeling—
model selection and experimental sampling—which also are investigated as part of this research.
Specifically, kriging and space filling experimental designs are examined as potential alternatives
to the response surface methods and classical design of experiments (DOE) currently employed
in the RCEM. Taken together, this literature review provides the necessary elements for the
development of the Product Platform Concept Exploration Method for designing scalable
product platforms for a product family as presented in Chapter 3. Toward this end, the state-of-the-art in product family and product platform design is discussed in the next section.
2.2 PRODUCT FAMILY AND PRODUCT PLATFORM DESIGN TOOLS AND
METHODS
As stated in Section 1.1, in order to provide as much variety as possible for the market
with as little variety as possible between products, many researchers advocate a product
platform and product family approach to satisfy effectively a wide range of customer needs. In
Section 2.2.1, several attention directing tools developed to facilitate product family and
product platform design are presented. In Section 2.2.2, metrics for assessing product platform
effectiveness are discussed. Finally, in Section 2.2.3, methods for product family design are
reviewed.
2.2.1 Attention Directing Tools for Product Family and Product Platform Design
A large portion of the work in strategic marketing and management is focused on either
categorizing or mapping the evolution and development of product families. These maps
typically are applied a posteriori to a product family but can be used a priori to identify new
directions for product development within the product family. Examples of product family maps
include the work by Meyer and Utterback (1993) and Wheelwright and Sasser (1989); a brief description of each follows.
Meyer and Utterback (1993) use the Product Family Map shown in Figure 2.2 to trace
the evolution of a product family. In their map, each generation of the product family employs a
platform as the foundation for targeting specific products at different (or complementary)
markets. Improved designs and new technologies spawn successive generations, and cost
reductions and the addition and removal of features can lead to new products. Multiple
generations can be planned from existing ones, expanding to different markets or revitalizing old
ones. A more formal map, with four levels of hierarchy in the product family (i.e., product
family, product platforms, product extensions, and specific products) also is introduced in their
work in an effort to assess the dynamics of a firm’s core capabilities for product development.
Figure 2.2 Product Family Map (adapted from Meyer and Utterback, 1993)
In related work, Wheelwright and Sasser (1989) have developed the Product
Development Map to trace the evolution of a company’s product lines, see Figure 2.3. In
addition to mapping the evolution of the product line, they also categorize a product line into
“core” and “leveraged” products, dividing leveraged products into “enhanced,” “customized,” and “cost reduced” variants:
“These distinctions—core, hybrid, and the others—are immediately useful because they
give managers a way of thinking about their products more rigorously and less
anecdotally. But the various turns on the product map—the various “leverage points”—
also serve as crucial indicators of previous management assumptions about the
corporate strengths and market forces shaping product evolutions.” (Wheelwright and
Sasser, 1989, p. 114)
Figure 2.3 Product Development Map (adapted from Wheelwright and Sasser, 1989)
As shown in Figure 2.3, the core product, typically derived from an engineering
prototype, provides the engineering platform upon which further enhancements are made.
Enhanced products are developed from the core by adding distinctive features to target specific
market niches; enhanced products are typically the first products leveraged from the core
product. Enhanced products can be customized further to provide more choice if necessary.
Cost-reduced products are “scaled” or “stripped” down versions (e.g., less expensive materials
and fewer features) of the core which are targeted at price-sensitive markets. Finally, the hybrid
product is an entirely new design, resulting from the combination of characteristics of two or
more core products. As an example, the evolution of three generations of a family of vacuum cleaners is mapped in their work.
These product family maps are useful attention directing tools for product family design
and development but offer little direction for designing a scalable product platform. Toward this
end, the market segmentation grid developed by Meyer (1997) facilitates identifying leveraging opportunities within a product family, see Figure 2.4.

Figure 2.4 Market Segmentation Grid (adapted from Meyer, 1997)

In the market segmentation grid, market segments for related
products are listed horizontally in the grid. The vertical axis reflects different tiers of price and
performance within each market segment. Several example instantiations of this grid can be
found in (Meyer, 1997; Meyer and Lehnerd, 1997) for companies such as Hewlett Packard.
This simple market segmentation grid can be used by firms to segment their markets,
helping to define a clear product platform strategy. For instance, a marketing strategy which
employs no leveraging is shown in Figure 2.5a. Companies which fail to maintain a good
platform leveraging strategy often have too many products that share too little technology.

Three types of platform leveraging strategies can be identified within the market
segmentation grid, as shown in Figure 2.5b-d. All three leveraging strategies enable a more efficient and effective
product family to be developed. Examples of these leveraging strategies include the following
(Meyer, 1997):
• Vertical leveraging – a product platform is leveraged to address a range of
price/performance tiers within a specific market segment. A company which excels in
the high-end segment of its market may scale down its platform into lower
price/performance tiers by removing functionality from its high-end platform to achieve
lower priced products. The other option is to scale up a low-end platform by adding
more powerful component technologies or modules to meet the higher performance
demands for the higher tiers. The main benefit of this approach is the capability of the
company to leverage its knowledge about a particular market niche without having to
develop a new platform for each price/performance tier. The Rolls Royce RTM322
engine and Canon’s low-end copiers discussed in Section 1.1.1 exemplify this
approach.
Figure 2.5 Platform Leveraging Strategies within the Market Segmentation Grid (adapted from Meyer, 1997): (a) no platform leveraging; (b)-(d) vertical, horizontal, and beachhead leveraging strategies
• Beachhead approach – combines horizontal and vertical leveraging to achieve perhaps the
most powerful platform leveraging strategy. In a beachhead approach, a company
develops a low-cost, effective platform for a particular market segment and then scales
up the performance characteristics of the platform and adds other features to target new
market segments. The example of Compaq computers is offered in (Meyer, 1997).
Compaq entered the personal computer market in 1982 and, after establishing a
foothold in the portable computer market niche, slowly introduced a stream of new
products for other market segments and different price/performance tiers, including a
line of desktop PCs for business and home use. Of the examples discussed in Section
1.1.1, the Sony Walkman, Black & Decker’s universal electric motor platform, and
Lutron’s lighting systems also exemplify this type of approach to platform leveraging.
Sony initiated a beachhead approach from the start with their Walkman product lines.
The same is not true for Black & Decker and Lutron. Both companies began with no
leveraging strategy and only after redesigning their product lines as discussed in Section
1.1.2 were they able to achieve a more efficient and effective beachhead approach.
Consequently, they are now both leaders in their respective fields.
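To make the grid concrete, the leveraging strategies above can be sketched as a simple lookup from grid cells to platforms; everything below (segment, tier, and platform names) is an illustrative assumption, not an example from the dissertation.

```python
# Hedged sketch: a market segmentation grid (after Meyer, 1997) as a mapping
# from (tier, segment) cells to platforms. All names here are hypothetical.
segments = ["Segment A", "Segment B", "Segment C"]  # horizontal axis
tiers = ["High", "Mid", "Low"]                      # price/performance tiers

# Map (tier, segment) -> platform identifier; absent cells are unserved.
grid = {
    ("High", "Segment A"): "P1",
    ("Mid", "Segment A"): "P1",   # vertical leveraging: P1 scaled across tiers
    ("Low", "Segment A"): "P1",
    ("Mid", "Segment B"): "P2",
    ("Mid", "Segment C"): "P2",   # horizontal leveraging: P2 reused across segments
}

def platforms_per_segment(grid, segments, tiers):
    """Count the distinct platforms serving each market segment."""
    counts = {}
    for seg in segments:
        used = {grid[(t, seg)] for t in tiers if (t, seg) in grid}
        counts[seg] = len(used)
    return counts

print(platforms_per_segment(grid, segments, tiers))
```

A segment whose tiers are all served by one platform suggests vertical leveraging; one platform appearing across several segments suggests horizontal leveraging.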
The market segmentation grid provides a useful attention directing tool to help map and
identify product platform leveraging opportunities within a product family, providing an answer
to the question:
Q1.1. How can product platform scaling opportunities be identified from overall
design requirements?
Keep in mind, however, that the market segmentation grid is only an attention directing tool;
it remains the company’s task to formulate a platform leveraging strategy and exploit scaling opportunities within a product family. The market
segmentation grid is simply a way of representing that strategy, providing a clear mapping of
product leveraging opportunities within the product family. Use of the market segmentation grid
to help identify scaling opportunities within the Product Platform Concept Exploration Method is
further elaborated in Section 3.1.1. In the next section, metrics for assessing product platforms
are discussed.

2.2.2 Metrics for Assessing Product Platform Effectiveness
Several metrics and cost models have been developed to assess either the efficiency
and effectiveness of a product platform or the commonality between a group of products within
a product family. Meyer, et al. (1997), in particular, define two metrics—platform efficiency
and platform effectiveness—to manage the research and development costs of product
platforms. Platform efficiency is a ratio which assesses how much it costs to develop derivative products relative to how much it costs
to develop the product platform within the product family. The platform efficiency metric can
also be used to compare different platforms across different product families to assess the relative efficiency of each.
Platform effectiveness is a ratio of the revenue a product platform and its derivatives generate relative to their development costs:

Platform Effectiveness = Product Sales / Product Development Costs   [2.2]
where the effectiveness of the platform can be assessed at the individual product level or for an entire product family.
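These two ratios are simple enough to sketch directly; the function names and the numbers in the example are mine, not Meyer, et al.’s (1997).

```python
def platform_efficiency(derivative_cost, platform_cost):
    """Platform efficiency (after Meyer, et al., 1997): cost of developing a
    derivative product relative to the cost of developing the platform.
    Values well below 1 indicate inexpensive derivatives."""
    return derivative_cost / platform_cost

def platform_effectiveness(product_sales, development_costs):
    """Platform effectiveness (Equation 2.2): revenue generated per unit of
    development cost, at the product or family level."""
    return product_sales / development_costs

# Illustrative figures (assumed, in arbitrary currency units):
print(platform_efficiency(2.0, 10.0))      # derivatives cost 20% of the platform
print(platform_effectiveness(50.0, 12.0))  # revenue per development dollar
```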
These metrics require costing and revenue information which is typically known only
after the product platform and its derivatives have been developed and reached the market.
These metrics prove useful for managing research and development within the product family
and determining when to renew or refocus product platform efforts; however, they offer little
insight into the commonality of parts within a product family. Many commonality indices have been proposed
for assessing the degree of commonality within a product family. Products which share more
parts and modules within a product family achieve greater inventory reductions, exhibit less part
variability, improve standardization, and shorten development and lead times because more
parts are reused and fewer new parts have to be designed (cf., Collier, 1981). McDermott and
Stock (1994) discuss the benefits of commonality on new product development time, inventory,
and manufacturing; they also cite several researchers who have shown that part commonality
across a range of products has reduced inventory costs while maintaining a desired level of
customer service. Particular measures for assessing commonality include the following:
• Kota and Sethuraman (1998) introduce the Product Commonality Index for determining
the level of part commonality in a product family. Through the study of a family of
portable personal stereos, they illustrate methods to “measure and eliminate non-value
added variations, suggest robust design strategies including modularity and
postponement of product differentiation.” Their approach provides a means to
benchmark product families based on their capability to simultaneously share parts
effectively and reduce the total number of parts.
• Siddique, et al. (1998) propose a commonality index to aid in the configuration design
of common automotive platforms. They are working with an automobile manufacturer
to reduce the number of platforms they utilize across their entire range of cars and
trucks in an effort to reduce development times, costs, and product variety. Ongoing
research efforts for measuring the “goodness” of a common platform are discussed in
(Siddique, 1998).
Commonality measures such as these are based primarily on the ratio of the number of
shared parts, components, and modules to the total number of parts, components, and modules
in the product family. Taking this one step further, Martin and Ishii (1996) seek to assess the
cost of producing product variety through the measurement of three indices: commonality,
differentiation point, and setup costs. The commonality index is similar to that proposed by
Collier (1981) and measures the percentage of common components within a group of products
in a product family. The second index measures the differentiation point for product variety
within an assembly or manufacturing process; the idea being that the later the differentiation
point can be postponed the lower the costs of producing the necessary variety (cf., Lee and
Billington, 1994; Lee and Tang, 1997). Finally, the setup cost index assesses the cost
contributions needed to provide variety compared to the total cost for the product. The indirect
costs of providing product variety then are taken as a weighted linear combination of these
indices; the weightings for the individual indices may vary from industry to industry. The direct
costs of providing product variety, they assert, are relatively straightforward to determine.
Generalizations are made regarding the costs of product variety based on these indices;
however, there is no work to substantiate their claims or the usefulness of the indices.
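A minimal sketch of a shared-parts ratio and a weighted combination of indices, in the spirit of the measures described above; the part lists, weights, and exact formulas are illustrative assumptions rather than any author’s published index.

```python
from collections import Counter

def commonality_index(parts_by_product):
    """Ratio of parts used by more than one product to the total number of
    distinct parts in the family (in the spirit of Collier, 1981)."""
    counts = Counter(p for parts in parts_by_product.values() for p in set(parts))
    shared = sum(1 for c in counts.values() if c > 1)
    return shared / len(counts)

def variety_cost(indices, weights):
    """Weighted linear combination of indices (after Martin and Ishii, 1996);
    the weights are industry-specific."""
    return sum(w * i for w, i in zip(weights, indices))

family = {  # hypothetical part lists for a three-product family
    "product_1": ["motor", "housing_a", "switch"],
    "product_2": ["motor", "housing_b", "switch"],
    "product_3": ["motor", "housing_c", "dial"],
}
print(commonality_index(family))  # motor and switch are shared: 2 of 6 parts
```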
In later work, Martin and Ishii (1997) introduce a process sequence graph which
provides a qualitative assessment of the flow of a product through the assembly process and its
differentiation point. A product family of eighteen instrument panels is analyzed, citing that
differentiation for product variety begins in the second step in the assembly process. This leads
them to recommend postponing differentiation to reduce production costs and lead-times. The end result is a graph of Variety
Voice of the Customer (V2OC) versus percentage commonality for the family of instrument panels, see Figure 2.6.
Figure 2.6 V2OC Rating vs. Commonality (from Martin and Ishii, 1997)
The commonality measure in Figure 2.6 is the ratio of the number of assemblies shared between products to the total number of assemblies in the product family and
is again very similar to that of Collier (1981). The V2OC measure assesses “the importance of
a component’s variety to the aggregated market—not the individual buyer. V2OC is a measure
of the importance of a component to a customer, as well as the heterogeneity of the market with
respect to that component” (Martin and Ishii, 1997). They do not describe how to measure
V2OC or explain how the V2OC ratings for the instrument panel family are created;
consequently, V2OC does not provide a useful measure for product variety. The resulting
graph, however, is insightful and similar to the product variety tradeoff graph which is introduced
in Section 3.1.5 and illustrated in Section 7.6.2 in the context of the General Aviation aircraft
example. The reasoning behind the target region in the figure is not discussed in their paper
either; however, intuition suggests that components with low V2OC rating (i.e., are not
important to the customer) can be common from one product to the next while it is important to
customize components (i.e., decrease their commonality) that have a high V2OC. This idea is
explored in greater depth in Sections 3.1.5 and 7.6.2 wherein a non-commonality index is introduced for assessing and studying product variety tradeoffs. In the meantime, methods for designing product families are reviewed in the next section.

2.2.3 Methods for Product Family Design
The majority of engineering design research has been directed at improving the
efficiency and effectiveness of designers in the product realization process, and until recently, the
focus has been on designing a single product. For instance, Suh (1990) offers his two axioms
for design: (1) maintain independence of functional requirements, and (2) minimize the
information content of a design. Pahl and Beitz (1988; 1996) offer their four phase approach to
product design which involves the following: clarification of the task, conceptual design,
embodiment design, and detail design. Similarly, Hubka and Eder (1988; 1996) advocate an
approach which involves the following: elaboration of the assigned problem, conceptual design,
laying out, and elaboration. Pugh (1991) introduces the notion of total design which has at its
core market/user needs and demands, the product design specification, conceptual design,
detail design, manufacturing, and selling. In the wellknown review of mechanical engineering
design research conducted by Finger and Dixon (1989a; 1989b), scant trace of product family design can be found.
Perhaps the most developed method for product family design which currently exists is
the work by Erens (1997). Erens, in conjunction with several of his colleagues (Erens and
Breuls, 1995; Erens and Verhulst, 1997; Erens and Hegge, 1994; McKay, et al., 1996),
develops a product modeling language for product variety. The primary focus is on the representation of product variety; the language provides
little aid for design synthesis and analysis, only representation. The product modeling language
describes products from several viewpoints, including the physical. Use of the product modeling language is demonstrated in the context of a family of
office chairs, a family of overhead projectors, and a family of cardiovascular X-ray machines.
Excerpts from the family of office chairs example are illustrated in Figure 2.7. The office chair
itself is shown in Figure 2.7a, and the variety of options from which to choose: upholstery,
materials, colors, fixtures, etc., are shown in Figure 2.7b. In Figure 2.7c, the general
representation of the product architecture for the office chair is depicted, and the hierarchy in
the product variety model is illustrated in Figure 2.7d. As illustrated in this example, the product
modeling language provides an effective means for representing product variety but offers little support for design synthesis and analysis.

Figure 2.7 Office Chair Example (from Erens, 1997): (a) an office chair; (b) office chair options; (c) office chair architecture; (d) office chair product variety model
In other work, Fujita and Ishii (1997) outline a series of tasks—design specification
analysis, system structure synthesis, configuration, and model instantiation—for product variety
design as their foundation for a formal approach for the design and synthesis of product families.
They decompose product families into systems, modules, and attributes as shown in Figure 2.8.
Under this hierarchical representation scheme, product variety can be implemented at different
levels within the product architecture. For instance, two shared modules and two sets of shared
attributes are shown in Figure 2.8. A formal algorithm has not yet been developed, however.
Figure 2.8 Product Variety Decomposed into Systems, Modules, and Attributes (from
Fujita and Ishii, 1997)
Product clustering approaches also have been investigated for product family design. Stadzisz and Henrioud (1995) cluster products based on
geometric similarities to obtain product families, decreasing product variability within each
family in order to minimize the required flexibility of the associated assembly system. A
similar Design for Mass Customization approach is developed in (Tseng, et al., 1996) which
groups similar products into families based on product topology or manufacturing and assembly
similarity and provides a series of steps to formulate an optimal product family architecture. In related work, a process has been proposed for redesigning a set of related products through similarity and clustering of common
products around a “core product concept,” i.e., a product platform. The resulting product
family is composed of a set of product variants which share characteristics in common with the
core product concept. Modularity also has been investigated with regard to product variety and in the context of a product platform and product family. Modularity greatly
facilitates the addition and removal of features to upgrade and derate a product platform (cf.,
Ulrich, 1995). Ulrich and Tung (1991), Ulrich (1995), and Ulrich and Eppinger (1995)
investigate product architecture and modularity and its impact on product change, product
variety, and standardization as a means for enhancing product flexibility and offering a wide variety of
products. Meanwhile, Chen, et al. (1994) suggest designing flexible products which can be
adapted easily to reduce the cost of offering product variety. Rosen (1996) investigates the use of discrete design spaces for product architecture design.
He emphasizes, as do Ulrich and Eppinger (1995), that the design of product architectures is
“critical in being able to mass customize products to meet differentiated market niches …
and other strategic issues.” A Product Module Reasoning System (Newcomb, et al., 1996)
currently is being developed “to reason about sets of product architectures, to translate design
requirements into constraints on these sets, to compare architecture modules from different
viewpoints, and to directly enumerate all feasible modules without generate-and-test or heuristic search.”
Pahl and Beitz (1996) also discuss the advantages and limitations of modular products
to fulfill various overall functions through the combination of distinct modules. Because such
modules often come in various sizes, modular products often involve size ranges where the initial
size is the basic design and derivative sizes are sequential designs. In the context of a scalable
product platform, the initial size constitutes the product platform and the derivative sizes are its
product variants. Their approach for designing size ranges is as follows (Pahl and Beitz, 1996):
• Prepare the basic design for the range either from a new or existing product;
• Use similarity laws to determine the physical relationships between geometrically similar
product ranges;
• Determine appropriate “theoretical” step sizes within the desired size range;
• Check the product size range against assembly layouts, checking any critical
dimensions.
In the context of their approach, the method developed in this dissertation facilitates the
development of the basic design (i.e., the platform) and the sequential designs (i.e., derivative
products) simultaneously.
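The “theoretical” step sizes within a size range are often chosen as a geometric (preferred number) series; the sketch below assumes a hypothetical basic size of 100, an R10 series ratio, and an illustrative similarity exponent m.

```python
def geometric_step_sizes(basic, ratio, n):
    """Generate n sizes starting from the basic design, each a fixed
    ratio larger than the previous one (a geometric size range)."""
    return [basic * ratio**k for k in range(n)]

# An R10 preferred number series uses the ratio 10**(1/10), about 1.259.
sizes = geometric_step_sizes(100.0, 10 ** (1 / 10), 5)
print([round(s, 1) for s in sizes])

# Under geometric similarity, a characteristic often scales as a power of
# size; with an assumed exponent m = 2, performance grows as size squared.
m = 2
performance = [s**m for s in sizes]
```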
The concept of sizing leads into an area of product platform design that has received
little attention—product platforms that can be “scaled” or “stretched” into derivative products
for a product family (in addition to being upgraded or degraded through the addition/removal
of modules). The implications of design “stretching” and “scaling” within the context of
developing a family of products are discussed first in (Rothwell and Gardiner, 1988; 1990), see
Figure 2.9. Rothwell and Gardiner (1988) use the term “robust designs” to refer to designs that
have sufficient inherent design flexibility or “technological slack” to enable them to evolve into a
design family of variants that meet a variety of changing market requirements by “uprating,”
“rerating,” and “derating” a platform design as shown in Figure 2.9. The process of developing
these designs is shown in Figure 2.9 and consists of three phases, beginning with a composite design.
Figure 2.9 Robust Designs (from Rothwell and Gardiner, 1990)
Rothwell and Gardiner (1990) provide several examples of successful robust designs
and discuss how they “allow for change because essentially they contain the basis for not just a
single product but rather a whole product family of uprated or derated variants.” Consider the
Rolls Royce RB211 engine family illustrated in Figure 2.10. The original RB211 consisted of
seven modules which could be easily upgraded or scaled down to improve or derate the engine.
For example, by replacing the large front low pressure fan with a scaled down fan, the lower
thrust, derated, 535C engine was derived. Further improvements are made by scaling different
components of the engine to improve fuel consumption while increasing thrusts. Rolls Royce
takes advantage of similar stretching and scaling in its RTM322 engine which was discussed in Section 1.1.1.
Figure 2.10 Rolls Royce RB211 Engine Family
(from Rothwell and Gardiner, 1990)
Several other products also have benefited from platform scaling. For example, Black
& Decker scales the stack length of their universal motor platform to vary the output power of
the motor for a wide variety of applications, see Section 1.1.1 and (Lehnerd, 1987). The
Boeing 747-200, 747-300, and 747-400 are scaled derivatives of the Boeing 747 (Rothwell
and Gardiner, 1990). Many automobile manufacturers also scale their passenger car platforms
to offer, for example, two-door coupes, two- and four-door sedans, three- and five-door
hatchbacks, and perhaps a wagon, all derived from the same platform (Rothwell and
Gardiner, 1990). Honda, for instance, is taking full advantage of platform scaling to compete in
today’s global market by developing two scaled versions of their Accord for the U.S. and
Japanese markets from one platform (Naughton, et al., 1997). Siddique, et al. (1998)
document efforts at Ford to improve the commonality of their product platforms to capitalize on these benefits.
Despite the apparent advantages of scalable product platforms, a formal approach for
the design and synthesis of stretchable and scalable platforms does not exist. Rothwell and
Gardiner state that it has “become increasingly possible to develop a robust design which has
the deliberate designed-in capability of being stretched;” however, they only offer the process
shown in Figure 2.9 as a guide to designers. Consequently, developing a method to model and
design scalable product platforms around which a family of products can be developed through
scaled derivatives of the product platform is the principal objective in this dissertation. In an
effort to realize such a method, an extension of robust design principles is offered in the next
section, providing a means to turn Rothwell and Gardiner’s idea of “robust design” for scalable product platforms into a concrete design approach.

2.3 ROBUST DESIGN PRINCIPLES AND THEIR IMPLEMENTATION

The principal idea behind robust design, as proposed by Taguchi, is to improve the quality of a product or process by not only striving to
achieve performance targets but also by minimizing performance variation. Taguchi’s methods
have been widely used in industry (see, e.g., Byrne and Taguchi, 1987; Phadke, 1989) for
parameter and tolerance design. Reviews of such applications can be found in, e.g., (Nair,
1992).
In robust design, a product or process and its associated factors can be represented with a P-diagram as shown in Figure 2.11, where P represents either a product or process (Phadke, 1989). The three types of factors which serve as inputs to the P-diagram are as follows:
• Control Factors (x) – parameters that can be specified freely by a designer; the
settings for the control factors are selected to minimize the effects of noise factors on the
response y.
• Noise Factors (z) – parameters not under a designer’s control or whose settings are
difficult or expensive to control. Noise factors cause the response, y, to deviate from
its target and lead to quality loss through performance variation. Noise factors may
include system wear, variations in the operating environment, uncertain design
parameters, and economic uncertainties.
• Signal factors (M) – parameters set by the designer to express the intended value for
the response of the product; signal factors are those factors used to adjust the mean of
the response but which have no effect on the variation of the response.
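The three factor types above can be sketched as a small bookkeeping structure for a P-diagram; the factor names and values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PDiagram:
    """Minimal P-diagram bookkeeping (after Phadke, 1989)."""
    control_factors: dict = field(default_factory=dict)  # x: freely specified
    noise_factors: dict = field(default_factory=dict)    # z: (mean, std dev), not controllable
    signal_factors: dict = field(default_factory=dict)   # M: set the intended response

p = PDiagram(
    control_factors={"wall_thickness": 2.0},
    noise_factors={"ambient_temp": (25.0, 3.0)},
    signal_factors={"target_flow": 10.0},
)
print(sorted(p.noise_factors))
```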
Figure 2.11 P-Diagram of a Product or Process (adapted from Phadke, 1989)
This robust design terminology is used to classify design parameters and responses and
to identify sources of variability. The objective in robust design is to reduce the variation of
system performance caused by uncertain design parameters, thereby reducing system sensitivity.
Variations in noise factors, shown in Figure 2.11 as normally distributed with mean μz and standard deviation σz, cause the response, y, to vary.
In an effort to generalize robust design for product design, Chen, et al. (1996a) identify two broad categories of robust design:
Type I  Robust design associated with the minimization of the deviation of performance
caused by the deviation of noise factors (uncontrollable parameters).
Type II  Robust design associated with the minimization of the deviation of performance
caused by the deviation of control factors (design variables).
The ideas behind the two major types of robust design applications are illustrated in Figure 2.12.
As indicated by the P-diagrams for Type I and Type II applications, the deviation of the
response is caused by variations in the noise factor, z, the uncontrollable parameter in Type I
applications. Type II is different from Type I in that its input does not include a noise factor.
The variation in performance is caused solely by variations in the control factors or design variables themselves.
The traditional Taguchi robust design method is of Type I as shown in the top half of
Figure 2.12. A designer adjusts control factors, x, to dampen the variations caused by the
noise factor, z. The two curves represent the performance variation as a function of the noise factor, z, for two control factor settings, x = a and x = b. If the objective is simply to bring the mean
performance as closely as possible to the target, M, the designs at both levels are acceptable
because their means are on the target M. However, when robustness is introduced, the two
settings differ: when x = a, the performance varies significantly with the deviation of the noise factor, z, whereas when x = b, the performance varies much less. Thus, x = b is the robust
solution because it dampens the effect of the noise factor more than x = a.
Figure 2.12 Illustration of Type I and Type II Robust Design (adapted from Chen, et al., 1996a)
The concept behind Type II robust design is represented in the lower half of Figure
2.12. For purposes of illustration, assume that performance is a function of only one variable, x.
In general, for this type of robust design, to reduce the variation of the response caused by the
deviations of design variables, a designer is interested in the flat part of a curve near the
performance target instead of seeking the peak or optimum value. If the objective is to move
the performance function towards target M and if a robust design is not sought, then the point x
= a is chosen. However, for a robust design, x = b is a better choice. This is because if the
design variable varies within ±Δx of its mean, the resulting variation of response of the design at
x = b is much smaller than that at x = a, while the means of the two responses are essentially
equal. Implementation of these two types of robust design is discussed in the next section.
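The Type II idea of preferring a flat region of the performance curve over a sharp peak can be checked numerically; the response function and the sampling below are assumptions constructed to mimic the behavior described for x = a and x = b, not anything taken from Figure 2.12 itself.

```python
import random

def response(x):
    # Hypothetical performance curve: a sharp peak near x = 1 and a much
    # flatter region near x = 3.
    return 5.0 / (1 + 25 * (x - 1) ** 2) + 4.0 / (1 + 0.2 * (x - 3) ** 2)

def spread(x0, dx, n=1000, seed=0):
    """Standard deviation of the response when x varies uniformly in x0 +/- dx."""
    rng = random.Random(seed)
    ys = [response(x0 + rng.uniform(-dx, dx)) for _ in range(n)]
    mean = sum(ys) / n
    return (sum((y - mean) ** 2 for y in ys) / (n - 1)) ** 0.5

# The same +/- dx produces far less performance variation in the flat region.
print(spread(1.0, 0.1), spread(3.0, 0.1))
```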
Orthogonal arrays (OAs) are used in Taguchi’s robust design method to systematically vary and test the different levels of each of the
control factors. Taguchi advocates the use of an inner-array and outer-array approach to
implement robust design (cf., e.g., Byrne and Taguchi, 1987). The inner-array consists of an
OA which contains the control factor settings; the outer-array consists of the OA which contains
the noise factors and their settings which are under investigation. The combination of the inner
array and outer-array constitutes the product array. The product array is used to test various
combinations of the control factor settings systematically over all combinations of noise factors
after which the mean response and standard deviation may be approximated for each run using
the equations:

• Response mean: \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i   [2.3]

• Standard deviation: S = \sqrt{ \frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{n - 1} }   [2.4]
Preferred parameter values then can be determined through analysis of the signal-to-noise (SN)
ratio; factor levels that maximize the appropriate SN ratio are optimal. There are three
standard SN ratios, depending on the goal for the response:

• Nominal is best (for keeping the system response on target):

SN_T = 10 \log_{10} \left( \frac{\bar{y}^2}{S^2} \right)   [2.5]

• Smaller the better (for making the system response as small as possible):

SN_S = -10 \log_{10} \left( \frac{1}{n} \sum_{i=1}^{n} y_i^2 \right)   [2.6]

• Larger the better (for making the system response as large as possible):

SN_L = -10 \log_{10} \left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i^2} \right)   [2.7]
Once all of the SN ratios have been computed for each run of an experiment, there are
two common options for analysis: Analysis of Variance (ANOVA) and a graphical approach.
ANOVA can be used to determine the statistically significant factors and the appropriate setting
for each. In the graphical approach, the SN ratios and average responses are plotted for each
factor against its levels. The graphs then are examined to “pick the winner,” i.e., pick the factor
levels which (1) best maximize SN and (2) bring the mean on target (or maximize or minimize
the response, as appropriate).
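As a minimal sketch of the computations in Equations 2.3 through 2.7, the routines below evaluate the mean, standard deviation, and the three SN ratios for the responses observed in one inner-array run across all outer-array (noise) combinations; the run data are hypothetical, not taken from this dissertation.

```python
import math

def response_stats(y):
    """Response mean (Eq. 2.3) and sample standard deviation (Eq. 2.4)."""
    n = len(y)
    ybar = sum(y) / n
    s = math.sqrt(sum((yi - ybar) ** 2 for yi in y) / (n - 1))
    return ybar, s

def sn_nominal(y):
    """Nominal-is-best SN ratio (Eq. 2.5)."""
    ybar, s = response_stats(y)
    return 10.0 * math.log10(ybar ** 2 / s ** 2)

def sn_smaller(y):
    """Smaller-the-better SN ratio (Eq. 2.6)."""
    return -10.0 * math.log10(sum(yi ** 2 for yi in y) / len(y))

def sn_larger(y):
    """Larger-the-better SN ratio (Eq. 2.7)."""
    return -10.0 * math.log10(sum(1.0 / yi ** 2 for yi in y) / len(y))

# Hypothetical responses for one control-factor setting over the noise array:
y = [9.8, 10.1, 10.3, 9.9]
ybar, s = response_stats(y)
print(f"mean = {ybar:.3f}, std = {s:.3f}, SN_T = {sn_nominal(y):.2f} dB")
```

In a full analysis, these quantities would be computed for every row of the inner array and then plotted or fed to ANOVA as described above.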
There are many criticisms of Taguchi’s implementation of robust design through the
inner and outer array approach: it requires too many experiments, the analysis is statistically
questionable because of the use of orthogonal arrays, it does not accommodate constraints, and
the responses should be modeled directly instead of the SN ratios (see, e.g., Montgomery,
1991; Nair, 1992; Otto and Antonsson, 1993; Shoemaker, et al., 1991; Tribus and Szonyi,
1989; Tsui, 1992). Consequently, many variations of the Taguchi method have been proposed
and developed; a review of numerous robust design optimization methods can be found in (Otto
and Antonsson, 1993; Simpson, et al., 1997a; Simpson, et al., 1997b; Su and Renaud, 1996).
In the Robust Concept Exploration Method (RCEM), response surface models are created and used to approximate the design space, replacing the
computer analysis code or simulation routine used to model the system. The major elements of
the response surface model approach for robust design applications are as follows (see, e.g.,
• combining control and noise factors in a single array instead of using Taguchi's
inner- and outer-array approach,
Instead of using Taguchi’s orthogonal array as the combined array for experiments, central
composite designs are employed in the RCEM to fit second-order response surface models for
integration with Taguchi's robust design. The response surface model postulates a single, formal
model of the response in terms of the control and noise variables:

\hat{y} = f(x, z)   [2.8]
where yˆ is the estimated response and x and z represent the settings of the control and noise
variables, respectively. In Equation 2.8, it is assumed that the noise variables are independent.
From the response surface model, it is possible to estimate the mean and variance of the
response. For Type I applications in which the deviations of noise factors are the source of
variation:
• Mean of response: \mu_{\hat{y}} = f(x, \mu_z)   [2.9]

• Variance of response: \sigma_{\hat{y}}^2 = \sum_{i=1}^{m} \left( \frac{\partial f}{\partial z_i} \right)^2 \sigma_{z_i}^2   [2.10]
where μ_z represents the mean values of the noise factors, m is the number of noise factors in the response model,
and σ_{z_i} is the standard deviation associated with each noise factor. In Type II robust design,
i.e., when the deviations of control factors are the source of variation, μ_z and σ_{z_i} in Equations
2.9-2.10 are replaced by the mean and deviation of the variable control factors. Using this
approach, robust design can be achieved by having separate goals for “bringing the mean on
target” and “minimizing the deviation” within a compromise DSP (cf., Chen, et al., 1996b).
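Equations 2.9 and 2.10 can be sketched as follows, with the partial derivatives estimated by central finite differences; the quadratic response surface f and the noise-factor statistics below are illustrative assumptions, not values from this dissertation.

```python
def estimate_mean_and_variance(f, x, mu_z, sigma_z, h=1e-6):
    """First-order estimates of the response mean (Eq. 2.9) and
    variance (Eq. 2.10) under noise-factor deviations sigma_z."""
    mean = f(x, mu_z)  # response surface evaluated at the noise-factor means
    var = 0.0
    for i, (mu, sig) in enumerate(zip(mu_z, sigma_z)):
        z_hi = list(mu_z); z_hi[i] = mu + h
        z_lo = list(mu_z); z_lo[i] = mu - h
        dfdz = (f(x, z_hi) - f(x, z_lo)) / (2 * h)  # central difference
        var += dfdz ** 2 * sig ** 2
    return mean, var

# Illustrative response surface in one control (x) and two noise (z) variables:
f = lambda x, z: 10 + 2 * x[0] + 3 * z[0] - 1.5 * z[1] + 0.5 * z[0] * z[1]
mean, var = estimate_mean_and_variance(f, x=[1.0], mu_z=[0.0, 0.0],
                                       sigma_z=[0.1, 0.2])
print(mean, var)  # mean = 12.0
```

With a fitted polynomial model, the derivatives could of course be taken analytically; the finite-difference form is shown only to keep the sketch model-agnostic.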
When satisfying the design requirements and reducing the variation of system
performance are equally important, it is effective to model the two aspects of robust design
as separate goals in the compromise DSP. For instance, when designing a power plant, it may
be required to bring the power output as close as possible to its target value while at the same
time, reduce the variation of the system performance so that the power output remains constant
during operation. Moreover, setting an overall design requirement at a specific value during the
early stages of design may sometimes be crucial because a small variation may require significant
changes in other design requirements or incur substantial costs in order to compensate for it.
However, modeling the two aspects of robust design as two separate goals may not be an
effective approach when satisfying a range of design requirements is the major concern.
In Figure 2.13, the quality distributions of two different designs (I and II) are illustrated.
Both designs have the same mean value but different deviations. If the two aspects of robust
design are modeled as separate goals, obviously the design with the least deviation (Design I)
would be chosen because both designs have the same performance mean. However, in this
particular situation where the mean of the quality performance lies outside the range of
requirements, a smaller fraction of the performance falls inside the upper and lower requirement
limits (URL and LRL, respectively) with a thinner bell shape, i.e., the shadowed area which is
enclosed by A, B, and C is smaller than the area enclosed by A', B' and C. This is acceptable
in manufacturing when the process itself can be manually shifted to bring the mean back on
target, but when designing a system to accommodate noise, this option is not always
available.
Figure 2.13 Distributions of Two Designs (I and II) with the Same Mean Meeting a Ranged Set of Requirements
Design capability indices have been developed with exactly this in mind. They are
based on process capability indices from statistical process control and are applied in much the same
manner: a design capability index (see Figure 2.14) is computed to assess the capability of a family of
designs to satisfy a ranged set of design requirements (Chen, et al., 1996c; Simpson, et al.,
1997a).
Figure 2.14 Design Capability Indices Cdl, Cdu, and Cdk (Cdk ≥ 1 for larger is better, nominal is better, and smaller is better requirements)
Assume that the system performance is normally distributed with mean, μ, and standard
deviation, σ. The design capability indices Cdl, Cdu, and Cdk measure the extent to which a
family of designs satisfies a ranged set of design requirements as specified by upper and lower
requirement limits (URL and LRL, respectively). As shown in the figure, when nominal is better,
i.e., upper and lower design requirement limits are given, finding a family of designs with Cdk ≥ 1
satisfies the design requirements. In this scenario, Cdk is computed using Equation 2.11 as the
minimum of Cdl and Cdu:

C_{dl} = \frac{\hat{\mu} - LRL}{3\hat{\sigma}}; \quad C_{du} = \frac{URL - \hat{\mu}}{3\hat{\sigma}}; \quad C_{dk} = \min\{C_{dl}, C_{du}\}   [2.11]
When smaller is better (e.g., “the motors should weigh less than 0.5 kg”), designs with a
Cdk ≥ 1 are capable of satisfying the requirement. In this case, Cdk = Cdu as shown in Figure
2.14, and designs with a Cdu < 1 do not meet this requirement because a portion of the
distribution falls outside of the URL. Similarly, when larger is better (e.g., “the efficiency of
these motors should be 30% or better”), designs with a Cdk = Cdl ≥ 1 are capable of meeting this requirement.
There are some assumptions associated with the use of Cdk. For example, Cdk = 1
implies that only 99.73% of the designs conform to requirements, and only if the system performance is normally distributed.
However, the type of distribution of system performance depends on the actual system
response and the statistical distribution of each design variable or uncertainty parameter. When
the system function is complex, it may be difficult to perform a judicious evaluation to determine
the resulting distribution. In this work it is assumed that the varying factors
deviate by ±3σ_z (as is typical in a six sigma approach to quality) around their nominal value μ_z,
and that each system response varies by ±3σ_y around its mean value, μ_y, which can be
calculated by:

\mu_y = y(\mu_x)   [2.12]
The standard deviation, σ_y, is approximated using a first-order Taylor series expansion
(assuming σ_x is small):

\hat{\sigma}_y^2 = \sum_{i=1}^{m} \left( \frac{\partial y}{\partial z_i} \right)^2 \sigma_{z_i}^2   [2.13]
Modifications to the process capability indices for different variances have been
proposed (see, e.g., Johnson, et al., 1992; Ng and Tsui, 1992; Rodriguez, 1992), and design
capability indices could be modified similarly. For example, if a uniform distribution is used for
each response instead of a normal distribution, then Cdk, Cdu, and Cdl become as follows:
C_{dl} = \frac{\hat{\mu} - LRL}{3\hat{\sigma}}; \quad C_{du} = \frac{URL - \hat{\mu}}{3\hat{\sigma}}; \quad C_{dk} = \min\{C_{dl}, C_{du}\}   [2.14]

\hat{\sigma}^2 = \frac{(b - a)^2}{12}   [2.15]
where a and b are the lower and upper limits of the range of y.
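The design capability indices of Equations 2.11 and 2.14 reduce to a few lines of code; the requirement limits and performance statistics below are hypothetical, chosen only to illustrate the three cases.

```python
def design_capability(mu, sigma, lrl=None, url=None):
    """Design capability indices (Eq. 2.11): Cdl, Cdu, and Cdk = min(Cdl, Cdu).
    Omit lrl for 'smaller is better' or url for 'larger is better' requirements."""
    cdl = (mu - lrl) / (3 * sigma) if lrl is not None else float("inf")
    cdu = (url - mu) / (3 * sigma) if url is not None else float("inf")
    return cdl, cdu, min(cdl, cdu)

# Nominal is better: hypothetical requirement range [9, 12] with mu = 10, sigma = 0.3
cdl, cdu, cdk = design_capability(10.0, 0.3, lrl=9.0, url=12.0)
print(cdk >= 1.0)  # True: this family of designs is capable
```

For a uniform rather than normal response, σ̂ would first be computed from Equation 2.15 before being passed in.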
The use of design capability indices within the compromise DSP is shown in Figure 2.15. Design capability indices can be used for constraints and/or goals. In each of the
three cases—smaller is better, nominal is better, and larger is better—if a design requirement is
a wish, then making Cdk as close to one as possible is a goal in the compromise DSP. When a
requirement is a demand, then Cdk ≥ 1 is taken as a constraint. Note that when a deviation
function solely includes design capability indices, the negative deviation variable, di−, is always the one minimized.
Given:
• Functions y(x), including those ranged design requirements which are constraints, gi(x),
and those which are objectives, Ai(x)
• Deviations of the uncontrollable variables, Δz
• Target upper and lower design requirement limits, URLi and LRLi
Find:
• Design variables, x
Satisfy:
• Constraints: Cdk,constraints ≥ 1 (or use worst-case analysis)
• Goals: Cdk,objectives + di− − di+ = 1
• Bounds on the design variables
• di−, di+ ≥ 0; di− · di+ = 0
Minimize:
• Deviation Function: Z = [f1(di−, di+), ..., fk(di−, di+)]

Figure 2.15 Design Capability Indices within the Compromise DSP
In this dissertation, robust design principles are extended to product family design, specifically to the design of scalable product platforms, to address the following question:

Q1.2. How can robust design principles be used to facilitate the design of a common product platform?

Extensions of robust design for product family design are discussed in the next section.
2.3.2 Robust Design for Product Family Design: Scale Factors
There have been two known allusions to using robust design in product family design.
First, Lucas (1994) describes a way that the results of a robust design experiment can be used
to identify the need for product differentiation. When large effects are present in the system,
products with different features can be sent to different customers as opposed to
designing one product which is robust over the entire range of effects. He states that this is
common practice in the chemical industry where, for example, different polymer viscosities are
desired by different customers and better results often are obtained by customizing the product
for its specific environment rather than delivering a single robust product.
Second, Chang, et al. (1994) and Chang and Ward (1995) introduce the notion of
“conceptual robustness” which is pertinent to this research. The term “conceptual robustness” is
developed by Chang and his colleagues: by mathematically modeling and computationally
treating variations in the design proposed by other members of the development team as
“conceptual noise,” robust design principles can be used to make “conceptually robust”
decisions which are robust against these variations (Chang, et al., 1994). The “conceptually
robust” design of a two-axis CNC milling machine is used as an illustrative example. In (Chang
and Ward, 1995), this idea is applied to modular design which is a “function-oriented design
that can be integrated into different systems for the same functional purpose without (or with
minor) modifications.” The design of an air conditioning system for ten different automobiles is
presented as an example.
It is this idea of a “conceptual noise factor” that enables the utilization of robust design in
the context of product family design, particularly in the design of a scalable product platform.
By identifying an appropriate “scale factor” for a scalable product platform, robust design
principles can be used to minimize the sensitivity of the product platform to variations in
a scale factor. In this regard, a “conceptually robust” product platform can be realized which
has minimum sensitivity to variations in the scale factor, realizing a robust product family. For this dissertation, a scale factor is defined as follows:

• Scale factor - a factor around which a product platform can be “scaled” or “stretched”
to realize derivative products within a product family.
In essence, a scale factor is a noise factor within a scalable product family or, to borrow
terminology from Chang, et al. (1994), a “conceptual noise factor” around which a
“conceptually robust” product platform can be developed for a product family. Examples of
scale factors include the stack length in a motor, as in the Black & Decker universal motor
example (Lehnerd, 1987), the number of passengers on an aircraft, as in the Boeing 747 family
(Rothwell and Gardiner, 1990), or the number of compressor stages in an aircraft engine, as in
the Rolls Royce RTM322 example (Rothwell and Gardiner, 1990). Scale factors may be either
discrete or continuous; however, continuous scale factors are investigated primarily in this
dissertation. The specific relationship between different types of scale factors and different
product platforms is discussed in the chapters which follow.
Given the definition for a scale factor, a third type of robust design now can be identified
for product family design, complementing the two types of robust design discussed previously:
Type III - Robust design associated with minimizing the sensitivity of a product platform to
variations in a scale factor.
As defined, Type III robust design is nearly identical to Type I robust design as shown in Figure
2.16. Notice that the P-diagram on the left of the figure has been modified to accommodate
scale factors because essentially they are treated as noise factors in the product platform design
process.
Figure 2.16 Type III Robust Design: Scale Factors for Product Platforms
It should be noted that these scale factors are not the same “scaling/leveling factors”
shown in the P-diagram in (Taguchi and Phadke, 1986) or (Suh, 1990) which are used to scale
a response to achieve a desired value. Using the same diagram shown in Figure 2.12 for the
Type I robust design, the idea behind Type III robust design is illustrated in the right hand side
of Figure 2.16. Given two possible settings (x = a and x = b) for one of the design variables, x,
which defines the platform, the setting x = b should be selected because it minimizes the
variation of the response. If a product platform is “scaled” around an appropriate
scale factor, then robust design can be used to minimize the sensitivity of the product platform to
changes in the scale factor. In this manner, a scalable product platform can be developed
and instantiated to realize a family of products. This raises the following question then.
Q1.3. How can individual targets for product variants be aggregated and modeled for
product platform design?
Using the concept of a scale factor for a product platform, it is now possible to
aggregate the individual targets for product variants within a product family around an
appropriate mean and a standard deviation. Robust design principles then can be used to
“match” the mean and standard deviation of the product family with the desired mean and
standard deviation of the aggregated targets by either of the two approaches described in
Section 2.3.1:
• creating separate goals for “bringing the mean on target” and “minimizing the deviation”
of the product platform for variations in the scale factor within a compromise DSP, or
• using design capability indices to assess the capability of a family of designs to satisfy a
ranged set of design requirements.
To demonstrate these implementations, the former approach is utilized in the universal electric
motor problem in Chapter 6; the latter is employed in the General Aviation aircraft example in
Chapter 7. The General Aviation aircraft example also makes use of metamodels to facilitate
the implementation of robust design and design capability indices and expedite the concept
exploration process. Metamodels are employed to create approximations of the mean and variation of a response in the presence of
noise and serve as a surrogate approximation for the actual analysis (i.e., computer code) during the design process. The
general approach to response surface modeling is shown in Figure 2.17. In statistical terms,
design variables are factors, and design objectives are responses; the factors and responses to
be investigated for a particular design problem provide the input for the approach of Figure
2.17, and the solutions (improved or robust) are the output. To identify these solutions, this
approach includes three sequential stages: screening, model building, and model exercising.
The first step (screening) is employed only if the problem includes a large number of
factors (usually greater than 10); screening experiments are used to reduce the set of factors to
those that are most important to the response(s) being investigated. Statistical experimentation
is used to define the appropriate design analyses which must be run to evaluate the desired
effects of the factors. Often two-level fractional factorial designs or Plackett-Burman designs
are used for screening (cf., Myers and Montgomery, 1995), and only main (linear) effects of
the factors are estimated.
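The screening strategy can be illustrated with a minimal construction of a two-level fractional factorial design; the 2^(4-1) generator D = ABC used below is a common textbook choice, not one prescribed in this dissertation.

```python
from itertools import product

def fractional_factorial(n_base, generators):
    """Build a two-level fractional factorial design in coded units (-1/+1).
    'generators' maps each added factor to the indices of the base columns
    whose product defines it, e.g. {3: (0, 1, 2)} gives D = ABC."""
    runs = []
    for base in product([-1, 1], repeat=n_base):
        row = list(base)
        for _, cols in sorted(generators.items()):
            val = 1
            for c in cols:
                val *= base[c]  # generator column is a product of base columns
            row.append(val)
        runs.append(row)
    return runs

# 2^(4-1) design: 8 runs for 4 factors instead of the 16 of a full factorial
design = fractional_factorial(3, {3: (0, 1, 2)})
for run in design:
    print(run)
```

Each generated column is orthogonal to the base columns, so main effects can still be estimated from the halved experiment, at the cost of aliasing with higher-order interactions.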
Figure 2.17 Response Surface Modeling Approach (given factors and responses, screen if the number of factors is large, run modeling experiments, build a predictive model ŷ, and search the design space for improved or robust solutions)
In the second stage (model building) of the approach in Figure 2.17, response surface
models are created to replace computationally expensive analyses and facilitate fast analysis and
exploration of the design space. If little curvature appears to exist, a two-level fractional
factorial design is used to fit the first-order model:

y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i   [2.16]

otherwise, the second-order model:

y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i<j} \sum \beta_{ij} x_i x_j   [2.17]
is commonly used. Among the various types of experimental design for fitting a second-order
response surface model, the central composite design (CCD) is probably the most widely used
experimental design for regularly shaped (spherical or cuboidal) design spaces (cf., Myers and
Montgomery, 1995). In the case of irregularly shaped design spaces, Doptimal designs have
been successfully employed to build second order response surface models (see, e.g., Giunta, et
al., 1994).
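Fitting the second-order model of Equation 2.17 by least squares can be sketched as follows; the sampled surface is one of the example quadratics shown later in Figure 2.19, and the 15 random sample points are an arbitrary illustrative choice rather than a formal experimental design.

```python
import numpy as np

def quadratic_features(X):
    """Model matrix for the second-order polynomial of Eq. 2.17:
    intercept, linear, pure quadratic, and two-factor interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 2))          # 15 sample points, 2 factors
y = (80 + 4 * X[:, 0] + 8 * X[:, 1]
     - 4 * X[:, 0] ** 2 - 12 * X[:, 1] ** 2 - 12 * X[:, 0] * X[:, 1])

# Least squares regression recovers the coefficients 80, 4, 8, -4, -12, -12
beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
print(np.round(beta, 3))
```

With a central composite design in place of the random sample, the same fitting step applies unchanged; only the matrix X differs.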
If noise factors are included for robust design, the mean and variance of each response
must be estimated, and predictive metamodels for both are constructed. As discussed in
(Koch, et al., 1998), there are essentially three approaches which can be employed to construct
these approximations:
1. Single model approach: A single response surface model is built (see, e.g., Chen, et al., 1996b; Shoemaker, et al.,
1991). The mean value of a response is estimated by evaluating the response surface at
the mean of the noise factor, and the variance is estimated using a Taylor series
approximation. This is the approach currently employed in the RCEM as described
previously in Section 2.3.1.
3. Product array approach: Uses the inner- and outer-array approach advocated by
Taguchi (see, e.g., Montgomery, 1991; Phadke, 1989) to develop separate
approximations for the mean and variance of each response. The inner-array prescribes
settings for the control factors, and the outer-array prescribes settings for the noise
factors. This experimentation strategy leads to multiple response values for each set of
control factor settings, from which a response mean and variance can be computed and
separate metamodels constructed.
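The product array strategy can be sketched as follows; the analysis function, control settings, and noise settings are hypothetical stand-ins for an actual simulation code.

```python
def product_array(inner, outer, simulate):
    """Cross each control-factor setting with every noise-factor setting
    (Taguchi's product array) and return (mean, variance) per inner run."""
    stats = []
    for x in inner:
        ys = [simulate(x, z) for z in outer]  # outer array sweep for one run
        n = len(ys)
        ybar = sum(ys) / n
        var = sum((yi - ybar) ** 2 for yi in ys) / (n - 1)
        stats.append((ybar, var))
    return stats

# Hypothetical analysis code: response depends on a control factor x and noise z
simulate = lambda x, z: x[0] ** 2 + 2 * z[0]
inner = [(1.0,), (2.0,)]             # control-factor settings (inner array)
outer = [(-0.1,), (0.0,), (0.1,)]    # noise-factor settings (outer array)
print(product_array(inner, outer, simulate))
```

Separate metamodels of the mean and variance would then be fit to the per-run statistics returned here.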
Of the three approaches, the product array approach typically yields the most accurate
approximations because the metamodels are built directly from the original analysis code rather
than from another approximation. In general, building approximations of computer analysis and simulation codes involves the following: (a) choosing
an experimental design to sample the computer code, (b) choosing a model to represent the
data, and (c) fitting the model to the observed data. There are a variety of options for each of
these steps as shown in Figure 2.18, and some of the more prevalent approximation techniques
have been identified. For example, response surface methodology usually employs central
composite designs, secondorder polynomials, and least squares regression analysis. The
reader is referred to (Simpson, et al., 1997b) for a recent review of numerous mechanical and
aerospace applications of the metamodeling techniques shown in Figure 2.18, with particular emphasis on response surface methodology, neural networks, inductive learning, and kriging.
By far the most popular technique for building metamodels these days is the response
surface approach which typically employs second-order polynomial models fit using least
squares regression techniques (Myers and Montgomery, 1995). These response surface
models replace the existing analysis code while providing fast analysis tools for optimization and exploration of the design space.
Figure 2.18 Approximation Techniques: Options for Sample/Experimental Design, Model Choice, and Model Fitting
An added advantage of response surfaces is that they can smooth the data in the case of
numerical noise which may hinder the performance of some gradientbased optimizers (cf.,
Giunta, et al., 1994). This “smoothing” effect is both good and bad, depending on the problem.
Su and Renaud (1996) present an example where a second-order response surface smoothes
out the variability in a response so that the robust solution is lost in the approximating function; a
“flat region” does not exist in a second-order response surface, only an inflection point. Su and
Renaud’s example is investigated in more detail in Section 4.1 wherein the kriging process is
demonstrated stepbystep as it applies to their example to familiarize the reader with kriging.
In the meantime, additional limitations of response surfaces are discussed in the next section,
81
providing motivation for investigating alternative metamodeling techniques for use in engineering
design.
Response surfaces typically are second-order polynomial models which make them
easy to use and implement; however, they have limited capabilities to model accurately
nonlinear functions of arbitrary shape. Some two-variable examples of the types of surfaces that a
second-order response surface can model are illustrated in Figure 2.19. Obviously, higher-order
response surfaces can be used to model a nonlinear design space; however, instabilities
may arise (cf., Barton, 1992), or it may be too difficult to take a sufficient number of sample
points in order to estimate all of the coefficients in the polynomial equation, particularly in high
dimensions. Hence, many researchers advocate the use of a sequential response surface
modeling approach using move limits (see, e.g., Toropov, et al., 1996) or a trust region
approach (see, e.g., Rodriguez, et al., 1997). More generally, the Concurrent SubSpace
Optimization approach uses data generated during concurrent subspace optimizations to
develop response surface approximations of the design space which form the basis of the
subspace coordination procedure (Renaud and Gabriele, 1994; Renaud and Gabrielle, 1991;
Wujek, et al., 1995). The Hierarchical and Interactive Decision Refinement methodology uses
statistical regression and other metamodeling techniques to recursively decompose the design
space into subregions and fit each region with a separate model during design space refinement
(Reddy, 1996). Finally, the Model Management Framework (Booker, et al., 1995; Dennis and
Torczon, 1995) is being developed collaboratively by researchers at Boeing, IBM, and Rice to
manage the use of approximation models in optimization.
Many of the previously mentioned sequential approaches are being developed because many engineering design spaces are highly nonlinear
in nature, and it is often difficult to isolate a small region of good design which can be accurately
represented by a loworder polynomial response surface model. Koch, et al. (1997) discuss
the difficulties encountered when screening large variable problems with multiple objectives as
part of the response surface approach. Barton (1992) states that the response region of interest
will never be reduced to a “small neighborhood” which is good for all objectives during
design space refinement. Consequently, there is a need for metamodeling
techniques which have sufficient flexibility to build accurate global approximations of the design
space and which are suitable for modeling computer experiments, which are typically deterministic.
Figure 2.19 Example Surfaces Modeled by a Second-Order Response Surface, e.g.,
y = 80 + 4x1 + 8x2 − 4x1² − 12x2² − 12x1x2;  y = 80 + 4x1 + 8x2 − 3x1² − 12x2² − 12x1x2;
y = 80 − 4x1 + 12x2 − 3x1² − 12x2² − 12x1x2;  y = 80 + 4x1 + 8x2 − 2x1² − 12x2² − 12x1x2
The approach investigated in this dissertation is called kriging; it is introduced in the next section, along with the space filling experimental
designs which can be used to sample the design space in Section 2.4.3. These two sections lay
the foundation for the work in Chapters 4 and 5 wherein Hypotheses 2 and 3 are tested
explicitly to determine the utility of kriging and space filling experimental designs for building
2.4.2 The Kriging Approach to Metamodeling
Kriging has its roots in the field of geostatistics—a hybrid discipline of mining
engineering, geology, mathematics, and statistics (cf., Cressie, 1993)—and is useful for
predicting temporally and spatially correlated data. Kriging is named after D. G. Krige, a South
African mining engineer who, in the 1950s, developed empirical methods for determining true
ore grade distributions from distributions based on sampled ore grades (Matheron, 1963).
Several texts which describe kriging and its usefulness for predicting spatially correlated data
(see, e.g., Cressie, 1993) and mining (see, e.g., Journel and Huijbregts, 1978) exist. These
metamodels are extremely flexible due to the wide range of correlation functions which can be
chosen for building the metamodel. Furthermore, depending on the choice of the correlation
function, the metamodel can either “honor the data,” providing an exact interpolation of the data,
or “smooth the data,” providing an inexact interpolation (Cressie, 1993). In this work, as in
most applications of kriging, the concern is solely on spatial prediction; it is assumed that the
data are deterministic and contain no random error.
These days, kriging goes by a variety of names including DACE (Design and Analysis of
Computer Experiments) modeling—the title of the inaugural paper by Sacks, et al. (1989)—and
spatial correlation metamodeling (see, e.g., Barton, 1994). There are also several types of
kriging (cf., Cressie, 1993): ordinary kriging, universal kriging, lognormal kriging, and trans-Gaussian
kriging. In this dissertation, ordinary kriging is employed, following the work in, e.g.,
(Booker, et al., 1995; Koehler and Owen, 1996), and only the term kriging is used.
Unlike response surfaces, however, kriging models have found limited use in engineering
design applications since their introduction into the literature by Sacks, et al. (1989). Consequently,
only a handful of applications in engineering design can be found:
• Giunta (1997) and Giunta, et al. (1998) perform a preliminary investigation into the use
of kriging for the multidisciplinary design optimization of a High Speed Civil Transport
aircraft.
• Sasena (1998) compares and contrasts kriging and smoothing splines for approximating
noisy data.
• Schonlau, et al. (1997) use a global/local search algorithm based on kriging for shape
optimization of an automobile piston engine.
• Osio and Amon (1996) develop a multistage numerical optimization strategy based on
kriging which they demonstrate on the thermal design of embedded electronic package
which has 5 design variables.
• Booker (1996) and Booker, et al. (1996) use a kriging approach to study the
aeroelastic and dynamic response of a helicopter rotor during structural design.
Some researchers have also employed krigingbased strategies for numerical optimization (see,
e.g., Cox and John, 1995; Trosset and Torczon, 1997). A look at the mathematics of kriging is
offered next.
Mathematics of Kriging
Kriging postulates a combination of a polynomial model and departures of the following form:

y(x) = f(x) + Z(x)   [2.18]

where y(x) is the unknown function of interest, f(x) is a known polynomial function of x, and
Z(x) is the realization of a stochastic process with mean zero, variance σ², and nonzero
covariance. The f(x) term in Equation 2.18 is similar to the polynomial model in a response
surface, providing a “global” model of the design space. In many cases f(x) is simply taken to
be a constant term, β (cf., Koehler and Owen, 1996; Sacks, et al., 1989; Welch, et al., 1990).
Only kriging models with constant underlying global models are investigated in this work as well.
While f(x) “globally” approximates the design space, Z(x) creates “localized” deviations
so that the kriging model interpolates the ns sampled data points. The covariance matrix of Z(x)
is given by:

\text{Cov}[Z(x^i), Z(x^j)] = \sigma^2 R([R(x^i, x^j)])   [2.19]

where R is the correlation matrix, and R(x^i, x^j) is the correlation function between any two of the
ns sampled data points x^i and x^j. R is an ns × ns symmetric, positive definite matrix with ones
along the diagonal. The correlation function R(x^i, x^j) is specified by the user.
In this work, five different correlation functions are examined for use in the kriging
model; see Table 2.1. In all of the correlation functions listed in Table 2.1, ndv is the number of
design variables, θk are the unknown correlation parameters used to fit the model, and dk = |xk^i − xk^j|
is the distance between the kth components of sample points x^i and x^j. The
correlation functions of Equations 2.20 and 2.21 are from (Sacks, et al., 1989); the remaining correlation functions are taken from the literature.
These five correlation functions are chosen primarily because of the frequency with
which they appear in the literature; the Gaussian correlation function, Equation 2.20, is the most
popular one in use. Correlation functions with multiple parameters per dimension exist;
however, correlation functions with only one parameter per dimension are considered in this
dissertation to facilitate finding the maximum likelihood estimates (MLEs) or “best guess” of the
θk used to fit the model. As mentioned in Section 1.3.2, one of the contributions in this work is
to study the effects of these five different correlation functions on the accuracy of a kriging
Table 2.1 Summary of Correlation Functions
Once a correlation function has been selected, predicted estimates, ŷ(x), of the
response at untried values of x are given by:

\hat{y} = \hat{\beta} + r^T(x) R^{-1} (y - f\hat{\beta})   [2.25]

where y is the column vector of length ns (number of sample points) which contains the values of
the response at each sample point, and f is a column vector of length ns which is filled with ones
when f(x) in Equation 2.18 is taken as a constant. In Equation 2.25, r^T(x) is the correlation
vector of length ns between an untried x and the sampled data points {x^1, x^2, ..., x^ns} and is
given by:

r^T(x) = [R(x, x^1), R(x, x^2), \ldots, R(x, x^{n_s})]   [2.26]
Finally, the β̂ in Equation 2.25 is estimated using Equation 2.27:

\hat{\beta} = (f^T R^{-1} f)^{-1} f^T R^{-1} y   [2.27]

When f(x) is assumed to be a constant, then β̂ is a scalar which simplifies the calculation of
Equation 2.27. The estimate of the variance, σ̂², of the underlying global model (not the variance of
the observed data) is:

\hat{\sigma}^2 = \frac{(y - f\hat{\beta})^T R^{-1} (y - f\hat{\beta})}{n_s}   [2.28]
where f is again a column vector of ones because f(x) is assumed to be a constant. The
maximum likelihood estimates (i.e., “best guesses”) for the θk used to fit the model are found by
maximizing:

-\frac{[n_s \ln(\hat{\sigma}^2) + \ln|R|]}{2}   [2.29]
Both σ̂² and R are functions of θk. While any values for the θk create an interpolative
approximation model, the “best” kriging model is found by solving the k-dimensional
unconstrained nonlinear optimization problem given by Equation 2.29; this process is discussed
further in the next section. It is worth noting that in some cases using a single correlation
parameter gives sufficiently good results (see, e.g., Booker, et al., 1995; Osio and Amon, 1996;
Sacks, et al., 1989). In this work, however, a unique θ value for each dimension always is
considered based on past difficulties with scaling the design space to [0,1]^k during the model
fitting process. The algorithms used in this dissertation to build and predict with a kriging model
are described in a later section.
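A minimal sketch of the ordinary kriging calculations of Equations 2.25 through 2.28, using the Gaussian correlation function with a fixed θ rather than the MLE search of Equation 2.29; the one-dimensional sample data are illustrative, not taken from this dissertation.

```python
import numpy as np

def kriging_fit(X, y, theta):
    """Correlation matrix R (Gaussian correlation), beta-hat (Eq. 2.27),
    and sigma-hat^2 (Eq. 2.28) for ordinary kriging (constant f(x))."""
    d = X[:, None, :] - X[None, :, :]
    R = np.exp(-np.sum(theta * d ** 2, axis=2))
    Rinv = np.linalg.inv(R)
    f = np.ones(len(y))                              # constant global model
    beta = (f @ Rinv @ y) / (f @ Rinv @ f)           # Eq. 2.27 (scalar form)
    resid = y - beta * f
    sigma2 = (resid @ Rinv @ resid) / len(y)         # Eq. 2.28
    return R, Rinv, beta, sigma2

def kriging_predict(x, X, y, theta, Rinv, beta):
    """Predictor of Eq. 2.25: y-hat = beta + r^T(x) R^-1 (y - f*beta)."""
    r = np.exp(-np.sum(theta * (X - x) ** 2, axis=1))  # Eq. 2.26
    return beta + r @ Rinv @ (y - beta)

X = np.array([[0.0], [0.5], [1.0]])
y = np.array([1.0, 0.2, 0.9])
theta = np.array([2.0])   # fixed correlation parameter (no MLE search here)
R, Rinv, beta, sigma2 = kriging_fit(X, y, theta)
# The model "honors the data": prediction at a sample point returns y there
print(kriging_predict(X[1], X, y, theta, Rinv, beta))
```

In practice the θk would be chosen by maximizing Equation 2.29 with a nonlinear optimizer, and a Cholesky solve would replace the explicit inverse for numerical stability.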
Once the MLEs for each θk have been found, the final step is to validate the model.
Since a kriging model interpolates the data, residual plots and R² values—the usual model
assessments for response surfaces (cf., Myers and Montgomery, 1995)—are meaningless
because there are no residuals. Therefore, the model is validated using additional data points
if they can be afforded; in that case, the
maximum absolute error, average absolute error, and root mean square error (RMSE) for the
additional validation points can be calculated to assess model accuracy. These measures are
summarized in Table 2.2. In the table, nerror is the number of random test points used, then yi is
the actual value from the computer code/simulation, and yˆ i is the predicted value from the
approximation model.
Table 2.2 Error Measures for Kriging Metamodels

max. abs. error:   \max_{i} | y_i - \hat{y}_i |   [2.30]

avg. abs. error:   \frac{1}{n_{error}} \sum_{i=1}^{n_{error}} | y_i - \hat{y}_i |   [2.31]

RMSE:   \sqrt{ \frac{ \sum_{i=1}^{n_{error}} (y_i - \hat{y}_i)^2 }{ n_{error} } }   [2.32]
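The three error measures in Table 2.2 are straightforward to compute; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def validation_errors(y_true, y_pred):
    """Maximum absolute error, average absolute error, and RMSE
    (Equations 2.30-2.32) over a set of validation points."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    abs_err = np.abs(y_true - y_pred)
    return {
        'max_abs_error': abs_err.max(),
        'avg_abs_error': abs_err.mean(),
        'rmse': np.sqrt(np.mean((y_true - y_pred) ** 2)),
    }
```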
However, sometimes taking additional validation points is not possible due to the added expense; in such cases, a model assessment which requires no additional points is needed. One such approach is leave-one-out cross validation (Mitchell and Morris, 1992a). In this approach, each sample point used to fit the model is removed one at a time, the model is rebuilt without that sample point, and the difference between the rebuilt model's prediction and the actual value at the
sample point is computed for all of the sample points. The cross validation root mean square error (CVRMSE) is then computed using Equation 2.33:

\mathrm{cvrmse} = \sqrt{ \frac{ \sum_{i=1}^{n_s} (y_i - \hat{y}_i)^2 }{ n_s } }   [2.33]

where \hat{y}_i here denotes the prediction at the ith sample point from the model built without that point.
The MLEs for the \theta_k are not recomputed for each model; the initial \theta_k MLEs based on the full sample set are used. Mitchell and Morris (1992a) describe an approach which facilitates cross validation of a kriging model.
Before a kriging metamodel (or any metamodel for that matter) can be created, the design space must be sampled in order to obtain data to fit the model. Hence, an important step in any metamodeling approach is the selection of an appropriate sampling strategy, i.e., an experimental design by which the computer analysis or simulation code is queried. In the next section, experimental designs for sampling deterministic computer experiments are discussed.
Many researchers (see, e.g., Currin, et al., 1991; Sacks and Schiller, 1988) argue that classical experimental designs, such as the central composite and Box-Behnken designs, are not well-suited for sampling deterministic computer experiments. Sacks, et al. (1989) state that the “classical notions of experimental blocking, replication and randomization are irrelevant” when it comes to deterministic computer experiments that have no random error; hence, designs for deterministic computer experiments should “fill the space” as opposed to possessing such properties.
Booker (1996) aptly summarizes the difference between classical experimental designs and newer space filling designs. In the classical design and analysis of physical experiments,
random variation is accounted for by spreading the sample points out in the design space and by
taking multiple data points (replicates); see Figure 2.20a. In deterministic computer experiments, replication at a sample point is meaningless; therefore, the points should be chosen to fill the design space. One approach is to minimize the integrated mean square error over the design region (cf., Sacks, et al., 1989); the space filling design illustrated in Figure 2.20b is one such example.
Q3. Are space filling designs better suited for building approximations of deterministic computer analyses than classical experimental designs?

As stated in Section 1.3.1, Hypothesis 3 is that space filling designs are better suited for building approximations of deterministic computer analyses than classical experimental designs. In an effort to test this hypothesis, an investigation into the utility of several classical and space filling experimental designs is conducted in Chapter 5. Eleven different types of experimental designs are investigated in this dissertation: two classical experimental designs and nine space filling experimental designs. The different designs are described next, beginning with the classical designs.
[Figure 2.20 (a) Classical design with replicates; (b) space filling design without replicates]
Classical experimental designs are so named because they have been developed for physical experiments, which are plagued by variability and random error (see, e.g., Box and Draper, 1987; Myers, et al., 1989; Myers and Montgomery, 1995). Among these designs, the central composite and Box-Behnken designs are well known and easily generated; hence, they are employed in this work to serve as a basis for comparison against the sampling capability of space filling designs. A brief description of these two types of designs follows.
A central composite design (CCD) is among the most widely used experimental designs for fitting second-order response surfaces (Myers and Montgomery, 1995); an example consisting of factorial (cube) points, star points, and a center point is shown in Figure 2.21. Different CCDs are formed by varying the distance from the center of the design space to the star points; in this work, three variations are considered:

[Figure 2.21 Central Composite Design]

• ordinary central composite design (CCD) - star points are placed a distance of ±α (α > 1) from the center with the cube points placed at ±1 from the center,
• face centered central composite (CCF) design - star points are positioned on the faces of the cube, and
• inscribed central composite (CCI) design - star points are positioned at ±1/α from the center with the cube points placed at ±1.
In addition, combinations of the CCD and CCF, and of the CCI and CCF, are investigated. A Box-Behnken design is a three level design for fitting second-order response surfaces; such designs should not be used when accurate predictions at the extremes (i.e., the corners) of the design space are important (Myers and Montgomery, 1995). An example 13 point Box-Behnken design is illustrated in Figure 2.22.

[Figure 2.22 Box-Behnken Design]
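The three CCD variants differ only in where the star points sit relative to the cube points, so generating one in coded units takes only a few lines; this sketch is illustrative (the function name is hypothetical):

```python
import itertools
import numpy as np

def central_composite(k, alpha):
    """Central composite design for k factors in coded units: 2^k cube
    (factorial) points at +/-1, 2k star points at +/-alpha along each axis,
    and one center point. alpha > 1 gives the ordinary CCD and alpha = 1
    the face centered CCF; scaling the cube points instead gives the CCI."""
    cube = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    star = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])
    center = np.zeros((1, k))
    return np.vstack([cube, star, center])
```

For k = 3 this yields the 8 + 6 + 1 = 15 points pictured in Figure 2.21.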
Numerous space filling experimental designs have been developed in an effort to provide more
efficient and effective means for sampling deterministic computer experiments. For instance,
Koehler and Owen (1996) describe several Bayesian and Frequentist types of space filling
experimental designs, including maximin and minimax designs, maximum entropy designs,
integrated mean squared error (IMSE) designs, orthogonal arrays, Latin hypercubes, scrambled
nets and randomized grids. Latin hypercube designs were introduced in (McKay, et al., 1979)
for use with computer codes and compared to random sampling and stratified sampling.
Minimax and maximin designs were developed by Johnson, et al. (1990) specifically for use
with computer experiments. Shewry and Wynn (1987; 1988) and Currin, et al. (1991) use the
maximum entropy principle to develop designs for computer experiments. Similarly, Sacks et
al. (1989) discuss entropy designs in addition to IMSE designs and maximum mean squared
error designs for use with deterministic computer experiments. Finally, a review of several
Bayesian experimental designs for linear and nonlinear regression is given in (Chaloner and
Verdinelli, 1995).
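Of the designs just listed, the basic Latin hypercube is the simplest to generate: each dimension is stratified into n equal bins with exactly one point per bin. A sketch (the function name is illustrative):

```python
import numpy as np

def latin_hypercube(n, k, seed=None):
    """Random n-point Latin hypercube in [0,1]^k."""
    rng = np.random.default_rng(seed)
    H = np.empty((n, k))
    for j in range(k):
        # one point in each of the n bins, at a random location within the bin
        H[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return H
```

The maximin, minimax, orthogonal, and optimal variants discussed below all start from this stratification and impose additional structure on it.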
Comparisons of the different types of space filling experimental designs are few; often
the novel space filling design being described is compared against Latin hypercube designs and
random sampling (see, e.g., Kalagnanam and Diwekar, 1997; Park, 1994; Salagame and
Barton, 1997), but rarely is it compared against other space filling designs. An exception is the maximin Latin hypercube (Morris and Mitchell, 1995), which is compared against maximin
designs (Johnson, et al., 1990) and Latin hypercubes; the authors conclude by means of an
example that maximin Latin hypercube designs are better than either maximin or Latin
hypercube designs alone. In this dissertation, one of the contributions is to compare and
contrast a wide variety of space filling designs against themselves and classical experimental
designs. Toward this end, nine space filling experimental designs are investigated: Latin hypercubes, maximin Latin hypercubes, minimax Latin hypercubes, optimal Latin hypercubes, orthogonal arrays, orthogonal array-based Latin hypercubes, orthogonal Latin hypercubes, Hammersley point designs, and uniform designs. An overview of each of these designs follows.
An orthogonal array (OA) is a space filling experimental design intended for use with computer experiments. The orthogonal arrays used in this work are limited to q² runs, where q is a prime power; an example nine point orthogonal array is shown in Figure 2.24.

[Figure 2.24 Orthogonal Array Design]
An orthogonal array-based Latin hypercube combines the stratification of a Latin hypercube with the structure of an orthogonal array. An orthogonal Latin hypercube, in contrast, is constructed by purely algebraic means using the process described in (Ye, 1997); an example nine point orthogonal Latin hypercube is shown in Figure 2.26.

[Figure 2.26 Orthogonal Latin Hypercube]
The algorithm of (Morris and Mitchell, 1995) is used to construct these designs for varying sample sizes; example designs are illustrated in Figure 2.27 and Figure 2.28.
A uniform design is a design based strictly on a measure of the uniformity of the sample points throughout the design space.
In addition to the maximin Latin hypercube designs from (Morris and Mitchell, 1995), a
minimax Latin hypercube design is introduced in this dissertation. Only a brief description of this
unique design is given here; a detailed description of the design and a discussion of how it is generated are included in Appendix C. From an intuitive standpoint, because prediction with
kriging relies on the spatial correlation between data points, a design which minimizes the
maximum distance between the sample points and any point in the design space should yield an
accurate predictor. Such a design is referred to as a minimax design (Johnson, et al., 1990).
While the minimax criterion ensures good coverage of the design space by minimizing the
maximum distance between points, it does not ensure good stratification of the design space
(i.e., when the sample points are projected into 1dimension, many of the points may overlap
(cf., Johnson, et al., 1990)). Meanwhile, because a Latin hypercube ensures good stratification
of the design space, combining it with the minimax criterion provides a good compromise
between the two much as the maximin Latin hypercubes developed by Morris and Mitchell
(1995) does. Example 9, 11, and 14 point minimax Latin hypercube designs are shown in Figure 2.31. The specifics of the genetic algorithm used to generate these minimax Latin hypercube designs are given in Appendix C.
[Figure 2.31 Example Minimax Latin Hypercube Designs]
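The minimax criterion itself is easy to estimate numerically. The following sketch approximates the maximum distance from any point of the design space to its nearest design point using a dense random reference set; the names are illustrative, and this is only an estimator, not the exact criterion used by the genetic algorithm.

```python
import numpy as np

def minimax_distance(design, n_ref=2000, seed=0):
    """Approximate minimax distance of a design in [0,1]^k: the maximum,
    over reference points, of the distance to the nearest design point.
    Smaller values indicate better coverage of the design space."""
    design = np.asarray(design, dtype=float)
    rng = np.random.default_rng(seed)
    ref = rng.random((n_ref, design.shape[1]))
    d = np.linalg.norm(ref[:, None, :] - design[None, :, :], axis=2)
    return float(d.min(axis=1).max())
```

A search over Latin hypercube permutations that minimizes this quantity yields a minimax Latin hypercube in the spirit described above.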
In Chapter 5, the minimax Latin hypercube designs are compared against the other
classical and space filling experimental designs discussed in this section. A look ahead to that study is given in Section 3.4.
Through the review of the literature which is presented in this chapter, the necessary
elements for a method to model and design a scalable product platform for a product family
have been identified by elucidating the research questions (and hypotheses) introduced in
Section 1.3.1. In the next chapter, these constitutive elements are integrated to create the
Product Platform Concept Exploration Method (PPCEM), providing a Method which facilitates
the synthesis and Exploration of a common Product Platform Concept which can be scaled
into an appropriate family of products. The relationship between the individual sections in this chapter and the PPCEM developed in the next chapter is illustrated in Figure 2.32. In particular, the market segmentation grid is revisited in Section 3.1.1 as it applies to the PPCEM. The concept of a “conceptual noise factor” is formalized into a scale factor in Section 3.1.2, which is fundamental to the utilization of the PPCEM. Metamodeling techniques within the PPCEM are discussed in Section 3.1.3. Aggregation of the individual product specifications into an appropriate compromise DSP formulation for the product family is described in Section 3.1.4, and development of the product platform portfolio for the product family is explained in Section 3.1.5.
In addition to the presentation of the PPCEM, the research hypotheses are revisited in
Section 3.2 in the next chapter. Supporting posits are stated for each hypothesis, and the
verification strategy for testing the hypotheses is elaborated in Section 3.3. The discussion in
Section 3.3 sets the stage for the example problems which are presented in Chapters 4 through 7.

CHAPTER 3

THE PRODUCT PLATFORM CONCEPT EXPLORATION METHOD
In this chapter, the elements of the previous chapters are synthesized to meet the
principal objective in this dissertation, namely, to develop the Product Platform Concept
Exploration Method (PPCEM) for designing a common scalable product platform for a product
family. An overview of the PPCEM and its associated steps and tools is given in Section 3.1
with each step of the PPCEM and its constituent elements elaborated in Sections 3.1.1 through
3.1.5; the resulting infrastructure of the PPCEM is presented in Section 3.1.6. In Section 3.2,
the research hypotheses are revisited from Section 1.3.1 and supporting posits are identified.
Section 3.3 follows with an outline of the strategy for verification and testing of the research
hypotheses. Section 3.4 concludes the chapter with a recap of what has been presented and a
look ahead to the metamodeling studies in Chapters 4 and 5 and the example problems in
Chapters 6 and 7 which are used to test the research hypotheses and demonstrate the usefulness of the PPCEM.
3.1 OVERVIEW OF THE PPCEM AND RESEARCH HYPOTHESES
As stated in Section 1.3.2, the principal contribution in this dissertation is the Product
Platform Concept Exploration Method (PPCEM) for designing a common scalable product
platform for a product family. As the name implies, the PPCEM is a Method which facilitates
the synthesis and Exploration of a common Product Platform Concept which can be scaled
into an appropriate family of products. The steps and associated tools (with relevant sections noted) are illustrated in Figure 3.1.

Figure 3.1 Steps and Tools of the PPCEM:
• Step 1 - Create Market Segmentation Grid (tool: market segmentation grid, § 2.2)
• Step 2 - Classify Factors and Ranges (tool: robust design principles, § 2.3)
• Step 3 - Build and Validate Metamodels (tool: metamodeling techniques, § 2.4)
• Step 4 - Aggregate Product Platform Specifications (tool: compromise Decision Support Problem, § 1.2)
• Step 5 - Develop Product Platform Portfolio (tool: compromise Decision Support Problem, § 1.2)
There are five steps to the PPCEM, as illustrated in Figure 3.1. The inputs to the PPCEM are the overall design requirements, and the output of the PPCEM is the product platform portfolio, which is described in Section 3.1.5. The tools utilized in each step of the
PPCEM are shown on the right hand side of Figure 3.1; their involvement in the various steps of
the PPCEM is elaborated further in Sections 3.1.1 through 3.1.5 wherein the implementation of
each step of the PPCEM is described. These steps prescribe how to formulate the problem
and describe how to solve it; the actual implementation of each step is liable to vary from
problem to problem.
3.1.1 Step 1 - Create the Market Segmentation Grid

Given the overall design requirements, Step 1 in the PPCEM is to create the market segmentation grid as shown in Figure 3.2. As discussed in Section 2.2.1, the market segmentation grid provides a link between management, marketing, and engineering design to help identify and map which type of leveraging can be used to meet the overall design requirements and realize a suitable product platform and product family. In the PPCEM, the market segmentation grid serves as an attention directing tool to help identify potential leveraging opportunities for the platform design. Examples of this step are given in Sections 6.2 and 7.1.3.
[Figure 3.2 Step 1: overall design requirements are mapped onto the market segmentation grid to identify vertical, horizontal, or beachhead leveraging for the platform]
3.1.2 Step 2 - Classify Factors and Ranges

Once the market segmentation grid has been created, Step 2 of the PPCEM is to classify factors as illustrated in Figure 3.3. Factors are classified in the following manner:
[Figure 3.3 Step 2: factors are classified into control, noise, and scale factors from the overall design requirements and the market segmentation grid]
• Responses are performance parameters of the system; in the problem formulation, they
may be constraints or goals or both and are identified from the overall design
requirements and the market segmentation grid.
• Control factors are variables that can be freely specified by a designer; settings of the
control factors are chosen to minimize the effects of variations in the system while
achieving desired performance targets and meeting the necessary constraints. Signal
factors also are lumped within control factors because it is often difficult to know, a priori, which design variables can be used to minimize the sensitivity of the design to noise variations (control factors) and which have no influence on the robustness of the system (signal factors).
• Noise factors are parameters over which a designer has no control or which are too
difficult or expensive to control.
• Scale factor is a factor around which a product platform is leveraged either through
vertical scaling, horizontal scaling, or a combination of the two.
Appropriate ranges for the control and noise factors are identified during this step, and
constraints and goal targets for the responses are also identified.
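The factor classification in Step 2 can be captured in a small data structure. The following is a sketch with purely illustrative names; the motor-style factors are hypothetical and are not taken from the examples in this dissertation.

```python
from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    lower: float   # lower bound of the identified range
    upper: float   # upper bound of the identified range
    kind: str      # 'control', 'noise', or 'scale'

def by_kind(factors, kind):
    """Return the factors of a given classification."""
    return [f for f in factors if f.kind == kind]

# hypothetical classification for an electric-motor-like problem
factors = [
    Factor('outer_radius', 0.01, 0.05, 'control'),
    Factor('supply_voltage', 110.0, 120.0, 'noise'),
    Factor('stack_length', 0.02, 0.06, 'scale'),
]
```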
The relationship between different leveraging capabilities and types of scale factors
considered in this dissertation is illustrated in Figure 3.4. As discussed in Section 2.2.1, three
types of leveraging can be mapped using the market segmentation grid: (1) vertical leveraging,
(2) horizontal leveraging, and (3) a beachhead approach which is a combination of vertical and horizontal leveraging.
As shown in Figure 3.4, each type of scale factor corresponds to one of the three leveraging approaches.

[Figure 3.4 Leveraging and scale factors: (a) vertical leveraging, in which a mid-range platform is scaled up and down across low cost/low performance and high cost/high performance segments (e.g., the length of a motor to provide varying torque, or the number of compressors in an engine); (b) horizontal leveraging across Segments A, B, and C; and (c) beachhead leveraging, in which scale factors are a combination of parametric, conceptual, and/or configurational scaling factors]
If known, an appropriate range—upper and lower limit—is identified for each scale factor
during this step of the PPCEM; otherwise, finding this range becomes part of the design
process. Examples of Step 2 are offered in Sections 6.2 and 7.3. Once the responses, control, noise, and scale factors and corresponding constraints/targets and ranges have been identified, Step 3 of the PPCEM commences.
3.1.3 Step 3  Build and Validate Metamodels
Step 3 in the PPCEM is to build and validate metamodels relating the control, noise,
and scale factors to the responses using the elements of the PPCEM shown in Figure 3.5. The metamodels replace computationally expensive analysis or simulation routines with approximations which are inexpensive to run. Because robust design principles are being used, these
metamodels are either functions of control, noise, and scale factors as discussed in Sections
2.3.1 and 2.4, or approximate the mean and standard deviation of each response for known
variations in the noise and scale factors. If the analytic equations or simulation routine are not
sufficiently expensive to warrant metamodeling, this step can be skipped provided that the mean
and standard deviation of each response (as a result of variation in the scale factor and any
relevant noise factors) can be computed easily. The universal motor example in Chapter 6
forgoes metamodel construction because the analyses permit the mean and standard deviation
of each response to be easily computed; however, such is not the case in Chapter 7, the design of a family of General Aviation aircraft, wherein metamodels are constructed to facilitate the implementation of robust design and search for a good aircraft platform.
[Figure 3.5 Step 3: the analysis or simulation routine (D) is sampled to build, validate, and use metamodels (E), with model choice (E1) between response surface and kriging models]
The steps for building and validating the metamodels follow the traditional metamodeling approach: an experimental design is selected, and the computer analysis or simulation program used to model the system is queried to obtain sample data.
The experimental design is used to sample the design space identified by the ranges (i.e.,
bounds) on the control, noise, and scale factors. The resulting sample data then is used to build
a metamodel (e.g., a kriging model, Section 2.4.2) for each response. Model accuracy then is
assessed through additional validation points or other procedures appropriate to the chosen model type; examples of such an approach can be found in, e.g., (Chen, et al., 1997; Koch, et al., 1997; Myers and
Montgomery, 1995). The difficulty, then, lies in defining an appropriate design space. In this
dissertation, the design space for the General Aviation aircraft example problem is known,
making identification of a good design space appear trivial. In reality it is not, and there is often
great difficulty finding an appropriately good design space. Identifying and quantifying a “good”
design space is not addressed in this dissertation; it is a possibility for future work (see Section
8.3).
3.1.4 Step 4 - Aggregate Product Platform Specifications

Once the necessary metamodels have been built and validated, Step 4 in the PPCEM is to formulate a compromise DSP to model the necessary constraints and goals for the product family and
product platform based on the overall design requirements, the market segmentation developed
in Step 1, and the factor classification and ranges developed in Step 2, see Figure 3.6. It is
imperative that product constraints or goals given in the overall design requirements that are not
captured within the desired platform leveraging strategy be included in the compromise DSP.
[Figure 3.6 Step 4: the overall design requirements, the market segmentation grid (A), and the classified control factors (x), scale factors (s), and responses (y) of the product platform are aggregated into the compromise DSP (F): Find the control variables; Satisfy the constraints, the goals (“mean on target” and “minimize deviation,” or Cdk values), and the bounds; Minimize the deviation function]
Two approaches for aggregating the product platform specifications are demonstrated
in this dissertation.
1. Separate goals for “bringing the mean on target” and “minimizing the variation” are
created (see Section 2.3.2) for the product family. This follows the implementation of
robust design principles which is traditionally used in the RCEM, except they are being
applied to a product family as opposed to a single product. The procedure is as
follows:
a. identify targets from market segmentation grid and overall design requirements for
each derivative product;
b. compute target means for the platform by averaging the individual targets;
c. compute standard deviations for the platform from the individual targets by dividing the range of each target by six, assuming a normal distribution with ±3σ variations, or by √12, assuming a uniform distribution; and
d. create separate goals for “bringing the mean on target” and “minimizing the
variation” as necessary.
2. Design capability indices (see Section 2.3.2) are used to assess the capability of a family of designs to satisfy a ranged set of design requirements. The procedure is as follows:
a. identify upper and lower requirement limits (URL and LRL, respectively) from the
market segmentation grid and overall design requirements for each derivative
product;
b. compute the mean and standard deviation as the average and standard deviation of the individual instantiations of the product family for a given set of design variables; and
c. compute the resulting design capability index, Cdk, from these values and the URL and LRL.
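The target aggregation of the first approach (steps a through d above) reduces to a few lines; a minimal sketch with an illustrative function name:

```python
import math

def aggregate_targets(targets, distribution='normal'):
    """Aggregate individual product targets into a platform target mean and
    standard deviation. The range of the targets is divided by 6 for a
    normal (+/-3 sigma) assumption, or by sqrt(12) for a uniform one."""
    mean = sum(targets) / len(targets)
    spread = max(targets) - min(targets)
    divisor = 6.0 if distribution == 'normal' else math.sqrt(12.0)
    return mean, spread / divisor
```

The resulting mean and standard deviation become the “mean on target” and “minimize deviation” goals in the compromise DSP.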
The first approach is utilized in the universal motor example in Chapter 6 (see Section 6.3 in
particular for more details on its implementation). Meanwhile, design capability indices are
employed in the General Aviation aircraft example in Chapter 7 (see Section 7.4).
The compromise DSP is used to determine the values for the control (design) variables
which best satisfy the product family goals (“bringing the mean on target” and “minimizing the
variation” in the first; making Cdk = 1 in the second) while satisfying the necessary constraints.
Constraints can either be treated as worst-case and evaluated on an individual basis, or aggregated in
a similar manner as the goals, constraining the mean and deviation of the responses or the
appropriate Cdk. The compromise DSP is exercised in Step 5 of the PPCEM to obtain the product platform portfolio.

3.1.5 Step 5 - Develop the Product Platform Portfolio

Step 5 of the PPCEM is to solve the compromise DSP using the aggregate product
platform specifications to develop the product platform portfolio. This step makes use of the
metamodels created in Step 3 in conjunction with the compromise DSP and the aggregate
product specifications formulated in Step 4; design scenarios for exercising the compromise
DSP are abstracted from the overall design requirements, see Figure 3.7. The resulting “pool” of solutions constitutes the product platform portfolio. This portfolio of solutions can provide a wealth of information about the appropriate
settings for the design variables for the product platform based on different design scenarios or
robustness considerations; hence, the solution portfolio is called the product platform
portfolio.
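As a sketch of how Step 5 might be exercised numerically, the following minimizes a “mean on target” deviation for a single response subject to bounds. The response function, target, and starting point are purely illustrative stand-ins, not the compromise DSP formulation or metamodels used in this dissertation.

```python
from scipy.optimize import minimize

def response(x):
    # hypothetical stand-in metamodel for one response of the platform
    return x[0] ** 2 + x[1]

TARGET = 2.0  # hypothetical "mean on target" goal

def deviation(x):
    # squared deviation plays the role of the deviation function here
    return (response(x) - TARGET) ** 2

result = minimize(deviation, x0=[0.5, 0.5], bounds=[(0.0, 2.0), (0.0, 2.0)])
```

Repeating such a solve under different design scenarios (different goal weightings, targets, or bounds) yields the pool of solutions that forms the product platform portfolio.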
[Figure 3.7 Step 5: the compromise DSP (F) is exercised, using the metamodels (E) as needed, to transform the overall design requirements into the product platform portfolio]
The concept of a solution portfolio is not new to this research; it is simply a more
appropriate name for what has been previously referred to as a ranged set of specifications
(see, e.g., Lewis, et al., 1994; Simpson, et al., 1996; Simpson, et al., 1997a; Smith and
Mistree, 1994). The objective when using the PPCEM is to generate a variety of options
for product platforms; it is not necessarily used to evaluate these options or select one
from them. It facilitates generating these options with the end result being the product
platform portfolio, namely, the “pool” of solutions (i.e., design variable settings) which should
be maintained in order for the product platform to have sufficient flexibility to meet the desired
design scenarios in the event that one scenario is preferred to the next.
In addition to developing the product platform portfolio for the product family, product
variety tradeoff studies also can be performed by making use of two measures, the Noncommonality Index (NCI) and the Performance Deviation Index (PDI), which are described
as follows:
Noncommonality Index (NCI): NCI is used to assess the amount of variation between
parameter settings of each product within a product family; the smaller the variation, the
smaller NCI, and the more common the products within the product family. Computing
NCI is perhaps best illustrated through example; consider the 3 products shown in
Figure 3.8. Assume that each product is described by three design variables: x1, x2, and
x3 (if these three hypothetical products were electric motors, then x1, x2, and x3 might be
the outer radius of the motor, the length of the motor, and the number of windings in the
motor). First, the dissimilarity of each design variable settings for each product within
the family is computed as follows:
1. Compute the mean, μj, of each of the xj within the product family and the absolute value of the difference between μj and xi,j for each of the i products.
2. Normalize each difference by the range of that particular design variable: [upper
bound (ubj)  lower bound (lbj)]; this measures the relative variation in the values of
the design variables to the total range for that design variable.
3. Compute the average of the resulting normalized differences; this value is denoted DIj in the figure and is the dissimilarity of the settings of xj for the group of products.
The scale factor around which the product family is derived is not included in this computation. NCI is taken as a weighted sum of the individual DIj, where the weights reflect the relative difficulty or cost associated with allowing each parameter to vary.
For an electric motor for instance, it may be easier or cheaper to allow the number of
windings (x3) to vary between different motor models but not so to allow the outer
radius to vary (x1). In this case, w1 would be much larger than w3 to reflect this within
the product family.
Figure 3.8 Example Computation of the Noncommonality Index

The dissimilarity index for design variable j is

DI_j = \frac{1}{n} \sum_{i=1}^{n} \frac{|\mu_j - x_{i,j}|}{ub_j - lb_j}

where j = 1, ..., # design variables and i = 1, ..., n products in the family. In Figure 3.8, the settings of x1 for the three products are {2.5, 2.5, 2.8} on the range [2, 3], so that μ1 = 2.6 and

DI_1 = \frac{1}{3} \left( \frac{|2.6 - 2.5| + |2.6 - 2.5| + |2.6 - 2.8|}{3 - 2} \right) = 0.133

Similarly, DI_2 = 0.0733, and the settings of x3 are {13, 18, 35} on the range [0, 100], so that μ3 = 22 and

DI_3 = \frac{1}{3} \left( \frac{|22 - 13| + |22 - 18| + |22 - 35|}{100 - 0} \right) = 0.0867

The Noncommonality Index is then the weighted sum:

NCI = \sum_{j=1}^{\#d.v.} w_j DI_j = 0.55(0.133) + 0.3(0.0733) + 0.15(0.0867) = 0.108
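The NCI computation in Figure 3.8 can be reproduced directly; DI2 is taken as given, since the x2 settings are not listed in the figure.

```python
import numpy as np

def dissimilarity(values, lb, ub):
    """DI_j: average deviation of a variable's settings from their mean,
    normalized by the variable's range."""
    v = np.asarray(values, dtype=float)
    return float(np.mean(np.abs(v - v.mean())) / (ub - lb))

di1 = dissimilarity([2.5, 2.5, 2.8], 2.0, 3.0)        # 0.133
di2 = 0.0733                                           # given in Figure 3.8
di3 = dissimilarity([13.0, 18.0, 35.0], 0.0, 100.0)    # 0.0867
nci = float(np.dot([0.55, 0.30, 0.15], [di1, di2, di3]))
```

This gives NCI ≈ 0.108, matching the figure.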
Performance Deviation Index: Assuming that a market niche is defined by a set of goal
targets and constraints and that the necessary constraints are met for each individual
derivative product, then the deviation variables in the compromise DSP are a direct
measure of how well each derivative product meets its targets. The Performance
Deviation Index (PDI) for a product family thus is taken as a linear combination
(possibly weighted) of the deviation variables for each derivative product within the
product family as given by Equation 3.1:
\mathrm{PDI} = \sum_{i=1}^{n} w_i Z_i   [3.1]
where i = {1, ..., # products in family}, and Zi is the corresponding deviation function
for each derivative product within the product family. Weightings may be used to bias
the measure for certain products within the family.
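Equation 3.1 is simply a weighted sum; a minimal sketch (the function name is illustrative):

```python
def pdi(deviations, weights=None):
    """Performance Deviation Index (Equation 3.1): weighted sum of the
    deviation function value Z_i of each derivative product."""
    if weights is None:
        weights = [1.0] * len(deviations)
    return sum(w * z for w, z in zip(weights, deviations))
```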
Example computations of the NCI and PDI for a family of products are demonstrated in the General Aviation aircraft example (see Section 7.6.2), where they are explained in more detail.
NCI and PDI are, in and of themselves, ad hoc measures for a product family, similar to
the commonality indices and platform efficiency and effectiveness measures discussed in Section
2.2.2. However, having these measures for noncommonality and performance deviation for a
family of products allows product variety tradeoff studies to be performed, see Figure 3.9 and
Figure 3.10.
[Figure 3.9 Regions of the NCI vs. PDI plot: the worst designs have high NCI and high PDI, and the best designs have low NCI and low PDI; designs based on a common product platform tend toward low NCI, while individually optimized designs tend toward low PDI]
By plotting NCI and PDI for a family of designs as illustrated in Figure 3.9, regions of
good and bad product family designs can be identified; the worst designs have high NCI and
PDI, while the best have low NCI and PDI. Individually optimized designs within a product
family, where commonality is not important, are liable to have a low PDI but a high NCI for the
resulting group of products. On the other hand, a product family based on a common product
platform is liable to have a low NCI; ideally, a low PDI is desirable but may be difficult to
achieve depending on the amount of commonality desired between products within the resulting
product family.
[Figure 3.10 Example NCI vs. PDI tradeoff curve comparing PPCEM designs against benchmark designs, where ΔNCIi and ΔPDIi denote the change in NCI and PDI obtained by allowing i variables to vary between each design]
NCI vs. PDI curves of the form shown in Figure 3.10 can be generated by trading off
product commonality for product performance and vice versa. By designing each product
individually, benchmark designs can be created which have a low PDI. Meanwhile, the
platform designs obtained by implementing the PPCEM have a low NCI. What is of interest to
study is the resulting ΔPDIlost and ΔNCIgain, which assess the tradeoff between commonality and performance. If ΔPDIlost is too large, the noncommonality of the designs can be increased; traversing the front of the envelope yields the largest reduction in PDI (ΔPDIi) for the smallest increase in NCI (ΔNCIi). This curve is generated in the General Aviation aircraft example (Section 7.6.3).
3.1.6 Infrastructure of the PPCEM

By assembling the various elements of the PPCEM, the complete infrastructure of the PPCEM is obtained, as shown in Figure 3.11. As illustrated in the figure, the PPCEM consists of “Processors” A-F which are employed as the overall design specifications are transformed into the product platform portfolio. As described in the previous sections, each step employs one or
more of these processors, as follows:

[Figure 3.11 Infrastructure of the PPCEM: Processors A (market segmentation grid), D (analysis or simulation routine), E (metamodeling, with model choice E1: response surface or kriging), and F (the compromise DSP) transform the overall design requirements into the product platform portfolio]
Step 1 - Create Market Segmentation Grid relies on human judgment and
engineering “knowhow” as Processor A in the PPCEM to map the overall
design requirements into an appropriate market segmentation grid and identify
leveraging opportunities.
Step 2 - Classify Factors and Ranges relies on human judgment and Processor
B in the PPCEM to map the overall design requirements and market
segmentation grid into appropriate control, noise, and scale factors and identify
corresponding ranges for each. The responses being investigated also need to
be identified in this step of the process.
Step 3  Build and Validate Metamodels relies on human judgment and Processors
C, D, and E for construction and validation of the necessary metamodels.
Step 4  Aggregate Product Platform Specifications relies on human judgment to
formulate a compromise DSP, Processor F, using information from Processors
A and B and the overall design requirements.
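The compromise DSP of Step 4 seeks values of the control variables that satisfy the constraints and bounds while minimizing a deviation function over the goals. The following is a minimal sketch of that mechanic only; the function names are hypothetical, and an Archimedean (weighted-sum) deviation function is assumed rather than whatever form a particular formulation may use:

```python
def deviations(achieved, target):
    """Split the gap between an achieved goal value and its target into
    underachievement (d_minus) and overachievement (d_plus); at most one
    of the two is nonzero."""
    gap = target - achieved
    return max(gap, 0.0), max(-gap, 0.0)

def deviation_function(goals, weights):
    """Archimedean (weighted-sum) deviation function over a list of
    (achieved, target) goal pairs; smaller is better."""
    total = 0.0
    for (achieved, target), w in zip(goals, weights):
        d_minus, d_plus = deviations(achieved, target)
        total += w * (d_minus + d_plus)
    return total
```

Minimizing this quantity subject to the constraints and bounds is what drives goals such as “bringing the mean on target” toward their targets.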
Referring back to Figure 1.5, the structure of the PPCEM is very similar to the RCEM;
this is not a coincidence. In essence, the PPCEM is derived from the RCEM through a series
of modifications based on the research questions and hypotheses presented in Section 1.3.1.
As stated in Section 1.3, there are three main hypotheses in this dissertation:
Hypothesis 1: The Product Platform Concept Exploration Method provides an efficient and effective method for designing common scalable product platforms for a product family.
Hypothesis 2: Kriging is a viable alternative for building metamodels of deterministic
computer analyses.
Hypothesis 3: Space filling experimental designs are better suited for building
metamodels of deterministic computer analyses than classical experimental designs.
Hypotheses 2 and 3 relate to the efficiency of the PPCEM but have ramifications beyond the PPCEM itself. To facilitate
testing and verification, Hypothesis 1 is decomposed into three sub-hypotheses, which are as follows:
Sub-Hypothesis 1.1: The market segmentation grid can be utilized to help identify scale factors for a product platform.
Sub-Hypothesis 1.2: Robust design principles can be used to facilitate the design of a product platform which can be scaled into a product family.
Sub-Hypothesis 1.3: Individual targets for product variants can be aggregated into an
appropriate mean and variance and used in conjunction with robust design principles to design a common product platform.
It is upon these hypotheses that the PPCEM and the work in this dissertation are
grounded. The relationship between these hypotheses and the modifications to RCEM which
form the PPCEM are presented in the next section. This is followed in Section 3.2.2 with a discussion of the posits which support these hypotheses.
As the research hypotheses are addressed, modifications to the RCEM are made and
the PPCEM thus is realized. The relationships between the research hypotheses, designated as
H1, H1.1, ..., H3, and the specific elements of the RCEM are illustrated in Figure 3.12.
Addressing the first hypothesis provides an interface with marketing, enabling the identification
of scalable product platforms; this is accomplished through the addition of a new module to the
RCEM, namely, the market segmentation grid. Scale factors (Sub-Hypothesis 1.1) are then
identified through the use of the market segmentation grid, around which a scalable product platform can be developed.
Figure 3.12 Relationship Between the Research Hypotheses (H1, H1.1-H1.3, H2, H3) and the Elements of the RCEM: the market segmentation grid (H1, H1.1) interfaces with the compromise DSP of Processor F for the robust design of scalable product platforms (H1.2, H1.3), while kriging (H2) and space filling designs of experiments (H3) augment the metamodeling and point generation capabilities
Sub-Hypothesis 1.2 relates to designing the actual product platform; robust design
principles are abstracted for use in product family design by aggregating product family targets
and constraints into appropriate means and variances. The resulting formulation allows robust
design principles, already embodied in the RCEM in the form of separate goals for “bringing the
mean on target” and “minimizing the variation,” to be utilized when solving the compromise
DSP. Notice that the RCEM goal for “maximizing the independence” is not included in the compromise
DSP formulation because the intent is to rely solely on robust design principles to design a
suitable product platform which can be scaled into a product family. As the research
hypotheses are addressed and the RCEM is correspondingly modified, the PPCEM is realized, as illustrated in
Figure 3.12. The intent is not to replace the current response surface and design of experiments
capabilities of the RCEM; rather, it is to augment the current capabilities with kriging and novel
space filling experimental designs. The specific posits which support these claims and the research hypotheses are presented next.
There are several posits which support the research hypotheses which have been
revealed during the literature review in Chapter 2 and in the discussion in Section 1.1. Six
posits support Hypothesis 1 and Sub-Hypotheses 1.1-1.3; they are the following:
Posit 1.1: The RCEM provides an efficient and effective means for developing robust top-level design specifications for complex systems.
Posit 1.3: Robust design principles can be used to minimize the sensitivity of a design to variations in noise factors and/or design parameters.
Posit 1.4: Robust design principles can be used effectively in the early stages of the
design process by modeling the response itself with separate goals for “bringing the mean on target” and “minimizing the variation.”
Posit 1.5: The compromise DSP is capable of effecting robust design solutions through
separate goals for “bringing the mean on target” and “minimizing variation” of each response in the presence of noise.
Posit 1.6: The market segmentation grid can be used to identify opportunities for platform leveraging.
• Posit 1.1 is substantiated in (Chen, 1995) by explicitly testing and verifying the
efficiency and effectiveness of the RCEM for developing robust toplevel design
specifications for complex systems design.
• Posit 1.3, Posit 1.4, and Posit 1.5 are substantiated by the work in (Chen, et al.,
1996b); Chen and her coauthors describe a general robust design procedure which can
minimize the sensitivity of a design to variations in noise factors and/or design
parameters (Posit 1.3) by having separate goals for “bringing the mean on target” and
“minimizing the variation” (Posit 1.4) of each response in the compromise DSP (Posit
1.5). These posits are further substantiated in (Chen, 1995) as part of the development
of the RCEM.
• Posit 1.6 is substantiated by the discussion in Section 2.2.1, i.e., the market
segmentation grid can be used as an attention directing tool to identify leveraging
opportunities in product platform design (cf., Meyer, 1997; Meyer and Lehnerd, 1997);
identifying these leveraging opportunities provided the initial impetus for developing the
market segmentation grid.
These six posits help to support Hypothesis 1 and Sub-Hypotheses 1.1-1.3. The strategy for
testing and verifying these hypotheses is outlined in Section 3.3. Before the verification strategy is presented, the posits supporting Hypotheses 2 and 3 are given.
Posit 2.1: Building an interpolative kriging model is not predicated on the assumption of random error in the data.
Posit 2.2: Kriging provides very flexible modeling capabilities based on the wide
variety of spatial correlation functions which can be selected to model the data.
• Posit 2.1 is more fact than assumption; it is substantiated by, e.g., Sacks, et al. (1989);
Koehler and Owen (1996); and Cressie (1993).
• Posit 2.2 is substantiated by many researchers, most notably Sacks, et al. (1989);
Welch, et al. (1992); Cressie (1993); and Barton (1992; 1994).
Posits 2.1 and 2.2 both help to verify Hypothesis 2; the strategy for testing Hypothesis 2 is outlined in Section 3.3.2.
• Posit 3.1 is taken from Sacks, et al. (1989) who state that the “classical notions of
experimental blocking, replication, and randomization are irrelevant” for deterministic
computer experiments which contain no random error. Moreover, any experimental
design text (see, e.g., Montgomery, 1991) can verify that replication, blockability, and
rotatability are developed explicitly to handle and account for random (measurement)
error in a physical experiment for which classical experimental designs have been
developed.
Since kriging (using an underlying constant model) is being advocated in this dissertation, a second posit in support of Hypothesis 3 follows.
Posit 3.2: Since kriging models (with an underlying constant model) rely on the spatial
correlation between data, confounding and aliasing of main effects and two-factor interactions are not relevant concerns.
• Posit 3.2 is substantiated by Sacks, et al. (1989); Currin, et al. (1991); Welch, et al.
(1990); and Barton (1992; 1994). In physical experimentation, great care is taken to
ensure that aliasing and confounding of main effects and two-factor interactions do not
occur to ensure accurate estimation of coefficients of the polynomial response surface
model (see, e.g., Montgomery, 1991).
The experimental procedure for testing Hypothesis 3 is introduced in the next section along with
the specific strategy for verification and testing of all of the other hypotheses.
The question of whether testing the proposed hypotheses really answers the research
questions is a difficult one, and it raises the issues of verification and validation as discussed by
Peplinski (1997). According to the Concise Oxford English Dictionary (1982), to validate is to
make valid, to ratify or confirm. The root, valid, is then defined as:
• (law) sound and sufficient, executed with proper formalities (valid contract);
With respect to engineering design research, the intent of the validation process is to show the
research and its products to be sound and well grounded on principles of evidence. In this context,
“verify” is used to establish the correspondence of actual facts or
details with those proposed or guessed at, while “validate” is used in the context of establishing
validity by authoritative affirmation or by factual proof. The boundary between verification and
validation is thus shifting and often open to interpretation; in many cases the two words are used
interchangeably.
In this research, definitions for “verification” and “validation” are applied that, while not
inconsistent with the general uses above, are more specific and tailored for efforts in engineering
design research. In practice, the verification and validation of design methods is much more
than a debugging process. Three primary phases can be identified: firstly, problem justification;
secondly, completeness and consistency checks of the methodology; and thirdly, validation of
performance. (This classification is adapted from Ignizio (1990).) Verification then refers to the second phase of the process and is focused
primarily on internal consistency and completeness, while validation as the third phase of the
process is focused on consistency with external evidence, ideally through testing the design
method on actual case studies. This validation of performance is perhaps the area most open to
If what is to be validated is a closed form mathematical expression or algorithm, it can
be proven, or validated, in a traditional and formal mathematical sense. For example, the case
of showing a solution vector, x, belongs to the set of feasible solutions for a given mathematical
model is a closed problem. Alternatively, if the problem is open, if the subject is dealing with
some “heuristic,” nonprecise scheme, the issue of validation becomes one of “correctness
beyond reasonable doubt.” The validation of design methods falls into this category. In
this case it is achieved ultimately by results and usefulness and through a convincing
demonstration to (and an acceptance and ratification by) one’s peers in the field. An analogy
with mathematics and the concept of “necessary” and “sufficient” conditions can be drawn with
respect to the validation of heuristics. Heuristics are aimed toward satisfying the necessary
conditions only; it is not possible to develop an absolute proof for an open problem by
definition.
As anticipated, the operations research literature provides some useful insight into the validation of heuristics. On
problem solving by heuristic programming, Lin (1975) makes the following remarks:
We therefore define a valid heuristic algorithm (to solve a given problem) as any
procedure which will produce a feasible solution acceptable to the design engineer,
within limits of computing time, and consider the problem solved if we can construct
a valid heuristic procedure to solve it. We see that in the domain where a heuristic
algorithm operates, there are elements of technique, experimentation, judgment and
persuasion, as well as compromise.
Specific heuristic programs are justified, not because they attain an analytically
verifiable optimum solution, but rather because experimentation has proven that they
are useful in practice.
In summary, while noting that judgment is subjective and based on faith, the validation of a
heuristic, and therefore the validation of design methods, can be established if (Smith, 1992):
• the time and resources consumed are within reasonable limits, and
It is against these three issues that a verification and validation strategy is developed.
Meanwhile, verification and testing of the hypotheses has already begun by stating and
substantiating posits in support of each hypothesis. What is tested in the remainder of the
dissertation is the “intellectual leap of faith” required to jump from the posits to the hypotheses.
The relationships between the next four chapters and the individual research hypotheses are summarized in Table 3.1.
The relationships listed in Table 3.1 are elaborated further in the next two sections. The
strategy for testing Hypothesis 1 and the related sub-hypotheses is outlined in Section 3.3.1.
platform for a product family. To verify this, two example problems are utilized to demonstrate
the effectiveness of the PPCEM: the design of a family of universal electric motors (Chapter 6)
and the design of a family of General Aviation aircraft (Chapter 7). These two examples have
– product family aggregated around mean and standard deviation of stack length, and
separate goals for “bringing the mean on target” and “minimize the variation” are
employed to design the product platform (see Section 6.3).
Note that metamodels are not employed in this first example because mean and standard
deviation of the responses can be estimated directly from the relevant analysis equations (see
Section 6.3).
– horizontal scaling of a product platform (see Section 7.1.3),
– metamodels for mean and standard deviation of the GAA family to facilitate
implementation of robust design and development of the aircraft platform (see
Section 7.3), and
– design capability indices to assess quickly the capability of the family of aircraft to
satisfy the range of requirements (see Section 7.4).
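The precise definition of a design capability index appears in Section 7.4, which is not reproduced here. The sketch below therefore assumes a Cpk-style index computed against a lower and an upper requirement limit; this is an analogy to process capability indices, not necessarily the dissertation's exact formulation:

```python
def design_capability_index(mean, std, lrl, url):
    """Cdk-style index (assumed analogous to Cpk): the smaller normalized
    distance from the response mean to the lower/upper requirement limits
    (lrl, url). Values >= 1 suggest the design satisfies the range of
    requirements."""
    if std <= 0.0:
        raise ValueError("std must be positive")
    return min((url - mean) / (3.0 * std), (mean - lrl) / (3.0 * std))
```

Such an index allows the capability of a family of designs to be assessed quickly from the mean and standard deviation of a response rather than from every individual variant.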
The first example parallels Black & Decker’s vertical scaling strategy for its universal motors
(Lehnerd, 1987) discussed in Section 1.1.1 and is used to provide “proof of concept” that the
PPCEM works. The second example is based on a previous application of the RCEM to
develop a “common and good” set of toplevel design specifications for a family of General
Aviation aircraft (see, e.g., Simpson, 1995; Simpson, et al., 1996). The General Aviation
aircraft problem is employed in this work to demonstrate further the effectiveness of the PPCEM.
In each example, the product platform obtained using the PPCEM is compared to (a)
the initial baseline design to show improvement over the starting design, and (b) individually
designed, benchmark products which are aggregated into a product family to provide a
reference to compare against the PPCEM product family (i.e., design the family of products
with the PPCEM and without the PPCEM and discuss the differences in product performance,
computational expense, and usefulness). Product variety tradeoff studies are also performed for
the family of General Aviation aircraft, examining the tradeoff between commonality of the
aircraft and their corresponding performance for the PPCEM family and the individually designed family of aircraft.
Testing Sub-Hypothesis 1.1 – The procedure for using the market segmentation grid to
identify scale factors for a product platform is shown in Figure 3.4 and described in
Section 3.1.2. Further verification of this sub-hypothesis requires demonstrating that
this procedure can be used to identify scale factors for a product platform. In the
universal motor example in Chapter 6, the market segmentation grid is used to identify a
vertical leveraging strategy and parametric scaling factor (stack length); in the General
Aviation aircraft example, a horizontal leveraging strategy and configurational scale
factor (number of passengers) are used.
Testing Sub-Hypothesis 1.2 – If appropriate scale factors can be identified for a product
platform (i.e., if Sub-Hypothesis 1.1 is true), then the principles of robust design can be
employed to develop a product platform which has minimum sensitivity to variations in
the scale factor and is thus robust for the product family. Verification of this sub-hypothesis requires implementation of the approach, and the two examples provide such
a demonstration.
Testing Sub-Hypothesis 1.3 – The procedure for aggregating the individual targets of the
product variants is outlined in Section 3.1.4. As with Sub-Hypothesis 1.1, further
verification of this sub-hypothesis requires demonstrating that this procedure can be
used to model and design a family of products; the approaches outlined in Section 3.1.4
are used in the two examples to illustrate both methods for aggregating product family
specifications.
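The aggregation mechanic itself reduces to a few lines: the individual variant targets are collapsed into a mean, which feeds the “bringing the mean on target” goal, and a standard deviation, which feeds the “minimize the variation” goal. A minimal illustration (the function name is hypothetical):

```python
import statistics

def aggregate_targets(targets):
    """Collapse individual product-variant targets into a mean (for the
    'bring the mean on target' goal) and a population standard deviation
    (for the 'minimize the variation' goal)."""
    return statistics.mean(targets), statistics.pstdev(targets)
```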
3.3.2 Testing Hypotheses 2 and 3
An initial feasibility study of the utility of kriging is presented in Chapter 4 to familiarize the reader
with kriging; a multidisciplinary test problem—the design of an
aerospike nozzle—is used to compare the predictive capability of a kriging model against that of
second-order response surfaces. The specific aspect of Hypothesis 2 being tested in Section
4.2 is whether or not kriging, using an underlying constant global model in combination with a
Gaussian correlation function (one of the five correlation functions being investigated in this dissertation), can approximate a response as accurately as a second-order response surface.
Chapter 5 continues from where Chapter 4 leaves off. To test the utility of kriging and
space filling designs (and thus Hypotheses 2 and 3), a testbed of six engineering test problems is
created to:
• test the effect of different correlation functions on the accuracy of the kriging model for a
wide variety of engineering analysis equations (linear, quadratic, cubic, reciprocal,
exponential, etc.);
• correlate the types of functions (analysis equations) which kriging models can and
cannot approximate accurately; and
• test the effect of eleven different experimental designs on the accuracy of the resulting
kriging model.
Of the eleven experimental designs mentioned in the last bullet, two are classical designs—
central composite and Box-Behnken—and the remaining nine are space filling (see Section
5.1.3 and Appendices B and C for a description of each). In this manner, Hypothesis 3 is
explicitly tested by comparing the accuracy of the kriging model built from a space filling
experimental design against that of a classical experimental design. The first two bullets relate to
testing Hypothesis 2, and the particulars of that portion of the study are described in Chapter 5.
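Of the nine space filling designs, the Latin hypercube is representative of the genre; a minimal sketch of how one is generated (illustrative only, not the generator used in this work):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample in [0, 1]^n_dims: each dimension is divided
    into n_samples equal bins and each bin is sampled exactly once, which
    spreads the points over the whole design space."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        bins = list(range(n_samples))
        rng.shuffle(bins)  # random pairing of bins across dimensions
        cols.append([(b + rng.random()) / n_samples for b in bins])
    # transpose the per-dimension columns into sample points
    return [tuple(col[i] for col in cols) for i in range(n_samples)]
```

Unlike a central composite or Box-Behnken design, the points fill the interior of the space rather than concentrating on its boundary, which is the property Hypothesis 3 asserts is better suited to deterministic computer analyses.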
The elements of the previous chapters are synthesized in this chapter to meet the
principal objective in this dissertation, namely, to develop the Product Platform Concept
Exploration Method (PPCEM) for designing common scalable product platforms for a product
family, see Figure 3.13. There are five steps to the PPCEM which prescribe how to formulate
the problem and describe how to solve it. As such, the PPCEM provides a Method which
facilitates the synthesis and Exploration of a common Product Platform Concept which can be scaled into a family of products.
Testing and verification of the PPCEM is outlined in the previous section and takes place in the chapters which follow. Testing of
Hypothesis 2 (which has implications for Step 3 of the PPCEM) commences in the next chapter
wherein an initial feasibility study of the utility of kriging is given. At the end of Chapter 4,
several questions are posed which preface the kriging/DOE study in Chapter 5. The
implications of the results of the study on metamodeling within the PPCEM (Step 3) are discussed in Chapter 5.
Figure 3.13 Overview of the Product Platform Concept Exploration Method and Associated Chapters
CHAPTER 4
Nozzle Design
In this chapter, the process of testing Hypothesis 2 and establishing kriging as a viable
alternative for building metamodels begins through two examples. The first is a simple one-dimensional example in Section 4.1
which is used to familiarize the reader with (a) the process of creating a kriging model and (b)
some of the differences between a kriging model and a second-order response surface. The
second is the aerospike nozzle problem in Section 4.2, in which a comparison of
second-order response surface models and kriging models is conducted by means of error
analysis (Section 4.2.2), visualization (Section 4.2.3), and optimization (Section 4.2.4). These
examples establish that a simple kriging model can compete with a second-order response
surface, thereby setting the stage for an extensive investigation into the utility of kriging and space filling experimental designs in Chapter 5.
4.1 OVERVIEW OF KRIGING MODELING AND A 1D EXAMPLE
Having presented the mathematics behind kriging in Section 2.4.2, a simple one-variable
example best illustrates the difference between the approximation capabilities of a second-order
response surface model and a kriging model. This example comes from Su and Renaud (1996)
who fabricated this example to demonstrate some of the limitations of using second-order
response surface models, see Figure 4.1. The function is an eighth-order polynomial given by
Equation 4.1.
f(x) = Σ_{i=1}^{9} a_i (x − 900)^(i−1)    [4.1]
a1 = 659.23
a2 = 190.22
a3 = 17.802
a4 = 0.82691
a5 = 0.021885
a6 = 0.0003463
a7 = 3.2446 × 10^-6
a8 = 1.6606 × 10^-8
a9 = 3.5757 × 10^-11
A second-order response surface model is fit to five sample points within the region of
the optimum (x = 932) using least squares regression. The five sample points are given in Table
4.1. The original function, the location of the five sample points, and the resulting second-order response surface model are shown in Figure 4.1.
Table 4.1 Sample Points for 1D Example
No. x x (scaled) y
1 922 0.00 43.976
2 927 0.25 20.143
3 932 0.50 13.963
4 937 0.75 17.330
5 942 1.00 22.698
Figure 4.1 Su and Renaud (1996) Function, Sample Points, and Second-Order Response Surface Model
A kriging model using a constant for the global model and the Gaussian correlation
function of Equation 2.17 is fit to the same five points in order to compare a kriging model
against a second-order response surface model. The process of fitting a kriging model is
illustrated step by step in the remainder of this section using this one-variable model.
In order to fit a kriging model to the five sample points, the x values are scaled to [0,1]
as shown in Table 4.1, and the response values are written as a column vector, yT = {43.976,
20.143, 13.963, 17.330, 22.698}. Because a constant underlying global model is selected for
the kriging model, f is simply a column vector of ones: fT = {1, 1, 1, 1, 1}. Using a Gaussian
correlation function for the localized portion of the model, Equation 2.21 is particularized for this example as:

R(x_i, x_j) = exp(−θ |x_i − x_j|²),  i ≠ j,  i, j = 1, 2, 3, 4, 5
R(x_i, x_j) = 1,                     i = j                          [4.2]
The correlation function for each sample point is then computed as follows:
i = 1, j = 1: R(x_1, x_1) = 1
⋮
i = 5, j = 5: R(x_5, x_5) = 1
R =
⎡ 1    e^(−0.0625θ)   e^(−0.25θ)     e^(−0.5625θ)   e^(−θ)        ⎤
⎢      1              e^(−0.0625θ)   e^(−0.25θ)     e^(−0.5625θ)  ⎥
⎢      sym            1              e^(−0.0625θ)   e^(−0.25θ)    ⎥
⎢                                    1              e^(−0.0625θ)  ⎥
⎣                                                   1             ⎦
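The correlation matrix above can be computed mechanically for any θ; a short sketch:

```python
import math

def gaussian_corr_matrix(xs, theta):
    """Equation 4.2: R[i][j] = exp(-theta * |x_i - x_j|^2); the diagonal
    is automatically 1 since exp(0) = 1."""
    return [[math.exp(-theta * (xi - xj) ** 2) for xj in xs] for xi in xs]
```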
where θ is the unknown parameter which is used to fit the kriging model to the data.
The constant portion of the global model is now estimated using Equation 2.27, which for a constant model is the generalized least squares estimate β̂ = (f^T R^(−1) f)^(−1) f^T R^(−1) y.
In order to find the maximum likelihood estimate for θ, the variance of sample data from
the underlying constant global model must be estimated from Equation 2.28 which is repeated here:

σ̂² = [(y − f β̂)^T R^(−1) (y − f β̂)] / n_s    [4.4]
where n_s = 5. The MLE for θ is then found by maximizing Equation 4.5:

ℓ(θ) = −[n_s ln(σ̂²) + ln|R|] / 2    [4.5]
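Equations 4.4 and 4.5 can be evaluated directly. The sketch below assumes the standard generalized least squares estimate for the constant term β̂ (which is what Equation 2.27 reduces to for a constant model) and uses plain Gaussian elimination, which is adequate for five sample points:

```python
import math

def solve(a_mat, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a_mat)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c:
                fac = m[r][c] / m[c][c]
                m[r] = [vr - fac * vc for vr, vc in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def determinant(a_mat):
    """Determinant via triangularization with partial pivoting."""
    n = len(a_mat)
    m = [row[:] for row in a_mat]
    det = 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        if p != c:
            m[c], m[p] = m[p], m[c]
            det = -det
        det *= m[c][c]
        for r in range(c + 1, n):
            fac = m[r][c] / m[c][c]
            m[r] = [vr - fac * vc for vr, vc in zip(m[r], m[c])]
    return det

def log_likelihood(theta, xs, ys):
    """Concentrated log-likelihood of Equation 4.5 for a constant global
    model with the Gaussian correlation function of Equation 4.2."""
    n = len(xs)
    big_r = [[math.exp(-theta * (xs[i] - xs[j]) ** 2) for j in range(n)]
             for i in range(n)]
    # beta_hat = (1^T R^-1 y) / (1^T R^-1 1), the GLS mean
    beta = sum(solve(big_r, ys)) / sum(solve(big_r, [1.0] * n))
    resid = [y - beta for y in ys]
    # sigma_hat^2 = resid^T R^-1 resid / n_s (Equation 4.4)
    sigma2 = sum(r * s for r, s in zip(resid, solve(big_r, resid))) / n
    return -(n * math.log(sigma2) + math.log(determinant(big_r))) / 2.0
```

Sweeping θ over a grid and keeping the maximizer of `log_likelihood` reproduces the search summarized in the next figure.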
A plot of ℓ(θ) is given in Figure 4.2. The MLE, or “best” guess, for θ is the point which maximizes this function.
Figure 4.2 Plot of ℓ(θ) for the One-Variable Example (maximum at θ* = 6.924)
In this example, the MLE for θ is 6.924; hence, the “best” kriging model to fit these five
sample points when using a constant underlying global model and the Gaussian correlation
function is when θ = 6.924. Substituting this value into Equation 4.2, the resulting correlation
matrix is thus:
R =
⎡ 1    0.649   0.177   0.020   0.001 ⎤
⎢      1       0.649   0.177   0.020 ⎥
⎢      sym     1       0.649   0.177 ⎥
⎢                      1       0.649 ⎥
⎣                              1     ⎦
Now, new points are predicted using the scalar form of Equation 2.25:

ŷ(x) = β̂ + r^T(x) R^(−1) (y − f β̂)

where r^T(x) is the correlation vector of length 5 between an untried value of x and the sampled
data points {0.00, 0.25, 0.50, 0.75, 1.00}; its general form is r^T(x) = [R(x, x_1), ..., R(x, x_5)],
where R is the Gaussian correlation function. Notice that the x values for which a new y is to be
predicted are scaled to [0,1]; however, the predicted values of y are the actual values.
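The predictor above can be verified numerically: because r(x_i) is the i-th column of R, the model reproduces each sampled response exactly, which is the interpolation property discussed below. A self-contained sketch (the helper names are illustrative):

```python
import math

def invert(a_mat):
    """Invert a small matrix by Gauss-Jordan elimination with pivoting."""
    n = len(a_mat)
    m = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a_mat)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        piv = m[c][c]
        m[c] = [v / piv for v in m[c]]
        for r in range(n):
            if r != c:
                fac = m[r][c]
                m[r] = [vr - fac * vc for vr, vc in zip(m[r], m[c])]
    return [row[n:] for row in m]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def kriging_predict(x, xs, ys, theta):
    """Scalar form of Equation 2.25 for an ordinary (constant-mean)
    kriging model: y_hat(x) = beta + r(x)^T R^-1 (y - beta * 1)."""
    n = len(xs)
    big_r = [[math.exp(-theta * (xs[i] - xs[j]) ** 2) for j in range(n)]
             for i in range(n)]
    r_inv = invert(big_r)
    beta = sum(matvec(r_inv, ys)) / sum(matvec(r_inv, [1.0] * n))
    w = matvec(r_inv, [y - beta for y in ys])  # R^-1 (y - beta*1)
    r_vec = [math.exp(-theta * (x - xi) ** 2) for xi in xs]
    return beta + sum(a * b for a, b in zip(r_vec, w))
```

Far from the data, r(x) decays to zero and the prediction collapses to β̂, the behavior noted later for regions outside the sampled range.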
The resulting kriging model—using θ = 6.924, the Gaussian correlation function, and an
underlying constant global model—is shown in Figure 4.3 along with the original function, the
second-order response surface, and the five sample points. Immediately evident from the figure
is the fact that the kriging model interpolates the data points, approximating the original function
better than the second-order response surface model which represents a least squares fit. In
this example, the interpolating capability of the kriging model allows it to predict an optimum
much nearer that of the original function.
Figure 4.3 One Variable Example of Response Surface and Kriging Models
It is also important to notice that outside of the design space defined by the sample
points (920 ≤ x ≤ 945), neither model predicts as well as expected. The kriging model returns
to the underlying global model which is a constant in this example. This is typical behavior for a
kriging model; far from the design points, the kriging model returns to the underlying global
model because the influence of the sample points has “exponentially decayed away” outside of
Sixteen evenly spaced points (not including the sample points) are taken from within the
sample range (920 ≤ x ≤ 945) to assess the accuracy of the two approximations. The
maximum absolute error, the average absolute error, and the root mean square error (root MSE),
Equations 2.30-2.32, for the 16 validation points are listed in Table 4.2. Both raw values and
percentages of actual values are listed in the table for ease of comparison.
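The three error measures reduce to a few lines; a minimal sketch in the spirit of Equations 2.30-2.32:

```python
import math

def error_metrics(actual, predicted):
    """Maximum absolute error, average absolute error, and root MSE over
    a set of validation points."""
    errs = [a - p for a, p in zip(actual, predicted)]
    max_abs = max(abs(e) for e in errs)
    avg_abs = sum(abs(e) for e in errs) / len(errs)
    root_mse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return max_abs, avg_abs, root_mse
```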
Based on this error analysis, the kriging model approximates the original function better
because it has a lower root MSE, average absolute error, and maximum absolute error. A
more involved example to compare further the predictive capability of second-order response surface models and kriging models is presented in the next section.
The design of an aerospike nozzle has been selected as the preliminary test problem for
comparing the predictive capability of response surface and kriging models. The linear
aerospike rocket engine is the propulsion system proposed for the VentureStar reusable launch
vehicle (RLV) which is illustrated in Figure 4.4. The VentureStar RLV is one of the concepts proposed for NASA’s Reusable Launch Vehicle program.
Figure 4.4 VentureStar RLV with Aerospike Nozzle (Korte, et al., 1997)
The aerospike rocket engine consists of a rocket thruster, cowl, aerospike nozzle, and
plug base regions as shown in Figure 4.5. The aerospike nozzle is a truncated spike or plug
nozzle that adjusts to the ambient pressure and integrates well with launch vehicles (Rao, 1961).
The flow field structure changes dramatically from low altitude to high altitude on the spike
surface and in the base region (Hagemann, et al., 1996; Mueller and Sule, 1972; Rommel, et
al., 1995). Additional flow is injected in the base region to create an aerodynamic spike
(Iacobellis, et al., 1967) which gives the aerospike nozzle its name and increases the base pressure.
Figure 4.5 Aerospike Components and Flow Field Characteristics
(Korte, et al., 1997)
The analysis of the nozzle involves two disciplines: aerodynamics and structures; there is
an interaction between the structural displacements of the nozzle surface and the pressures
caused by the varying aerodynamic effects. Thrust and nozzle wall pressure calculations are
made using computational fluid dynamics (CFD) analysis and are linked to a structural finite
element analysis model for determining nozzle weight and structural integrity. A mission average
engine specific impulse and engine thrust/weight ratio are calculated and used to estimate vehicle
gross lift-off weight (GLOW), as illustrated in Figure 4.6. Korte, et al. (1997) provide additional details on the aerodynamic and structural analyses.
Figure 4.6 Multidisciplinary Domain Decomposition for Aerospike Nozzle (Korte, et
al., 1997)
For this study, three design variables are considered: starting (thruster) angle, exit (base)
height, and (base) length as shown in Figure 4.7. The thruster angle (a) is the entrance angle of
the gas from the combustion chamber onto the nozzle surface; the base height (h) and length (l)
refer to the solid portion of the nozzle itself. A quadratic curve defines the aerospike nozzle
surface profile based on the values of thruster angle, height, and length.
Figure 4.7 Nozzle Geometry Design Variables (Korte, et al., 1997)
Bounds for the design variables are set to produce viable nozzle profiles from the
quadratic model based on all combinations of thruster angle, height, and length within the design
space. Secondorder response surface models and kriging models are developed and validated
for each response (thrust, weight, and GLOW) in the next section; optimization of the aerospike
nozzle using the response surface and kriging models for different objective functions is presented in Section 4.2.4.
4.2.1 Metamodeling of the Aerospike Nozzle Problem
The data used to fit the response surface and kriging models is obtained from a 25-point
random orthogonal array (Owen, 1992). The use of these orthogonal arrays in this preliminary
example is based, in part, on the success of the work by Booker, et al. (1995) and the
recommendations of Barton (1994). The actual sample points are illustrated in Figure 4.8 and
are scaled to fit the three dimensional design space defined by the bounds on the thruster angle, height, and length.
The response surface models for weight, thrust, and GLOW are fit to the 25 sample points
using ordinary least squares regression techniques and the software package JMP® (SAS,
1995). The resulting second-order response surface models are given in Equations 4.7-4.9.
The equations are scaled against the baseline design to protect the proprietary nature of some of
the data.
The R², R²adj, and root MSE values for each of these second-order response surface
models are summarized in Table 4.3. As evidenced by the high R² and R²adj values and low
root MSE values, the second-order polynomial models appear to capture a large portion of the
observed variance.
Table 4.3 R², R²adj, and Root MSE Values for the Second-Order Response Surface Models

Measure     Weight   Thrust   GLOW
R²          0.986    0.998    0.971
R²adj       0.977    0.996    0.953
root MSE    1.12%    0.01%    0.25%
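For reference, R² and R²adj follow from the usual sums of squares; a minimal sketch (here `n_params` counts all model terms, including the intercept):

```python
def r_squared(actual, predicted, n_params):
    """R^2 and adjusted R^2 for a fitted model: R^2 = 1 - SSE/SST, with
    the adjustment penalizing the number of model terms."""
    n = len(actual)
    mean_y = sum(actual) / n
    sse = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    sst = sum((a - mean_y) ** 2 for a in actual)
    r2 = 1.0 - sse / sst
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - n_params)
    return r2, r2_adj
```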
Kriging Models for the Aerospike Nozzle Problem
The kriging models are built from the same 25 sample points used to fit the response surface
models. In this preliminary example, a constant term for the global model and a Gaussian
correlation function, Equation 2.21, for the local departures are chosen.
Initial investigations revealed that a single θ parameter was insufficient to model the data
accurately due to scaling of the design variables (a similar problem is encountered in (Giunta, et
al., 1998)). Therefore, a simple 3-D exhaustive grid search with a refinable step size is used to
find the maximum likelihood estimates for the three θ parameters needed to obtain the “best”
kriging model. The resulting maximum likelihood estimates for the three θ parameters for the
weight, thrust, and GLOW models are summarized in Table 4.4; note that these values are for scaled values of the design variables.
Table 4.4 Maximum Likelihood Estimates of θ for the Kriging Models

MLE Values   Weight   Thrust   GLOW
θ_angle      0.548    0.30     3.362
θ_height     1.323    0.50     2.437
θ_length     2.718    0.65     0.537
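The “3-D exhaustive grid search with a refinable step size” can be sketched generically as a coarse-to-fine search that maximizes an objective such as the concentrated likelihood. The refinement rule below, which shrinks the bounds to one grid cell around the current best point, is an assumption about the implementation rather than a description of it:

```python
import itertools

def grid_search_mle(objective, bounds, steps=11, refinements=3):
    """Exhaustive grid search with a refinable step size: evaluate
    `objective` on a full grid over `bounds`, then shrink each bound to
    one grid cell around the best point and repeat. Maximizes."""
    best_x, best_v = None, float("-inf")
    for _ in range(refinements):
        axes = [[lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
                for lo, hi in bounds]
        for point in itertools.product(*axes):
            v = objective(point)
            if v > best_v:
                best_x, best_v = point, v
        step = [(hi - lo) / (steps - 1) for lo, hi in bounds]
        bounds = [(max(lo, x - s), min(hi, x + s))
                  for (lo, hi), s, x in zip(bounds, step, best_x)]
    return best_x, best_v
```

With three refinements of an 11-point grid, the effective resolution in each dimension improves by roughly a factor of five per pass at a small fraction of the cost of a single fine grid.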
With these parameters for the Gaussian correlation function, the kriging models now are
specified fully. A new point is predicted using these θ values and the 25 sample points
in combination with Equations 2.25-2.27. The accuracy of the response surface and kriging models is verified in the next section.
An additional 25 randomly selected validation points are used to verify the accuracy of
the response surface and kriging models. Error is defined as the difference between the actual
response from the computer analysis, y(x), and the predicted value, ŷ(x), from the response
surface or kriging model. The maximum absolute error, the average absolute error, and the root
MSE, see Equations 2.30-2.32, for the 25 randomly selected validation points are summarized
in Table 4.5.
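These three error measures are straightforward to compute once the validation predictions are in hand; a sketch with a hypothetical helper name:

```python
import numpy as np

def validation_errors(y_true, y_pred):
    """Max absolute error, average absolute error, and root MSE over validation points."""
    e = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    return {
        "max_abs": e.max(),
        "avg_abs": e.mean(),
        "rmse": np.sqrt(np.mean(e ** 2)),
    }
```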
For the weight and GLOW responses, the kriging models have lower maximum
absolute errors and lower root MSEs than the response surface models; however, the average
absolute error is slightly larger for the kriging models. For thrust, the response surface models
are slightly better than the kriging models according to the values in the table; the maximum
absolute error and root MSE are slightly less while the average absolute errors are essentially
the same. It is not surprising that the response surface models predict thrust better; it has a very
high R² value, 0.998, and low root MSE, 0.01%. It is reassuring to note, however, that the
kriging model, despite using only a constant term for the underlying global model, is only slightly
less accurate than the corresponding response surface model. In summary, it appears that both
models predict each response reasonably well, with the kriging models having a slight advantage
in overall accuracy because of the lower root MSE values. A graphical comparison is
presented in the next section to examine the accuracy of the response surface and kriging
models further.
A graphical comparison of the response surface and kriging models is performed to visualize differences in
the two approximation models. In Figures 4.9 and 4.10, three-dimensional contour plots of thrust,
weight, and GLOW as a function of thruster angle, length, and height are given. In each figure,
the same contour levels are used for the response surface and kriging models so that the shapes
of the contours can be compared directly.
Figure 4.9 Response Surface and Kriging Models for (a) Thrust and (b) Weight
In Figure 4.9a, the contours of thrust for the response surface and kriging models are
very similar. As evidenced by the high R² and low root MSE values, the response surface
models should fit the data quite well, and it is reassuring to note that the kriging models resemble
the response surface models even though the underlying global model for the kriging models is
just a constant term. This demonstrates the power and flexibility of the “local” deviations of the
kriging model.
The contours of the response surface and kriging models in Figure 4.9b are also very
similar, but the influence of the localized perturbations caused by the Gaussian correlation
function can be seen in the kriging model for weight. The error analysis from the previous
section indicated that the kriging model for weight is slightly more accurate than the second-order
response surface model, which may result from the small nonlinear localized variations in
the data.
The general shape of the GLOW contours is the same in Figure 4.10; however, the size
and shape of the different contours, particularly along the length axis, are quite different. The
end view along the length axis in Figure 4.10b further highlights the differences between the two
models. Notice also in Figure 4.10b that the kriging model predicts a minimum GLOW located
within the design space centered around Height = 0.8, Angle = 0, along the axis defined by 0.2
≤ Length ≤ 0.8; this minimum was verified through additional experiments and is assumed to be
the actual minimum.
From the graphical and error analyses of the response surface and kriging models, it
appears that both models fit the data quite well. In the next section the accuracy of both
metamodels is put to the test. Four optimization problems are formulated and solved using each
of the metamodels and the efficiency and accuracy of the results are compared as a final test of
model adequacy.
The true test of the accuracy of the response surface and kriging models comes when
the approximations are used during optimization. It is paramount that any approximations used
in optimization prove reasonably accurate, lest they lead the optimization algorithm into regions
of bad designs. Trust Region approaches (see e.g., Lewis, 1996; Rodriguez, et al., 1997) and
the Model Management framework (see e.g., Alexandrov, et al., 1997; Booker, et al., 1995)
have been developed to ensure that optimization algorithms are not led astray by inaccurate
approximations. In this work, however, the focus has been on developing the approximation
models, particularly the kriging models, and not on the optimization itself.
Four different optimization problems are formulated and solved to compare the
accuracy of the response surface and kriging models, see Table 4.6: (1) maximize thrust, (2)
minimize weight, (3) minimize GLOW, and (4) maximize thrust/weight ratio. The first two
objective functions in Table 4.6 represent traditional single objective, single discipline
optimization problems. The second two objective functions are more characteristic of
tradeoffs between the aerodynamics and structures disciplines. As seen in the table, for each
objective function, constraint limits are placed on the remaining responses; for instance,
constraints are placed on the maximum allowable weight and GLOW and the minimum
allowable thrust/weight ratio when maximizing thrust. However, none of the constraints are
active at the resulting optimum designs.
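The structure of these problems, optimizing one approximated response subject to limits on the others from several starting points, can be sketched generically. The GRG algorithm in OptdesX is not reproduced here; SciPy's SLSQP is used as a stand-in, and the quadratic "surrogates", limits, and bounds below are placeholders rather than the fitted aerospike models.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_on_surrogates(objective, constraints, bounds, starts):
    """Solve the surrogate-based problem from several starting points and
    keep the best successful result (mimicking the multi-start procedure)."""
    best = None
    for x0 in starts:
        res = minimize(objective, x0, method="SLSQP",
                       bounds=bounds, constraints=constraints)
        if res.success and (best is None or res.fun < best.fun):
            best = res
    return best

# Placeholder quadratic "surrogates" standing in for the fitted metamodels.
thrust = lambda x: -np.sum((x - 0.2) ** 2)  # response to be maximized
weight = lambda x: np.sum(x ** 2)           # response limited to weight <= 1.0

result = optimize_on_surrogates(
    objective=lambda x: -thrust(x),         # maximize thrust via minimization
    constraints=[{"type": "ineq", "fun": lambda x: 1.0 - weight(x)}],
    bounds=[(-1.0, 1.0)] * 3,
    starts=[np.full(3, -1.0), np.zeros(3), np.ones(3)],  # lower/middle/upper
)
```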
Each optimization problem is solved using: (a) the second-order response surface
models and (b) the kriging model approximations for thrust, weight, and GLOW. The
optimization is performed using the Generalized Reduced Gradient (GRG) algorithm in OptdesX
(Parkinson, et al., 1998). Three different starting points are used for each objective function
(the lower, middle, and upper bounds of the design variables) to assess the average number of
analysis and gradient calls necessary to obtain the optimum design within the given design space.
The same parameters (i.e., step size, tolerance, constraint violation, etc.) are used within the
GRG algorithm for each optimization. The optimization results are summarized in Table 4.7.
Design variable and response values have been scaled as a percentage of the baseline design
values.
Table 4.7 Aerospike Nozzle Optimization Results Using Metamodels
The following observations are made based on the data in Table 4.7.
• Average number of analysis and gradient calls: In general, the response surface
models require fewer analysis and gradient calls to achieve the optimum than the kriging
models do. This can be attributed, in part, to the fact that the response surface models
are simple second-order polynomials; the kriging models are more complex, nonlinear
functions as evidenced in Figure 4.9 and Figure 4.10.
• Convergence rates: Although not shown in the table, optimization using the response
surface models tends to converge more quickly than when using kriging models. This
can be inferred from the number of gradient calls, which is one to three calls fewer for
the response surface models than the kriging models.
• Optimum designs: The optimum designs obtained from the response surface and
kriging models are essentially the same for each objective function, indicating that both
approximations send the optimization algorithm in the same general direction. The
largest discrepancy is the length for the minimize GLOW optimization; response surface
models predict the optimum GLOW occurs at the upper bound on length (+1) while the
kriging models yield 0.676. This difference is evident from Figure 4.10. Furthermore, it
has been verified through additional experiments that the GLOW value obtained using
the kriging models is the actual minimum.
• Predicted optima and prediction errors: To check the accuracy of the predicted
optima, the optimum design values for angle, height, and length are used as inputs into
the original analysis codes and the percentage difference between the actual and
predicted values is computed. The prediction error is less than 5% for all cases and is
0.5% or less in three quarters of the results, indicating close agreement between the
metamodels and the actual analyses.
In summary, the response surface and kriging approximations yield comparable results
with minimal difference in predictive capability. It is worth noting that the kriging models
perform as well as the second-order response surface models even though the global
portion of the kriging model is only a constant. This helps to verify Hypothesis 2 which
states that kriging models are a viable metamodeling technique for building
metamodels of deterministic computer analyses. However, several questions remain
unanswered.
• Correlation function: A Gaussian correlation function is utilized in this example to fit
the data, but is this the best correlation function of the five being considered in this
dissertation?
• Model validation: Because kriging models interpolate the data, R² values and residual
plots cannot be used to assess model accuracy. In this example an additional 25
validation points are employed to assess accuracy; however, other validation
approaches exist. One such approach which does not require additional validation
points is leave-one-out cross validation (Mitchell and Morris, 1992) mentioned in
Section 2.4.2. Does cross validation provide a sufficient assessment of model
accuracy?
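Leave-one-out cross validation itself is simple to sketch: refit the model n times, each time withholding one sample point and predicting it. The routines below are generic placeholders; the exact form of Equation 2.33, and whether the correlation parameters are re-estimated for each fold, may differ.

```python
import numpy as np

def cv_rmse(X, y, fit, predict):
    """Leave-one-out cross validation RMSE: refit without point i, predict y_i."""
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i       # withhold the i-th sample point
        model = fit(X[keep], y[keep])
        errors[i] = y[i] - predict(model, X[i])
    return np.sqrt(np.mean(errors ** 2))
```

Any fit/predict pair can be plugged in, e.g. a kriging fitter or, for a sanity check, a constant mean model.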
A study of six engineering test problems is set up and performed in the next chapter to answer
these questions. In closing this chapter, a brief look ahead to that study is offered in the next
section.
In an attempt to determine the types of applications for which kriging is useful, several
engineering examples are introduced in the next chapter to serve as test problems to establish
the utility of kriging and verify Hypothesis 2. In addition to testing Hypothesis 2, several
classical and space filling experimental designs are compared and contrasted in an effort to test
Hypothesis 3 to determine if space filling experimental designs are better suited for building
kriging metamodels.
CHAPTER 5
Kriging/DOE Testbed
In this chapter, Hypotheses 2 and 3 are tested explicitly, verifying the utility of kriging
and space filling experimental designs for building metamodels of deterministic computer
analyses. A pictorial overview and specific details of the study are given in Section 5.1. Six
engineering test problems are introduced in Section 5.1.1 to benchmark kriging and space
filling designs and verify Hypotheses 2 and 3. In Sections
5.1.2, 5.1.3, and 5.1.4, the factors, experimental designs, and responses in the study are
explained. Analysis of variance of the data and response correlation are presented in the
precursory data analysis in Section 5.2. Section 5.3 contains the results of testing Hypothesis 2
and a discussion of the ramifications of the results; the results and discussion regarding
Hypothesis 3 follow in Section 5.4. A summary of the study and its relevance to the
remainder of the dissertation closes the chapter.
5.1 OVERVIEW OF KRIGING/DOE STUDY AND PROBLEM TESTBED
Suppose that you have a computer analysis which is
expensive to run and you desire to replace it with a metamodel, a kriging one in particular.
Assume that there are k design variables which you wish to include in the metamodel. What is
the best type of experimental design you should use to query the simulation to generate data to build
an accurate kriging metamodel? How many sample points should you use? What type of
correlation function should you use to obtain the best predictor? Lastly, how can you best
validate the resulting model?
The objective in this study is to answer precisely these questions. Given a series of test
problems (i.e., analyses), determine the best experimental design, sample size, and correlation
function to generate the most accurate model and determine how best to validate it. Toward
this end, a testbed of six engineering examples—the design of a three-bar truss, a two-bar truss,
a spring, a two-member frame, a welded beam, and a pressure vessel as introduced in Section
5.1.1 and Appendix D—has been created to test the utility of kriging and space filling
experimental designs. A pictorial overview of the kriging/DOE study is given in Figure 5.1; the
specifics of the study are described in the sections that follow.
Contained in these six engineering examples are a total of 26 different types of equations
which are used to test the utility of kriging at metamodeling deterministic computer analyses. If
accurate kriging metamodels can be constructed for these equations,
then Hypothesis 2 is considered to be verified. Moreover, for each example, five correlation
functions are used to construct five different kriging metamodels in an effort to determine which
correlation function yields the most accurate predictor.
Meanwhile, for each example several classical and space filling experimental designs are
used to construct each kriging metamodel. By analyzing the accuracy of the resulting kriging
metamodel, the experimental design which yields the most accurate predictor, on average, can
be determined. In this regard, Hypothesis 3 is tested explicitly to verify that space filling
experimental designs yield more accurate kriging metamodels than do classical experimental
designs. And while Hypotheses 2 and 3 are being tested, the usefulness of cross validation root
mean square error as a measure of model accuracy is also investigated.
[Figure 5.1 Overview of the Kriging/DOE Study and Problem Testbed. The six test problems (§5.1.1) and their equation numbers, EQN 1-26: three-bar truss (EQN 1-4), two-bar truss (EQN 5-7), spring (EQN 8-14), two-member frame (EQN 15-17), welded beam (EQN 18-22), and pressure vessel (EQN 23-26). The factors and levels (§5.1.2) include the sample size NSAMP (§5.1.3): 7, 8, ..., 14 for the two variable problems, 13-25 for the three variable problems, and 20-41 for the four variable problems.]
In total, 7905 kriging models are constructed: one for each correlation function
(CORFCN) for each experimental design (DOE) for each sample size (NSAMP) for each
equation (EQN) in each problem. As an example, the arrows in Figure 5.1 trace Equation 7 in
the twobar truss problem. For EQN 7, there are 15 possible experimental design (DOE)
choices; in this case, the minimax Latin hypercube design (mnmxl) is being considered. For this
design, there are several possible choices for NSAMP, ranging from 714, because this is a two
variable problem. Using 10 sample points as an example, at the next level there are five
correlation functions (CORFCN) which can be used to build a kriging model; the Gaussian
correlation function is highlighted in this example. Finally, three measures of model accuracy are
computed for the kriging model resulting from this particular combination of EQN, DOE,
NSAMP, and CORFCN: max. abs. error, root mean square error (RMSE), and cross
validation root mean square error (CVRMSE), which are “normalized” by the corresponding
sample range.
After a precursory analysis of the data in Section 5.2, these three error measures of
model accuracy are used to test Hypotheses 2 and 3 explicitly. As shown in Figure 5.1,
Hypothesis 2 is tested in Section 5.3 by isolating the effects of correlation function (CORFCN)
and equation (EQN) on the error measures, and Hypothesis 3 is tested in Section 5.4 by
isolating the effects of experimental design (DOE) and sample size (NSAMP) on the error
measures of accuracy of the resulting kriging model.
Six test problems were selected from the literature to provide a testbed for assessing the
utility of kriging and several different space filling experimental designs. These problems are not
meant to be all inclusive; rather, they are taken as representative of typical analyses encountered
in mechanical design. The analysis of these problems is simple enough not to warrant building
kriging models of the responses; however, these problems have been selected because:
a. they have been well studied and the behavior of the system and the underlying analysis
equations are known,
c. they have been used by other researchers to test their own metamodeling strategies and
algorithms.
Furthermore, the optimum solution for each problem is also known; however, a more extensive
error analysis is employed to assess the accuracy of the kriging models (see Section 5.1.4).
In the following sections, each example is described along with its pertinent constraints,
design variable bounds, and the objective function; note that a kriging model is constructed for
each constraint and objective function in each problem. The values of the parameters in the
equations (i.e., all of the letters and symbols which are not explicitly stated as being design
variables) are given in the referenced sections of Appendix D which contain the complete
problem formulations.
Two Variable Problems
The two variable problems investigated are the design of a two-bar truss (Figure 5.2) and of a
symmetric three-bar truss (Figure 5.3). The problem formulations (objective functions,
constraints, and bounds) follow each figure. A complete description of the two-bar and three-bar
truss problems is given in Appendix D, Sections D.1 and D.2, respectively.

[Figure 5.2: two-bar truss (tube diameter D, height H, load 2P); Figure 5.3: symmetric three-bar truss (member areas A1, A2, A3 = A1, height N, loads P1 and P2)]

Two-bar truss (Figure 5.2):
Find:
• Tube diameter, D
• Height of the truss, H
Satisfy:
• Constraints:
g1(x) = π²E(D² + T²)/[8(B² + H²)] − P(B² + H²)^(1/2)/(πTDH) ≥ 0
g2(x) = σy − P(B² + H²)^(1/2)/(πTDH) ≥ 0
• Bounds:
0.5 in. ≤ D ≤ 5.0 in.
5.0 in. ≤ H ≤ 50 in.
Minimize:
Weight, W(x) = 2ρπDT(B² + H²)^(1/2)
For more information:
• see, e.g., (Schmit, 1981)
• see Appendix D, Section D.1

Three-bar truss (Figure 5.3):
Find:
• Cross section area, A1 = A3
• Cross section area, A2
Satisfy:
• Constraints:
g1(x) = 20,000 − 20,000(√2 A1 + A2)/(2A1A2 + √2 A1²) ≥ 0
g2(x) = 20,000 − 20,000 √2 A1/(2A1A2 + √2 A1²) ≥ 0
g3(x) = 15,000 − 20,000 A2/(2A1A2 + √2 A1²) ≥ 0
• Bounds:
0.5 in² ≤ A1 = A3 ≤ 1.2 in²
0.0 in² ≤ A2 ≤ 4.0 in²
Minimize:
Weight, W(x) = ρN(2√2 A1 + A2)
For more information:
• see, e.g., (Schmit, 1981)
• see Appendix D, Section D.2
Three Variable Problems
The three variable problems are the design of a compression spring (Figure 5.4) and a two-member
frame (Figure 5.5). Complete descriptions of these problems are given in Appendix D,
Sections D.3 and D.4, respectively.

[Figure 5.4: compression spring; Figure 5.5: two-member frame (members of length L, load P, cross section of height h and wall thickness t)]

Compression spring (Figure 5.4):
Find:
• Number of active coils, N
• Mean coil diameter, D
• Wire diameter, d
Satisfy:
• Constraints:
g1(x) = S − 8CfFmaxD/(πd³) ≥ 0
g2(x) = lmax − lf ≥ 0
g3(x) = σpm − σ ≥ 0
g4(x) = (Fmax − Fload)/K − δw ≥ 0
g5(x) = Dmax − D − d ≥ 0
g6(x) = C − 3 ≥ 0
• Bounds:
3 ≤ N ≤ 30
1.0 in. ≤ D ≤ 6.0 in.
0.2 in. ≤ d ≤ 0.5 in.
Minimize:
Volume, V(x) = π²Dd²(N + 2)/4
For more information:
• see, e.g., (Siddall, 1982)
• see Appendix D, Section D.3

Two-member frame (Figure 5.5):
Find:
• Frame width, d
• Frame height, h
• Frame wall thickness, t
Satisfy:
• Constraints:
g1(x) = (σ1² + 3τ²)^(1/2) ≤ 40,000
g2(x) = (σ2² + 3τ²)^(1/2) ≤ 40,000
• Bounds:
2.5 in. ≤ d ≤ 10 in.
2.5 in. ≤ h ≤ 10 in.
0.1 in. ≤ t ≤ 1.0 in.
Minimize:
Volume, V(x) = 2L(2dt + 2ht − 4t²)
For more information:
• see (Arora, 1989)
• see Appendix D, Section D.4
Four Variable Problems
The four variable problems being investigated are the design of a welded beam, Figure 5.6, and
the design of a pressure vessel, Figure 5.7. The problem formulations follow each figure.
Complete descriptions are given in Appendix D, Sections D.5 and D.6, respectively.

[Figure 5.6: welded beam (weld height h, weld length l, bar thickness t, bar width b, beam length L, end load F); Figure 5.7: pressure vessel (cylinder radius R, cylinder length L, shell thickness Ts, spherical head thickness Th)]

Welded beam (Figure 5.6):
Find:
• Weld height, h
• Weld length, l
• Bar thickness, t
• Bar width, b
Satisfy:
• Constraints:
g1(x) = [(τ′)² + 2τ′τ″cosθ + (τ″)²]^(1/2) ≤ τd
g2(x) = 6FL/(bt²) ≤ 30,000
g3(x) = [4.013√(EIα)/L²]·[1 − (t/2L)√(EI/α)] ≥ 6000
g4(x) = 4FL³/(Et³b) ≤ 0.25
• Bounds:
0.125 in. ≤ h ≤ 2.0 in.
2.0 in. ≤ l ≤ 10.0 in.
2.0 in. ≤ t ≤ 10.0 in.
0.125 in. ≤ b ≤ 2.0 in.
Minimize:
F(x) = (1 + c3)h²l + c4tb(L + l)
For more information:
• see (Ragsdell and Phillips, 1976)
• see Appendix D, Section D.5

Pressure vessel (Figure 5.7):
Find:
• Cylinder radius, R
• Cylinder length, L
• Shell thickness, Ts
• Spherical head thickness, Th
Satisfy:
• Constraints:
g1(x) = Ts − 0.0193R ≥ 0
g2(x) = Th − 0.00954R ≥ 0
g3(x) = πR²L + (4/3)πR³ − 1.296×10⁶ ≥ 0
• Bounds:
25 in. ≤ R ≤ 150 in.
25 in. ≤ L ≤ 240 in.
1.0 in. ≤ Ts ≤ 1.375 in.
0.625 in. ≤ Th ≤ 1.0 in.
Minimize:
F(x) = 0.6224TsRL + 1.7781ThR² + 3.1661Ts²L + 19.84Ts²R
For more information:
• see, e.g., (Sandgren, 1990)
• see Appendix D, Section D.6
Taken together, these six problems provide a wide variety of functions to approximate
since a kriging model is built for each objective function and constraint for each problem. In
total, there are 26 different equations contained in these six problems, ranging from simple linear
functions to reciprocal square roots; some equations even require the inversion of a finite
element matrix (see Section D.4 for the analysis of the twomember frame). With these six
problems as the testbed for verifying Hypotheses 2 and 3, the factors (and corresponding levels)
considered in the study are described next.
The three basic factors considered in this experiment are listed in Table 5.1: CORFCN
refers to the correlation function used in the kriging model, EQN refers to the equation being
approximated, and DOE refers to the type of experimental design being utilized to sample the
equation to provide data to fit the model. The corresponding levels for each factor also are
listed in Table 5.1:
• CORFCN has 5 levels of interest based on the correlation functions being studied (refer
to Table 2.1, Equations 2.20-2.24); the correlation function associated with each level
is given in the first two columns of Table 5.1.
• EQN has 26 levels based on the total number of equations (i.e., objective functions and
constraints) in the six test problems; when showing the levels for EQN, the objective
function for each problem is singled out from the constraints, see the middle two
columns of Table 5.1.
• DOE has 15 levels based on all of the classical and space filling experimental designs
introduced in Section 2.4.3 for investigation; the acronyms and corresponding name of
each design are listed in the last two columns of Table 5.1.
Every effort is made to ensure that the observations of each factor level in the
experiment are properly balanced; however, some factors (and levels) are beyond control.
Each level of CORFCN given in Table 5.1 occurs an equal number of times in each problem;
hence, it is easy to examine the effect of the different correlation functions on the overall
accuracy of the kriging model (see Section 5.3.1). The factor EQN is used to isolate the
functions being considered and is utilized in Section 5.3.2 when the accuracy of the kriging
model is examined for each pair of problems. As such, both of these factors are relatively well
balanced in the design. The levels of DOE, however, are not well-balanced because the fifteen
levels for DOE do not appear equally in each problem; for example, there is no Box-Behnken
design for the two variable problems.
[Table 5.1, right-hand portion; the CORFCN levels correspond to the correlation functions of Equations 2.20-2.24.]

EQN levels for the three and four variable problems:
• Spring: 8, V(x); 9-14, g1(x)-g6(x)
• Two-Member Frame: 15, V(x); 16-17, g1(x)-g2(x)
• Welded Beam: 18, F(x); 19-22, g1(x)-g4(x)
• Pressure Vessel: 23, F(x); 24-26, g1(x)-g3(x)

DOE levels (acronym, design):
• hamss, Hammersley sequence
• mnmxl, minimax Latin hypercube
• mxmnl, maximin Latin hypercube
• oalhd, orthogonal array-based Latin hypercube
• oarry, orthogonal array
• oplhd, optimal Latin hypercube
• rnlhd, random Latin hypercube
• unifd, uniform design
• yelhd, orthogonal Latin hypercube
To make things even more complicated, the number of sample points within each design
depends on the type of DOE considered and the number of variables in the problem. For
instance, a CCD for the two variable problems has 2² + 2·2 + 1 = 9 points while a random
Latin hypercube can have any number of sample points. Hence, great care must be taken when
analyzing the effects of DOE because of the biasing which occurs due to unbalanced sample
sizes in the experiment. This is discussed in more detail in the next section which contains a
complete listing of which experimental designs (and corresponding sample sizes) are used in
each problem.
5.1.3 Experimental Design Choices for Test Problems
An important consideration when selecting an experimental design is the number of sample
points used. How is the number of points to be determined for a given design? For the
two types of classical designs utilized in this dissertation—CCDs and Box-Behnken designs—
the number of points essentially is fixed once the number of factors is specified. Fractional
factorial designs within a CCD are not considered for these problems because they contain so
few variables. Unlike the CCDs and Box-Behnken designs, for most space filling designs the
number of points is not dictated by the number of factors and can be any number within reason.
Therefore, in order to determine the number of points used in a space filling design, a
CCD with the same number of factors is used to determine the baseline number of points, e.g.,
for three factors, a CCD requires 15 points, and the number of points used in all space filling
designs for three factors would be selected to be as close to 15 as possible. However, because
some space filling designs can have a variable number of sample points, a variety of sample sizes
for each design are considered in order to see if fewer or slightly more points provides an
improved fit. As a guideline, an upper bound on the number of points of about 1.5 times the
number prescribed by the baseline CCD is employed. This factor of 1.5 is primarily based on
the recommendations of Giunta, et al. (1994) who found that for small problems (i.e., fewer
than about five factors) the variance of a secondorder response surface model leveled off when
the number of sample points was about 1.5 times the number of terms in the polynomial model.
This number serves as a guideline in this work despite the fact that kriging models are not
polynomial models with a fixed number of terms.
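The baseline sizing rule described above amounts to a few lines of arithmetic; a sketch with hypothetical helper names:

```python
def ccd_points(k):
    """Full-factorial central composite design: 2^k corner, 2k axial, 1 center point."""
    return 2 ** k + 2 * k + 1

def sample_size_range(k, factor=1.5):
    """Baseline from the CCD; upper bound roughly 1.5x the baseline,
    following the recommendation of Giunta, et al. (1994)."""
    base = ccd_points(k)
    return base, round(factor * base)

# e.g., ccd_points(2) == 9, ccd_points(3) == 15, ccd_points(4) == 25
```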
How important is the number of design points when picking an experimental
design? The answer is very important. In order to compare the utility of different experimental
designs properly, it is important to use the same number of sample points because a design with
more sample points is expected to provide more information, possibly resulting in a more
accurate model. Therefore, when designs do not have the same number of points, it is
impossible to determine if an improvement in model accuracy is from the design itself (i.e.,
spacing of the points in the design space) or from the number of sample points. However, in
some cases it is extremely difficult, if not impossible, to have two different designs which have
the same number of points. For instance, a three factor CCD has 15 points, a three factor Box-Behnken
has 13 (since replicates are not used), and a strength 2 randomized OA has either 9,
16, or 25 points since it is restricted to q² points where q is the number of levels and is
restricted to be a prime power. Despite these difficulties, every effort is made to make the
sample sizes overlap as much as possible from one design to the next. The experimental designs
and corresponding sample sizes for each pair of problems are described in the following
sections.
For the two variable problems (the two-bar and three-bar trusses), nine types of experimental
designs are considered, see Table 5.2. Of these nine types of designs, there are 51 unique
designs because each design which has a different number of points is considered a unique
design. For instance, a seven point Latin hypercube and an eight point Latin hypercube are
unique designs because they have different sample sizes even though they are both Latin
hypercube designs.
Several of these designs are based on random permutations of points, as denoted by the
superscript (†) in the table. To minimize the effects of this randomness, each of these designs is
randomized three times, and the resulting error measures are averaged over all three
randomizations for that specific design to prevent a design from yielding a poor model because
of its randomly chosen levels. As a result, there are a total of 71 designs which are fit for each of
the two variable problems. Finally, notice that neither Box-Behnken designs nor orthogonal
arrays are included in these problems; there is no Box-Behnken design for two factors, and a
nine point orthogonal array for two factors is a 3 x 3 grid, the same as a face-centered central
composite design.
Eleven types of experimental designs for a total of 63 unique designs are considered (as shown
in Table 5.3) for the three variable spring and a twomember frame test problems. In all, there
are 92 total designs constructed for each three variable problem once the three randomizations
of the Latin hypercube, orthogonal Latin hypercube, orthogonal array, and orthogonal array-based
Latin hypercube designs are included.
Notice that a 13 point Box-Behnken design is included in the set of designs for the three
variable problems along with two randomized orthogonal array designs: a 16 point OA and a 25
point OA. One thing to note about these designs (and the orthogonal arraybased Latin
hypercubes as well) is that the number of points in the design is limited to q² sample points,
where q is a power of a prime number. Thus, only q = 4 and q = 5 OAs are considered
for the three variable problems in order to maintain a fairly consistent number of points between
designs.
For the two four variable problems, 66 unique designs from eleven types of experimental
designs are employed (see Table 5.4). Including the repetitions of the designs with random
permutations, a total of 102 designs are examined for each of these problems.
† Each design is instantiated three times because it is based on a random permutation, and the resulting error measures are averaged over all three randomizations for that design.
Notice in Table 5.4 that only four maximin Latin hypercubes are considered: 22, 25, 26,
and 28 point designs. This is because the simulated annealing (Morris and Mitchell, 1995) used
to create these designs is not very robust in generating large four factor designs, and large four
factor designs are not listed in (Morris and Mitchell, 1992). In addition, three orthogonal arrays
are employed: 16, 25, and 32 point designs. The 16 and 25 point designs are strength 2
designs; the 32 point OA design is a strength 3 design with 2q³ points and levels 0, ..., q − 1. So
while there are more points with the 32 point OA design than in the 25 point OA design, the
number of unique factor levels being considered in the 32 point OA design is actually less than in
the 25 point design.
through Table 5.4 are used to generate data to build kriging models for each equation in each
problem. Each kriging model is cross validated, and its accuracy is further assessed
using a set of validation points which is independent of the
design and number of samples. The end result is three measures of model accuracy which
provide the responses for this study as explained in the next section.
As shown in Figure 5.1, there are three responses in the kriging/DOE study:
1. cross validation root mean square error (CVRMSE), see Equation 2.33 in Section
2.4.2, of the kriging model;
2. maximum absolute error (MAX), see Equation 2.30 in Section 2.4.2, of the kriging
model; and
3. root mean square error (RMSE), see Equation 2.32 in Section 2.4.2 of the kriging
model.
The CVRMSE of the kriging model is based on the leave-one-out cross validation procedure
described in Section 2.4.2; it utilizes the sample data to validate the model and does not require
additional validation points. MAX and RMSE, however, require an independent
assessment of model adequacy; therefore, three sets of validation points are used to compute
MAX and RMSE. The average absolute error measure, Equation 2.31, is not included in this
study since it correlates well with RMSE and provides little additional information beyond that
obtained from analysis of RMSE. The number of validation points used in each problem is
listed in Table 5.5: 1000, 1500, and 2000 validation points for the two, three, and four variable
problems, respectively.
Rather than randomly pick these validation points, the points are obtained from a
random Latin hypercube to ensure uniformity within the design space. The predicted values
from each kriging model are compared against the actual values from the set of validation points,
and the error measures MAX and RMSE are computed. These measures are then
“normalized” as a percentage of the sample range, for the particular design under investigation,
in order to compare responses with different magnitudes. A precursory analysis of the data is
given in Section 5.2.
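The random Latin hypercube used for the validation points and the range-based "normalization" can be sketched as follows; the names are illustrative, and the actual generator used in this work may differ.

```python
import numpy as np

def random_latin_hypercube(n, k, rng):
    """One stratum per point per dimension: permuted strata plus uniform jitter."""
    strata = np.array([rng.permutation(n) for _ in range(k)]).T  # shape (n, k)
    return (strata + rng.uniform(size=(n, k))) / n               # points in [0, 1)^k

def pct_of_sample_range(error, y_sample):
    """Express an error measure as a percentage of the observed sample range."""
    return 100.0 * error / (y_sample.max() - y_sample.min())
```

By construction, each column of the hypercube places exactly one point in each of the n equal-width strata, which gives the uniform coverage desired for validation.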
In total, there are 11535 kriging models constructed as shown in Table 5.6 for the six
test problems—one kriging model for each equation for each design for each test problem. For
each of these models, there are three measures of model accuracy: MAX, RMSE, and
CVRMSE; hence, there are 34605 data points in the resulting data set.
Problem  No. of     No. of     No. of      Total No.  No. of   No. of  Total No.
Name     Variables  Responses  Unique DOE  of DOE     CORFCN   Models  of Models
2bar     2          3          51          71         5        765     1005
3bar     2          4          51          71         5        1020    1340
2mem     3          3          63          92         5        945     1380
spring   3          7          63          92         5        2205    3220
press    4          4          66          102        5        1320    2040
weld     4          5          66          102        5        1650    2550
Grand Totals                                                   7905    11535
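The bookkeeping in Table 5.6 can be checked directly; the unique-model counts are simply products of the table's columns (numbers taken from the table, with 11535 counting models before replicated designs are averaged):

```python
# (no. of responses, no. of unique DOE) per problem, read from Table 5.6
problems = {
    "2bar": (3, 51), "3bar": (4, 51),
    "2mem": (3, 63), "spring": (7, 63),
    "press": (4, 66), "weld": (5, 66),
}
CORFCN = 5  # five correlation functions per combination

# one kriging model per (response, unique design, correlation function)
unique_models = sum(r * d * CORFCN for r, d in problems.values())
assert unique_models == 7905

# three error measures (MAX, RMSE, CVRMSE) per model in the full data set
assert 11535 * 3 == 34605
```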
To facilitate analysis of the data set, the error measures of the designs which are
replicated—the orthogonal arrays, random Latin hypercubes, OA-based Latin hypercubes, and
orthogonal Latin hypercubes—are averaged to reduce the data set to 7905 models. However,
not all of these 7905 models are good; many yield error measures which are outliers that bias
the results, and these potential outliers must be removed. The outliers can be attributed to
incomplete convergence of the numerical optimization used to fit the model, singularities in
the data set which occur during model fitting, numerical round-off error, or bad data resulting
from transferring data from file to file, program to program, and computer to computer.
Hence, the data set is culled to remove any potential outliers. Rather than first fit the
model and remove potential outliers based on the residuals, the data is culled based on (a)
potential RMSE outliers, (b) potential MAX outliers, and (c) potential CVRMSE outliers since
it is known that many outliers exist due to singularities in the data set which occur during model
fitting. The process is described in detail in Appendix E; density plots are included in Appendix
E to show the distribution of the resulting data for the two, three, and four variable problems.
In this manner, the data set is reduced from 7905 models to 7578. This constitutes a
reduction of about 4% which is considered reasonable given the magnitude of the study and the
potential for errors. From this point forward, any reference to “the data set” refers to the final
culled data set with all of the potential outliers removed and not to the original data set unless
explicitly specified.
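The culling procedure itself is detailed in Appendix E. As a sketch of the idea, the familiar 1.5 × IQR rule can flag potential outliers in each error measure in turn (an assumed rule for illustration only, not the exact procedure of Appendix E):

```python
import statistics

def cull_outliers(records, keys=("rmse", "max", "cvrmse")):
    """Drop any record whose value for one of the error measures lies
    beyond 1.5 interquartile ranges of the first or third quartile."""
    kept = records
    for key in keys:
        values = sorted(r[key] for r in kept)
        q1, _, q3 = statistics.quantiles(values, n=4)
        iqr = q3 - q1
        lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
        kept = [r for r in kept if lo <= r[key] <= hi]
    return kept

# Twenty well-behaved models plus one with a wildly inflated RMSE
models = [{"rmse": 0.05, "max": 0.3, "cvrmse": 0.06} for _ in range(20)]
models.append({"rmse": 9.0, "max": 0.3, "cvrmse": 0.06})
culled = cull_outliers(models)
```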
Analysis of variance (ANOVA) is performed in the next section to determine which factors have a
significant effect on the accuracy of the resulting kriging models; this is followed in Section 5.2.2
by an examination of the correlation among the three error measures (see, e.g., (Chambers, et
al., 1992; Montgomery, 1991) for more on ANOVA). The software package S-Plus 4 (MathSoft,
1997) is used to analyze the data. The ANOVA is performed
separately for each pair of two, three, and four variable problems for all three error measures.
Furthermore, because of the size of the data set, only main effects and two-factor interactions
can be studied. The ANOVA results are given in Section E.2, and a summary of the ANOVA
results is given in Table 5.7. In the table, the factor main effects and two-factor interaction
effects are listed in the first column; a colon between factors (e.g.,
CORFCN:NSAMP) indicates a two-factor interaction. The abbreviations "sig" and "not sig"
are used to indicate whether or not the effect is significant. For instance, all of the main effects
and two-factor interactions except CORFCN:NSAMP are significant for RMSE.RANGE and
MAX.RANGE.
Table 5.7 Summary of ANOVA Results for Kriging/DOE Study
As can be seen in Table 5.7, the majority of the effects are significant for all of the error
measures. It is not surprising that the main effects of the factors DOE, CORFCN,
NSAMP, and EQN are significant for all responses for all of the problems. Likewise, the
interaction between DOE and NSAMP is significant for all RMSE.RANGE and
MAX.RANGE values. The interaction between CORFCN and NSAMP is not significant in
the majority of cases since it is unlikely that these two factors would interact to provide a more
accurate model. It is interesting to note that the interaction between DOE and CORFCN is
significant in all but one case. In summary, there are many significant interactions
and main effects to examine. Observations regarding many of these interactions can be inferred
from the appropriate graphs; however, the commentary in Sections 5.3 and 5.4 focuses on the
effects needed to test Hypotheses 2 and 3.
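The ANOVA itself is run in S-Plus; the underlying computation for a single factor can be sketched in a few lines (hypothetical data; a one-way F-statistic only, not the full multi-factor analysis summarized in Table 5.7):

```python
def one_way_anova_F(groups):
    """F-statistic for a one-way ANOVA: the between-group mean square
    divided by the within-group mean square."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical RMSE.RANGE values grouped by correlation function
F = one_way_anova_F([[0.08, 0.07, 0.09],   # exponential
                     [0.05, 0.06, 0.05],   # Gaussian
                     [0.06, 0.07, 0.06]])  # cubic
```

A large F relative to the appropriate F distribution marks the factor as significant; the reported analysis also includes two-factor interactions, which this single-factor sketch omits.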
The two most important measures of model accuracy in this study are considered to be
RMSE and MAX. Why are these two particular measures the most important? RMSE is
used to gauge the overall accuracy of the model, and MAX is used to gauge the local accuracy
of the model. Ideally, RMSE and MAX would be zero, indicating that the metamodel predicts
the underlying analysis or model exactly; however, this is rarely the case. Therefore, the lower
the value of either error measure, the more accurate the model. Both measures matter in a
design application because high values of RMSE can lead an optimization algorithm into a region
of bad design and high values of MAX prevent the optimization algorithm from finding the true
optimum solution. To see if the two measures are correlated, a plot of RMSE.RANGE versus
MAX.RANGE for the data set is given in Figure 5.8. Here and henceforth, the acronyms
RMSE.RANGE and MAX.RANGE are used to refer to the values of RMSE and MAX when
normalized as a percentage of the sample range.
Figure 5.8 Plot of RMSE.RANGE versus MAX.RANGE
Since the data is widely scattered in Figure 5.8, the two error measures do not correlate
well. Models with low RMSE.RANGE values tend to have low MAX.RANGE values, but
models with moderate RMSE.RANGE values have any of a variety of MAX.RANGE values;
hence, neither measure alone supports conclusions about the other.
Given the wide scattering of the data in Figure 5.9, RMSE.RANGE and CVRMSE.RANGE are
not correlated either. This means that the cross validation root mean square error is not a
sufficient measure of model accuracy because root mean square error provides the best possible
assessment of overall model accuracy. If CVRMSE.RANGE and RMSE.RANGE had been
correlated, then CVRMSE alone could be computed to assess model accuracy without having
to evaluate additional validation points.
Figure 5.9 Plot of CVRMSE.RANGE versus RMSE.RANGE
As in Figure 5.9, there is a wide scattering of the data in Figure 5.10, and it appears that
MAX.RANGE and CVRMSE.RANGE are not well correlated either. Hence, there is no need
to examine CVRMSE.RANGE further because it does not provide a good assessment of model
accuracy since it does not correlate well with either MAX.RANGE or RMSE.RANGE. As
stated earlier, this finding is unfortunate because it means that additional validation points
must be taken in order to assess the accuracy of a kriging model properly; cross validating the
model at its sample points is not sufficient.
Figure 5.10 Plot of CVRMSE.RANGE versus MAX.RANGE
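Cross validation RMSE is computed by re-predicting each sample point with that point left out in turn. A minimal leave-one-out sketch, using a nearest-neighbor predictor as a stand-in for the kriging predictor (the stand-in and all names are assumptions for brevity):

```python
import math

def loo_cv_rmse(points, values, predict):
    """Leave-one-out cross validation RMSE: each sample point is held
    out, the remaining points predict it, and the errors are pooled."""
    errs = []
    for i in range(len(points)):
        rest_pts = points[:i] + points[i + 1:]
        rest_vals = values[:i] + values[i + 1:]
        errs.append(values[i] - predict(rest_pts, rest_vals, points[i]))
    return math.sqrt(sum(e * e for e in errs) / len(errs))

def nearest_neighbor(pts, vals, x):
    """Stand-in predictor: value at the closest remaining sample."""
    j = min(range(len(pts)), key=lambda k: abs(pts[k] - x))
    return vals[j]

pts = [0.0, 0.25, 0.5, 0.75, 1.0]
vals = [p * p for p in pts]            # sampled from y = x^2
cvrmse = loo_cv_rmse(pts, vals, nearest_neighbor)
```

With an actual kriging predictor each held-out point would be re-predicted from the remaining samples in the same way, giving CVRMSE without any extra validation points.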
Using only RMSE.RANGE and MAX.RANGE, the error of the resulting kriging
models can now be assessed by isolating a single factor (or pair of factors). The process for
analyzing the data in order to interpret specific results is identified at the beginning of each
section when Hypotheses 2 and 3 are tested. Hypothesis 2 is tested first in the next section.
In order to test this hypothesis, two factors are isolated to analyze the results further, namely,
CORFCN and EQN. Both factors were found to have a significant effect on the accuracy of
the resulting kriging models in the ANOVA in Section 5.2.1. The effect of CORFCN on
RMSE.RANGE and MAX.RANGE is investigated in the next section. The effect of EQN on
RMSE.RANGE and MAX.RANGE is discussed in Section 5.3.2. Keep in mind that all of
these results are based strictly on averages of the data at a given level of a particular variable; it
is assumed that biasing due to unbalanced numbers of observations at each level is negligible
since such a large data set is being used.
The effect of correlation function on model accuracy was found to be significant in the
ANOVA in Section 5.2.1, but it is uncertain which correlation function yields the best results on
average. Therefore, the effect of CORFCN on RMSE.RANGE aggregated over all the
problems and for each pair of problems is shown in Figure 5.11. The average (mean) of
RMSE.RANGE for each factor level is plotted on the vertical axis in the figure. Meanwhile, the
vertical bars within the figure are used for grouping purposes, showing the range of effects of the
different levels of the factor being considered (in this case, CORFCN) for each problem group
as indicated on the x-axis. The numbers 1, 2, 3, 4, and 5 in the figure indicate the level of
correlation function as described in the key in the figure. The horizontal dashed lines which
cross each vertical bar indicate the group average of RMSE.RANGE for that particular
grouping; for instance, the mean RMSE.RANGE for all of the problems is about 0.062. The
arrows are used to indicate the effect a particular level of CORFCN has on RMSE.RANGE;
the same holds true regardless of the factor being considered. For example, in Figure 5.11 the
average effect of CORFCN = 1 in the two variable problems is slightly less than 0.08 while the
average effect of CORFCN = 4 in the same problems is slightly greater than 0.06. Finally,
lower values of RMSE.RANGE (and MAX.RANGE) are better; so, the lower the arrow of a
particular level on the vertical line, the more accurate is the resulting kriging model.
Figure 5.11 Effect of CORFCN on RMSE.RANGE (Key: 1 = Exponential, 2 = Gaussian,
3 = Cubic, 4 = Linear Matérn, 5 = Quadratic Matérn)
Some observations regarding Figure 5.11 are as follows. The exponential correlation
function (CORFCN = 1) is repeatedly the worst. Overall, the Gaussian correlation function
(CORFCN = 2) provides the lowest RMSE.RANGE on average as well as for the three and
four variable problems. The linear Matérn (CORFCN = 4) yields the lowest average
RMSE.RANGE for the two variable problems but yields results comparable to the piecewise
cubic correlation function otherwise. The piecewise cubic (CORFCN = 3) and quadratic Matérn
(CORFCN = 5) correlation functions generally yield worse results than the Gaussian correlation
function.
The effect of CORFCN on MAX.RANGE is shown in Figure 5.12. As in Figure 5.11,
the exponential correlation function (CORFCN = 1) repeatedly is the worst but does
surprisingly well in the four variable case. Overall, the Gaussian correlation function (CORFCN
= 2) provides the lowest MAX.RANGE on average and for the three and four variable
problems as well.
Figure 5.12 Effect of CORFCN on MAX.RANGE (Key: 1 = Exponential, 2 = Gaussian,
3 = Cubic, 4 = Linear Matérn, 5 = Quadratic Matérn)
As seen in Figure 5.12, the linear Matérn correlation function (CORFCN = 4) yields
the best MAX.RANGE for the two variable problems and the worst for the four variable
problems, with average results otherwise. The piecewise cubic (CORFCN = 3) and
quadratic Matérn (CORFCN = 5) correlation functions yield comparable results, falling
somewhere in the middle of the spectrum in each problem and performing slightly better than
average overall.
Based on these results, the Gaussian correlation function (CORFCN = 2) appears to be the best
correlation function to use for building kriging models. On average, it provides the lowest
RMSE.RANGE and MAX.RANGE, yielding the most accurate kriging models. Furthermore,
it also yields the best results in the three and four variable problems when averaged over all
designs, sample sizes, and equations; in the two variable problems its performance is average
but not far behind the linear Matérn correlation function (CORFCN = 4).
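The correlation functions compared here map the distance between two sample points to a correlation in [0, 1]. A sketch of the one-dimensional exponential and Gaussian forms (a common parameterization; the dissertation's exact definitions appear in Chapter 2 and may differ in detail):

```python
import math

def exponential_corr(theta, d):
    """Exponential correlation: R(d) = exp(-theta * |d|)."""
    return math.exp(-theta * abs(d))

def gaussian_corr(theta, d):
    """Gaussian correlation: R(d) = exp(-theta * d^2); smoother near
    d = 0 than the exponential form."""
    return math.exp(-theta * d * d)

# Both equal 1 at zero distance and decay toward 0 as distance grows
r_exp = exponential_corr(2.0, 0.5)   # exp(-1.0)
r_gau = gaussian_corr(2.0, 0.5)      # exp(-0.5)
```

The Gaussian form's smoothness at zero distance is one common explanation for the smoother, often more accurate, predictors observed with it here.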
In order to determine which types of equations are fit best, the factor EQN is used to
isolate which equations are well fit by the kriging models, thus explicitly testing Hypothesis 2. A
plot of the resulting RMSE.RANGE of the two, three, and four variable problems for each level
of the factor EQN is shown in Figure 5.13; the effect of each level of EQN on the mean of
RMSE.RANGE is averaged over all DOE, NSAMP, and CORFCN for each problem. For
clarity, dashed lines are used to indicate the 5% and 10% RMSE.RANGE values. If a 5% level
of model accuracy is used as a cutoff point, then 14 out of the 26 equations in this study are
accurately modeled by kriging. If that cutoff is raised to 10%, then 20 out of the 26 equations
are accurately modeled.
Figure 5.13 Effect of EQN on RMSE.RANGE (dashed lines mark the 5% and 10% levels)
Considerably fewer kriging models meet a 10% cutoff point for MAX.RANGE. In Figure 5.14,
only nine of the 26 equations fall within the 10% level of accuracy. If the level of accuracy is
allowed to drop to 20%, which is quite high, then five more equations may be considered
modeled accurately by kriging (EQN = 7, 8, 9, 20, and 22). The mean values for
MAX.RANGE for Equations 18, 19, and 21 are beyond the scale of the chart. For
convenience, the equations and corresponding levels of EQN are listed in Table 5.8, which
summarizes which equations are fit well and which are not.
Figure 5.14 Effect of EQN on MAX.RANGE
The types of equations which are well fit by kriging are noted in Table 5.8. At the 5%
level of RMSE.RANGE the linear combinations of the design variables (EQN = 1, 5, 8, 13, 15,
24, and 25) and most reciprocal equations (EQN = 7, 9, 20, and 22) are modeled well. Some
higher-order equations are also modeled well (EQN = 23 and 26). At the 10% level, all of the
equations in the three variable problems are modeled well (EQN = 8-17). At this level, the
equations based on the finite element model of the two-member frame (EQN = 16 and 17) also
are accurately represented by the kriging models. Looking at MAX.RANGE, however, the
majority of the equations which meet the 10% cutoff are linear combinations of the design
variables (e.g., EQN = 1, 5, 13, 24, and 25), which may also involve higher-order terms.
Table 5.8 Summary of Equations Accurately Modeled by Kriging

EQN   Equation                                   5% RMSE.RANGE  10% RMSE.RANGE  10% MAX.RANGE
 6    g1(x) = …                                  no             yes             no
23    f(x) = … + 3.1661Ts²L + 19.84Ts²R          yes            yes             yes
24    g1(x) = Ts − 0.0193R                       yes            yes             yes
25    g2(x) = Th − 0.00954R                      yes            yes             yes
26    g3(x) = πR²L + (4/3)πR³ − 1.296E6          yes            yes             yes
Which types of functions are not modeled well by kriging? Based on the data in
Table 5.8, the equations which are not modeled well by kriging are the equations involving
reciprocals of combinations of the design variables in the three-bar truss problem (EQN = 2, 3,
and 4) and two-bar truss problem (EQN = 6 and 7), and the majority of the welded beam
equations, which include shear stress calculations, a cosine term which is a function of the
design variables, and a variety of reciprocals and square roots of terms which are functions of
the design variables. In addition, the finite element equations for the two-member frame
(EQN = 16 and 17) are not modeled well at the 5% level of accuracy or at the 10% level of
accuracy of MAX.RANGE. It is also interesting to note that the objective function of the
welded beam problem (EQN = 18) is one of the equations approximated worst by the kriging.
This is rather surprising considering it is very similar to the objective function of the pressure
vessel problem (EQN = 23) which is modeled well in all cases. Perhaps these differences are
due to the size of the design space as opposed to the equations themselves; very few
approximation methods will work well if the points are sparsely scattered throughout the design
space, which may be the case here.
In summary, kriging accurately models a wide variety of equations. Using a 5% level of
accuracy, over half (14 out of the 26) of the equations studied are accurately modeled over the
entire design space as measured by RMSE.RANGE; if a 10% level of accuracy is used instead,
then over 3/4 of the equations (20 out of 26) are accurately modeled. Unfortunately, only nine
of the 26 meet the 10% level of accuracy in MAX.RANGE. The RMSE.RANGE results are
the more important from a design standpoint since accuracy over the entire design space is
more important during design space search than the maximum discrepancy at any one given
point.
Hypothesis 3: Space filling experimental designs are better suited for building
accurate kriging metamodels than classical experimental designs.
In order to test this hypothesis, the data set is analyzed by isolating the factor DOE which was
found to have a significant effect on the accuracy of the resulting kriging models in the ANOVA
in Section 5.2.1. However, as the same sample sizes are not used for each design, as discussed
in Section 5.1.3, the results also must be conditioned on sample size (NSAMP) for a
fair comparison between designs. Consider, for instance, the combined CCD + CCF (ccdaf)
design in the two variable problems which has 13 sample points. It cannot be concluded that
the combined CCD + CCF is the best by averaging over all designs because its effect is biased
by the fact that it has 13 sample points which is at the upper end of the number of points in the
two variable problems and is therefore expected to yield good results because of the large
number of sample points. Meanwhile, the effects of all of the other designs with variable
numbers of points (i.e., unifd, hamss, mxmnl, mnmxl, and oplhd) are averaged over all sample
sizes where the smaller the sample size, the less accurate the model, and the worse the effect of
these designs. The results for the two, three, and four variable problems are discussed in
Sections 5.4.1, 5.4.2, and 5.4.3, respectively, by conditioning on both design type and sample
size. As stated previously, keep in mind that the results are based on averaging the data at a
given level of a particular variable; it is assumed that biasing due to unbalanced numbers of
observations at each level is negligible since such a large data set is being used.
For the two variable problems, the classical designs (CCI, CCF, and CCD) each utilize
nine sample points, and the combined CCF + CCI and CCF + CCD each have 13 points.
Hence, the average effect of each design which has nine points and each design which has 13
points on RMSE.RANGE is shown in Figure 5.15, and the average effect on MAX.RANGE of
all of these designs is shown in Figure 5.16.
Looking first at the nine point designs in Figure 5.15, it is surprising to note that both the
CCD and CCI designs perform well with the minimax Latin hypercube (mnmxl) design yielding
the best RMSE.RANGE values; recall that the minimax Latin hypercube designs are unique to
this research (see Appendix C). The orthogonal Latin hypercubes (yelhd), maximin Latin
hypercubes (mxmnl), and the uniform designs (unifd) also perform well. The worst designs are
the Hammersley sampling sequence design, the CCF design, and the OA-based Latin
hypercube design.
In the 13 point designs, the uniform design (unifd) yields the best results with the
maximin and minimax Latin hypercube designs giving equally good results which are only slightly
worse than that of the uniform design. The random Latin hypercubes (rnlhd) continue to give
average results with the combined CCD + CCF (ccdaf) and CCI + CCF (cciaf) designs giving
slightly better results but results which are still about 1% worse than the maximin and minimax
Latin hypercubes. The optimal Latin hypercube designs are the worst with the Hammersley
sampling sequence designs showing good improvement with the extra four sample points.
Figure 5.15 Effect of 9 and 13 Point DOE on RMSE.RANGE
Turning to the results for MAX.RANGE in Figure 5.16, the classical designs perform
quite well, with the CCI design (ccins) being the best of the nine point designs and the combined
CCD + CCF (ccdaf) being the best of the 13 point designs. The nine point minimax Latin
hypercubes (mnmxl) and OA-based Latin hypercubes (oalhd) yield results comparable to the
CCD and CCF designs. The remaining space filling designs all fare worse than the CCD, with
the Hammersley sampling sequence giving the worst MAX.RANGE. In the 13 point designs,
the combined CCI + CCF (cciaf) is the second best design with the maximin Latin hypercube
(mxmnl) coming in a close third. The uniform, optimal Latin hypercube, minimax Latin
hypercube, and random Latin hypercube designs are slightly worse than the average.
Figure 5.16 Effect of 9 and 13 Point DOE on MAX.RANGE
Based on these results, the space filling designs do best in terms of RMSE.RANGE
while the classical designs yield the lowest MAX.RANGE. If RMSE.RANGE is taken as the
more important of the two measures of error, then the space filling DOE are better than the
classical DOE for the two variable problems considered in this dissertation. The results for the
three variable problems are examined next.
In the three variable problems, there are four values of NSAMP which must be
considered due to differences in prescribed sample sizes. The Box-Behnken design has 13
points; the classical CCD, CCF, and CCI designs have 15; the CCI + CCF has 21; and the
CCD + CCF has 23. The effects of these designs on RMSE.RANGE are shown in Figure 5.17.
Figure 5.17 Effect of 13, 15, 21, and 23 Point DOE on RMSE.RANGE
In Figure 5.17, the Box-Behnken (bxbnk) design dominates the 13 point designs with a
RMSE.RANGE of about 5%. All of the space filling designs perform quite poorly, with
RMSE.RANGE values of about 8% or worse; it appears that these designs do not fare well
when relatively few sample points are taken in the design space. A similar observation can be
made regarding the 15 point designs. The maximin Latin hypercube (mxmnl) yields the best
result, but the CCI and CCF designs are both almost as good. The minimax Latin hypercube
(mnmxl) design yields average results, with the uniform and optimal Latin hypercube (oplhd)
designs faring slightly better but not as well as the CCI and CCF. The CCD is the worst
design, with the random Latin hypercube (rnlhd) and the Hammersley sampling sequence
(hamss) designs faring only slightly better.
In the 21 and 23 point designs in Figure 5.17, the optimal Latin hypercube design
(oplhd) yields the lowest RMSE.RANGE with the random Latin hypercube design (rnlhd)
yielding the worst. The combined CCI + CCF (cciaf) is the second best 21 point design,
followed closely by the minimax Latin hypercube (mnmxl). The uniform design (unifd) gives an
average result in the 21 point case but is the second best design in the 23 point case; the best
design is the combined CCD + CCF (ccdaf). The optimal Latin hypercube design (oplhd) is
the worst of the 23 point designs with the minimax Latin hypercube (mnmxl) and random Latin
hypercube designs (rnlhd) yielding results which are worse than the average.
In Figure 5.18, the effects of these different designs on MAX.RANGE are plotted. The
classical experimental designs consistently provide the lowest MAX.RANGE when averaging
over all other factors. The space filling designs do not perform well in any case and yield
particularly poor results in the 13 and 21 point designs. The minimax Latin hypercube (mnmxl)
design is no exception, giving near average results in the 15, 21, and 23 point designs and
among the worst results in the 13 point designs.
As with the two variable problems, the space filling designs offer better results if
RMSE.RANGE is considered, while the classical designs are better when it comes to
MAX.RANGE for the problems considered in this dissertation. The four variable problems are
examined next.
Figure 5.18 Effect of 13, 15, 21, and 23 Point DOE on MAX.RANGE
For the four variable problems, there are two sample sizes to examine: NSAMP = 25
points and NSAMP = 33 points. Figure 5.19 contains the effects of DOE on RMSE.RANGE
for these sample sizes, and Figure 5.20 contains the effects of DOE on MAX.RANGE. As
seen in the figures, there are twelve designs which use 25 sample points and only six with 33.
The combined CCD + CCF is not considered because it is the only design requiring a larger
sample size.
Looking first at Figure 5.19, the minimax Latin hypercube (mnmxl) design introduced in
this dissertation yields the best results on average. The uniform design (unifd) is a close second
in the 25 point case, and the random Latin hypercube design (rnlhd) is a close second in the 33
point case. Of the classical 25 point designs, the Box-Behnken (bxbnk) design performs slightly
better than average while the CCD (ccdes), CCF (ccfac), and CCI (ccins) designs all do worse
than average. Finally, the Hammersley sampling sequence (hamss) designs perform poorly at
both sample sizes.
Figure 5.19 Effect of 25 and 33 Point DOE on RMSE.RANGE (the mean RMSE.RANGE of
the 25 point CCD, 0.14, lies beyond the scale of the chart)
In Figure 5.20, the effects of the 25 and 33 point DOE on MAX.RANGE are plotted.
Unlike the two and three variable problems, the space filling designs yield the best
MAX.RANGE for the four variable problems. The randomized 25 point orthogonal array
(oarry) produces the lowest MAX.RANGE with the 25 point minimax Latin hypercube (mnmxl)
a close second. The classical Box-Behnken (bxbnk), CCF (ccfac), and CCI (ccins) designs and
the space filling uniform design (unifd) and optimal Latin hypercube design (oplhd) all yield
comparable results which are only slightly worse than either the minimax Latin hypercube or the
randomized orthogonal array. The 25 point orthogonal array-based Latin hypercube (oalhd)
and the maximin Latin hypercube (mxmnl) produce results which are close to the average effect.
The Hammersley sampling sequence (hamss) designs yield the worst MAX.RANGE in both the
25 and 33 point designs; the 25 point CCD (ccdes) does not fare much better.
Figure 5.20 Effect of 25 and 33 Point DOE on MAX.RANGE
In the 33 point designs, the minimax Latin hypercube (mnmxl) yields the lowest
MAX.RANGE on average. The combined CCI + CCF (cciaf) and random Latin hypercube
(rnlhd) designs give comparable results which are slightly worse than the minimax Latin
hypercube. Finally, the 33 point orthogonal Latin hypercube (yelhd) and optimal Latin
hypercube (oplhd) designs yield results which are worse than the average.
The space filling designs yield lower RMSE.RANGE values for all of the two, three,
and four variable problems considered in this dissertation. The classical experimental designs
yield lower MAX.RANGE values for the two and three variable problems but do not perform
as well as the space filling designs in the four variable problems. In small dimensions, i.e., two
and three variables, the classical designs spread the points out equally well in the design space
regardless of whether they are "space filling" designs or not. However, based on the observed
trends in the data, it appears that as the number of design variables increases, the space filling
designs perform better and better in terms of the two error measures used in this study. As
their name implies, the space filling designs do a better job of spreading out points in the design
space, and thus filling the space, as the number of variables increases. Hence, Hypothesis 3 is
verified because the space filling designs do perform better than the classical designs in terms of
RMSE.RANGE. Furthermore, the larger the number of design variables and the more sample
points, the better the accuracy of the resulting kriging model from a space filling design.
Some additional comments about particular space filling designs are as follows.
• The minimax Latin hypercube designs introduced in this dissertation perform quite well
in these problems. With the exception of the 13 point minimax Latin hypercube design
for the three variable problems, these designs consistently are among the best of the
designs in terms of their effects on RMSE.RANGE and MAX.RANGE.
• The Hammersley sampling sequence designs perform poorly in all of these problems.
The impetus for the Hammersley designs, though, is to provide good stratification of a
k-dimensional space (Kalagnanam and Diwekar, 1997); as such, they are designed to
perform well in large design spaces, which may explain why they perform so poorly in
these relatively small problems.
• The random Latin hypercube designs provide average results, as might be expected
because these designs simply rely on a random scattering of points in the design space.
By imposing additional considerations on these designs to "control" the randomization of
the points, the performance of these designs can be improved. For instance, the
orthogonal Latin hypercubes (Ye, 1997), the maximin Latin hypercubes (Morris and
Mitchell, 1995), the (IMSE) optimal Latin hypercubes (Park, 1994), and the
orthogonal-array based Latin hypercubes (Tang, 1993) typically yield a more accurate
kriging model than does the basic random Latin hypercube. This observation is not
new; rather, it supports the claims made by the creators of these designs when they
introduced them.
• The uniform designs perform surprisingly well, considering they are based solely on
number-theoretic reasoning (Fang and Wang, 1994). Regardless, the importance of
these designs lies in the fact that uniformly spreading the points throughout the design
space yields an accurate kriging model; this observation seems obvious but is not well
documented in the literature.
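The Latin hypercube construction underlying several of these designs can be sketched as follows: each dimension is split into n equal strata, and each stratum holds exactly one point (a minimal random Latin hypercube on the unit cube; the helper's name and interface are assumptions, not the dissertation's implementation):

```python
import random

def random_latin_hypercube(n, k, rng=None):
    """n points in k dimensions on [0, 1)^k with one point per
    stratum per dimension."""
    rng = rng or random.Random()
    columns = []
    for _ in range(k):
        # A random permutation assigns each point to a distinct stratum
        perm = list(range(n))
        rng.shuffle(perm)
        # Jitter each point uniformly within its stratum
        columns.append([(p + rng.random()) / n for p in perm])
    return [tuple(col[i] for col in columns) for i in range(n)]

pts = random_latin_hypercube(10, 2, random.Random(0))
```

Maximin, minimax, orthogonal, and IMSE-optimal Latin hypercubes then select or adjust such permutations against an additional criterion rather than leaving them purely random.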
Hypotheses 2 and 3 have now been verified as a result of this study. In the next
section, the relevance of these results with regard to the PPCEM are discussed.
5.5 A LOOK BACK AND A LOOK AHEAD
In this chapter, 7905 kriging metamodels of six engineering test problems have been
constructed and validated using a variety of correlation functions and experimental designs to
test and verify Hypotheses 2 and 3. In closing this chapter, recall the questions posed at the
beginning of the chapter.
• What is the best type of experimental design to use to query the simulation to
generate data to build an accurate kriging metamodel? For problems containing
only two variables, either classical or space filling designs yield good results; however,
as the size of the design space increases (i.e., the number of variables increases), space
filling experimental designs tend to yield more accurate kriging metamodels on average
since they tend to spread the points out well in the design space. In particular, the
minimax Latin hypercube design, uniform designs, and orthogonal arrays yield good
results. Random Latin hypercubes also provide good results, provided orthogonality or
optimality (e.g., IMSE) are imposed to control the randomization. Finally, of the
designs considered, Hammersley point designs are not recommended unless numerous
sample points can be afforded.
• How many sample points should you use? The interaction between sample size and
experimental design type is examined in Section E.3 because this interaction does not
directly impact testing Hypotheses 2 or 3 since the analysis is conditioned on sample
size. In general, the more sample points which can be afforded, the more accurate the
resulting model. However, as discussed in Section E.3, a recommendation on the
number of sample points cannot be made at this time because a wide enough spread of
points was not investigated.
• What type of correlation function should you use to obtain the best predictor?
Based on the results in Section 5.3.1, the Gaussian correlation function yields the most
accurate predictor on average of the five studied.
• Lastly, how can you best validate the metamodel once you have constructed it?
As discussed in Section 5.2.2, cross validation root mean square error is not a sufficient
measure of model accuracy since it does not correlate well with either root mean square
error or maximum error. One possible explanation is that because the sample sizes
are relatively small, an insufficient number of points is available to cross-validate the
model properly. If more points were available, then cross validation error may yield a
reasonable assessment of model accuracy; however, this has not been tested. In light of
this result, then, it is imperative that additional sample points be taken to validate a
kriging model.
These results have a direct bearing on the metamodeling capabilities within the Product Platform
Concept Exploration Method (PPCEM) as depicted in Figure 5.21. In the event that
metamodels are employed within the context of product platform design, the best correlation
function to select—if kriging metamodels are to be utilized—is the Gaussian correlation
function, and the best design to use is a space filling experimental design if the problem has
more than two variables (if the problem only has two variables, then a classical experimental
design will suffice). Also, it is recommended to take as many sample points as possible, but
keep in mind that additional sample points are needed to validate the model since cross
validation does not appear to provide a sufficient assessment of model accuracy.
Figure 5.21 Relationship of the Kriging/DOE Testbed (Sections 5.2 and 5.3) and the Family of
Universal Electric Motors (Chapter 6) to the Product Platform Concept Exploration Method
(Chapter 3)
In the next chapter, the focus returns to the PPCEM, and the process of testing and verifying
it continues; the design of a family of universal electric motors is offered as "proof of concept"
that the PPCEM works and that it is effective at facilitating the design and development of a
scalable product platform for a product family.
CHAPTER 6

In this chapter, the Product Platform Concept Exploration Method is
implemented to verify its use for designing a family of universal motors around a common
scalable product platform. An overview of the universal motor problem is presented in Section
6.1; a schematic of a typical universal motor is given in Section 6.1.1, and a practical
mathematical model for universal motors is derived in Section 6.1.2. Section 6.2 contains the
implementation of Steps 1 and 2 of the PPCEM. A market segmentation grid is created for the
problem and relevant factors and responses for the universal motor platform are identified.
Section 6.3 follows with the implementation of Step 4 of the PPCEM by aggregating the
universal motor specifications and formulating a compromise DSP; Step 3 of the PPCEM—
building metamodels—is not utilized in this example because analytical expressions for mean and
standard deviation of the responses are derived separately. Section 6.4 contains the
development of the actual universal motor platform; ramifications of the resulting universal motor
platform and product family are analyzed in Section 6.5. Through this example problem,
support for the following hypotheses and sub-hypotheses is provided.

Hypothesis 1 - All but Step 3 of the PPCEM is employed in this chapter to design a family
of universal motors based on a scalable product platform. The success of the method
as discussed in Section 6.5 provides an initial “proof of concept” for the method and
hence Hypothesis 1.
Sub-Hypothesis 1.1 - The market segmentation grid is utilized in Section 6.2 to help
identify the stack length as the scale factor around which the motor family is vertically
scaled to achieve the desired platform leveraging strategy; this supports Sub-Hypothesis
1.1.

Sub-Hypothesis 1.2 - The scale factor for the family of universal motors is taken as the
stack length, following in the footsteps of the example by Black & Decker (Lehnerd,
1987). Robust design principles are used in this example to develop a universal motor
platform, defined by seven design variables, which is insensitive to variations in the
scale factor and is thus good for a family of motors based on different instantiations of
the stack length (the scale factor). The success of this implementation helps to support
Sub-Hypothesis 1.2.

Sub-Hypothesis 1.3 - Robust design principles of “bringing the mean on target” and
“minimizing the deviation” are utilized in this example to aggregate individual targets and
constraints and to facilitate the design of the family of motors. Combining this
formulation with the compromise DSP allows a family of motors to be designed around
a common, scalable product platform, verifying Sub-Hypothesis 1.3.
Despite all of the work in the previous two chapters, Hypotheses 2 and 3 are not tested
in this example. Analytic expressions for mean and standard deviation of the response are
derived from the analysis equations themselves and used directly in the compromise DSP for the
PPCEM. In concluding the chapter, a brief look ahead to the General Aviation aircraft example is offered.
6.1 OVERVIEW OF THE UNIVERSAL MOTOR PROBLEM
Universal electric motors are so named for their capability to function on both direct
current (DC) and alternating current (AC). Universal motors also deliver more torque for a
given current than any other kind of AC motor (Chapman, 1991). The high performance
characteristics and flexibility of universal motors understandably have led to a wide range of
applications, especially in household use where they are found in electric drills, saws, blenders,
vacuum cleaners, and sewing machines, to name a few examples (Martin, 1986).
In addition, many companies manufacture several products which use universal motors;
for example, several companies offer a complete line of power tools, whereas several others
offer a line of kitchen appliances or yard care tools (cf., Lehnerd, 1987). For these companies,
it has already become common practice to utilize a family of universal motors of similar physical
dimensions across their products (cf., Lehnerd, 1987). The advantages of this approach
included increased modularity with decreased manufacturing time and inventory costs. For
example, Black & Decker developed a family of universal motors for its power tools in the
1970s in response to a need to redesign their tools to meet new double insulation safety
standards (Lehnerd, 1987).
In this chapter the task is to identify a set of common physical dimensions for a
hypothetical family of universal motors to satisfy a range of performance needs, providing initial
“proof of concept” for the PPCEM. To begin, a physical description and schematic of the
universal motor is offered in the next section. In Section 6.1.2, relevant analyses for modeling the universal motor are derived.
6.1.1 Physical Description, Schematic, and Nomenclature for the Universal Motor
Problem
A universal motor is composed of an armature and a field, which are also referred to as
the rotor and stator, respectively; see Figure 6.1. The motor depicted in the figure has two field
poles, an attached cooling fan, and laminations in both the armature and the field. Laminating
the metal in both the armature and field greatly reduces certain kinds of power losses (cf.,
Nasar, 1987).
The armature consists of a metal shaft about which wire is wrapped longitudinally around
two or more metal slats, or armature poles, as many as thousands of times. The field consists of
a hollow metal cylinder within which the armature rotates. The field also has wire wrapped
longitudinally around interior metal slats, or field poles, as many as hundreds of times.
Figure 6.1 Schematic of a Universal Motor
(adapted from G.S.Electric, 1997)
For a universal motor, the wraps of wire around the armature and the field are wired in
series, which means that the same current is applied to both sets of wire. As current passes
through the field windings, a large magnetic field is generated, which passes through the metal of
the field, across an air gap between the field and the armature, then through the armature
windings, through the shaft of the armature, across another air gap, and back into the metal of
the field.

However, when the magnetic field passes through the armature windings, which are
themselves carrying current, the magnetic field exerts a force on the current carrying wires,
which is in the direction of the cross product of the vector direction of the current in the
armature windings and the vector direction of the magnetic field. Because of the geometry of
the windings, current on one side of the armature always is passing in the opposite direction to
the current on the other side of the armature. Thus, the force exerted by the magnetic field on
one side of the armature is opposite to the force exerted on the other side of the armature.
Thereby a net torque is exerted on the armature, causing the armature to spin within the field.
The reader is referred to (Chapman, 1991) or any physics textbook (e.g., Tipler, 1991) to
learn more about how an electric motor operates. The nomenclature for the universal electric
motor model is given below.
a: Number of current paths on the armature
Aa: Area between a pole and the armature [mm²]
Awa: Cross-sectional area of the wires on the armature [mm²]
Awf: Cross-sectional area of the wires on the field [mm²]
B: Magnetic field strength generated by the current in the field windings [Tesla, T]
H: Magnetizing intensity [Ampere·turns/m]
I: Electric current [Amperes]
K: Motor constant [n.m.u.]
lr: Diameter of armature [m]
lg: Length of air gap [m]
lc: Mean path length within the stator [m]
L: Stack length [m]
m: Plex of the armature winding [n.m.u.]
M: Mass [kg]
Nc: Number of turns of wire on the armature
Ns: Number of turns of wire on the field, per pole
parmature: Number of poles on the armature
pfield: Number of poles on the field
P: Gross power output [W]
ro: Outer radius of the stator [m]
Ra: Resistance of armature windings [Ohms]
Rs: Resistance in the field windings [Ohms]
t: Thickness of the stator [m]
T: Torque [Nm]
Vt: Terminal voltage [Volts, V]
Z: Number of conductors on the armature
η: Efficiency [n.m.u.]
μsteel: Relative permeability of steel [n.m.u.]
μo: Permeability of free space [Henrys/m]
μair: Relative permeability of air [n.m.u.]
ρ: Resistivity of copper [Ohm·m]
ρcopper: Density of copper [kg/m³]
ρsteel: Density of steel [kg/m³]
φ: Magnetic flux [Webers, Wb]
ℑ: Magnetomotive force [Ampere·turns]
ℜ: Total reluctance of the magnetic circuit [Ampere·turns/Wb]
ℜs: Reluctance of the stator [Ampere·turns/Wb]
ℜa: Reluctance of one air gap [Ampere·turns/Wb]
ℜr: Reluctance of the armature [Ampere·turns/Wb]
ω: Rotational speed [rad/sec]
A universal motor is the same as a direct current (DC) series motor; however, in order
to minimize certain kinds of power losses within the core of the motor when operating on AC
power, a universal motor is constructed with slightly thinner laminations in both the field and the
armature and fewer field windings. The governing electromagnetic equations for the
operation of a series DC motor and a universal motor running on DC current are identical, and
the performance of a universal motor running on AC current is only slightly less than the
performance of the same motor running on DC current; see Figure 6.2. This discrepancy in
performance is due to losses caused by the inherent oscillation in alternating current (AC); for
an overview of the extra losses associated with AC operation, see, e.g., (Chapman, 1991).
Figure 6.2 Comparison of the Torque-Speed Characteristics of a Universal Motor
Rated at 1/4 Hp and 8000 rpm when Operating on AC and DC Power Supplies (Martin,
1986)
These extra losses incurred in AC operation of a universal motor are difficult, if not
impossible, to model analytically; thus, complicated finite element analyses are becoming more
popular for modeling motor behavior under AC current. Since such a detailed analysis is
beyond the scope of this work, the derived model for the performance of the universal motor is
for DC operation for which simple analytical expressions are known or can be derived.
Moreover, several texts indicate that the performance of universal motors under AC and DC
conditions is quite comparable and include diagrams such as the one reproduced in Figure 6.2
(see, e.g., Chapman, 1991; Martin, 1986; Shultz, 1992; Unnewehr, 1983); Shultz (1992)
states that “Universal motors...will operate either on DC or AC up to 60 Hz. Their
performance will be essentially the same when operated on DC or AC at 60 Hz.” The sample
torque-speed curves in Figure 6.2 graphically illustrate this, showing that for one specific motor,
the performance characteristics between AC and DC operation do not deviate significantly until
well past the full-load torque of the motor. For this work, all motors are designed for operation
at full-load torque. Thus, it is assumed that designing a universal motor under DC conditions is
sufficient for the purposes of this example.

The model takes as input the design variables {Nc, Ns, Awa, Awf, ro, t, lgap, I, Vt, L} and
returns as output the power (P), torque (T), mass (M), and efficiency (η) of the motor. To
formulate the model, it is necessary to derive equations for P, T, M, and η as functions of the
design variables. The equations are based primarily on those given in (Chapman, 1991) and
Power
The basic equation for power output of a motor is the input power minus the power losses:

P = Pin - Ploss [6.1]

where the input power is the product of the voltage and the current:

Pin = Vt I [6.2]
Power losses occur in several places within a motor:
• in the copper wire windings on the armature and the field (copper losses),
• at the interface between the brushes and the armature (brush losses),
• in heating up the core and copper wires, which adversely affects the magnetic
properties of the core and the current carrying ability of the wires (thermal losses),
and
• in the core, in mechanical friction and windage, and in other stray forms (core,
mechanical, and stray losses).
Simple analytic expressions only exist for the copper losses and the brush losses. Stray losses
usually are assumed to be no more than one percent of the output power, and thus can be
neglected. Mechanical losses depend upon factors such as the rotational speed of the motor
and the friction in its bearings; however, these variables are beyond the scope of the motor
model itself. Hence mechanical
losses are neglected. Core losses, especially those incurred by eddy currents, can be minimized
by the use of thin laminations in the stator and rotor; assuming this is done, the core losses can
be assumed to be small and thus can be neglected. Thermal losses are in general nonnegligible,
but are highly dependent upon the external cooling scheme (e.g., cooling fan and fins on the
housing) applied to the motor. Because an effective cooling scheme can keep the motor from
running too hot, and as the setup of the cooling configuration is beyond the scope of this model,
thermal losses are neglected. The combined effects of all the aforementioned neglected losses
will, however, decrease the output power and efficiency from the predicted value from the
model. Nevertheless, the following equations serve as a sufficiently accurate model for the DC
operation of a universal motor. Consequently, the general equation for power losses reduces
from:

Ploss = Pcopper + Pbrush + Pthermal + Pcore + Pmechanical + Pstray [6.3]

to the more manageable:

Ploss = Pcopper + Pbrush [6.4]
where

Pcopper = I²(Ra + Rs) [6.5]

and

Pbrush = ϑI [6.6]

where ϑ is typically 2 volts. Substituting these expressions into the power equation yields:

P = Vt I - I²(Ra + Rs) - ϑI [6.7]
However, Ra and Rs, the resistances of the armature and field windings, can be specified further
as functions of the design variables. The resistances Ra and Rs can be computed directly from
the general equation that the resistance of any wire is given by:

R = ρl/A [6.8]

where l is the length of the wire and A is its cross-sectional area.
Assuming that each wrap (i.e., turn) of wire on the armature is approximately the shape of a
rectangle with length L (the stack length of the motor) and width lr (the diameter of the armature)
then in terms of the physical dimensions of the motor, lr can be expressed as two times the
radius of the armature, which is just the outer radius of the stator minus the thickness of the
stator and the length of the air gap:

lr = 2(ro - t - lgap) [6.9]

The total length of wire on the armature is the perimeter of each wrap, 2(L + lr), times the total
number of wraps, Nc:

lwire,armature = 2(L + lr)Nc [6.10]

so that, from Equation 6.8, the resistance of the armature windings is:

Ra = ρ[2(L + lr)Nc]/Awa [6.11]
Similarly, assuming that each wrap of wire on the field is approximately the shape of a rectangle
with length L (the stack length of the motor) and width double the inner radius of the stator
(ro - t), the resistance of the field windings with Ns turns on each of the pfield field poles is:

Rs = ρ[2(L + 2(ro - t))Ns pfield]/Awf [6.12]

However, the purpose of the field windings is to create a magnetic field across the armature, thus
requiring two field poles, one for the “North” end of the magnetic field and one for the “South”
end. Thus, pfield is 2, and Equation 6.12 becomes Equation 6.13 which is:

Rs = ρ[4(L + 2(ro - t))Ns]/Awf [6.13]
Now that Ra and Rs are expressed in terms of the design variables in Equations 6.11 and 6.13,
the power equation, Equation 6.7, is fully defined in terms of the design variables.
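Under the rectangular-wrap assumptions above, the winding resistances can be sketched as follows; the copper resistivity value and all motor dimensions are illustrative assumptions, and parallel current paths on the armature (the variable a) are neglected in this sketch:

```python
RHO_CU = 1.69e-8  # resistivity of copper [Ohm*m]; handbook value, assumed here

def wire_resistance(rho, length, area):
    """Equation 6.8: R = rho * l / A."""
    return rho * length / area

def winding_resistances(L, ro, t, lgap, Nc, Ns, Awa, Awf, p_field=2):
    """Sketch of the rectangular-wrap geometry described in the text."""
    lr = 2.0 * (ro - t - lgap)                    # armature diameter [m]
    len_armature = 2.0 * (L + lr) * Nc            # wrap perimeter x wraps [m]
    len_field = 2.0 * (L + 2.0 * (ro - t)) * Ns * p_field
    Ra = wire_resistance(RHO_CU, len_armature, Awa)
    Rs = wire_resistance(RHO_CU, len_field, Awf)
    return Ra, Rs

# Illustrative (not from the dissertation) dimensions in SI units:
Ra, Rs = winding_resistances(L=0.02, ro=0.025, t=0.005, lgap=0.0007,
                             Nc=1000, Ns=60, Awa=0.25e-6, Awf=0.25e-6)
```

With many more turns on the armature than on the field, the armature winding dominates the total winding resistance, which drives the copper losses in Equation 6.5.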
Efficiency
The equation for efficiency can be computed directly from the equation for power. The basic
equation for efficiency, expressed as a decimal and not a percentage, is given by:
η = P/Pin [6.14]
Mass
For the purpose of estimating the mass of the motor, it is modeled as a solid steel cylinder with
length L and radius lr/2 for the armature and a hollow steel cylinder with length L, outer radius ro
and inner radius (ro - t) for the stator. The mass of the windings on both the armature and the
field are also included, where the length of each winding is the same as those assumed for the
derivation of the power equation, see Equation 6.10. Thus the equation for mass is of the form:

M = Mstator + Marmature + Mwindings [6.15]
where:

Mstator = ρsteel π [ro² - (ro - t)²] L [6.16]

Marmature = ρsteel π (lr/2)² L [6.17]

Mwindings = ρcopper (lwire,armature Awa + lwire,field Awf) [6.18]

in which the wire lengths are those assumed in the derivation of the winding resistances. Using
Equations 6.16-6.18 for Mstator, Marmature, and Mwindings, the mass of the motor, Equation
6.15, is fully defined in terms of the design variables.
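The mass model can be sketched as follows; the material densities are assumed handbook values, and the example dimensions and wire lengths are illustrative, not values from the dissertation:

```python
import math

RHO_STEEL = 7850.0   # density of steel [kg/m^3]; assumed handbook value
RHO_COPPER = 8960.0  # density of copper [kg/m^3]; assumed handbook value

def motor_mass(L, ro, t, lr, len_armature, len_field, Awa, Awf):
    """Sketch of Equation 6.15: hollow stator cylinder + solid armature
    cylinder + the copper in both sets of windings."""
    m_stator = RHO_STEEL * math.pi * (ro ** 2 - (ro - t) ** 2) * L
    m_armature = RHO_STEEL * math.pi * (lr / 2.0) ** 2 * L
    m_windings = RHO_COPPER * (len_armature * Awa + len_field * Awf)
    return m_stator + m_armature + m_windings

# Illustrative dimensions (same hypothetical motor as the resistance sketch):
M = motor_mass(L=0.02, ro=0.025, t=0.005, lr=0.0386,
               len_armature=117.2, len_field=14.4,
               Awa=0.25e-6, Awf=0.25e-6)
```

Note that stack length L multiplies both steel terms, so mass grows roughly linearly as the platform is scaled, which is why the 2.0 kg constraint limits how far the family can stretch.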
Torque
The last equation to derive is an equation for torque. In general, the torque of a DC motor is
given by:
T = KφI [6.19]

where K is a motor constant, φ is the magnetic flux, and I is the current. For a DC motor, K is
computed as:
K = (Z)(parmature)/(2πa) [6.20]

where Z is the number of conductors on the armature:

Z = 2Nc [6.21]

and a is the number of current paths on the armature:

a = 2m [6.22]

assuming a simplex (m = 1) wave winding on the armature. Since the number of armature poles
is taken to be two:

parmature = 2 [6.23]

the motor constant reduces to:

K = (2Nc)(2)/(2π(2)) = Nc/π [6.24]
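The reduction of the motor constant can be checked numerically; the relation a = 2m for a wave winding follows the assumptions stated above:

```python
import math

def motor_constant(Nc, m=1, p_armature=2):
    """Equations 6.20-6.24: K = Z*p/(2*pi*a), with Z = 2*Nc conductors
    and a = 2*m current paths for a wave winding."""
    Z = 2 * Nc
    a = 2 * m
    return Z * p_armature / (2.0 * math.pi * a)

K = motor_constant(Nc=1000)  # reduces to Nc/pi under these assumptions
```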
The derivation of the flux term, ?, is significantly more complicated. To begin, consider
the idealized DC motor shown in Figure 6.3a with its corresponding magnetic circuit shown in
Figure 6.3b. As shown in the figure, N is the number of turns on the stator (which is equal to
2Ns for the model being derived), I is the current, A is the cross-sectional area of the stator, lr is
the diameter of the armature, lg is the gap length, and lc is the mean magnetic path length in the
stator.
Figure 6.3 Idealized DC Motor: (a) Physical Model and (b) Magnetic Circuit
In general, the equation for flux through a magnetic circuit is simply the magnetomotive
force divided by the total reluctance:

φ = ℑ/ℜ [6.25]

where the magnetomotive force, ℑ, is simply the number of turns around one pole of the field
times the current:

ℑ = Ns I [6.26]
The total reluctance, ℜ, is calculated from the magnetic circuit shown in Figure 6.3b.
For a magnetic circuit, reluctances in series add just like resistors in series in an electric circuit;
therefore, the total reluctance in the idealized DC motor is the sum of the reluctances of the
stator, the armature, and the two air gaps:

ℜ = ℜs + ℜr + 2ℜa [6.27]
In general, the reluctance of any section of a magnetic circuit is given by:

ℜ = Length/[(Permeability)(Cross-sectional Area)] [6.28]

When permeability, μ, is expressed as the relative permeability of the material times the
permeability of free space, μo, the reluctances of the stator, rotor, and air gaps are:

ℜs = lc/(μsteel μo As),  ℜr = lr/(μsteel μo Ar),  ℜa = la/(μair μo Aa) [6.29]
In order to approximate more closely a universal motor for this example, the idealized
DC motor geometry of Figure 6.3 is modified to reflect the geometry of a
universal motor. The resulting model geometry is shown in Figure 6.4a and is described by the
outer radius of the stator, ro, the thickness of the stator, t, the diameter of the armature, lr, the
length of the air gap, lgap, and the stack length, L. The resulting magnetic circuit is shown in
Figure 6.4b; notice that the magnetic circuit for the idealized DC motor and the magnetic circuit
for a universal motor are different, because in a universal motor there are two paths which the
magnetic flux can take around the stator, i.e., clockwise and counterclockwise. These two
paths are in parallel and thus are included in the magnetic circuit as two parallel flux paths.
Reluctances in parallel in a magnetic circuit act like resistors in parallel in an electric circuit, so
that the combined reluctance of two identical reluctances in parallel is simply one half the
reluctance of either path alone. The reluctances for the universal motor model are thus:
ℜs = lc/(2μsteel μo As),  ℜr = lr/(μsteel μo Ar),  ℜa = la/(μair μo Aa) [6.30]
In Equation 6.30, the mean magnetic path length in the stator, lc, is taken to be one half
the circumference of a circle passing through the middle of the stator shell:

lc = π(ro - t/2) [6.31]

The cross-sectional area of the stator, As, is taken to be the thickness of the stator times the
stack length:

As = (t)(L) [6.32]
The cross-sectional area of the armature is taken to be approximately the diameter of the
armature times the stack length:

Ar = (lr)(L) [6.33]
The cross-sectional area of the air gap is the length of the air gap times the stack length:
Aa = (lgap)(L) [6.34]
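The series/parallel reluctance network can be sketched as follows; the geometry and relative permeability used in the example call are hypothetical, not taken from the dissertation:

```python
import math

MU0 = 4.0 * math.pi * 1e-7  # permeability of free space [H/m]

def flux(Ns, I, lc, lr, lgap, As, Ar, Aa, mu_steel, mu_air=1.0):
    """Equations 6.25-6.30: flux = mmf / total reluctance.  The two
    parallel stator halves halve the stator reluctance, and the series
    flux path crosses the air gap twice."""
    R_s = lc / (2.0 * mu_steel * MU0 * As)
    R_r = lr / (mu_steel * MU0 * Ar)
    R_a = lgap / (mu_air * MU0 * Aa)
    mmf = Ns * I                         # Equation 6.26
    return mmf / (R_s + R_r + 2.0 * R_a)

# Illustrative (hypothetical) geometry in SI units:
phi = flux(Ns=60, I=4.0, lc=0.07, lr=0.0386, lgap=0.0007,
           As=1e-4, Ar=7.7e-4, Aa=1.4e-5, mu_steel=1000.0)
```

Because the air gaps have relative permeability near one, they dominate the total reluctance, which is why the model fixes the gap at its minimum length to maximize flux (and hence torque).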
The last expression needed for the calculation of reluctance is the relative permeability
of the stator and the armature. For the purposes of this model, both the stator and the armature
are assumed to be made of steel with the relative permeability versus magnetizing intensity curve shown in Figure 6.5.
Figure 6.5 Relative Permeability Versus Magnetizing Intensity for a Typical Piece of
Steel (Chapman, 1991)
The curve is divided into three regions, and each section is fit with an appropriate
numerical expression in order to include the curve shown in Figure 6.5 in the model. The
appropriate region of the curve is selected based on the magnetizing intensity, H, which is
computed as:

H = NcI/(lc + lr + 2lgap) [6.36]
The relative permeability of air, μair, is taken as unity, and the permeability of free space is a
constant, μo = 4π × 10^-7 H/m. Now, with expressions for K, φ, ℑ, ℜs, ℜr, ℜa, lc, lr, As,
Ar, and Aa, the torque equation, Equation 6.19, is fully defined in terms of the design variables.

This completes the mathematical model for the universal motor, and the PPCEM now
can be implemented to design a family of universal motors around a common product platform.
The initial steps of the PPCEM, Steps 1 and 2, are outlined in the next section.
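A minimal sketch assembling the DC relations derived in this section is given below; the operating point in the example call is hypothetical, chosen only to land near the 300 W power requirement, and the brush voltage drop is taken as 2 V per Equation 6.6:

```python
import math

def motor_performance(Vt, I, Ra, Rs, K, phi, v_brush=2.0):
    """Assembles the DC relations sketched in this section:
    P   = Vt*I - I^2*(Ra + Rs) - v_brush*I   (Equations 6.1-6.7)
    T   = K * phi * I                        (Equation 6.19)
    eta = P / (Vt * I)                       (Equation 6.14)
    """
    P_in = Vt * I
    P = P_in - I ** 2 * (Ra + Rs) - v_brush * I
    T = K * phi * I
    eta = P / P_in
    return P, T, eta

# Illustrative (hypothetical) operating point near the 300 W requirement:
P, T, eta = motor_performance(Vt=115.0, I=4.0, Ra=7.9, Rs=0.97,
                              K=1000.0 / math.pi, phi=2.8e-4)
```

In the full model, Ra, Rs, and φ would themselves be computed from the design variables via Equations 6.8-6.13 and 6.25-6.36; here they are passed in directly to keep the sketch short.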
With a given set of performance requirements and the model derived in Section 6.1.2,
the first step in implementing the PPCEM is to create the market segmentation grid to identify
and map which type of leveraging can be used to meet the overall design requirements and
realize the desired product platform and product family. The market segmentation grid shown in
Figure 6.6 depicts the desired leveraging strategy for this universal motor example. The goal is
to design a motor platform which can be leveraged vertically for different market segments
which are defined by the torque needs of each market, following in the footsteps of the Black &
Decker universal motor example from (Lehnerd, 1987) which was discussed in Section 1.1.1.
In this specific example, ten instantiations of the motor are to be considered; moreover, in order
to reduce cost, size, and weight, it is supposed that the best motor is the one that satisfies its
performance requirements with the least overall mass and greatest efficiency. Standardized
interfaces will ensure horizontal leveraging across market segments; however, only vertical
scaling is considered in this example.
[Figure 6.6: Market Segmentation Grid for the Universal Motor Family, showing vertical scaling from the low-end through the mid-range to the high-end market segments by means of the parametric scale factor, torque = f(length)]
Having created the market segmentation grid and identified an appropriate leveraging
strategy and scale factor, Step 2 in the PPCEM is to classify the factors of interest within the
universal motor problem. The design variables (i.e., control factors) and corresponding ranges
are:
1. Number of wire turns on the armature (100 ≤ Nc ≤ 1500 turns)
2. Number of wire turns on each field pole (1 ≤ Ns ≤ 500 turns)
3. Cross-sectional area of the wire used on the armature (0.01 ≤ Awa ≤ 1.0 mm²)
4. Cross-sectional area of the wire used on the field poles (0.01 ≤ Awf ≤ 1.0 mm²)
5. Radius of the motor (1.0 ≤ ro ≤ 10.0 cm)
6. Thickness of the stator (0.5 ≤ t ≤ 10.0 mm)
7. Current drawn by the motor (0.1 ≤ I ≤ 6.0 A)
The terminal voltage, Vt, is fixed at 115 volts to correspond to standard household
voltage, and the length of the air gap, lgap, is set to 0.7 mm which is taken to be the minimum
possible air gap length. The minimum air gap length is fixed because the performance equations
derived in Section 6.1.2 indicate that minimizing the air gap length maximizes torque and power
output.
Following in the footsteps of the Black & Decker example, the stack length, L, is the
scale factor for the product family primarily because of its importance in the torque equation,
Equation 6.19, derived in Section 6.1.2, i.e., torque is directly proportional to stack length. To
increase torque across the platform, stack length of the motors is increased while keeping the
other physical parameters (e.g., the outer radius and the thickness) unchanged. Furthermore, it
is assumed that the greatest manufacturing cost savings can be achieved by exploiting the fact
that only the stack length of the motors varies while still providing a variety of torque and power
ratings. The initial range of interest for stack length is taken to be 1 to 20 centimeters; specific
instantiations are computed in Step 5 so as to meet the desired torque requirements for each
platform derivative.
There are a total of six responses (i.e., goals and constraints) which are of interest for
each motor. The following constraint values (Table 6.2) and goal targets (Table 6.3) are
imposed.

Table 6.2 Constraints for the Universal Motor Problem

Constraint: Value
Magnetizing intensity, H: H < 5000
Feasible geometry: ro > t
Power of each motor, P: P = 300 W
Efficiency of each motor, η: η ≥ 0.15
Mass of each motor, M: M ≤ 2.0 kg
The constraint on magnetizing intensity ensures that the magnetic flux within the motor
does not exceed the physical flux carrying capacity of the steel (Chapman, 1991). The
constraint on feasible geometry ensures that the thickness of the stator does not exceed the
radius of the stator, since the thickness is measured from the outside of the motor inward, as
indicated in Figure 6.4a. The desired power for each motor is 300 W which is treated as an
equality constraint to ensure that design variable settings are selected to match this requirement
exactly. A minimum allowable efficiency of 15% and a maximum allowable mass of 2.0 kg are
assumed to define a feasible motor. The efficiency and mass goal targets for each motor are
listed in Table 6.3 along with the desired torque requirement for each motor.
Table 6.3 Goal Targets for the Universal Motor Problem

Motor   Torque [Nm]   Mass [kg]   Efficiency
1       0.05          0.50        0.70
2       0.10          0.50        0.70
3       0.125         0.50        0.70
4       0.15          0.50        0.70
5       0.20          0.50        0.70
6       0.25          0.50        0.70
7       0.30          0.50        0.70
8       0.35          0.50        0.70
9       0.40          0.50        0.70
10      0.50          0.50        0.70

(The mass and efficiency targets are common to all ten motors.)
For the purpose of illustration, the relationship between the design variables, the scale
factor, and the responses is shown in the P-Diagram in Figure 6.7.
[Figure 6.7: P-Diagram for the Universal Motor Problem. Control factors (X): number of wire turns on the armature, number of wire turns on each field pole, armature wire area, field wire area, motor radius, stator thickness, and current drawn. Scale factor (S): stack length. Responses (Y): power, torque, mass, efficiency, feasible geometry, and magnetizing intensity.]
This concludes Steps 1 and 2 of the PPCEM. Step 3 is not utilized in this particular
example since it is possible to derive expressions for mean and variance of each motor due to
scaling the stack length as described in the next section. The next step, then, is Step 4, which is
to aggregate the motor specifications and formulate the compromise DSP.
The corresponding compromise DSP formulation for the universal motor product
platform is listed in Figure 6.8. In summary, there are nine design variables, seven constraints,
and two objectives. The two objectives (minimize mass to its target and maximize efficiency to
its target) are assumed to have equal importance to the design, and are thus weighted equally in
the compromise DSP formulation.
Given:
• Parametric (horizontal) scale factor: stack length
• Universal motor model analysis equations, Section 6.1.2
Find:
• The system variables, x:
  - Number of turns on the armature, Nc
  - Number of turns on each pole on the field, Ns
  - Cross-sectional area of the wire on the armature, Awa
  - Cross-sectional area of the wire on the field, Awf
  - Thickness of the stator, t
  - Current drawn by the motor, I
  - Radius of the motor, r
  - Mean of stack length, μL
  - Standard deviation of stack length, σL
Satisfy:
• The system constraints:
  - Magnetizing intensity, H: Hmax ≤ 5000
  - Feasible geometry: t < ro
  - Power output, P: P = 300 W
  - Motor efficiency, η: η ≥ 0.15
  - Mass, M: M ≤ 2.0 kg
• Aggregated torque requirements:
  - Mean torque, μT: μT = 0.2425 Nm
  - Standard deviation of torque, σT: σT = 0.13675 Nm
Minimize:
• Mean mass, target: M = 0.50 kg
Maximize:
• Mean efficiency, target: η = 0.70

Figure 6.8 Universal Motor Product Platform Compromise DSP Formulation for Use
with OptdesX
The aggregated mean torque, μT, and standard deviation of torque, σT, are calculated as the
sample mean and standard deviation of the set of torque requirements {0.05, 0.1, 0.125, 0.15,
0.2, 0.25, 0.3, 0.35, 0.4, 0.5} Nm, assuming a uniform distribution. Power, efficiency, and
mass for the family are assumed to be uniformly distributed with respective means and standard
deviations because it is assumed that the distribution of the demand for the motors is uniform.
The mean power, mean efficiency, and mean mass are calculated as the power,
efficiency, and mass, respectively, for the mean stack length. The standard deviation of torque is
approximated using a first-order Taylor series expansion, assuming that the standard deviation
of the stack length is small:

σT ≅ (∂T/∂μL)·σL [6.37]
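The aggregation above can be reproduced directly; the population standard deviation of the ten torque requirements closely matches the reported 0.13675 Nm (to within rounding), and the propagation function mirrors Equation 6.37 under the assumption that torque is proportional to stack length:

```python
import math

# The ten torque requirements [Nm] from Table 6.3:
torques = [0.05, 0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5]

mu_T = sum(torques) / len(torques)
# Population standard deviation of the requirement set.
sigma_T = math.sqrt(sum((t - mu_T) ** 2 for t in torques) / len(torques))

def sigma_T_from_length(mu_L, sigma_L):
    """Equation 6.37: first-order Taylor propagation.  Since T is taken
    to be proportional to L, dT/dL is approximated as mu_T / mu_L."""
    return (mu_T / mu_L) * sigma_L
```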
Now that the compromise DSP for the family of universal motors is formulated, Step 5 of the
PPCEM can be performed by exercising the compromise DSP to develop the product platform.

For this example problem, the compromise DSP formulated in Section 6.3 is solved
using the Generalized Reduced Gradient (GRG) algorithm in OptdesX. For a thorough
explanation of OptdesX and the GRG algorithm, see, e.g., (Parkinson, et al., 1998). The
OptdesX software package is used instead of DSIDES in this example since implementation of
the problem is more straightforward in OptdesX.
Note that in order to develop the product portfolio, the compromise DSP is formulated
with goals for mean torque, μT, and standard deviation of torque, σT, which ensures that the
product portfolio will be able to be instantiated for all ten values of torque within the range of the
scale factor specified by the mean and standard deviation, μL and σL, for the stack length. Also
note that the constraint on magnetizing intensity is formulated to ensure that the entire product
family will meet the constraint on magnetizing intensity individually. This is accomplished by
computing a maximum magnetizing intensity which represents the magnetizing intensity for the
largest instantiation of the product family and is simply evaluated at the upper bound of current.
The compromise DSP in Figure 6.8 is solved using three different starting points in OptdesX:
the lower, middle, and upper bounds of the design variables. The best design variable settings
and responses for the motor platform are listed in Table 6.4. The values for the number of
armature turns and field turns have been rounded to the nearest integer.
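The multi-start practice described above can be sketched as follows; the tiny derivative-free search below is only a stand-in for OptdesX's GRG algorithm (not the actual software), and the objective is an illustrative quadratic, not the motor DSP:

```python
def local_minimize(f, x0, step=0.1, tol=1e-6, max_rounds=100_000):
    """Tiny derivative-free coordinate search: try +/- step moves on each
    variable, and halve the step whenever no move improves the objective."""
    x, fx = list(x0), f(x0)
    for _ in range(max_rounds):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx

# Solve from three starting points (lower, middle, and upper bounds) and
# keep the best result, mirroring the procedure used with OptdesX.
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
starts = [[-5.0, -5.0], [0.0, 0.0], [5.0, 5.0]]
best_x, best_f = min((local_minimize(f, s) for s in starts),
                     key=lambda r: r[1])
```

Using several starting points guards against a gradient-based (or any local) algorithm converging to different local solutions from different regions of the design space.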
For the purpose of verifying the solution itself, convergence plots for mean mass and
mean efficiency are presented in Figure 6.9 for the high, middle, and low starting points used in
OptdesX.

Figure 6.9 Convergence Plots for the Universal Motor Product Platform
To develop the individual motors within the scaled product family using the product platform
specifications from Table 6.4, the compromise DSP given in Figure 6.8 is modified such that Nc,
Ns, Awa, Awf, r, and t are held constant at the values listed in Table 6.4, and only the current, I,
and stack length, L, are allowed to vary to meet the original set of torque requirements.
Because the mean and standard deviation for stack length have been found for the product
platform, the initial range of interest for stack length now can be discarded in favor of the range
for the product platform. Using the assumption that stack length is uniformly distributed, the
minimum and maximum possible stack lengths are given by:
μL - √3·σL ≤ L ≤ μL + √3·σL [6.38]
Substituting the values for mean and standard deviation of stack length shown in Table 6.4, the
new lower and upper bounds of interest for stack length are as follows:
0.057 ≤ L ≤ 5.18 cm
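The bound computation of Equation 6.38 can be sketched as follows; the mean and standard deviation in the example call are back-computed from the reported range, since Table 6.4 is not reproduced here:

```python
import math

def uniform_bounds(mu, sigma):
    """Equation 6.38: a uniform distribution with mean mu and standard
    deviation sigma spans mu -/+ sqrt(3)*sigma."""
    half_width = math.sqrt(3.0) * sigma
    return mu - half_width, mu + half_width

# mu_L and sigma_L [cm] assumed here for illustration, chosen to be
# consistent with the reported range of 0.057 to 5.18 cm:
lo, hi = uniform_bounds(mu=2.6185, sigma=1.479)
```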
Note that because individual torque goals are being set, the goal for standard deviation of
torque is eliminated in the modified compromise DSP formulation. Also, the constraint on
magnetizing intensity is no longer imposed on any maximum magnetizing intensity but rather on
the individual magnetizing intensity and is evaluated at the current for each motor.
The product platform is instantiated by selecting appropriate values for the scale factor
(stack length) within the range specified by the mean and standard deviation in Table 6.4 for
each desired set of torque and power requirements. The current also is being allowed to vary
since it is a dependent variable in the system, i.e., it is the amount of current which is drawn by
the motor such that the given torque and power requirements are met for a given motor
geometry. In terms of the principles of robust design, the values shown in Table 6.4 for the
product portfolio are found such that the goal for mean power is on target, while varying the
current allows the standard deviation of power across the instantiated product family to be zero.
The modified compromise DSP for the product family is shown in Figure 6.10 and is again
solved using OptdesX while starting from three different starting points.
Given:
• Configuration scale factor: stack length
• Universal motor model equations
• Platform settings for Nc, Ns, Awa, Awf, r, and t (Table 6.4)
Find:
• The system variables, x:
  - Stack length, L
  - Current drawn by the motor, I
Satisfy:
• The system constraints:
  - Magnetizing intensity, H: H ≤ 5000
  - Feasible geometry: t < ro
  - Power output, P: P = 300 W
  - Motor efficiency, η: η ≥ 0.15
  - Mass, M: M ≤ 2.0 kg
• Individual torque requirements:
  - Torque, T: T = {0.05, 0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5} Nm
• The bounds on the system variables:
  - 0.1 ≤ I ≤ 6.0 A
  - 0.057 ≤ L ≤ 5.18 cm
Minimize:
• Mass, target: M = 0.50 kg
Maximize:
• Efficiency, target: η = 0.70

Figure 6.10 Compromise DSP Formulation for Instantiating the PPCEM Platform for
Use with OptdesX
The resulting values for current and stack length of each motor (PPCEM platform
instantiation) are listed in Table 6.5 along with the corresponding response values. Notice that
the stack length varies from 0.865 cm to 2.95 cm in order to meet the desired torque and
power requirements. The resulting current drawn by the system ranges from 3.39 Amps to
5.82 Amps, which is slightly high but acceptable for a motor with such a large torque. Finally,
notice that only three motors meet the desired efficiency target of 70%, and these are all at the
low end; only one motor achieves the mass target of 0.5 kg.
It is uncertain whether the failure to achieve the desired mass and efficiency targets is a
property of the system itself or a result of using the PPCEM. Therefore, a family of individually
designed universal motors is developed in the next section to provide a benchmark for
comparison. The differences between this family of benchmark motors and the PPCEM family
of motors are examined in the sections which follow.
6.5 RAMIFICATIONS OF THE RESULTS OF THE ELECTRIC MOTOR
EXAMPLE PROBLEM
In order to generate a family of benchmark motors to compare with the PPCEM family
of motors, the compromise DSP presented in Figure 6.10 is modified such that Nc, Ns, Awa,
Awf, r, and t are all design variables in addition to I and L. The resulting compromise
DSP is shown in Figure 6.11. This compromise DSP is solved using OptdesX for each of the
ten power and torque ratings from three different starting points—lower, middle, and upper
bounds of the design space.
Given:
• Universal motor model analysis equations, Section 6.1.2
Find:
• The system variables, x:
  - Number of turns on the armature, Nc
  - Number of turns on each pole on the field, Ns
  - Cross-sectional area of the wire on the armature, Awa
  - Cross-sectional area of the wire on the field, Awf
  - Thickness of the stator, t
  - Current drawn by the motor, I
  - Radius of the motor, r
  - Mean of stack length, μL
  - Standard deviation of stack length, σL
Satisfy:
• The system constraints:
  - Magnetizing intensity, H: H ≤ Hmax = 5000
  - Feasible geometry: t < ro
  - Power output, P: P = 300 W
  - Motor efficiency, η: η ≥ 0.15
  - Mass, M: M ≤ 2.0 kg
• Individual torque requirement:
  - Torque, T = {0.05, 0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5} Nm
• The bounds on the system variables:
  - 100 ≤ Nc ≤ 1500 turns
  - 1 ≤ Ns ≤ 500 turns
  - 0.01 ≤ Awa ≤ 1.0 mm²
  - 0.01 ≤ Awf ≤ 1.0 mm²
  - 1.0 ≤ ro ≤ 10.0 cm
  - 0.5 ≤ t ≤ 10.0 mm
  - 0.1 ≤ I ≤ 6.0 A
  - 0.057 ≤ L ≤ 5.18 cm
Minimize:
• Mean mass, target: M = 0.50 kg
Maximize:
• Mean efficiency, target: η = 0.70
Figure 6.11 Compromise DSP Formulation for Benchmark Universal Motor Family for
Use with OptdesX
The resulting design variable settings and responses for each benchmark motor are
summarized in Table 6.6. Compared to the PPCEM solutions listed in Table 6.5, the number of
armature turns, Nc, is generally lower than the PPCEM platform specification and the number of
field turns, Ns, is slightly higher. The cross-sectional area of the field wire, Awf, is lower than
the PPCEM platform specification; however, the PPCEM platform value for Awa, the armature
wire cross-sectional area, is contained within the range observed for the benchmark motors.
Similarly, the ranges for motor radius, r, and thickness, t, for the benchmark motors both span
the values of the PPCEM platform specifications. These motors draw less current—a maximum
of 4.71 Amps—compared to the PPCEM family of motors which draw as much as 5.82 Amps
for the equivalent motor. Finally, note that the range of stack lengths of the benchmark motors
is comparable to the range of stack lengths found using the PPCEM.
Table 6.6 Benchmark Universal Motor Specifications
Regarding the performance of each motor, the desired torque and power requirements
are achieved by each motor; moreover, more of the benchmark motors achieve the mass and
efficiency targets of 0.5 kg and 70%. Unlike the PPCEM family of motors, four of the
benchmark motors achieve the efficiency target of 70%, and four of the motors achieve the
mass target of 0.5 kg. A closer comparison of the performance of the individual motors is
given in Section 6.5.2.
For the purpose of validating the solutions themselves, convergence plots for mass and
efficiency for the 0.25 Nm benchmark motor are presented in Figure 6.13 for the high,
middle, and low starting points. Fairly good convergence is observed in each graph; however,
the final value of efficiency from the high starting point is slightly worse than the final values of
efficiency from the low and middle starting points. Therefore, in situations where all three
starting points do not converge to the same final value, only the best solution is reported.
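The multi-start practice just described—solve from several starting points and report the best converged solution—can be sketched generically. The objective below is an arbitrary two-minimum stand-in (not the motor model), and a plain gradient descent stands in for OptdesX.

```python
def objective(x):
    """Stand-in multimodal objective with local minima near x = -1 and x = +1."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    """Analytical derivative of the stand-in objective."""
    return 4.0 * x * (x * x - 1.0) + 0.3

def descend(x, step=0.01, iters=2000):
    """Naive fixed-step gradient descent from a given starting point."""
    for _ in range(iters):
        x -= step * grad(x)
    return x

starts = [-2.0, 0.5, 2.0]                    # low, middle, and high starting points
solutions = [descend(x0) for x0 in starts]   # one local solution per start
best = min(solutions, key=objective)         # report only the best solution
print(best, objective(best))
```

Here the middle and high starts converge to the shallower minimum near x = +1, while the low start finds the deeper minimum near x = -1; keeping only the best of the three mirrors the reporting rule stated above.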
(a) Mass (b) Efficiency
6.5.2 Comparison between the Benchmark Universal Motor Family and the PPCEM
Motor Family
The benchmark family of motors developed in the preceding section is now compared
against the performance of the PPCEM family of universal motors found in Section
6.4. As shown in Table 6.5 and Table 6.6, both families meet their goals for both power and
torque; however, their responses for efficiency and mass differ. The efficiency and mass of each
motor within the benchmark family and the PPCEM family are repeated in Table 6.7 along with
the percentage difference of each response from the benchmark to the PPCEM. For efficiency,
a positive change denotes an improvement from the benchmark to the PPCEM; meanwhile, a
negative change denotes an improvement in the mass. Finally, note that a motor which has
achieved its target mass (0.5 kg) and efficiency (70%) is considered to be equivalent to a motor
with a mass which is lower than the target or an efficiency which is higher than the target.
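The equivalence rule just stated—a motor at or beyond its target counts as exactly on target—can be encoded before computing percentage differences. The motor values below are illustrative stand-ins, not the entries of Table 6.7.

```python
MASS_TARGET, EFF_TARGET = 0.50, 0.70   # kg and fractional efficiency targets

def effective_mass(m):
    """Lighter than target counts as on-target."""
    return max(m, MASS_TARGET)

def effective_eff(e):
    """More efficient than target counts as on-target."""
    return min(e, EFF_TARGET)

def pct_diff(benchmark, ppcem):
    """Percentage difference of a response, benchmark -> PPCEM."""
    return 100.0 * (ppcem - benchmark) / benchmark

# one illustrative motor pair (hypothetical values)
bench = {"mass": 0.48, "eff": 0.72}    # benchmark meets both targets
ppcem = {"mass": 0.55, "eff": 0.66}    # PPCEM motor misses both targets

d_mass = pct_diff(effective_mass(bench["mass"]), effective_mass(ppcem["mass"]))
d_eff = pct_diff(effective_eff(bench["eff"]), effective_eff(ppcem["eff"]))
print(round(d_mass, 1), round(d_eff, 1))   # positive mass = heavier; negative eff = less efficient
```

Because the benchmark values are clamped to the targets first, a benchmark motor that over-achieves its targets does not inflate the reported penalty of the corresponding PPCEM motor.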
Table 6.7 Comparison of the Responses between the Benchmark Motor Family and
the PPCEM Motor Family
Three of the PPCEM motors have equivalent efficiency ratings to the corresponding
benchmark motor which produces the same torque; however, only the motor with the smallest
torque (Motor #1) is considered to have a mass equivalent to its corresponding benchmark
motor. As tallied at the bottom of Table 6.7, the PPCEM motors lose 7% in efficiency and
weigh 9% more than the family of benchmark motors, on average. Therefore, while the
family of PPCEM motors based on a common product platform scaled in stack length
is able to achieve the desired range of torque and power requirements, it loses
efficiency and gains mass, on average, in the process. It is then left to the
engineering designers and managers to decide if the savings (in inventory, manufacturing, etc.)
from having a family of motors based on a common platform and scaled only in stack length
outweigh the sacrifice in mass and efficiency. Meanwhile, an attempt to improve the
performance of the PPCEM family of motors in relation to the benchmark family of motors is
made in the remainder of this section.
While investigating this example, it was learned that Black & Decker varies more than
just stack length when it scales its universal motors to meet a variety of power ratings. In
addition to increasing the stack length of the motor, they also allow the number of turns in the
field and armature and the cross-sectional area of the wires in the field and armature to vary
from one motor to the next. Careful inspection of any two motors from one of their power tool
lines (say, corded drills) reveals that this is indeed the case.
The question then becomes: how well do the PPCEM instantiations perform if the
number of field and armature turns (Ns and Nc) and the cross-sectional area of the field
and armature wires (Awf and Awa) are allowed to vary in conjunction with varying the
stack length (and current)? The results obtained by solving a new set of compromise DSPs
for each universal motor are listed in Table 6.8. These solutions are obtained by modifying the
compromise DSP in Figure 6.10 to allow Nc, Ns, Awf, and Awa to vary from their platform
settings.
Table 6.8 New PPCEM Universal Motor Instantiations with Varying Numbers of
Turns, Wire Cross-Sectional Areas, and Stack Lengths
Recall that the target for efficiency is 70%, and the target for mass is 0.5 kg. So as
discussed previously, even if a particular motor weighs less than 0.5 kg or has an efficiency
greater than 70%, it is still considered to be equivalent to a motor which is exactly 0.5 kg or has
70% efficiency. With this in mind, the new family of PPCEM motors (allowing the numbers of
turns and wire cross-sectional areas to vary along with stack length and current) and the family
of benchmark motors are compared in Table 6.9. In both families, the necessary torque and power requirements have been
met, and the two sets of motors are compared solely on their respective efficiencies and masses.
The result is that four of the ten motors are equivalent (identical) since they achieve the targets
for mass and efficiency, and the remaining six motors vary by less than 2%. The highest torque
motor in this new PPCEM family is slightly less efficient (2.7%) than the corresponding
benchmark motor, but it weighs less (4.5%). This tradeoff is essentially negligible since more wire
can be wrapped around the field or armature to improve the efficiency with only a slight increase
in mass. Consequently, by allowing the numbers of wire turns and the wire cross-sectional
areas to vary while also scaling the stack length, the resulting family of motors is
equivalent to the family of individually designed benchmark motors. This is a very
important observation because it indicates that the PPCEM
can be used to obtain a family of motors which sacrifices minimal performance even though the
motors are based, for the most part, on a common platform specification.
Table 6.9 Comparison of Benchmark Designs and New PPCEM Instantiations with
Varying Numbers of Turns, Wire Cross-Sectional Areas, and Stack Lengths
From an engineering standpoint, does it make sense to let Nc, Ns, Awf, and Awa
vary along with the stack length (and current)? From a manufacturing perspective, it makes
perfect sense to allow the number of turns of wire in the armature and field (Nc and Ns,
respectively) to vary since it costs little extra to wrap more (or fewer) turns when the motor is
being produced. From an inventory perspective, however, it would appear that allowing Awf and
Awa to vary is not cost effective since it requires that multiple wire types (i.e., varying cross-
sectional areas) must be kept in stock in order to produce the family of motors. The justification
for allowing Awf and Awa to vary is as follows. As the stack length of the motor increases (with
everything else being held constant), the torque on the motor increases; however, the power
output actually decreases because the copper losses, given by Equation 6.5, increase as the
resistance of the windings grows with the stack length.
One way to compensate for this loss in power is to allow more current to be drawn as
is the case in this example. What is typically done, however, to compensate for this decrease in
power (as the stack length is increased) is to decrease the number of field and armature turns
while simultaneously increasing the field and armature wire crosssectional areas. This lowers
the resistances Ra and Rs and reduces copper losses without having to draw additional current in
order to maintain the desired output power. An added benefit of this approach is that the
rotational speed of the motor will also increase. In reality, the operating speed of the motor is a
very important design consideration since power and torque are related through the equation:

P = Tω    [6.39]

where P is power, T is torque, and ω is the rotational speed of the motor. The speed of the
motor has been neglected in this example since it is fixed once power and torque have been
specified. Based on Equation 6.39, for a fixed power output, as torque increases, the rotational
speed of the motor must decrease. In many cases, however, as power and torque both
increase, maintaining a consistent operating speed for the motor is often desired. The
additional inventory costs incurred by stocking a wider variety of wire sizes are offset by this
combination of effects.
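Equation 6.39 fixes the rotational speed once power and torque are chosen. For the common 300 W power target, the implied speed for each torque requirement in the family can be tabulated directly:

```python
import math

P = 300.0                                       # W, common power target
torques = [0.05, 0.1, 0.125, 0.15, 0.2, 0.25,
           0.3, 0.35, 0.4, 0.5]                 # N*m, the ten requirements

for T in torques:
    omega = P / T                               # rad/s, from P = T * omega
    rpm = omega * 60.0 / (2.0 * math.pi)        # convert to rev/min
    print(f"T = {T:5.3f} N*m  ->  {rpm:8.0f} rpm")
```

The table makes the point in the text concrete: at fixed power, the lowest-torque motor must spin roughly ten times faster than the highest-torque motor, which is why speed is an important (though here neglected) design consideration.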
But what if Awf and Awa were held fixed and only Nc and Ns were allowed to vary
along with stack length (and current)? The answer is summarized in Table 6.10 wherein the
compromise DSP for the family of motors for instantiating the PPCEM platform, Figure 6.10,
has been modified a third time to allow only Nc, Ns, L, and I to vary from the platform
specifications. Comparison of the data in Table 6.10 with Table 6.8 (the PPCEM instantiations
with Awf and Awa varying also) reveals that both families of motors are nearly identical in terms
of mass and efficiency.
Table 6.10 Third PPCEM Universal Motor Family with Varying Numbers of Turns
and Stack Lengths
The mass and efficiency of the four families of motors (the benchmark, the PPCEM
varying only L, the PPCEM varying L, Nc, and Ns, and the PPCEM varying L, Nc, Ns, Awf, and
Awa) are summarized in Table 6.11 to facilitate comparison. The percentage difference (%
Diff.) listed in the table is a comparison of each PPCEM instantiation against the corresponding
benchmark motor, i.e., the performance characteristics of a motor which has been individually
designed and optimized. As stated previously, motors which achieve their respective mass and
efficiency targets of 0.50 kg and 70.0% are considered equivalent solutions even if the motor
has a lower mass or higher efficiency. In this regard, some observations based on the data in
Table 6.11 are as follows.
• As more variables are allowed to vary from the platform specifications, the better the
performance of the individual motors; the tradeoff is that less and less is common
between the motors within the product family. It then becomes a decision of the
engineering designers and management to evaluate the tradeoffs between commonality
and performance to determine the best family to pursue. This reinforces the statement
that the PPCEM facilitates generating these options but is not necessarily used to select
the best one, since doing so requires information which is beyond the scope of this investigation.
Table 6.11 Efficiency and Mass of Benchmark Motors and PPCEM Motor Platform
Families
• In this example, the PPCEM instantiations in which Nc, Ns, Awf, and Awa are allowed to
vary in addition to the stack length yield an equivalent family of motors to the family of
individually designed benchmark motors. Varying only Nc and Ns in the PPCEM family
while holding Awf and Awa fixed at the platform specification also yields a good family of
motors with minimal sacrifice in performance (mass and efficiency) when compared to
the benchmark family of motors.
In light of these observations, are the solutions obtained from the PPCEM useful?
The answer is undoubtedly yes. The initial family of motors obtained using the PPCEM meets
the range of torque and power requirements which have been specified for the product family.
However, because these motors are based on a common product platform and vary only in
stack length (and current), the motors lose, on average, 10% in both efficiency and mass for the
gain in commonality. In an effort to reflect a more realistic set of motors by allowing the number of turns in the armature
and field to vary in addition to the stack length, the family of motors obtained using the PPCEM
is essentially identical to the equivalent family of benchmark motors. The necessary torque
and power requirements are met with minimal sacrifice in performance (mass and efficiency). If
the cross-sectional areas of the wire in the field and armature are allowed to vary in
addition to the number of wire turns in each and the stack length, then the family of motors
obtained using the PPCEM is identical, for all intents and purposes, to the corresponding
family of benchmark motors. Thus, the PPCEM has greatly facilitated generating a variety of
options which the engineering designers and managers can select from based on what is best for
the company.
Are the time and resources consumed within reasonable limits? In general, fewer
analysis calls are required to obtain the PPCEM family of motors than the benchmark family of
motors. To obtain the benchmark motor family, ten optimization problems must be solved
where each optimization involves finding the best settings of eight design variables. For the
PPCEM platform instantiations, the initial family of motors requires solving one optimization
problem to find the values of nine design variables (which include the mean and standard deviation
of stack length for the platform) followed by solving ten optimization problems involving as few
as two (current, I, and stack length, L) and as many as six (current, I; stack length, L; number
of armature turns, Nc; number of field turns, Ns; and cross-sectional areas of the field wire, Awf,
and armature wire, Awa) design variables, where the size of the subsequent optimization problems is dependent
on the number of design variables which are being instantiated for each motor from the platform
design.
Because a gradient-based algorithm is used to
optimize the motor platform and individual motors, each iteration of the optimization requires
one analysis call to evaluate the current iterate and two evaluation calls per design variable to
estimate the gradient to determine the next iterate. For the family of benchmark motors, the
number of analysis calls is therefore:

(10 motors) • [1 analysis/iteration + (2 analyses/((d.v.)(iteration))) • (8 d.v.)] • (n iterations)
    = 170n analyses    [6.40]

where d.v. is an abbreviation for design variable, and n is the average number of iterations
required to solve each optimization. For the PPCEM family of motors, the number of analysis
calls is:

[1 analysis/iteration + (2 analyses/((d.v.)(iteration))) • (9 d.v.)] • (m iterations)
    + (10 motors) • [1 analysis/iteration + (2 analyses/((d.v.)(iteration))) • (2 d.v.)] • (k iterations)
    = 19m + 50k analyses    [6.41]

where m is the number of iterations required to find the PPCEM platform design, and k is the
number of iterations required to find each instantiation of the PPCEM platform. On average, n
≈ 10, m ≈ 12, and k ≈ 5; therefore, the average number of analysis calls required to obtain the
benchmark motor designs is 170•10 = 1700 while the average number of analysis calls required
to find the PPCEM motor designs is 19•12 + 50•5 = 478, a difference of 1222 analyses. So
even if as many as six design variables are allowed to vary between PPCEM instantiations from
the product platform, then by replacing (2 d.v.) in Equation 6.41 with (6 d.v.), it would still only
require about 19•12 + 130•5 = 878 analysis calls, which is slightly more than half the analysis
calls required to find the benchmark motor designs, and this estimate does not even take into
consideration the fact that each optimization is solved from three different starting points. So by
using the PPCEM to first find a common motor platform design and then scaling the
platform in the stack length, an equivalently good family of motors can be obtained with
fewer analysis calls than if each motor were designed individually. Plus, there is the
added benefit that the family of motors found using the PPCEM has more in common
from one motor to the next.
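The analysis-call bookkeeping of Equations 6.40 and 6.41 can be reproduced directly: one call per iterate plus two calls per design variable for the gradient estimate.

```python
def calls_per_optimization(n_dv, iterations):
    """Analysis calls for one gradient-based optimization run:
    1 call per iterate + 2 calls per design variable per iterate."""
    return (1 + 2 * n_dv) * iterations

n, m, k = 10, 12, 5   # average iteration counts quoted in the text

# Equation 6.40: ten individually designed 8-variable benchmark motors
benchmark = 10 * calls_per_optimization(8, n)

# Equation 6.41: one 9-variable platform problem + ten 2-variable instantiations
ppcem = calls_per_optimization(9, m) + 10 * calls_per_optimization(2, k)

# same, but with six variables free per instantiation
ppcem_6dv = calls_per_optimization(9, m) + 10 * calls_per_optimization(6, k)

print(benchmark, ppcem, ppcem_6dv)   # 1700 478 878
```

The three totals match the figures quoted in the text: 1700 calls for the benchmark family, 478 for the two-variable PPCEM instantiations, and 878 even when six variables are freed per instantiation.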
Is the work grounded in reality? The problem formulation has been developed from
an electric motor for a 3/8” variable speed, reversible, corded Black & Decker drill, model
#7190. The drill is rated at 288 W and 1200 rpm, draws 3.5 Amps of current, and is at the
low end of the product line. The gear reduction on the motor is estimated to be 10:1;
therefore, the operating speed of the motor itself is 12,000 rpm. Using Equation 6.39, the
operating motor torque is computed as being 0.23 Nm. Assuming an input voltage of 115 V,
the input power is 402.5 W when drawing 3.5 Amps of current (see Equation 6.2). Since the
output power is 288 W, the efficiency of the motor is computed using Equation 6.14 and is
71.6%. The mass of the motor is 0.496 kg. Consequently, the target values of 300 W power,
70% efficiency, 0.5 kg mass, and 0.05 Nm to 0.5 Nm of torque are built around the performance
ratings for this motor, and the motor from this drill is taken as the mid-range motor in the family of
universal motors.
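The target-setting arithmetic for the Black & Decker model 7190 drill motor can be checked directly from the quantities quoted above:

```python
import math

P_out = 288.0                        # W, rated output power
rpm = 12_000.0                       # motor speed after the 10:1 gear reduction

omega = rpm * 2.0 * math.pi / 60.0   # rad/s
T = P_out / omega                    # N*m, from Equation 6.39
P_in = 115.0 * 3.5                   # W, input power at 115 V drawing 3.5 A
eta = P_out / P_in                   # efficiency, output over input power

print(round(T, 2), P_in, round(100 * eta, 1))   # 0.23 N*m, 402.5 W, 71.6 %
```

The computed torque (0.23 Nm), input power (402.5 W), and efficiency (71.6%) reproduce the figures in the paragraph above, confirming that the family targets bracket this motor.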
The pertinent motor specifications (design variable settings) for this torque and power
rating are estimated from the actual motor.
It is difficult to count the number of armature turns in the actual motor; the best guess is around
750 turns. Can the analytical model be used to predict the performance of the actual
motor given these specifications? Unfortunately, the analytical model developed in this
chapter cannot be used to predict the performance of the actual motor given these
specifications. There are two discrepancies which arise between the model and the actual
motor. First, the real motor from the drill is not a true universal motor since it appears to be
designed for AC use only (as stated on the exterior of the box). Second, the number of poles
on the armature is twelve; in a real universal motor, the number of poles on the armature is
typically two, which is an important assumption used when deriving the torque equations
(Equations 6.19-6.24) for the motor. However, these specifications can still be used to gauge
the reasonableness of the analytical model.
In general, the values for stator thickness, motor radius, stack length, and current are in
close agreement with the values obtained using the PPCEM, see Table 6.5, Table 6.6, Table
6.8, and Table 6.10. The number of field turns is on the low end while the number of armature
turns is on the high end compared to this actual motor. Finally, the wire cross-sectional areas in
the actual motor are slightly smaller than the values obtained using the analytical model for the
motor. These discrepancies are discussed in more detail in the context of the limitations of the
model, which follow.
What are the limitations of the analytical model and problem formulation
developed in this chapter? There are two noteworthy shortcomings to the analytical model
and problem formulation presented in this chapter for the family of universal electric motors.
First, the speed of the motor has not been taken into consideration in the problem formulation as
discussed previously. By specifying power and torque requirements for each motor, the
resulting rotational speed of the motor is fixed through Equation 6.39. It is important to ensure
that as the torque of the motor increases, the power also increases so that the operating
speed of the motor does not decrease significantly. For purposes of this demonstration, this is
not a major concern; however, a more realistic representation of the motor problem formulation
would account for the operating speed explicitly.
The second notable shortcoming of the analytical model relates to the large numbers of
armature turns in each motor and the related discrepancies between wire cross-sections and
number of field turns. Can 1062 turns of 0.241 mm² wire be packed into a cylindrical volume
with a radius of ≈ 2.0 cm (the motor radius minus the thickness of the PPCEM platform listed in
Table 6.4) and a length of 2.62 cm? The answer depends on how tightly the wires can be packed
around the armature and how much steel is used within the poles of the armature. The complexity
of such an analysis was considered to be beyond the scope of this example; however, it is
recommended to include these space considerations in future studies in order to improve the
fidelity of the model. Furthermore, decreasing the number of armature turns is liable to increase
the required number of field turns (which are considered to be low given that the Black &
Decker motor has 135 turns) in order to maintain sufficient magnetic flux through the motor.
Placing space constraints on the amount of wire in the field and armature should also have the
effect of decreasing Awf and Awa in addition to making the number of field and armature
turns more realistic.
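A rough area check of the wire-packing question above can be made directly. The fill factor below is an assumed value (it accounts for air gaps and insulation); as noted in the text, the steel in the armature poles would further reduce the usable area.

```python
import math

turns, wire_area = 1062, 0.241    # armature turns and wire cross-section, mm^2
radius_mm = 20.0                  # available radius (~2.0 cm), mm
fill_factor = 0.5                 # assumed achievable copper fill (hypothetical)

copper_area = turns * wire_area                        # mm^2 of copper
available = fill_factor * math.pi * radius_mm ** 2     # usable mm^2 in the circle

print(round(copper_area), round(available))            # ~256 vs ~628 mm^2
```

By raw cross-sectional area the copper fits with room to spare, which suggests the packing question turns on the pole steel and winding geometry rather than on area alone, consistent with the discussion above.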
Finally, do the benefits of the work outweigh the cost? The answer to this last
question is also affirmative. Regardless of whether the family of benchmark motors or the
PPCEM family was being designed, the analytical model would still have been constructed. The
only addition to the model required to use the PPCEM is deriving an expression for the
standard deviation of torque based on variations in the motor stack length. This is achieved by
means of a first-order Taylor series approximation, Equation 6.37, which requires taking the
derivative of the torque equation, Equation 6.19, with respect to stack length. Once this is
accomplished, using the PPCEM yields a family of motors with high commonality and negligible
loss in performance relative to the individually designed
benchmark motors. Thus, the PPCEM facilitates the exploration of product platform concepts
which can be scaled into an appropriate family of products, providing initial “proof of concept”
that the PPCEM does what it is intended to do. A look ahead to the next example, which is
used to verify the method further, closes this chapter.
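The one extra modeling step the PPCEM required—the first-order Taylor propagation of stack-length variation into torque variation, σ_T ≈ |dT/dL|·σ_L—is generic and easy to sketch. The torque model below is a hypothetical stand-in (Equation 6.19 is not reproduced here); only the propagation step reflects the method.

```python
def torque(L, a=0.09):
    """Hypothetical torque-vs-stack-length model (stand-in for Eq. 6.19)."""
    return a * L

def first_order_sigma(f, x, sigma_x, h=1e-6):
    """First-order Taylor propagation: sigma_f ~ |df/dx| * sigma_x,
    with the derivative estimated by a central finite difference."""
    dfdx = (f(x + h) - f(x - h)) / (2.0 * h)
    return abs(dfdx) * sigma_x

sigma_T = first_order_sigma(torque, 2.0, 0.1)   # sigma_L = 0.1 at L = 2.0
print(sigma_T)
```

For the linear stand-in the propagated value is exactly a·σ_L; for the actual torque equation the same one-line propagation supplies the standard deviation goal used in the platform compromise DSP.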
The design of a family of universal electric motors has been utilized to demonstrate how
the PPCEM can be used to design a common product platform which is scaled in a single
parameter, the stack length of the motor.
Furthermore, the application of the market segmentation grid to help identify and map vertical
leveraging of a product platform for a wide range of performance/price tiers within a given
market segment is illustrated in this example. From the simple analytical model derived in
Section 6.1.3, it has also been shown in this example that for small-scale problems such as this
one, Step 3 of the PPCEM—building metamodels—is not always necessary provided that
analytical expressions exist, or can be derived, to relate variations in the scale factor (the stack
length of the motor in this example) to variations in product performance (goals and constraints).
As evidenced by the discussion in the previous section comparing the individually designed
benchmark motors and the PPCEM platform motors, the PPCEM has been implemented to
design a family of universal motors (based on a scalable product platform) which is capable of
meeting a wide range of torque requirements with minimal compromise in efficiency and mass.
Family of Universal Electric Motors (Chapter 6)
Demonstrated:
• vertical leveraging
• parametric scale factor: stack length
• no metamodels
• robust design implementation: separate goals for “bringing mean on target” and “minimize the variation”
• OptdesX solver
Test: H1, SH1.1, SH1.2, and SH1.3

Family of General Aviation Aircraft (Chapter 7)
Demonstrate:
• horizontal leveraging
• configurational scale factor: # passengers
• kriging metamodels to facilitate robust design and concept exploration
• robust design implementation: design capability indices
• DSIDES solver
Test: H1, SH1.1, SH1.2, SH1.3, and H2
As shown in Figure 6.13, in Chapter 7 the PPCEM is applied to a larger, more
complex problem, namely, the design of a family of General Aviation aircraft. In the General
Aviation aircraft example, all of the steps in the PPCEM are implemented, including the
creation of kriging metamodels, in order to verify
further Hypothesis 1, its related sub-hypotheses, and Hypothesis 2. The details of the General
Aviation aircraft example are discussed at the beginning of the next chapter.
CHAPTER 7
In this chapter, the PPCEM is applied in full to the design of a family of General
Aviation aircraft (GAA) for final verification of the method. The layout of this chapter parallels
that of the universal motor case study in the previous chapter. Motivation and background for
the GAA are given in Section 7.1, along with Step 1 of the PPCEM, namely, creation of an
appropriate market segmentation grid for the family of GAA to accommodate the problem
requirements. Based on the desired horizontal leveraging strategy, the scale factor for the
problem is taken as the number of passengers on the aircraft as explained in Section 7.2; the
control factors and responses for the family of GAA also are described in Section 7.2 as part of
Step 2 of the PPCEM. Kriging metamodels then are created for the mean and standard
deviation of each response for the family of GAA in Section 7.3. After validating the accuracy
of the metamodels, a compromise DSP for the family of aircraft is formulated in Section 7.4 and
exercised in Section 7.5 to develop the GAA platform portfolio. Ramifications of the results
and comparison of the PPCEM solutions against individually designed benchmark aircraft are
discussed in Section 7.6. In addition, a product variety tradeoff study is performed, making use
of the noncommonality indices (NCI) and performance deviation indices (PDI) to examine the
tradeoff between commonality and performance within the family of aircraft.
As shown in Table 7.1, all but Hypothesis 3 are verified further in this chapter. As
mentioned in the preceding paragraph, the market segmentation grid is used to identify a
horizontal leveraging strategy for the GAA product family, providing further verification of
Sub-Hypothesis 1.1. The use of design capability indices is demonstrated to aggregate the product
family specifications (SH1.3) and facilitate the development of a product platform which is
robust (SH1.2) to variations in the number of passengers on the aircraft, the scale factor.
Furthermore, kriging metamodels are exploited in this chapter to facilitate the implementation of
robust design within the PPCEM, providing further support for Hypothesis 2. Space filling
designs are utilized to create these metamodels; however, only one type of design is used—an
235
SH1.3 Aggregating product family specifications
H2 Utility of kriging for metamodeling deterministic computer experiments
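Step 3 of the PPCEM (Section 7.3) fits kriging metamodels to the mean and standard deviation of each response. A minimal ordinary-kriging interpolator with a Gaussian correlation function and a fixed correlation parameter θ can be sketched in one dimension; the actual GAA metamodels tune θ by maximum likelihood and operate on multi-dimensional inputs, so this sketch illustrates only the predictor's structure.

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kriging(xs, ys, theta=5.0):
    """Ordinary kriging in 1-D: y(x) = beta + r(x)' R^-1 (y - beta*1)."""
    corr = lambda a, b: math.exp(-theta * (a - b) ** 2)   # Gaussian correlation
    R = [[corr(a, b) for b in xs] for a in xs]
    Rinv_y = solve(R, ys)
    Rinv_1 = solve(R, [1.0] * len(xs))
    beta = sum(Rinv_y) / sum(Rinv_1)          # generalized least-squares mean
    w = solve(R, [y - beta for y in ys])      # R^-1 (y - beta*1)
    def predict(x):
        r = [corr(x, a) for a in xs]
        return beta + sum(ri * wi for ri, wi in zip(r, w))
    return predict

xs = [0.0, 0.5, 1.0]          # sample sites (illustrative values)
ys = [1.0, 0.2, 0.9]          # sampled responses (illustrative values)
f = kriging(xs, ys)
print([round(f(x), 6) for x in xs])   # kriging interpolates its data exactly
```

The exact-interpolation property shown at the sample sites is what makes kriging attractive for metamodeling deterministic computer experiments such as GASP runs, where there is no random error to smooth out.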
The findings and lessons learned in this example are summarized at the
end of the chapter in Section 7.7. The objective in the summary is to describe how Hypothesis
1 has been verified further through this example. A brief look ahead to Chapter 8 is given in this
last section.
7.1 STEP 1: DEVELOPMENT OF THE MARKET SEGMENTATION GRID
Before developing the market segmentation grid, an overview of the General Aviation
aircraft example is given in the next section. This is followed in Section 7.1.2 with a brief
overview of the phases of aircraft design. The market segmentation grid for the General
Aviation aircraft example is presented in Section 7.1.3 along with the baseline design which
What is a General Aviation aircraft? The term General Aviation encompasses all
flights except military operations and commercial carriers. Its potential buyers form a diverse
group that includes weekend and recreational pilots, training pilots and instructors, traveling
business executives and even small commercial operators. Satisfying a group with such diverse
needs and economic potential poses a constant challenge for the General Aviation industry
because it is impossible to satisfy all of the market needs with a single aircraft. The present
financial and legal pressures endured by the General Aviation sector make small production
runs of specialized models unprofitable. As a result, many General Aviation aircraft are no
longer being produced, and the few remaining models are beyond the financial capability of
most potential buyers.
In an effort to revitalize the General Aviation sector, the National Aeronautics and
Space Administration (NASA) and the Federal Aviation Administration (FAA) recently
sponsored a General Aviation Design Competition (NASA and FAA, 1994). For this work, a
subset of the requirements from this competition is adopted to define the design problem.
One solution to the GAA crisis is to develop a family of aircraft which can be adapted
easily to satisfy distinct groups of customer demands. Therefore, the purpose in this example is
to design a family of aircraft scaled around the two, four, and six seater configurations
using the PPCEM. This family of General Aviation aircraft must be capable of
satisfying a wide range of customer requirements for performance,
price, and operating cost while meeting desired technical and economic
considerations.
In order to realize this objective and demonstrate the application of the PPCEM, a brief
overview of aircraft design is given in the next section. This is followed in Section 7.1.3 with the
development of the market segmentation grid—Step 1 of the PPCEM—for the family of GAA.
How does one go about designing an aircraft? Aircraft design traditionally is divided
into three phases, namely, conceptual, preliminary, and detailed design as illustrated in Figure
7.1. If manufacturing design is considered as a part of the design process, design for production
can be added as a fourth phase. The first two phases of aircraft design, the conceptual and
preliminary phases, are sometimes combined and called advanced design or synthesis in the
aerospace industry, while follow-on phases are called project design or analysis. More detailed
descriptions of the decisions made in each phase and the disciplines involved in aircraft design
can be found in, e.g., (Bond and Ricci, 1992). Specifically, the efforts in this example are
directed toward utilizing the computer within the traditional conceptual phase of aircraft design.
Conceptual Design Phase: In this phase, the general size and configuration of the aircraft
is determined. Parametric trade studies are conducted using preliminary estimates of
aerodynamics and weights to determine the best wing loading, wing sweep, aspect ratio,
thickness ratio, and general wing-body-tail configuration. Different engines are
considered and the thrust loading is varied to obtain the best match of airframe and
engine. The first look at cost and manufacturing possibilities is made at this time. The
feasibility of the design to accomplish a given set of mission requirements is established,
but the details of the configuration are subject to change.
In general, in the early stages of aircraft design, the aircraft concept is synthesized at the
system level. Top-level design specifications are used as the starting point for the preliminary design at
the subsystem level, and form the basis for the specifications (functional properties) that are
developed during the preliminary design phase. These top-level design specifications include
variables such as aspect ratio, thickness ratio, and wing-body-tail configuration. The top-level
240
design specifications can be continuous (e.g., aspect ratio = 711) or they can be discrete
design concepts (e.g., single or twinengine, single, number of propeller blades, high or low
wing, and fixed or retractable landing gear). The settings of the toplevel design specifications
define the conceptual baseline which becomes the configuration input for preliminary design,
where the system is decomposed for more sophisticated analysis by discipline, subsystem, or
component. The reader is referred to (Koch, 1997) for more discussion on the resulting
Several synthesis and analysis programs have been created to facilitate the conceptual
and preliminary design of aircraft and hence the development of these top-level design
specifications. One such program is FLOPS (FLight OPtimization System (McCullers,
1984)), which is used for the preliminary design and evaluation of advanced aircraft concepts.
Another program is GASP (General Aviation Synthesis Program (NASA, 1978)). GASP is a
computer program which performs tasks specifically associated with the conceptual design of
General Aviation aircraft; consequently, it has been selected as the synthesis program for use in
this example.
What is GASP and how does it work? GASP is a synthesis and analysis computer
program which facilitates parametric studies of small aircraft. GASP specializes in small fixed-
wing aircraft employing propulsion systems ranging from a single piston engine with a fixed-pitch
propeller to more advanced configurations. GASP contains an overall control module and six
technology submodules which perform the various independent studies required in the design
of General Aviation or small transport type aircraft:
• Aerodynamics Module: Lift and drag coefficients and the lift curve slope are
determined, accounting for aspect ratio, sweep angle, Mach number, and induced drag.
• Weight and Balance Module: Weight trend coefficients for gross weight, payload,
and aircraft geometry are used to estimate the center-of-gravity travel of the aircraft,
fuel tank size, and compartment weight.
• Mission Performance Module: All of the mission segments, such as taxi, takeoff,
climb, cruise, and landing, are analyzed, including the total range. The program also
calculates the best rate of climb, high speed climb, and other characteristics.
• Economics Module: Manufacturing and operating costs are estimated based on the
date of inception of the program (i.e., in 1970's dollars).
Input variables for GASP are general indicators of aircraft type, size, and performance.
The numerical output from GASP includes many aircraft design characteristics such as range,
direct operating cost, maximum cruise speed, and lift-to-drag ratio. For conceptual design of an
aircraft, GASP is used to find appropriate settings for the top-level design specifications that
satisfy the design requirements. By utilizing GASP as the simulation package within the
PPCEM, a set of top-level design specifications can be developed for a suitable aircraft
platform which is good for the entire family of aircraft as shown in Figure 7.1. The first step in
the PPCEM is to develop the market segmentation grid, which is accomplished in the next
section.
7.1.3 The Market Segmentation Grid for the GAA Example Problem
As stated at the beginning of this section, the objective in this example is to design a
family of GAA around the two, four, and six seater configurations. The market segmentation
grid shown in Figure 7.2 depicts a potential leveraging strategy for the GAA example. The goal
is to design a low end aircraft platform which can be leveraged across three different market
segments defined by the capacity of the aircraft (i.e., two people, four people, and six
people), similar to the Boeing 747 series of aircraft (Rothwell and Gardiner, 1990). Each
aircraft could eventually be vertically scaled through the addition and removal of features to
increase its performance and attractiveness to a customer base willing to pay a higher price.

[Figure 7.2: Market segmentation grid for the GAA example, leveraging a low end platform across the two, four, and six seater segments toward the mid-range and high end]
The baseline configuration is derived from an existing General Aviation aircraft, namely,
the Beechcraft Bonanza B36TC. The Bonanza is a four-to-six seat, single-engine business and
utility aircraft, as illustrated in Figure 7.3, and is one of the most popular GAA sold in recent
years; it can cruise at 25,000 ft with a speed of 200 knots and a minimum range of 956 nautical
miles (at 79% of power). Furthermore, the Bonanza's mission and performance characteristics
are close to those specified in the GAA competition (NASA and FAA, 1994). Taking the
Bonanza as the starting point, several calculations can be performed to determine the GASP
input data, specifically for aerodynamics, engine performance, and stability control parameters,
according to the mission requirements. The mission specifications used in GASP are
summarized in Table 7.2, and the corresponding top-level design specifications for this baseline
aircraft are listed in Table 7.3.
Table 7.3 Baseline Model Specifications

Wing:
  Span: 38.3 ft
  Mean Chord: 5.09 ft
  1/4 Chord Sweep: 4.0°
  Taper Ratio: 0.46
  Root Thickness: 0.15
  Wing Loading: 20.5 lb/ft²
  Wing Fuel Volume: 180 gal
Horizontal Tail:
  Aspect Ratio: 5.08
  Area: 45.2 ft²
  Span: 15.15 ft
  Mean Chord: 3.09 ft
  Thickness: 0.09
Vertical Tail:
  Aspect Ratio: 1.07
  Area: 20.8 ft²
  Span: 4.71 ft
  Mean Chord: 4.61 ft
  Thickness: 0.07
Engine and Propeller:
  Engine Power: 350 HP turbocharged
  Static Thrust/Wt: 0.339
  Activity Factor: 110
  Propeller Diameter: 6.30 ft
  # of Blades: 3
The flight mission for the family of GAA in this example is illustrated in Figure 7.4. As
specified in the GAA competition guidelines (NASA and FAA, 1994), a General Aviation
aircraft is required to fly at 150-300 kts (Mach 0.24 to 0.48) for a range of 800-1000 nautical
miles. The mission profile shown in Figure 7.4 has a (baseline) cruise speed of Mach 0.31 (≈
200 kts) and a range of 900 n.m. (nautical miles). In the diagram, FAR 23 represents Part 23
of the Federal Aviation Requirements, which designates acceptable noise levels during aircraft
takeoff.

Based on the GAA market segmentation grid in Figure 7.2, the leveraging strategy for the
family is built around the number of people on the aircraft; hence, the number of passengers is the scale factor
in this problem. The effect of the number of passengers on the length of the fuselage and sizing
of the aircraft is discussed more in the next section, wherein the targets and requirements for
each aircraft and the design variables for this example are described.
7.2 STEP 2: GAA FACTOR CLASSIFICATION
Having created the market segmentation grid and identified an appropriate leveraging
strategy and scale factor, the next step in the PPCEM is to classify the factors within the GAA
problem. The general configuration of each aircraft has been fixed at three propeller blades,
high wing position, and retractable landing gear based on previous work (Simpson, 1995). The
design variables (i.e., control factors) and corresponding ranges of interest in this study are as
follows:
1. Cruise speed, CSPD ∈ [Mach 0.24, Mach 0.48]; baseline is Mach 0.31
2. Wing aspect ratio, AR ∈ [7, 11]; baseline is 7.88
3. Propeller diameter, DPRP ∈ [5.0 ft, 5.96 ft]; baseline is 6.3 ft
4. Wing loading, WL ∈ [19 lb/ft², 25 lb/ft²]; baseline is 20.5 lb/ft²
5. Engine activity factor, AF ∈ [85, 110]; baseline is 110
6. Seat width, WS ∈ [14.0 in, 20.0 in]; baseline is 20 in
A brief description of the importance and effects of each of these variables is included in Section
F.1.
There are a total of nine responses (i.e., requirements and goals) which are of interest
for each aircraft: takeoff noise, direct operating cost, ride roughness, empty weight, fuel
weight, purchase price, maximum cruise speed, flight range, and lift/drag ratio. In general, it
is desired to:
• lower direct operating cost, purchase price, empty weight, and fuel weight to their
targets;
• raise maximum cruise speed, flight range, and lift/drag ratio to their targets; and
• meet constraints on the maximum takeoff noise, ride roughness, direct operating cost,
empty weight, and fuel weight, and on the minimum flight range.
The constraint values and target values for the goals employed in this example are listed in Table
7.4 and Table 7.5, respectively. As such, these constraints and targets define the market
niche for each of the three aircraft. As shown in Table 7.4, the constraint values for takeoff
noise, direct operating cost, ride roughness, aircraft empty weight, and range are the same for
each aircraft within the family; only the fuel weight constraint varies for each aircraft, allowing
the larger aircraft to carry more fuel.

Table 7.4 Constraints for the Two, Four, and Six Seater GAA

The goal targets which define each market niche are listed in Table 7.5. A compromise
DSP is used to determine the settings of the six design variables which lower fuel weight, empty
weight, direct operating cost, and purchase price to their targets or below while raising
maximum lift/drag, cruise speed, and range to their targets. The compromise DSP formulation
is given in Figure 7.7.
Table 7.5 Goal Targets for the Two, Four, and Six Seater GAA
Based on the leveraging strategy shown in Figure 7.2, the number of people in the
aircraft is taken as the scale factor in the design process, ranging from a minimum of 2 to a
maximum of 6. Furthermore, it is assumed that the demand for the aircraft is uniform; therefore,
the scale factor—the number of passengers—is assumed to be uniformly distributed and so are
the corresponding responses. Taking the number of passengers as a scale factor, the length of
the central portion of the fuselage of the aircraft is scaled automatically within GASP to
accommodate the necessary number of passengers (plus one pilot). Because the length of the
aircraft is fixed once the number of people is specified, the mean and variance of the
scale factor are known in this example, unlike in the universal motor example.
Based on this factor classification scheme, the P-diagram for the GAA example is shown
in Figure 7.5.

[Figure 7.5: P-diagram for the GAA example. Control factors X: cruise speed, aspect ratio, propeller diameter, wing loading, engine activity factor, seat width. Scale factor S: # passengers. System: GASP. Responses Y: takeoff noise, ride roughness, empty weight, fuel weight, purchase price, direct operating cost, maximum range, maximum speed, maximum lift/drag.]
As shown in the figure, there are six control factors (design variables), one scale factor (the
number of passengers), and nine responses (constraints and goals). The process of constructing
kriging metamodels which relate the control and scale factors to each of the responses is
described next.

The next step in the PPCEM is to build and validate metamodels of the
analysis/simulation routine, i.e., GASP. In particular, robustness models are constructed for
each of the nine responses, yielding a total of 18 metamodels: one metamodel for the mean and
one for the standard deviation of each response. Why use metamodels in the GAA example?
The impetus is twofold. First, GASP provides a "black-box" type analysis for sizing an aircraft.
The computation time for GASP is about 45 seconds, which does not necessarily warrant the
use of metamodels; however, after multiplying this number by three—the number of aircraft in
the family—and considering the number of design scenarios that are to be considered, the
computational expense adds up quickly. Moreover, it is difficult to estimate the mean and
variance of each response for the family of aircraft without the metamodels. It is much more
efficient to build metamodels for the mean and deviation of each response and use them to
search the design space than it is to use GASP directly in the search for a good platform design.
The product array approach is employed to build kriging metamodels of the mean and
standard deviation of each response to variations in the number of passengers in each aircraft.
This approach is illustrated in Figure 7.6. The outer array is based on a randomized orthogonal
array of 64 points (n = 64). The use of the randomized orthogonal array is based, primarily, on
ease of generation and available sample sizes; it is also based, in part, on its performance in the
kriging/DOE study in Chapter 5, even though a six variable test problem was not utilized in that
study. To compare the sample sizes, a half-fraction CCD for six factors would contain 45
points, and a full-fraction CCD would contain 77 points. The kriging models employ the
Gaussian correlation function, Equation 2.16, because this correlation function yielded the
lowest RMSE and maximum error, on average, in the kriging/DOE study in Chapter 5 (see
Section 5.3 in particular).
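As an illustration of the kriging predictor with the Gaussian correlation function, the following is a minimal sketch in Python. It uses a fixed correlation parameter θ rather than the MLE fit described in the dissertation, and all function names are illustrative, not part of any existing package:

```python
# Minimal kriging predictor sketch using the Gaussian correlation function
# R(x, x') = exp(-sum_k theta_k * (x_k - x'_k)^2).
# theta is supplied by the caller here; the dissertation estimates it by MLE.
import numpy as np

def gaussian_corr(X1, X2, theta):
    """Pairwise Gaussian correlations between the rows of X1 and X2."""
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2          # squared differences
    return np.exp(-np.tensordot(d2, theta, axes=([2], [0])))

def fit_kriging(X, y, theta, nugget=1e-10):
    """Estimate the constant trend beta and precompute R^{-1}(y - beta)."""
    n = len(y)
    R = gaussian_corr(X, X, theta) + nugget * np.eye(n)  # tiny nugget for stability
    Rinv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = (ones @ Rinv @ y) / (ones @ Rinv @ ones)      # GLS estimate of the mean
    return {"X": X, "theta": theta, "beta": beta, "w": Rinv @ (y - beta)}

def predict(model, Xnew):
    """BLUP: y_hat(x) = beta + r(x)^T R^{-1} (y - beta * 1)."""
    r = gaussian_corr(Xnew, model["X"], model["theta"])
    return model["beta"] + r @ model["w"]
```

A useful property of this predictor is that, nugget aside, it interpolates the sample data exactly, which is why validation requires a separate set of points.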
[Figure 7.6: Product array approach. The outer array samples the n = 64 design-variable combinations (cspd, ar, dprp, wl, af, ws); the inner array varies the scale factor # PAX over {1, 3, 5} for each outer-array point. From the resulting responses y_{j,i,k}, the mean μ_{j,i} and standard deviation σ_{j,i} of each response j (noise, wemp, doc, rough, wfuel, purch, range, vcrmx, ldmax) are computed, and kriging models μ̂_{y_j} = f(cspd, ar, dprp, wl, af, ws) and σ̂_{y_j} = f(cspd, ar, dprp, wl, af, ws) are fit for each response.]

Figure 7.6 Product Array Approach for Constructing GAA Kriging Models
Because there is only one scale factor which has three possible settings, the inner array
shown in Figure 7.6 simply contains three runs, one for each possible value of the scale factor.
Hence, GASP is executed 3n times in order to build the kriging models for the mean and
deviation of the GAA responses. Notice that the variable PAX—the number of passengers—
varies from 1 to 5 in the figure. This is because the total number of people on the aircraft is
equal to the number of passengers plus 1 pilot; varying PAX from one to five is the same as
varying the number of people on the aircraft from two to six, allowing a family of aircraft to be
designed around the two, four, and six seater configurations.

After varying the number of passengers for each combination of the design variables as
specified by the outer array, the mean and standard deviation of each response are computed
for each run using Equations 7.1 and 7.2, which are as follows:
• Mean: μ_{y_{j,i}} = (y_{j,i,1} + y_{j,i,3} + y_{j,i,5})/3, j = {noise, wemp, ..., ldmax}, i = {1, 2, ..., n} [7.1]

• Std. Dev.: σ_{y_{j,i}} = (y_{j,i,5} − y_{j,i,1})/√12, j = {noise, wemp, ..., ldmax}, i = {1, 2, ..., n} [7.2]
Computation of the standard deviation assumes a uniform distribution of the response because
the number of passengers is assumed to vary uniformly over the design space. As an example,
the mean and standard deviation of the direct operating cost, DOC, for the 3rd experimental
run are computed from the DOC values obtained at PAX = 1, 3, and 5 for that run.
It is in this manner that the means and deviations of each response for each experimental run in
the outer array are computed for a given experimental design. Kriging metamodels then are
constructed for the mean and deviation of each response, resulting in 18 metamodels. The
kriging algorithm described in Section A.2.1 is used to fit the models; the fitted values (MLE
estimates) for the "best" kriging model for each response are listed in Section F.2.
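The inner-array computation of Equations 7.1 and 7.2 can be sketched as follows; `simulate` is a hypothetical stand-in for a GASP run at a given design point and passenger count, not an actual GASP interface:

```python
# Sketch of the per-run statistics in Eqs. 7.1 and 7.2: for each outer-array
# design point, one response is evaluated at the inner-array levels PAX = 1, 3, 5,
# and the mean and standard deviation over the uniformly distributed scale
# factor are computed from those three values.
import math

def response_stats(simulate, design_point, pax_levels=(1, 3, 5)):
    """Mean and standard deviation of one response over the inner array."""
    ys = [simulate(design_point, pax) for pax in pax_levels]
    mean = sum(ys) / len(ys)                      # Eq. 7.1
    # Eq. 7.2: (range)/sqrt(12), the standard deviation of a uniform
    # distribution; abs() guards against responses that decrease with PAX.
    std = abs(ys[-1] - ys[0]) / math.sqrt(12.0)
    return mean, std
```

Looping this over all n = 64 outer-array points yields the 3n simulation runs noted above and the (μ, σ) training data for the kriging models.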
A set of 1000 validation points from a random Latin hypercube is used to assess the
accuracy of the GAA kriging models. The maximum error and root mean square error (RMSE)
based on the set of validation points for the kriging models built from the 64 point orthogonal
array are summarized in Table 7.6; both raw values and percentages (of the sample range) are
listed.
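The validation metrics reported in Table 7.6 can be computed with a sketch like the following; the function name is illustrative:

```python
# Validation-metric sketch: maximum absolute error and RMSE of a metamodel
# over a set of validation points, reported both raw and as a percentage of
# the sample range (the scaling used in Table 7.6).
import math

def validation_errors(y_true, y_pred):
    errors = [abs(t - p) for t, p in zip(y_true, y_pred)]
    max_err = max(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    span = max(y_true) - min(y_true)              # sample range for % scaling
    return {"max": max_err, "rmse": rmse,
            "max_pct": 100.0 * max_err / span,
            "rmse_pct": 100.0 * rmse / span}
```

Reporting errors as a percentage of the sample range makes the nine responses, which have very different units and magnitudes, directly comparable.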
With the exception of the maximum error for σ_DOC, all of the kriging metamodels appear
sufficiently accurate for this study; maximum errors are about 4% or less, and RMSEs are 1%
or less. Despite the large maximum error for σ_DOC, the RMSE for σ_DOC is sufficiently low
that these kriging models are used throughout the rest of the GAA example. Thus, Step 3 of the
PPCEM is complete, and the compromise DSP for the family of aircraft is formulated in the
next section as Step 4 in the PPCEM.
In the universal motor example, separate goals for "bringing the mean on target" and
"minimizing the deviation" for variations in the stack length are used. In this example, the
compromise DSP for the family of GAA employs design capability indices (Cdk) to assess the
ranged set of design requirements. Design capability indices are formulated for both the
constraints and goals for the family of GAA, as defined by the constraint and target values listed
in Section 7.2. For the constraints:
• NOISE ≤ 75 = URL: Cdk,noise = Cdu,noise = (75 − μ_noise)/(3σ_noise) [7.5]

• DOC ≤ 80 = URL: Cdk,doc = Cdu,doc = (80 − μ_doc)/(3σ_doc) [7.6]

• ROUGH ≤ 2 = URL: Cdk,rough = Cdu,rough = (2.0 − μ_rough)/(3σ_rough) [7.7]

• WEMP ≤ 2200 = URL: Cdk,wemp = Cdu,wemp = (2200 − μ_wemp)/(3σ_wemp) [7.8]

• WFUEL ≤ 450 = URL: Cdk,wfuel = Cdu,wfuel = (450 − μ_wfuel)/(3σ_wfuel) [7.9]

• RANGE ≥ 2000 = LRL: Cdk,range = Cdl,range = (μ_range − 2000)/(3σ_range) [7.10]

For the goals:

• WFUEL:

• WEMP:

• PURCH:

• DOC: Cdk,doc = Cdu,doc = (60 − μ_doc)/(3σ_doc) [7.14]

• LDMAX: Cdk,ldmax = Cdl,ldmax = (μ_ldmax − 17)/(3σ_ldmax) [7.15]

• VCRMX: Cdk,vcrmx = Cdl,vcrmx = (μ_vcrmx − 200)/(3σ_vcrmx) [7.16]

• RANGE: Cdk,range = Cdl,range = (μ_range − 2500)/(3σ_range) [7.17]
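The design capability index computations above all follow one of two patterns, sketched below: upper requirement limits use Cdu = (URL − μ)/(3σ) and lower limits use Cdl = (μ − LRL)/(3σ), so that Cdk ≥ 1 indicates the ±3σ response range of the family lies within the requirement. The function names are illustrative:

```python
# Design capability index sketch following the Cdk formulations above:
# an upper requirement limit (URL) is satisfied in a 3-sigma sense when
# mu + 3*sigma <= URL, i.e., Cdu >= 1; a lower requirement limit (LRL)
# is satisfied when mu - 3*sigma >= LRL, i.e., Cdl >= 1.
def cdu(mu, sigma, url):
    """Capability index against an upper requirement limit."""
    return (url - mu) / (3.0 * sigma)

def cdl(mu, sigma, lrl):
    """Capability index against a lower requirement limit."""
    return (mu - lrl) / (3.0 * sigma)
```

For instance, the noise constraint of Equation 7.5 would be evaluated as `cdu(mu_noise, sigma_noise, 75.0)`, with μ and σ supplied by the kriging metamodels.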
The resulting compromise DSP for the GAA product platform using these Cdk
formulations is given in Figure 7.7. There are six design variables, six constraints, and seven
goals. Of the seven goals, three are related to the economic performance of the aircraft—
empty weight (WEMP), purchase price (PURCH), and direct operating cost (DOC)—and the
remaining four are related to the technical performance of the aircraft: fuel weight (WFUEL),
maximum lift/drag (LDMAX), maximum cruise speed (VCRMX), and maximum flight range
(RANGE).
Given:
o Baseline aircraft configuration and mission profile
o Configuration scale factor = # passengers (where total # seats = # passengers + 1 pilot)
o Kriging models for mean and standard deviation of each response
Find:
o The system variables, x:
  • cruise speed, CSPD        • wing loading, WL
  • wing aspect ratio, AR     • engine activity factor, AF
  • propeller diameter, DPRP  • seat width, WS
o The values of the deviation variables associated with G(x):
  • fuel weight Cdk, d1−, d1+             • maximum lift/drag Cdk, d5−, d5+
  • empty weight Cdk, d2−, d2+            • maximum speed Cdk, d6−, d6+
  • direct operating cost Cdk, d3−, d3+   • maximum range Cdk, d7−, d7+
  • purchase price Cdk, d4−, d4+
Satisfy:
o The system constraints, C(x), based on kriging models:
  • NOISE Cdk greater than 1:  Cdk,noise(x) ≥ 1 [7.5]
  • DOC Cdk greater than 1:    Cdk,doc(x) ≥ 1 [7.6]
  • ROUGH Cdk greater than 1:  Cdk,rough(x) ≥ 1 [7.7]
  • WEMP Cdk greater than 1:   Cdk,wemp(x) ≥ 1 [7.8]
  • WFUEL Cdk greater than 1:  Cdk,wfuel(x) ≥ 1 [7.9]
  • RANGE Cdk greater than 1:  Cdk,range(x) ≥ 1 [7.10]
o The system goals, G(x), based on kriging models:
  • WFUEL Cdk greater than 1:  Cdk,wfuel(x) + d1− − d1+ = 1.0 [7.11]
  • WEMP Cdk greater than 1:   Cdk,wemp(x) + d2− − d2+ = 1.0 [7.12]
  • DOC Cdk greater than 1:    Cdk,doc(x) + d3− − d3+ = 1.0 [7.13]
  • PURCH Cdk greater than 1:  Cdk,purch(x) + d4− − d4+ = 1.0 [7.14]
  • LDMAX Cdk greater than 1:  Cdk,ldmax(x) + d5− − d5+ = 1.0 [7.15]
  • VCRMX Cdk greater than 1:  Cdk,vcrmx(x) + d6− − d6+ = 1.0 [7.16]
  • RANGE Cdk greater than 1:  Cdk,range(x) + d7− − d7+ = 1.0 [7.17]
o Constraints on deviation variables: di− · di+ = 0 and di−, di+ ≥ 0
o The bounds on the system variables:
  0.24 M ≤ CSPD ≤ 0.48 M      19 lb/ft² ≤ WL ≤ 25 lb/ft²
  7 ≤ AR ≤ 11                 85 ≤ AF ≤ 110
  5.0 ft ≤ DPRP ≤ 5.96 ft     14.0 in ≤ WS ≤ 20.0 in
Minimize:
o The sum of the deviation variables associated with:
  • fuel weight Cdk, d1−              • maximum lift/drag Cdk, d5−
  • empty weight Cdk, d2−             • maximum speed Cdk, d6−
  • direct operating cost Cdk, d3−    • maximum range Cdk, d7−
  • purchase price Cdk, d4−
Z = { f1(d1−), f2(d2−), f3(d3−), f4(d4−), f5(d5−), f6(d6−), f7(d7−) }
Based on this GAA compromise DSP, the initial baseline design is infeasible in two
regards. First, the propeller diameter is too great, as explained in Section 7.1. At 6.3 ft, the
speed of the propeller tip is above sonic speed, violating a tip-speed constraint which is not
explicitly modeled in the GAA compromise DSP; thus, the range for the propeller diameter is
set at 5-5.96 ft so that this constraint is always met. Second, the DOC violates the $80/hr
constraint which has been selected. The baseline design still represents a good design;
however, the GAA compromise DSP is being used to improve it, as discussed in the next
section wherein the product platform portfolio is developed in Step 5 of the PPCEM.
In order to develop the GAA platform portfolio for the family of GAA to meet the
constraints and goals set forth in Section 7.2, three design scenarios are investigated (see Table
7.7).

Overall Tradeoff Study: All of the goals are weighted equally in an effort to develop a
platform that simultaneously meets both economic and performance requirements as
best as possible.

Economic Tradeoff Study: Economic related goals (Cdk's for empty weight, purchase
price, and direct operating cost) are given top priority to find a platform which meets all
of the economic requirements as best as possible; satisfying performance goals is
second priority.

Performance Tradeoff Study: Performance related goals (Cdk's for fuel weight, max.
lift/drag, max. speed, and max. range) are placed at the first priority level to develop a
platform that satisfies all of the performance requirements as best as possible;
meanwhile, economic goals are given second priority.
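The preemptive priority structure of these scenarios amounts to a lexicographic comparison of deviation function values: PLEV1 is minimized first, and PLEV2 only breaks ties. A minimal sketch, with illustrative helper names and list indices 1-7 standing for the deviation variables d1− through d7−:

```python
# Lexicographic comparison sketch for the preemptive deviation functions:
# economic goals use d2-, d3-, d4- (empty weight, DOC, purchase price);
# performance goals use d1-, d5-, d6-, d7- (fuel weight, lift/drag, speed, range).
# `d` is a list of length 8 whose index 0 is unused.
def deviation_levels(d, scenario):
    econ = (d[2] + d[3] + d[4]) / 3.0
    perf = (d[1] + d[5] + d[6] + d[7]) / 4.0
    overall = sum(d[i] for i in range(1, 8)) / 7.0
    if scenario == 1:
        return (overall,)          # single priority level
    if scenario == 2:
        return (econ, perf)        # economics first, performance second
    return (perf, econ)            # scenario 3: performance first

def better(da, db, scenario):
    """True if deviation vector da is lexicographically preferred to db."""
    return deviation_levels(da, scenario) < deviation_levels(db, scenario)
```

Python's tuple comparison is itself lexicographic, so `<` on the level tuples compares PLEV1 first and falls back to PLEV2 only when PLEV1 values are equal.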
The corresponding deviation function formulations for each scenario are listed in Table 7.7.

Table 7.7 Deviation Functions for Each Design Scenario

Scenario                  PLEV1                                          PLEV2
1. Overall Tradeoff       (d1− + d2− + d3− + d4− + d5− + d6− + d7−)/7    —
2. Economic Tradeoff      (d2− + d3− + d4−)/3                            (d1− + d5− + d6− + d7−)/4
3. Performance Tradeoff   (d1− + d5− + d6− + d7−)/4                      (d2− + d3− + d4−)/3

Note: d1− drives Cdk,wfuel to 1; d2− drives Cdk,wemp to 1; d3− drives Cdk,doc to 1; d4− drives Cdk,purch to 1; d5− drives Cdk,ldmax to 1; d6− drives Cdk,vcrmx to 1; d7− drives Cdk,range to 1.
Three starting points are used when solving the GAA product platform compromise
DSP for each scenario: the lower, middle, and upper bounds of the design variables. In
situations where all three starting points do not converge to the same solution, the design with
the lowest deviation function value is taken as the best design (the reader is referred to the
convergence studies in Section 7.6.1). The resulting product platform specifications obtained
by solving the compromise DSP in Figure 7.7 are given in the next section. The individual
instantiations of the aircraft within the family based on the kriging metamodels then are
discussed in Section 7.5.2.
7.5.1 Results of the GAA Compromise DSP for the Family of Aircraft
The resulting product platform specifications for each design scenario are summarized in
Table 7.8. Recall that the target value for each Cdk is 1; values above one indicate that the
family of GAA has met the desired URL or LRL, while values below one indicate that the targets
have not been met for that particular requirement. All solutions are feasible, and the values for
Cdk,rough and Cdk,noise have not been included because they have no bearing on the deviation
function (other than to make the solution infeasible). The Cdk values for the initial baseline
design have also been included in the table for the sake of comparison. The PPCEM based
family has an unfair advantage because the baseline aircraft (the Beechcraft Bonanza B36TC
presented in Section 7.1.3) is a six seater aircraft and, as such, is not expected to perform well
when scaled down to fit fewer passengers; however, it still provides a reference point for
comparison.
Table 7.8 Product Platform Specifications for Each Design Scenario

                   Baseline   Scenario 1   Scenario 2   Scenario 3
Des. Var.
CSPD [Mach]        0.31       0.244        0.242        0.291
AR                 7.88       8.00         8.09         7.62
DPRP [ft]          6.3        5.13         5.19         5.55
WL [lb/ft²]        20.5       22.45        22.63        22.48
AF                 110        89.60        89.40        85.63
WS [in]            20         18.60        18.72        18.70
Goals
Cdk,wfuel (P*)     0.640      1.164        1.236        1.156
Cdk,wemp (E)       0.074      0.810        0.903        0.806
Cdk,doc (E)        670.476    1.588        1.312        26.270
Cdk,purch (E)      2.557      0.733        0.449        0.070
Cdk,ldmax (P)      3.230      4.474        4.427        4.964
Cdk,vcrmx (P)      4.397      4.303        3.702        2.017
Cdk,range (P)      4.157      0.577        0.672        0.429
Dev. Fcn.
PLEV1                         2.036        0.986        2.388
PLEV2                                      2.950        9.4556

* P indicates the Cdk is related to performance, E to economics; economic goals rank first in Scenario 2, performance goals rank first in Scenario 3.
Compared to the initial baseline design, the PPCEM designs have a lower cruise speed,
propeller diameter, engine activity factor, and seat width. Meanwhile, the wing loading is slightly
larger in general, and the aspect ratio fluctuates around the baseline value. Comparing the
design variables for Scenarios 1 and 2, there is negligible difference. This indicates that in the
overall tradeoff study, the economic goals tend to dominate the solution despite all goals being
equally weighted. In an effort to achieve better performance in Scenario 3 (at the sacrifice of
economic goal achievement), the cruise speed is slightly higher, the propeller diameter is
slightly larger, and the aspect ratio and engine activity factor are slightly lower for this scenario
than in either Scenario 1 or 2. Thus, in order to maintain sufficient flexibility to achieve all the
design considerations in all three scenarios, the resulting product platform is taken as follows:
• Cruise speed = Mach 0.242 or Mach 0.291 (if performance is first priority)
• Propeller diameter = 5.34 ± 0.2 ft
These values comprise the range of values that cruise speed, aspect ratio, etc. should be
allowed to take in order to meet the goals as best as possible in any of the three design
scenarios. It is these values which define the GAA product platform around which the family of
aircraft is developed.

Before instantiating the individual aircraft to examine how well they perform given these
specifications, notice in Table 7.8 that very few Cdk goals achieve their target of 1; only Cdk,wfuel
is consistently larger than 1, indicating that the family of GAA is capable of meeting the
specified fuel weight targets. The empty weight Cdk and purchase price Cdk are the second best,
with Cdk,range performing well in Scenarios 1 and 3. All Cdk values are improved over the
baseline design except for Cdk,ldmax, which has decreased slightly. In Scenario 2, the economic
Cdk's for direct operating cost and empty weight improve slightly, but at the expense of a slight
decrease in the purchase price Cdk when compared to the value obtained in Scenario 1. The
big tradeoff between the economic and performance goals in Scenarios 2 and 3 is best seen in
Cdk,doc. In all three scenarios, the family of GAA is far from achieving its target of $60/hr for the
direct operating cost, as indicated by the low values for Cdk,doc; however, in Scenario 3, when
achieving performance goals is given a higher priority than economic goals, Cdk,doc is even
worse, indicating the compromise between a family of aircraft that performs well and one
that is economical.
To study these compromises further, five more design scenarios are formulated (see
Section F.4) to determine whether it is the Cdk formulation that is performing poorly or the
targets that are difficult to achieve. The results are listed separately in Section F.4 and discussed
therein. The end result of examining all these design scenarios is learning that significant
tradeoffs are occurring in Scenarios 1, 2, and 3, where the economic and performance Cdk goals
are equally weighted at different priority levels. Only when a particular Cdk is given first priority
(i.e., placed at PLEV1) in the GAA product platform compromise DSP can its target (Cdk = 1)
be achieved. Any other time, the solutions from the GAA product platform compromise DSP
represent the best possible compromise which can be obtained for a particular design scenario,
poor Cdk value or not. Furthermore, the deviation function values shown in Table 7.8 are not of
much value in and of themselves because they are based on how well the Cdk's achieve their
target of 1. Recall that the Cdk is only a means to an end, i.e., to generate a family of aircraft
which satisfies the given ranged set of requirements as well as possible. What is important,
however, is the resulting aircraft which come from instantiating the PPCEM aircraft platform for
each market niche.
Unlike in the universal motor example, instantiation of the individual aircraft within the
GAA product family only requires specifying the number of passengers on the plane, not solving
another compromise DSP to find the best stack length to meet a particular torque requirement.
The individual constraints and goals for each aircraft must be formulated first, however, based
on the specifications given in Section 7.2. Based on the requirements in Table 7.4, the
individual constraints match those for the family except for the fuel weight constraint,
WFUEL ≤ Ci,wfuel, where Ci,wfuel = {450 lbs, 475 lbs, 500 lbs} and i = {1, 3, 5} passengers.
Meanwhile, the individual goals for each aircraft are based on the targets in Table 7.5, where
Ti,wfuel = {450 lbs, 400 lbs, 350 lbs}, Ti,wemp = {1900 lbs, 1950 lbs, 2000 lbs}, Ti,purch =
{$41000, $42000, $43000}, and i = {1, 3, 5} passengers. Based on these goals, the deviation
function for each aircraft is a combination of d1+, d2+, d3+, d4+, d5−, d6−, and d7−, because it is
desired to lower fuel weight, empty weight, direct operating cost, and purchase price to their
targets and to raise maximum lift/drag, cruise speed, and range to theirs. The resulting deviation
function formulations for each aircraft for each scenario are listed in Table 7.9. These deviation
functions are identical to those listed in Table 7.7 except that the di+ and di− are for the individual
goals of each aircraft and not the Cdk's for the family (which only use di− to raise each Cdk to its
target of 1).
Table 7.9 Deviation Functions for Each Aircraft

Scenario                  PLEV1                                          PLEV2
1. Overall tradeoff       (d1+ + d2+ + d3+ + d4+ + d5− + d6− + d7−)/7    —
2. Economic tradeoff      (d2+ + d3+ + d4+)/3                            (d1+ + d5− + d6− + d7−)/4
3. Performance tradeoff   (d1+ + d5− + d6− + d7−)/4                      (d2+ + d3+ + d4+)/3

Note: d1+ lowers fuel weight to target; d2+ lowers empty weight to target; d3+ lowers direct operating cost to target; d4+ lowers purchase price to target; d5− raises max. lift/drag to target; d6− raises max. speed to target; d7− raises max. range to target.
The instantiations of the two, four, and six seater GAA based on the PPCEM platform
values are summarized in Table 7.10. These response values are obtained by evaluating the
kriging metamodels at the design variable values listed in Table 7.8 for each scenario. Based
on the low deviation function values listed in Table 7.10, it appears that the PPCEM based
family of aircraft performs reasonably well on an individual basis. The targets for fuel weight
and empty weight are met in all cases. Despite the poor showing of Cdk,doc, the DOC values for
the individual aircraft in Scenarios 1 and 2 are within $2/hr of the target of $60/hr; meanwhile,
the DOC values are near their maximum permitted value ($80/hr) in Scenario 3, when
economics takes second priority to performance. The purchase price goals of {$41000,
$42000, $43000} are within $1000 or less of being met in all cases. The maximum lift/drag
ratio (LDMAX) and cruise speed (VCRMX) do not meet their targets of 17 and 200 kts very
well. The maximum range target (2500 n.m.) is met in Scenario 1 and by all but the six seater
in Scenario 3; the range values for Scenario 2 are slightly below the target.
Table 7.10 Instantiations of the Two, Four, and Six Seater GAA

Design    No. of  WFUEL   WEMP     DOC     PURCH    LDMAX  VCRMX   RANGE   Dev. Fcn.
Scenario  Seats   [lbs]   [lbs]    [$/hr]  [$]             [kts]   [nm]    PLEV1   PLEV2
1         2       447.70  1892.32  61.05   42078.2  16.16  193.56  2542.7  0.018
          4       409.19  1929.50  62.31   42601.3  15.80  190.17  2517.3  0.028
          6       376.65  1959.37  62.54   43138.7  15.68  188.87  2502.8  0.036
2         2       445.06  1895.36  60.52   42221.0  16.16  194.82  2497.3  0.013   0.019
          4       405.92  1932.81  61.83   42749.0  15.81  191.51  2462.0  0.016   0.036
          6       373.48  1962.93  62.20   43296.0  15.68  190.20  2444.0  0.015   0.054
3         2       446.36  1891.95  78.16   42402.6  16.04  198.16  2543.1  0.016   0.112
          4       406.63  1929.39  79.30   42972.1  15.73  195.16  2509.0  0.029   0.115
          6       377.05  1959.14  78.28   43462.1  15.56  193.72  2482.9  0.050   0.105
So what does all this mean in terms of designing a scalable platform for a product
family? Considerable improvement has been made over the initial baseline design, but has a
good family of aircraft been designed? Answers to this question are offered in the next
section, in which verification and the implications of the results are discussed.

To verify the results obtained from implementing the PPCEM, the following questions
are addressed.
• Verify compromise DSP solutions - What do the convergence histories look like for
each scenario? Is the best solution being obtained?
• Verify kriging predictions - How does the predicted performance of the individual
aircraft based on the kriging models compare to the actual performance in GASP?
• Verify PPCEM family - How does the family of aircraft based on the PPCEM
compare to the aggregate group of individually designed benchmark aircraft?
The convergence histories of the GAA compromise DSP solution for Scenario 1 are
illustrated in Figure 7.8. As seen in the figure, all three starting points converge to
approximately the same solution, indicating that the best possible solution has likely been
obtained. The initial deviation function for the high starting point is quite large (~15), while the
initial design based on the middle starting point is slightly infeasible; hence, the jump in iteration 2.
[Figure 7.8: Convergence histories of the deviation function (PLEV1) versus iterations for the low, middle, and high starting points in Scenario 1]
The convergence histories for PLEV1 and PLEV2 for Scenarios 2 and 3 are
illustrated in Figure 7.9. Similar to Figure 7.8, the three starting points for Scenarios 2 and 3
yield a wide range of initial deviation function values, but the model tends to converge to the
same or nearly identical solutions. This trend holds true at both priority levels in both scenarios.
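The multistart check used here (run a local optimizer from several starting points and confirm that the answers agree) can be sketched in a few lines. The quadratic "deviation function" and the plain gradient descent below are stand-ins for illustration only, not the GASP/DSIDES models:

```python
# Sketch of multistart verification: run a local search from several starting
# points and check that all runs agree on the solution.
def grad_descent(grad, x0, lr=0.1, iters=500):
    x = x0
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

# Toy deviation function with a single minimum at x = 2.0 (an assumption for
# illustration; the real GAA deviation function is evaluated through kriging
# metamodels of GASP).
grad = lambda x: 2.0 * (x - 2.0)

# Low, middle, and high starting points, as in the text.
solutions = [grad_descent(grad, x0) for x0 in (-5.0, 0.5, 8.0)]
spread = max(solutions) - min(solutions)
assert spread < 1e-6  # all three starts converge to (nearly) the same point
```

If the starting points converged to different values, the spread would be large, signaling that only a local solution had been found.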
[Figure 7.9: Convergence histories of the deviation functions versus iterations for Scenarios 2 and 3: (a) PLEV1 in Scenario 2, (b) PLEV1 in Scenario 3, (c) PLEV2 in Scenario 2, (d) PLEV2 in Scenario 3]
Note the similarity between PLEV1 in Scenario 2 (Figure 7.9a) and PLEV2 in Scenario 3
(Figure 7.9d) and between PLEV1 in Scenario 3 (Figure 7.9b) and PLEV2 in Scenario 2
(Figure 7.9c), since the same goals are equally weighted at different levels in these scenarios.
Comparing these graphs reveals the true nature of the tradeoffs that occur between the
economic goals and the performance goals. When the economic goals are placed at the first
priority level in Scenario 2, a much lower value of the deviation function can be achieved than
when they are placed at the second priority level as in Scenario 3. The same holds true for the
performance goals in the first priority level in Scenario 3.
Previously, the performance of the individual aircraft has been based on predictions
from the kriging metamodels (see Table 7.10). To verify these predictions, the performance of
the individual aircraft is evaluated directly in GASP as opposed to being estimated from the
kriging metamodels. The results are summarized in Table 7.11 and can be compared directly to
the previous values listed in Table 7.10. The resulting approximation errors between the kriging
model predictions and the actual GASP analyses are given in Table 7.12.
The errors are expressed as a percent of the actual value obtained from GASP; a
positive error indicates overprediction of the response, and a negative error indicates
underprediction. The maximum error occurs for the RANGE of the six seater GAA (3.42%).
In general, the kriging models overpredict PURCH, LDMAX, VCRMX, and RANGE and
underpredict WFUEL and DOC. The average percentage error for each response also is
listed in the table; values range from 0.62% to a high of 2.52%. In summary, then, it appears
that the kriging metamodel predictions are quite accurate based on the error analysis in Table
7.12.
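The error metric used in this comparison can be written down directly; the sample numbers below are hypothetical, not values taken from Table 7.11 or 7.12:

```python
# Percent approximation error of a metamodel prediction relative to the
# actual analysis value; positive = overprediction, negative = underprediction.
def percent_error(predicted, actual):
    return 100.0 * (predicted - actual) / actual

# Hypothetical illustration values (not from the GAA tables):
over = percent_error(2585.0, 2500.0)   # metamodel predicts high -> positive
under = percent_error(440.0, 450.0)    # metamodel predicts low -> negative
assert over > 0 and under < 0
```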
For further verification of the PPCEM aircraft, individual (benchmark) aircraft are
designed using GASP and DSIDES directly to compare against the aircraft obtained through the
implementation of the PPCEM. The compromise DSP for these benchmark aircraft is shown in
Figure 7.10 and is derived from Equations 7.16 through 7.28 for the individual constraints and
goals listed in Table 7.4 and Table 7.5, respectively. As in the individual instantiations of the
PPCEM platform, the deviation variables of interest are d1+, d2+, d3+, d4+, d5−, d6−, and
d7− because it is desired to lower fuel weight, empty weight, direct operating cost, and
purchase price to their targets and to raise maximum lift/drag, cruise speed, and range to theirs.
Given:
o Baseline aircraft configuration and mission profile
o General Aviation Synthesis Program (GASP)
Find:
o The system variables, x:
  • cruise speed, CSPD          • wing loading, WL
  • wing aspect ratio, AR       • engine activity factor, AF
  • propeller diameter, DPRP    • seat width, WS
o The values of the deviation variables associated with G(x):
  • fuel weight, d1−, d1+              • maximum lift/drag, d5−, d5+
  • empty weight, d2−, d2+             • maximum speed, d6−, d6+
  • direct operating cost, d3−, d3+    • maximum range, d7−, d7+
  • purchase price, d4−, d4+
Satisfy:
o The system constraints, C(x), based on kriging models:
  • noise [dBA]: NOISE(x) ≤ 75 dBA [7.18]
  • direct operating cost [$/hr]: DOC(x) ≤ $80/hr [7.19]
  • ride roughness: ROUGH(x) ≤ 2.0 [7.20]
  • aircraft empty weight [lbs]: WEMP(x) ≤ 2200 lbs [7.21]
  • aircraft fuel weight [lbs]: WFUEL(x) ≤ Ci,wfuel [7.22]
  • maximum flight range [nm]: RANGE(x) ≥ 2200 nm [7.23]
o The system goals, G(x), based on kriging models:
  • aircraft fuel weight [lbs]: WFUEL(x)/Ti,wfuel + d1− − d1+ = 1.0 [7.24]
  • aircraft empty weight [lbs]: WEMP(x)/Ti,wemp + d2− − d2+ = 1.0 [7.25]
  • direct operating cost [$/hr]: DOC(x)/60 + d3− − d3+ = 1.0 [7.26]
  • purchase price [$]: PURCH(x)/Ti,purch + d4− − d4+ = 1.0 [7.27]
  • maximum lift/drag: LDMAX(x)/17 + d5− − d5+ = 1.0 [7.28]
  • maximum cruise speed [kts]: VCRMX(x)/200 + d6− − d6+ = 1.0 [7.29]
  • maximum range [nm]: RANGE(x)/2500 + d7− − d7+ = 1.0 [7.30]
o Constraints on deviation variables: di− · di+ = 0 and di−, di+ ≥ 0.
o The bounds on the system variables:
  0.24 M ≤ CSPD ≤ 0.48 M        19 lb/ft2 ≤ WL ≤ 25 lb/ft2
  7 ≤ AR ≤ 11                   85 ≤ AF ≤ 110
  5.0 ft ≤ DPRP ≤ 5.96 ft       14.0 in ≤ WS ≤ 20.0 in
Minimize:
o The sum of the deviation variables associated with:
  • fuel weight, d1+             • maximum lift/drag ratio, d5−
  • empty weight, d2+            • maximum speed, d6−
  • direct operating cost, d3+   • maximum range, d7−
  • purchase price, d4+
  Z = { f1(d1+), f2(d2+), f3(d3+), f4(d4+), f5(d5−), f6(d6−), f7(d7−) }

Figure 7.10 Compromise DSP for the Individual Benchmark Aircraft
Each benchmark compromise DSP is particularized with the appropriate targets and
constraints and solved: Ci,wfuel = {450 lbs, 475 lbs, 500 lbs}, Ti,wfuel = {450 lbs, 400
lbs, 350 lbs}, Ti,wemp = {1900 lbs, 1950 lbs, 2000 lbs}, Ti,purch = {$41000, $42000,
$43000}, and i = {1, 3, 5} passengers. The same three design scenarios are used when
designing each benchmark aircraft; see Table 7.13. All three scenarios are tradeoff studies:
Scenario 1 is an overall tradeoff with all goals weighted equally; Scenario 2 has the economic
goals weighted equally at the first priority level (PLEV1) and the performance goals weighted
equally at the second priority level (PLEV2); and Scenario 3 is the reverse of Scenario 2, with
performance goals ranked first and economics second. The deviation function formulations for
each scenario for the benchmark aircraft are listed in the table. Notice that a combination of
di+ and di− is used in the deviation function and not just di−. This is because it is desired to
lower fuel weight, empty weight, direct operating cost, and purchase price to their targets and
to raise maximum lift/drag, cruise speed, and range to theirs. (With the Cdk formulation, the
only concern is to minimize di− in order to ensure that Cdk ≥ 1.)
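The way a goal of the form G(x)/T + d− − d+ = 1 splits into nonnegative under- and overachievement terms can be sketched directly; the DOC value below simply illustrates overshooting the $60/hr target:

```python
# Deviation variables for a goal of the form response/target + d_minus - d_plus = 1.
# Only one of the two can be nonzero, satisfying d_minus * d_plus = 0.
def deviation_vars(response, target):
    ratio = response / target
    d_minus = max(0.0, 1.0 - ratio)  # underachievement of the target
    d_plus = max(0.0, ratio - 1.0)   # overachievement of the target
    return d_minus, d_plus

# Example: a DOC of $61.05/hr against the $60/hr target overshoots,
# so only d_plus is nonzero.
d_minus, d_plus = deviation_vars(61.05, 60.0)
assert d_minus == 0.0
assert d_minus * d_plus == 0.0
```

Minimizing d+ for the economic goals and d− for the performance goals then drives each response toward its target from the desired side.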
Table 7.13 Design Scenarios for Designing GAA Benchmark Aircraft

                                         Deviation Function
Scenario                  PLEV1                               PLEV2
1. Overall tradeoff       (d1+ + d2+ + d3+ + d4+             -
                           + d5− + d6− + d7−)/7
2. Economic tradeoff      (d2+ + d3+ + d4+)/3                (d1+ + d5− + d6− + d7−)/4
3. Performance tradeoff   (d1+ + d5− + d6− + d7−)/4          (d2+ + d3+ + d4+)/3

Note: d1+ lowers fuel weight to target; d2+ lowers empty weight to target; d3+ lowers direct
operating cost to target; d4+ lowers purchase price to target; d5− raises max. lift/drag to
target; d6− raises max. speed to target; d7− raises max. range to target.
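The deviation function formulations of Table 7.13 can be sketched as code; the deviation-variable values below are hypothetical, not taken from the GAA results:

```python
# Sketch of the Table 7.13 deviation functions. d maps each deviation
# variable name to its value (all values here are hypothetical).
def deviation_functions(d):
    econ = (d["d2+"] + d["d3+"] + d["d4+"]) / 3.0
    perf = (d["d1+"] + d["d5-"] + d["d6-"] + d["d7-"]) / 4.0
    overall = (econ * 3.0 + perf * 4.0) / 7.0
    return {
        1: (overall,),    # Scenario 1: all seven goals weighted equally
        2: (econ, perf),  # Scenario 2: economics at PLEV1, performance at PLEV2
        3: (perf, econ),  # Scenario 3: performance at PLEV1, economics at PLEV2
    }

d = {"d1+": 0.04, "d2+": 0.0, "d3+": 0.03, "d4+": 0.02,
     "d5-": 0.05, "d6-": 0.01, "d7-": 0.0}
z = deviation_functions(d)
assert z[2] == (z[3][1], z[3][0])  # Scenario 3 swaps the two priority levels
```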
As before, three starting points—lower, middle, and upper values—are used when
designing each aircraft for each scenario; the best design is then taken as the one with the
lowest deviation function value. Convergence plots for each aircraft for each scenario are given
separately in Section F.5 and are similar to those observed for the PPCEM solutions in Section
7.6.1. The final settings of the design variables for each aircraft for each of these three design
scenarios are listed in Table 7.14 through Table 7.16 for Scenarios 1-3, respectively. Each set
of results is discussed in turn and plotted graphically with the corresponding PPCEM
instantiations for a quick comparison. The results for Scenario 1 are listed in Table 7.14.
Table 7.14 Individual PPCEM and Benchmark Aircraft for Scenario 1

                     PPCEM Family                  Benchmark Aircraft
                  2 Seat   4 Seat   6 Seat     2 Seat   4 Seat   6 Seat
Design Variables
WS [in]           18.60    18.60    18.60      18.04    18.35    19.45
Responses
WFUEL [lbs]       449.43   413.8    388.49     449.67   408.35   349.97
WEMP [lbs]        1887.15  1921.71  1946.59    1888.23  1926.33  1986.58
DOC [$/hr]        61.98    63.31    63.85      61.6     63.9     64.02
PURCH [$]         41817    42374.5  42827      41607.8  42240    43262.9
LDMAX             15.89    15.61    15.53      16.54    15.8     16.4
VCRMX [kts]       190.83   188.47   187.61     187.47   184.58   181.94
RANGE [nm]        2491     2436     2420       2536     2672     2466
Dev. Fcn.
PLEV1             0.0240   0.0377   0.0506     0.0187   0.0341   0.0303
As can be seen in the table, there is little variation among the design variable settings
for the benchmark aircraft even though they have been designed individually. The benchmark
aircraft share common settings for cruise speed and propeller diameter. Aspect ratio and wing
loading vary only slightly between the aircraft, and the difference in seat widths (WS) is less
than 1.5 in. The engine activity factor varies the most of the six design variables. It is
interesting to note that the PPCEM design variable values are quite close to the benchmark
designs. Seat width, aspect ratio, and engine activity factor are all contained within the range of
settings for the benchmark aircraft. The PPCEM values for cruise speed and propeller
diameter are only slightly larger than the corresponding values shared by all three benchmark
aircraft.
Despite the similarity of the design variable settings for the two families of aircraft, only
the four seater benchmark and PPCEM aircraft have similar deviation function values. The two
and six seater aircraft from the PPCEM are both slightly worse than the benchmark designs as
a result of having a common set of design variables for all three aircraft. To see why this is and to
facilitate comparison of the performance of the two families of aircraft (the one based on the
PPCEM and the group of benchmark aircraft), plots of the individual goal achievements for
each aircraft are given in Figure 7.11 for Scenario 1. The idea of using a “spider” or
“snowflake” plot to show goal achievement comes from Sandgren (1989). In the spider plot,
goal deviation values are plotted on the axes of the web; the closer a mark is on its axis to the
origin, the better that particular goal has been achieved. In this manner, the shape of the
polygon formed by connecting the deviation values for each design can be used to compare
designs quickly. In other words, the two seater aircraft from the PPCEM platform and the
benchmark design can be quickly compared by plotting their goal achievement on the same
spider plot as is done in Figure 7.11 for all three aircraft which comprise the GAA family.
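The geometry behind such a spider plot can be sketched in a few lines: each goal gets its own axis, and the deviation value is the radius along that axis (the plotting itself is omitted; the four deviation values below are hypothetical):

```python
import math

# Place each goal deviation on its own axis of a spider plot; smaller radius
# means better goal achievement. Returns the (x, y) polygon vertices that
# would be connected to compare designs.
def spider_vertices(deviations):
    n = len(deviations)
    verts = []
    for k, r in enumerate(deviations):
        theta = math.pi / 2 - 2.0 * math.pi * k / n  # first axis points straight up
        verts.append((r * math.cos(theta), r * math.sin(theta)))
    return verts

# Hypothetical deviations for four goals (not GAA data):
verts = spider_vertices([0.08, 0.02, 0.05, 0.04])
assert len(verts) == 4
```

Overlaying the polygons of two designs on the same axes gives the quick visual comparison described above: the smaller, more centered polygon is the better design.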
SCENARIO 1 - Overall Tradeoff Study

[Spider plots of goal deviations (WEMP, DOC, PURCH, VCRMX, LDMAX, WFUEL, RANGE) for the 2, 4, and 6 seater PPCEM and benchmark aircraft]

Deviation Functions: PLEV1
            Benchmark   PPCEM Family
2 Seater    0.0187      0.0240
4 Seater    0.0341      0.0377
6 Seater    0.0303      0.0506
Figure 7.11 Graphical Comparison of Benchmark Aircraft and PPCEM Family for
Scenario 1
In the overall tradeoff study (Scenario 1) shown in Figure 7.11, all seven goals are weighted equally. Some observations follow.
• In the two seater aircraft, the achievement of WEMP, DOC, PURCH, WFUEL, and
RANGE appear virtually equal. The PPCEM aircraft exhibits slightly better
achievement of the VCRMX target; however, the benchmark design has better
LDMAX than the PPCEM aircraft, which accounts for the difference between the deviation
functions for these aircraft.
• In the four seater aircraft, the PPCEM solutions perform slightly better at DOC and
VCRMX, but slightly worse with RANGE, WFUEL, and LDMAX. Both aircraft
designs achieve the target for empty weight (WEMP).
• In the six seater aircraft, both designs achieve the WEMP and PURCH targets. DOC
achievement is essentially equal for both aircraft. The PPCEM designs yield slightly
better VCRMX than the benchmark aircraft; however, the benchmark design
outperforms the PPCEM design in WFUEL, LDMAX, and RANGE. It appears that
the difference in achievement of LDMAX and WFUEL accounts for the large
discrepancy between the two deviation functions for the six seater aircraft because the
achievement of the other goals is comparable for both aircraft.
The results for Scenario 2 are summarized in Table 7.15. As seen in Scenario 1, the
cruise speeds for the PPCEM aircraft and the benchmark aircraft are essentially the same. The
aspect ratio for the PPCEM aircraft is contained within the range of the benchmark designs but
is on the low end. The propeller diameter for the PPCEM aircraft is slightly higher than that of
the benchmark aircraft, which again have nearly identical propeller diameters despite being
designed individually. The wing loading for the PPCEM aircraft is about 2 lb/ft2 lower than that
of the benchmark designs, whose values vary by only about 0.7 lb/ft2 among the three aircraft.
The engine activity factors for the benchmark aircraft vary from 85 to a high of 109; the
PPCEM aircraft have a value of 89.4, which falls within the range of the benchmark designs.
Finally, the seat widths for the benchmark designs are lower than for the PPCEM aircraft.
Table 7.15 Individual PPCEM and Benchmark Aircraft for Scenario 2
The resulting deviation functions for the two families of aircraft are comparable, with
the benchmark designs having consistently lower PLEV1 (deviation function value at priority
level 1) but slightly larger PLEV2 than the PPCEM aircraft. In the preemptive (i.e.,
lexicographic) case, however, having lower PLEV2 values does not matter unless the PLEV1
values are the same. The first level deviation function value for the four seater benchmark
design is zero, indicating that the design is capable of meeting all of its designated targets. The
two and six seater benchmark designs also both fare well at achieving their targets, having
PLEV1 values of 0.01 and 0.001, respectively. The PPCEM aircraft, on the other hand, have
PLEV1 values which are slightly worse, and the four seater PPCEM design does not achieve
all of its targets as did the
benchmark design. The discrepancies between the goal achievement of the two families of
aircraft can be seen in the spider plot for Scenario 2 shown in Figure 7.12.
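The preemptive comparison just described maps directly onto lexicographic tuple comparison; the (PLEV1, PLEV2) values below are hypothetical:

```python
# In the preemptive (lexicographic) formulation, deviation functions are
# compared level by level: PLEV2 only matters when the PLEV1 values tie.
# Python tuple comparison behaves exactly this way.
benchmark = (0.000, 0.021)  # hypothetical (PLEV1, PLEV2)
ppcem = (0.016, 0.019)

assert benchmark < ppcem            # lower PLEV1 wins despite a higher PLEV2
assert (0.01, 0.05) > (0.01, 0.02)  # equal PLEV1: PLEV2 breaks the tie
```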
Figure 7.12 Graphical Comparison of Benchmark Aircraft and PPCEM Family for
Design Scenario 2, Priority Level 1 Only
In Figure 7.12, the results of the economic tradeoff study (Scenario 2) are illustrated.
Only the three economic related goals considered in the first priority level are shown:
empty weight (WEMP), direct operating cost (DOC), and purchase price (PURCH). Some
observations follow.
• Both sets of aircraft achieve the desired targets for empty weight.
• The purchase price (PURCH) for the two seater aircraft from the PPCEM is slightly lower
than that of the benchmark aircraft; however, the purchase price for the four seater
PPCEM design is slightly higher than its comparative benchmark. Both six seater
aircraft do equally well at achieving the target.
• The DOC values for the PPCEM solutions are higher for all three aircraft than those
of the individually designed benchmark aircraft; the inability to achieve the DOC
target is the main cause of the large discrepancy in the deviation function value
(PLEV1) for the PPCEM aircraft.
Finally, the results for Scenario 3 are summarized in Table 7.16. Notice that the
PPCEM cruise speed values are larger than all three benchmark designs while the aspect ratio is
less. The propeller diameter again is slightly larger for the PPCEM designs than for the
benchmark aircraft. The wing loading for both families of aircraft is comparable, but the
engine activity factor for the PPCEM tends to be on the lower end of the benchmark aircraft
settings. The seat width for the PPCEM is within the range of seat widths found for the
benchmark designs and is nearly identical to that of the four seater benchmark aircraft, with the
two seater being slightly smaller and the six seater slightly larger.
Table 7.16 Individual PPCEM and Benchmark Aircraft for Scenario 3

                     PPCEM Family                  Benchmark Aircraft
                  2 Seat   4 Seat   6 Seat     2 Seat   4 Seat   6 Seat
Design Variables
AF                85.63    85.63    85.63      101.11   95.86    85
WS [in]           18.70    18.70    18.70      18.22    18.88    19.42
Responses
WFUEL [lbs]       450.92   415.34   389.84     449.25   399.57   350.17
WEMP [lbs]        1887     1921.7   1946.82    1888.8   1937.72  1986.37
DOC [$/hr]        77.43    79.13    79.76      68.03    72.92    64.19
PURCH [$]         42150    42727    43190.3    41871    42699.8  43237.2
LDMAX             15.79    15.52    15.44      16.32    16.05    16.44
VCRMX [kts]       195.53   193.32   192.46     190.62   187.99   181.68
RANGE [nm]        2499     2451     2437       2494     2497     2478
Dev. Fcn.
PLEV1             0.0240   0.0446   0.0671     0.0223   0.0293   0.0335
PLEV2             0.1062   0.1120   0.1112     0.0517   0.0772   0.0251
The deviation function values at priority level 1 (PLEV1) for the two families of aircraft
exhibit similar trends to those seen previously, except this time it is the two seater aircraft which
have comparable achievement of their first level goals, not the four seater aircraft as in the
previous scenario. The PPCEM four seater aircraft deviation function (PLEV1) is about 1.5
times that of the benchmark design, while the six seater PPCEM value is about twice that of the
benchmark.
Unlike in Scenario 2, however, the benchmark designs also have lower PLEV2 compared to
the PPCEM designs. To see the discrepancy between the individual goal achievement at the
first priority level, the deviation variables for the four performance goals which are considered at
the first priority level—fuel weight (WFUEL), maximum lift to drag ratio (LDMAX), maximum
cruise speed (VCRMX), and maximum flight range (RANGE)—are plotted in Figure 7.13.
• The fuel weight target is met by all three benchmark aircraft while the PPCEM aircraft
do not achieve their target. In fact, the PPCEM aircraft exhibit increasingly worse
achievement of the target as the aircraft is scaled to accommodate more passengers,
which accounts for the increase in PLEV1 for the four and six seater PPCEM
aircraft.
• Neither family of aircraft achieves the target for maximum lift/drag ratio well; however,
the benchmark aircraft consistently perform better.
• All three of the PPCEM aircraft do better at achieving the target for maximum cruise
speed than do the individually designed benchmark aircraft.
• The PPCEM aircraft have only slightly worse RANGE achievement than the benchmark
aircraft.
SCENARIO 3 - Performance Tradeoff Study

[Spider plots of the first priority level goal deviations (WFUEL, LDMAX, VCRMX, RANGE) for the 2, 4, and 6 seater PPCEM and benchmark aircraft]

Deviation Functions: PLEV1
            Benchmark   PPCEM Family
2 Seater    0.0223      0.0240
4 Seater    0.0293      0.0446
6 Seater    0.0335      0.0671
Figure 7.13 Graphical Comparison of Benchmark Aircraft and PPCEM Family for
Design Scenario 3, Priority Level 1 Only
In summary, in order to improve the commonality of the aircraft within the GAA
product family, the overall performance of the individual aircraft within the product family
decreases. This decrease in performance, however, varies from aircraft to aircraft and scenario
to scenario. The question that the designers/managers now face is: how much performance are
they willing to sacrifice in order to have as common a product platform as possible? Ideally,
minimal performance would have to be sacrificed to
increase commonality between derivative products, but it appears that a tradeoff does exist as
one might expect. In reality, however, it would not be known how much performance was
being sacrificed by designing a common platform for the product family because benchmark
designs would not necessarily exist (unless this was a redesign process).
To assess the tradeoff between product commonality and product performance within
the family of GAA, a product variety tradeoff study is performed using the PDI and NCI
measures described in Section 3.1.5. Currently, there are two points on the PDI vs. NCI
graph: the family of aircraft based on the PPCEM solutions and the group (family) of benchmark
aircraft which have been individually designed. What is interesting to study is the effect of
allowing one or more design variables to vary in the PPCEM for each aircraft while holding the
remaining variables constant at the platform values found using the PPCEM. In this manner, the
PPCEM facilitates generating a variety of alternatives for the product platform and
corresponding product family. By allowing one or more variables to vary between aircraft, the
performance of the individual aircraft within the resulting product family can be improved such
that there is minimal tradeoff between product commonality and performance. Before this
tradeoff can be assessed, however, the relative importance of the design variables is needed in
order to weight the NCI.
The weightings in NCI used in this study are based on rank ordering the design
variables with regard to relative ease/cost with which they can be allowed to vary—the more
costly it is to allow that variable to change, the more important it is to have that variable stay the
same across derivative products. For this example, the weightings listed in Table 7.17 are used.
Cruise speed (CSPD) is the easiest/cheapest variable to allow to vary between designs because
it is easy to vary the cruise speed throughout the mission without having to make any
modifications to the aircraft; meanwhile, seat width (WS) is the most expensive to allow to vary
because it is costly not to have the same fuselage width (fuselage width being directly
proportional to seat width) for all of the aircraft within the GAA family. These weights are
derived from a pairwise comparison of the design variables; the justification for the pairwise
comparison and computation of the rank ordering and relative importance are explained in
Section F.4.1.
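One way such a rank ordering can be turned into weights is sketched below. The normalization scheme and the ordering of the middle four variables are assumptions for illustration (the actual computation is given in Section F.4.1); only CSPD as cheapest and WS as most costly follow directly from the text:

```python
# Sketch: derive relative-importance weights from a pairwise comparison.
# P[i][j] = 1 if variable i is more costly to allow to vary than variable j.
# The ordering of the middle four variables is hypothetical.
variables = ["CSPD", "DPRP", "AF", "AR", "WL", "WS"]  # cheapest -> most costly
P = [[1 if i > j else 0 for j in range(6)] for i in range(6)]

# Score each variable by its number of "wins" (+1 so no weight is zero),
# then normalize so the weights sum to one.
row_scores = [sum(row) + 1 for row in P]
total = sum(row_scores)
weights = {v: s / total for v, s in zip(variables, row_scores)}

assert abs(sum(weights.values()) - 1.0) < 1e-12
assert weights["WS"] == max(weights.values())  # most costly -> largest weight
```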
For this product variety study, two design scenarios are considered: the economic
tradeoff study (Scenario 2) and the performance tradeoff study (Scenario 3) listed in Table
7.13. For these two scenarios, the individual PPCEM and benchmark aircraft are listed in
Table 7.15 and Table 7.16 for Scenarios 2 and 3, respectively. The resulting PDI and NCI for
each group of aircraft based on these design variable values are computed and listed in Table
7.18 and Table 7.20 for Scenarios 2 and 3, respectively; remember that only the first priority
level is used when computing PDI, and the weightings used in the NCI are the relative
importances listed in Table 7.17 and do not include the variation in the scale factor.
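The NCI and PDI are defined in Section 3.1.5, which is not reproduced here; as a loudly hedged stand-in, the sketch below computes one *plausible* weighted non-commonality measure (the spread of each design variable across the family, normalized by its bounds and weighted by importance). The actual NCI formula may differ:

```python
# Hypothetical non-commonality measure (NOT necessarily the Section 3.1.5
# definition): importance-weighted, bound-normalized spread of each design
# variable across the products in the family.
def nci_like(designs, bounds, weights):
    total = 0.0
    for var, w in weights.items():
        vals = [d[var] for d in designs]   # one value per product
        lo, hi = bounds[var]
        total += w * (max(vals) - min(vals)) / (hi - lo)
    return total

bounds = {"WS": (14.0, 20.0)}   # seat width bounds from the compromise DSP
weights = {"WS": 1.0}

# A common platform value gives zero non-commonality:
common = nci_like([{"WS": 18.6}] * 3, bounds, weights)
# Individually designed seat widths give a positive value:
varied = nci_like([{"WS": 18.04}, {"WS": 18.35}, {"WS": 19.45}], bounds, weights)
assert common == 0.0 and varied > 0.0
```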
Knowing the two extremes of the PDI vs. NCI curve, it is possible to work
“backward” along the curve from the PPCEM solutions toward the benchmark designs by
allowing one or more design variables to vary between each aircraft while holding the others
fixed at the platform values. The procedure is as follows.
1. Starting with the individual PPCEM aircraft, vary one variable at a time for each aircraft;
for instance, hold {AF, AR, CSPD, DPRP, WL} at the settings prescribed by the
PPCEM platform and vary WS for each aircraft to improve the performance of that
aircraft as much as possible. This entails solving a compromise DSP for each aircraft,
with the PPCEM value for WS taken as the starting point in DSIDES. All six variables
are allowed to vary one at a time from the PPCEM platform values, solving a
compromise DSP for each aircraft for each variable. NCI and PDI are then computed
for each of the six resulting aircraft families, e.g., the family of aircraft which share
common {AF, AR, CSPD, DPRP, WL} but varying WS.
2. Repeat Step 1, allowing any two variables to vary at a given time between aircraft from
the PPCEM platform values. There are 15 possible pairs of variables which are varied
two at a time, and a compromise DSP is solved for each possible pair with the
PPCEM value taken as the starting point. NCI and PDI are computed for each of the
15 resulting product families.
3. Repeat Step 1, allowing any three variables to vary at a given time between aircraft. In
order to reduce the number of combinations that must be examined, CSPD is not varied
from aircraft to aircraft because it is known not to change much between aircraft in the
group of benchmark designs. Hence, only AF, AR, DPRP, WL, and WS are allowed
to vary from their PPCEM platform values, resulting in 10 different combinations of the
five variables taken three at a time. NCI and PDI are computed for each of the 10
resulting product families.
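The enumeration in Steps 1-3 can be reproduced with the standard library; the counts match the 6, 15, and 10 combinations cited above:

```python
from itertools import combinations

variables = ["AF", "AR", "CSPD", "DPRP", "WL", "WS"]

one_at_a_time = list(combinations(variables, 1))    # Step 1: 6 families
two_at_a_time = list(combinations(variables, 2))    # Step 2: 15 families

# Step 3 holds CSPD fixed, leaving 5 variables taken 3 at a time.
no_cspd = [v for v in variables if v != "CSPD"]
three_at_a_time = list(combinations(no_cspd, 3))    # 10 families

assert (len(one_at_a_time), len(two_at_a_time), len(three_at_a_time)) == (6, 15, 10)
```

Each combination then corresponds to one compromise DSP solve per aircraft, with NCI and PDI computed for the resulting family.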
In all, 31 combinations of design variables (6 + 15 + 10) are examined in the product
variety study for each scenario. The resulting NCI and PDI are listed in Table 7.18 for
Scenario 2 and in Table 7.20 for Scenario 3. In the tables, the results are grouped by the
number of variables where the variables not listed are being held constant. For instance, in
Table 7.18 the NCI and PDI for the family of aircraft when allowing WL to vary from one
aircraft to the next are 0.0171 and 0.0155, respectively, with all other design variables fixed at
the PPCEM values; when AF and WL are allowed to vary from one aircraft to the next, the
resulting NCI and PDI for the group of products are 0.0234 and 0.0150, respectively.
Table 7.18 Product Variety Tradeoff Study - Scenario 2

                                           NCI      PDI
Benchmark Designs (each aircraft is
  optimized; all variables can vary)       0.1795   0.0038
PPCEM Designs using Cdk (each aircraft
  is designed to have the same variables)  0.0000   0.0184

Allow 1 variable to vary between aircraft from the PPCEM designs:
  AF               0.0178   0.0181
  AR               0.0040   0.0181
  CSPD             0.0000   0.0182
  DPRP             0.0026   0.0179
  WL               0.0171   0.0155
  WS *             0.0381   0.0117

Allow 2 variables to vary between aircraft from the PPCEM designs:
  AF, AR           0.0147   0.0181
  AF, CSPD         0.0178   0.0181
  AF, DPRP         0.0155   0.0175
  AF, WL           0.0234   0.0150
  AF, WS           0.0509   0.0113
  AR, CSPD         0.0041   0.0181
  AR, DPRP         0.0172   0.0175
  AR, WL           0.0230   0.0147
  AR, WS           0.0559   0.0096
  CSPD, DPRP       0.0027   0.0179
  CSPD, WL         0.0233   0.0152
  CSPD, WS         0.0382   0.0117
  DPRP, WL         0.0249   0.0154
  DPRP, WS         0.0528   0.0106
  WL, WS *         0.0702   0.0086

Allow 3 variables to vary between aircraft from the PPCEM designs:
  AF, AR, DPRP     0.0110   0.0181
  AF, AR, WL       0.0370   0.0146
  AF, AR, WS       0.0848   0.0100
  AF, DPRP, WL     0.0495   0.0154
  AF, DPRP, WS     0.0672   0.0147
  AF, WL, WS       0.1068   0.0081
  AR, DPRP, WL     0.0405   0.0151
  AR, DPRP, WS     0.0572   0.0107
  AR, WL, WS *     0.0803   0.0068
  DPRP, WL, WS     0.0701   0.0075

* Best increase in PDI for the given number of varied variables.
Table 7.18 also indicates the best increase in PDI which can be achieved by allowing
1, 2, or 3 variables to vary at a given time. So, if only one variable is allowed to vary between
aircraft, then allowing WS to vary yields the best improvement in
Scenario 2; if two can vary, then WL and WS should be allowed to vary; if three can vary, then
varying AR, WL, and WS yields the best improvement. The complete set of results for each
scenario for each aircraft is listed in Section F.4. Plots of NCI versus PDI for each scenario
follow each table and are discussed in turn. The PDI and NCI values for Scenario 2 are plotted
in Figure 7.14.
[Figure 7.14: PDI versus NCI for the Scenario 2 product variety tradeoff study, showing the PPCEM (Cdk) solution, the benchmark designs, and the families obtained by allowing 1, 2, or 3 variables to vary; ΔPDI_i denotes the best change in PDI when i variables are allowed to vary]
Notice that the PPCEM solution using the Cdk formulation yields the top left point in
Figure 7.14; the individual benchmark designs provide the bottom right point, with all of the
variations on the PPCEM Cdk solutions falling in between the two, creating an envelope of
possible combinations of NCI and PDI. As highlighted in Table 7.18, varying {WS}, {WL,
WS}, and {AR, WL, WS} yields the best improvement in PDI if 1, 2, and 3 variables,
respectively, are allowed to vary between each of the PPCEM aircraft; notice that these points
lie on the front of the product variety envelope. In general, as more design variables are
allowed to vary, the performance of the product family (PDI) improves at the expense of
commonality (NCI).
Is there any way to move down this curve without having to look at all possible
combinations? As it turns out, the design variables that have the most impact on the
performance of aircraft are the ones that progress down the front of the curve. This information
can be obtained from a statistical Analysis of Variance (ANOVA) of the data used to build the
kriging metamodels in Step 3 of the PPCEM. The full ANOVA for the family of GAA is given
in Section F.3, and Pareto plots based on the results of the ANOVA are illustrated in Figure
7.15. The Pareto plots provide a means of quickly identifying which variables have the most
impact on a particular response; the larger the horizontal bar, the more influence a variable has
on the response. In Figure 7.15, only the effects of the design variables on the response means
have been plotted because they govern the average performance of the GAA family.
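The ranking a Pareto plot conveys can be approximated by computing main-effect ranges from a designed experiment; the 2x2 toy data and the "purch" response below are hypothetical, not GASP output:

```python
# Main-effect range of a variable: average the response at each level of the
# variable, then take the spread of those averages. This is the quantity a
# Pareto plot of main effects displays.
def main_effect_range(runs, var, response):
    levels = {}
    for run in runs:
        levels.setdefault(run[var], []).append(run[response])
    means = [sum(v) / len(v) for v in levels.values()]
    return max(means) - min(means)

# Toy 2^2 factorial in WS and CSPD with a hypothetical purchase-price response:
runs = [
    {"WS": 14, "CSPD": 0.24, "purch": 41000},
    {"WS": 20, "CSPD": 0.24, "purch": 43000},
    {"WS": 14, "CSPD": 0.48, "purch": 41200},
    {"WS": 20, "CSPD": 0.48, "purch": 43200},
]
effects = {v: main_effect_range(runs, v, "purch") for v in ("WS", "CSPD")}
ranking = sorted(effects, key=effects.get, reverse=True)
assert ranking == ["WS", "CSPD"]  # WS dominates this toy response
```

Ranking the variables by these ranges, response by response, yields a table of the kind shown in Table 7.19.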
Based on these Pareto plots for the GAA response means, the effect of each factor on
each response can be ranked by order of importance, see Table 7.19. In the table, 1 indicates
most important and 6 the least. So for example, the seat width (WS) has the largest effect on
the purchase price (PURCH) and cruise speed (CSPD) has the least. The economic responses
in the first priority level in Scenario 2 are shown in the top half of the table; the performance
responses which are in the first priority level in Scenario 3 are shown in the bottom half of the
table.
[Figure 7.15: Pareto plots of the design variable effects on each GAA response mean, panels (a) through (g), including LDMAX]
Table 7.19 Rank Ordering of Design Variable Effects on the GAA Response Means

Importance on Economic Related Goals
Response    1      2      3      4      5      6
DOC         CSPD   DPRP   WL     AR     WS     AF
WEMP        WS     AR     WL     DPRP   AF     CSPD
PURCH       WS     AR     DPRP   WL     AF     CSPD

Importance on Performance Related Goals
Response    1      2      3      4      5      6
LDMAX       AR     WL     WS     CSPD   DPRP   AF
RANGE       WL     WS     DPRP   AF     AR     CSPD
WFUEL       WS     WL     AR     DPRP   AF     CSPD
VCRMX       WL     DPRP   WS     AR     CSPD   AF
Returning to the tradeoff study for Scenario 2, the design variables that shape the front
of the envelope when allowed to vary are as follows: WS, WL, {AR, WS}, {WS, WL}, {WS,
WL, DPRP}, and {AR, WL, WS}. Looking at the rank ordering of importance in Table 7.19,
it can be seen that these combinations consist of the variables that have the largest effect
on the responses. WS has the largest effect on WEMP and PURCH, two of the three
economic responses in Scenario 2; {WS, AR} are the two most important factors for both of
these economic responses; and {AR, WL, WS} are among the top three variables that are
most important to the three economic responses in Scenario 2. Thus, by allowing the design
variables with the most impact to vary between aircraft while keeping the others fixed,
substantial gains in performance can be achieved with minimal loss of commonality.
To see if the same holds true in Scenario 3, the NCI and PDI values for Scenario 3 are
listed in Table 7.20 and plotted in Figure 7.16. As highlighted in the table, the best
improvement in PDI can be obtained by allowing AR to vary between aircraft if only one
variable is allowed to vary. Notice in Table 7.19 that AR is most important to LDMAX.
Recall from Figure 7.13 that the largest discrepancy between the PPCEM family of aircraft and
the benchmark aircraft occurs in the achievement of LDMAX; by allowing AR to vary
between aircraft within the PPCEM family, each aircraft is able to achieve better LDMAX,
resulting in a lower PDI for the PPCEM product family. Meanwhile, if two variables are
allowed to vary, then AF and WL yield the best improvement in PDI because WL has a large
impact on both RANGE and VCRMX. In the three variable case, varying DPRP, WL, and
WS yields the best improvement; notice that all three of these variables are among the most
important to the performance responses ranked in Table 7.19.
Table 7.20 Product Variety Tradeoff Study - Scenario 3

                                           NCI      PDI
Benchmark Designs (each aircraft is
  optimized; all variables can vary)       0.0918   0.0284
PPCEM Designs using Cdk (each aircraft
  is designed to have the same variables)  0.0000   0.0452

Allow 1 variable to vary between aircraft from the Cdk designs:
  AF               0.0267   0.0452
  AR *             0.0059   0.0434
  CSPD             0.0000   0.0453
  DPRP             0.0013   0.0452
  WL               0.0010   0.0437
  WS               0.0068   0.0443

Allow 2 variables to vary between aircraft from the Cdk designs:
  AF, AR           0.0269   0.0430
  AF, CSPD         0.0000   0.0453
  AF, DPRP         0.0193   0.0450
  AF, WL *         0.0119   0.0390
  AF, WS           0.0159   0.0440
  AR, CSPD         0.0017   0.0452
  AR, DPRP         0.0101   0.0428
  AR, WL           0.0082   0.0402
  AR, WS           0.0183   0.0404
  CSPD, DPRP       0.0000   0.0453
  CSPD, WL         0.0049   0.0397
  CSPD, WS         0.0093   0.0449
  DPRP, WL         0.0150   0.0401
  DPRP, WS         0.0081   0.0446
  WL, WS           0.0071   0.0422

Allow 3 variables to vary between aircraft from the Cdk designs:
  AF, AR, DPRP     0.0530   0.0430
  AF, AR, WL       0.0085   0.0394
  AF, AR, WS       0.0361   0.0399
  AF, DPRP, WL     0.0272   0.0388
  AF, DPRP, WS     0.0158   0.0442
  AF, WL, WS       0.0349   0.0408
  AR, DPRP, WL     0.0135   0.0414
  AR, DPRP, WS     0.0203   0.0397
  AR, WL, WS       0.0154   0.0417
  DPRP, WL, WS *   0.0194   0.0361

* Best increase in PDI for the given number of varied variables.
The PDI and NCI values for Scenario 3 are plotted in Figure 7.16. As in the previous
graph for Scenario 2, the PPCEM solution using the Cdk formulation yields the top left point; the
individual benchmark designs provide the bottom right point. Notice that the combinations of
design variables which move the family of aircraft down the front of the product variety
envelope are, in general, the ones which are rank ordered highest in Table 7.19. Notice also
that more than half of ΔPDIlost can be gained back if {DPRP, WL, WS} are allowed to vary
between each aircraft.
Figure 7.16 Product Variety Tradeoff Study Results - Scenario 3. The plot shows PDI versus
NCI (weighted by importance) for the Cdk platform solution, the one-, two-, and three-variable
variations (CdkVary1, CdkVary2, CdkVary3), and the benchmark designs; ΔPDIi denotes the
best change in PDI obtained by allowing i variables to vary between each aircraft design, and
ΔPDIlost and ΔNCIgain mark the tradeoff between the Cdk and benchmark solutions.
As observed when comparing the benchmark and PPCEM aircraft in Section 7.1.2, considerable improvement in
the performance of the PPCEM family of aircraft can be obtained by allowing 1 or more
variables to vary between aircraft while holding the remainder of the variables at the platform
setting. In the product variety study performed in this section, it has been shown how statistical
analysis of variance can be used to traverse the front of the product variety tradeoff envelope,
maximizing the gains in PDI with minimal loss in commonality. It is now up to the discretion of
the designers/managers to evaluate the implications of this tradeoff on inventory, production, and
sales to decide the appropriate compromise between commonality and performance. A closer
look at some of the lessons learned from this example is offered in the next section along with a
critical evaluation of the PPCEM.
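The selection logic used in this tradeoff study can be sketched with a short script. The values below are transcribed from a few representative rows of Table 7.20 (Scenario 3); the helper name best_subset is a hypothetical illustration, not code from the dissertation:

```python
# Sketch: traversing the product variety tradeoff envelope (Scenario 3).
# For each number of variables allowed to vary between aircraft, pick the
# combination yielding the lowest PDI (performance deviation) relative to
# the common-platform Cdk solution. Values transcribed from Table 7.20.

cdk = {"NCI": 0.0000, "PDI": 0.0452}  # PPCEM platform, all variables common

candidates = {
    ("AR",):              {"NCI": 0.0059, "PDI": 0.0434},
    ("WL",):              {"NCI": 0.0010, "PDI": 0.0437},
    ("AF", "WL"):         {"NCI": 0.0119, "PDI": 0.0390},
    ("CSPD", "WL"):       {"NCI": 0.0049, "PDI": 0.0397},
    ("AF", "DPRP", "WL"): {"NCI": 0.0272, "PDI": 0.0388},
    ("DPRP", "WL", "WS"): {"NCI": 0.0194, "PDI": 0.0361},
}

def best_subset(n_vary):
    """Among combinations of size n_vary, return the one with lowest PDI."""
    subsets = {k: v for k, v in candidates.items() if len(k) == n_vary}
    return min(subsets.items(), key=lambda kv: kv[1]["PDI"])

for n in (1, 2, 3):
    combo, vals = best_subset(n)
    print(n, combo, "PDI improvement:", round(cdk["PDI"] - vals["PDI"], 4))
```

Run on the table values, this reproduces the choices discussed above: AR for one variable, {AF, WL} for two, and {DPRP, WL, WS} for three.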
In this chapter, the PPCEM is applied in full to the design of a family of General
Aviation aircraft. The GAA family is based on a common scalable product platform which is
scaled around the number of passengers in much the same way that Boeing has scaled their 747
series of aircraft around the capacity and flight range (cf., Rothwell and Gardiner, 1990). The
market segmentation grid has been used to help identify an appropriate leveraging strategy for
the family of aircraft based on the initial problem statement, i.e., horizontally leverage the family
of GAA to satisfy a variety of low-end market segments. Each aircraft eventually could be
vertically scaled as well, through the addition and removal of features as technology improves,
to satisfy higher price/performance tiers within the market.
Particularization of the PPCEM for this example occurs through GASP, the General
Aviation Synthesis Program, which is used to model and simulate the performance of each
aircraft mathematically. Kriging metamodels for response means and variances are employed
within the PPCEM to facilitate the implementation of robust design based on GASP analyses.
These kriging metamodels then are used in conjunction with design capability indices and a
GAA compromise DSP to synthesize a robust aircraft platform which is scalable into a family of
aircraft.
Three different design scenarios are used to exercise the GAA compromise DSP to
create alternative product platforms and the product platform portfolio. Instantiation of the
individual aircraft within the PPCEM family reveals that the PPCEM provides an effective
means for designing a common scalable aircraft platform for the family of GAA. However,
upon comparison with individually designed benchmark aircraft, a tradeoff is found to exist
between having a common set of design variables which define the aircraft platform and the
performance of the scaled derivatives based on that platform. To examine the extent to which
this tradeoff occurs, a product variety tradeoff study is performed using the PPCEM to
demonstrate the ease with which alternative product platforms and product families can be
generated and to make use of the NCI and PDI measures proposed in Section 3.1.5. It is
observed that considerable improvement can be made by allowing one or more variables to
vary between each aircraft based on the original PPCEM platform; however, commonality
between the aircraft is sacrificed. To determine which variables to vary, ANOVA of the data
used to build the kriging metamodels in Step 3 of the PPCEM can be used to determine the
variables that have the largest effect on each response, allowing the front portion of the product
variety envelope to be traversed for maximum improvement in PDI with minimal loss of
commonality. The implications of this tradeoff on inventory, production, and sales must be
evaluated by the designers and managers involved; the intent of the PPCEM is to generate
alternatives for the common product platform and corresponding product family and not to
select among them.
Sub-Hypothesis 1.1 - The market segmentation grid is utilized in Section 7.1.3 to help
identify an appropriate (horizontal) scale factor for the family of GAA—the number of
passengers—in order to achieve the desired platform leveraging based on the problem
objectives; this further supports Sub-Hypothesis 1.1.
Sub-Hypothesis 1.2 - The scale factor for the GAA product family is the number of
passengers, see Sections 7.1.3 and 7.2. Robust design principles then are used in this
example to develop an aircraft platform—defined by six design variables—which is
insensitive to variations in the scale factor and is thus good for the family of General
Aviation aircraft based on the two, four, and six seater configurations. The success of
this implementation helps to support Sub-Hypothesis 1.2.
Sub-Hypothesis 1.3 - Design capability indices are utilized in this example to aggregate
individual targets and constraints and to facilitate the design of a family of General
Aviation aircraft. Combining this formulation with the compromise DSP allows a family
of GAA to be designed around a common, scalable product platform, further verifying
Sub-Hypothesis 1.3.
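The aggregation that a design capability index performs can be sketched by analogy with the familiar process capability index Cpk; the exact Cdk formulation is developed earlier in the dissertation, so the form below is an assumption based on that analogy:

```python
# Sketch of a design capability index by analogy with Cpk: given the mean
# and standard deviation of a response across the family of designs and a
# ranged requirement [lrl, url], an index >= 1 suggests the family is
# capable of satisfying the requirement range. (Assumed form, not the
# dissertation's exact Cdk.)

def design_capability_index(mean, std, lrl, url):
    c_lower = (mean - lrl) / (3.0 * std)   # margin against the lower limit
    c_upper = (url - mean) / (3.0 * std)   # margin against the upper limit
    return min(c_lower, c_upper)

# A family whose aggregated response is centered in the requirement range
centered = design_capability_index(mean=50.0, std=2.0, lrl=40.0, url=60.0)
# A family whose mean drifts toward the upper requirement limit
drifted = design_capability_index(mean=58.0, std=2.0, lrl=40.0, url=60.0)
```

The index collapses an entire ranged requirement into a single goal suitable for a compromise DSP formulation, which is the role it plays in the GAA example.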
So, are the solutions obtained from the PPCEM useful? The PPCEM has been
used to generate a variety of feasible options for the GAA platform and corresponding family of
aircraft. While there is some tradeoff between the performance of the individual aircraft based
on the common product platform and that of the individually designed benchmark
aircraft, the increased commonality between the design specifications of each aircraft (i.e.,
aspect ratio, seat width, propeller diameter, etc.) should generate sufficient savings to offset the
minimal loss in performance. Regardless of whether it does or not, the family of aircraft
obtained using the PPCEM yields considerable improvement over the initial family of aircraft
based on the baseline Beechcraft Bonanza design, see Table 7.8 and the discussion thereafter.
Are the time and resources consumed within reasonable limits? Basically, the
PPCEM has been used to design a family of three aircraft almost as efficiently as a single
aircraft. The initial start-up cost of using the PPCEM in this example is about one day, which is
the time it takes to sample the GAA design space and construct kriging metamodels to
approximate GASP. Once this is accomplished, the computational savings resulting from using
the kriging metamodels in place of GASP are comparable to, if not greater than, those obtained in
the universal electric motor example in Chapter 6. Consider, for instance, the time required for
a single GASP analysis; the kriging metamodels, by comparison, require approximately 0.25
seconds to run once they are fit. The computational savings
from using metamodels in the PPCEM are substantial when one considers the large numbers of
design scenarios and tradeoff studies used in this chapter and in Appendix F, not to mention the
fact that multiple starting points are used in all cases. The cost savings (in terms of number of
analysis) are not as clear cut as they are in the universal motor example in Chapter 6 (see
Section 6.5.3); therefore, they are not estimated. However, the discussion in (Simpson, 1995)
regarding the cost savings of using approximations to replace GASP sheds some light on this issue.
Is the work grounded in reality? As stated in Section 7.1.3, the baseline design (i.e.,
starting point) for the GAA product family is the Beechcraft Bonanza B36TC presented in
Section 7.1.3. While the Beechcraft Bonanza is only a six seater aircraft, its specifications are
employed in GASP to provide a family of baseline designs to compare with the PPCEM family of
aircraft based on a common scalable platform. Discussion of these results in Section 7.5.1
reveals that the PPCEM solutions are able to improve upon both the technical and economic
aspects of the baseline design. While these
improvements are slightly less than the improvements obtained by individually designing each
aircraft (i.e., the benchmark designs), the time savings resulting from using the PPCEM to design
the family of three aircraft simultaneously can be used to “tweak” the individual designs as
needed to ensure adequate performance and product quality. It still stands, however, that the
PPCEM solutions, even with all six design variables held at the common product platform
specifications, yield improvement over the baseline design. The results of the product variety
tradeoff studies discussed in Section 7.6.4 provide several options to improve the PPCEM
family of aircraft.
Finally, do the benefits of the work outweigh the cost? The true benefit from using
the PPCEM in a problem like this is the wealth of information that is obtained during its
implementation. The PPCEM greatly facilitates the generation of a variety of alternatives for a
common product platform and its corresponding scaled derivative products. Use of the
PPCEM permits the product platform and the scaled product family to be designed
simultaneously, thus increasing the commonality of specifications across the products within the
family. Product variety tradeoff studies can be easily performed using the PPCEM (and NCI
and PDI metrics) to evaluate the compromise between commonality and individual product
performance.

This concludes the second, and final, example for testing and verifying the PPCEM,
having demonstrated the full implementation of the PPCEM to design a family of products and
facilitate product variety tradeoff studies. In the next and final chapter, a summary of
achievements and contributions from the work is offered along with a critical review of the
research and recommendations for future work.
CHAPTER 8

ACHIEVEMENTS AND RECOMMENDATIONS
In this dissertation, a method has been developed, presented, and tested to facilitate the
design of a scalable product platform for a product family. The development and presentation
of this method is brought to a close in this chapter. In Section 8.1, closure is sought by returning
to the research questions posed in Chapter 1 and reviewing the answers that have been offered.
The resulting contributions are then summarized in Section 8.2. Limitations of the research are
discussed in Section 8.3, and possible avenues of future work are described in Section 8.4.
Concluding remarks are given in Section 8.5, closing this chapter and the dissertation.
Chp 8: Achievements and Recommendations
8.1 CLOSURE: ANSWERING THE RESEARCH QUESTIONS
The principal objective in this dissertation, as stated in Chapter 1, is to
develop the Product Platform Concept Exploration Method (PPCEM) to facilitate the design of
a common product platform which can be scaled to realize a product family. In particular, the
concept of platform scalability is introduced and exploited in the context of the following
research questions.

Q1. How can a common scalable product platform be modeled and designed for a
product family?
Two secondary research questions are also offered in Section 1.3.1 for investigation in this
dissertation, including:

Q3. Are space filling designs better suited for building approximations of deterministic
computer experiments than classical experimental designs?
To address these questions, research hypotheses and posits are introduced and
identified in support of achieving the principal objective for the dissertation. Their elaboration
and verification have provided the context in which the research work has proceeded. The end
result is a synthesis of engineering design, operations research, applied statistics, and strategic
management methods and tools to form the Product Platform Concept Exploration Method. Its
development has been portrayed pictorially using Figure 8.1, which depicts the flow of the
dissertation.

Figure 8.1 Flow of the Dissertation (Chapters 3 through 7, including the nozzle design study
and the platform examples, culminating in the Product Platform Concept Exploration Method)
Answering Question 1: Question 1 is the primary research question posed for the
work in this dissertation and its answer is embodied by the Product Platform Concept
Exploration Method: a Method which facilitates the synthesis and Exploration of a common
Product Platform Concept which can be scaled into an appropriate family of products. The
method consists of a prescription for formulating the problem and a description for solving it.
The method is demonstrated through two examples:
• the design of a universal electric motor platform which is (vertically) scaled around the
stack length of the motor to realize a family of electric motors capable of satisfying a
variety of torque and power requirements (Chapter 6), and
• the design of a General Aviation aircraft platform which is (horizontally) scaled into a
two, a four, and a six seat configuration to realize a family of aircraft capable of
satisfying a variety of performance and economic requirements (Chapter 7).
While only demonstrated for these two examples, it is asserted that the method is generally
applicable to other examples in this class of problems: parametrically scalable product platforms
whose performance can be mathematically modeled or simulated. Other examples which have
taken advantage of this type of scaling include the design of a family of oil filters (Seshu, 1998)
and the design of a family of absorption chillers for a variety of refrigeration capacities
(Hernandez, et al., 1998). Both examples integrate nicely within the framework of the PPCEM.
In support of the primary research question and objective, three additional questions
also are offered in Section 1.3.1. Answers to these questions are summarized as follows.
Q1.1. How can product platform scaling opportunities be identified from overall
design requirements?
In this research, the market segmentation grid (Meyer, 1997) is employed to help
identify platform scaling opportunities based on overall design requirements. Its success as an
attention directing tool for mapping scaling opportunities within a product family is discussed in
Section 2.2.1 and then demonstrated in both examples. In the universal motor example in
Chapter 6, the market segmentation grid is used to identify vertical scaling opportunities within
the desired product family to realize a range of torque and power ratings for different
price/performance tiers within the market; standardization of the motor interfaces will provide
horizontal leveraging opportunities of this family of motors into other market segments in a
manner similar to Black & Decker’s response to Double Insulation in the 1970s (Lehnerd,
1987). In the General Aviation aircraft example in Chapter 7, a horizontal platform
leveraging strategy is identified by means of the market segmentation grid, resulting in a family of
three aircraft based on a two, four, and six seater configuration leveraged about a common
product platform. Opportunities for vertical scaling of the resulting family of aircraft through
engine upgrades, add-on features, and technological advancements also are discussed.
Q1.2. How can robust design principles be used to facilitate designing a common
product platform for a product family?

By identifying “conceptual noise” factors around which a family of products can be
scaled, robust design principles can be abstracted for use in product family and product
platform design. Consequently, the idea of a scale factor is introduced in Section 2.3.2 as a
factor around which a product platform can be “scaled” or “stretched” to realize derivative
products within a product family. Scale factors are, in essence, noise factors for a scalable
product platform, and robust design principles can be used accordingly to minimize the
sensitivity of the platform to variations in these scale factors. The effectiveness of
this approach is demonstrated through the two examples. In the universal motor example in
Chapter 6, the stack length of the motor is taken as the (parametric) scale factor around which a
family of motors is created. In the General Aviation Aircraft example, the number of passengers
is the (configurational) scale factor around which a family of three aircraft is developed. In
both cases, robust design principles are employed to develop a common set of design variables
which are robust with respect to variations in the scaling factor as the product platform is scaled
and instantiated to realize the product family. A product variety tradeoff study also is performed
in the General Aviation aircraft example (see Section 7.6.2) to further verify this approach.
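The idea of treating a scale factor as a noise factor can be illustrated with a toy sketch. The response function, ranges, target, and weights below are hypothetical stand-ins, not the motor or GASP models:

```python
import math

# Toy sketch: treat a scale factor s (e.g., motor stack length) as a noise
# factor. For each candidate platform setting x, estimate the mean and
# variance of a response over the range of s, then prefer settings that
# bring the mean on target while minimizing the variation.

def response(x, s):
    # Hypothetical response, chosen only for illustration.
    return x * s + 0.5 * math.sin(3.0 * s) * x ** 2

def mean_and_variance(x, s_values):
    """Sample the response over the scale-factor range."""
    ys = [response(x, s) for s in s_values]
    mu = sum(ys) / len(ys)
    var = sum((y - mu) ** 2 for y in ys) / len(ys)
    return mu, var

s_range = [0.5 + 0.05 * i for i in range(21)]    # scale factor sweep 0.5-1.5
target = 1.0                                     # aggregated family target
candidates = [0.2 * j for j in range(1, 11)]     # candidate platform settings

def robustness_loss(x, w_mean=1.0, w_var=1.0):
    """Compromise between 'mean on target' and 'minimize the variation'."""
    mu, var = mean_and_variance(x, s_range)
    return w_mean * (mu - target) ** 2 + w_var * var

best = min(candidates, key=robustness_loss)
```

The weighted loss here stands in for the separate goals of a compromise DSP; the point is only that the scale factor is swept like a noise factor rather than optimized like a design variable.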
Q1.3. How can individual targets for derivative products be aggregated and modeled
in the design of a common product platform?

Through the identification of appropriate scaling factors during the product family design
process, the individual targets for derivative products can be aggregated into a mean and
variance around which the product family can be simultaneously designed either by having
separate goals for “bringing the mean on target” and “minimizing the variation” or through
design capability indices, which measure the
capability of a family of designs to satisfy a ranged set of design requirements. The former
approach is utilized to design the universal electric motor platform in Chapter 6. Goals for
“bringing the mean on target” and “minimizing the variation” caused by variations in the scale
factor (stack length) are used within a compromise DSP to effect a platform design which
matches the target mean and variation for the aggregated product family. In Chapter 7, design
capability indices are employed to design a family of General Aviation aircraft around a common
scalable product platform.

Answering Question 2: Despite being introduced as a
metamodeling tool for engineering design by Sacks, et al. (1989), kriging has received little
attention from the engineering community for building surrogate models. Perhaps this is because of
the added complexity of fitting the model or using it, or the inability to glean useful information
directly from the MLE parameters used to fit the model. Whatever the reason, the research in
this dissertation has been directed at improving the ease with which kriging models can be built,
validated, and used. Moreover, the initial feasibility study and comparison of kriging models
and response surface models in Chapter 4
and the extensive kriging/DOE investigation in Chapter 5 are aimed at familiarizing the reader with
kriging and making it a viable alternative for building surrogate metamodels of deterministic
computer experiments. Its utility was tested extensively in Chapter 5 wherein it was concluded
that the Gaussian correlation function provides the most accurate kriging predictor, on average,
and that kriging can accurately model a wide variety of functions typical of engineering analysis.
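A bare-bones kriging predictor with a Gaussian correlation function can be sketched in a few lines. This is an illustration, not the dissertation's implementation: the correlation parameter theta is fixed here, whereas the dissertation estimates it by maximum likelihood (via simulated annealing):

```python
import numpy as np

# Minimal ordinary kriging sketch with a Gaussian correlation function
# R(x, x') = exp(-theta * ||x - x'||^2). A tiny nugget keeps the
# correlation matrix numerically stable.

def gaussian_corr(X1, X2, theta):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-theta * d2)

def fit_kriging(X, y, theta=1.0, nugget=1e-10):
    R = gaussian_corr(X, X, theta) + nugget * np.eye(len(X))
    Rinv = np.linalg.inv(R)
    ones = np.ones(len(X))
    beta = (ones @ Rinv @ y) / (ones @ Rinv @ ones)   # constant trend term
    gamma = Rinv @ (y - beta * ones)

    def predict(Xnew):
        r = gaussian_corr(np.atleast_2d(Xnew), X, theta)
        return beta + r @ gamma

    return predict

# Hypothetical deterministic sample data; kriging interpolates it exactly
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, 0.25, 1.0])
predict = fit_kriging(X, y, theta=5.0)
```

Because the data are deterministic, the predictor interpolates the sample points (up to the nugget), which is precisely why replicate points add no information and space filling designs are preferred for such experiments.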
While the study is not all inclusive, nor is it intended to be, it has provided valuable insight into
the utility of kriging metamodels for engineering design. Potential avenues of future work to
extend this investigation are described in Section 8.4.

Answering Question 3: It has been argued that
classical experimental designs are not well suited for sampling computer experiments which are
deterministic; rather, points should be chosen to “fill the space,” providing good coverage of the
design space since replicate sample points are not needed. In an effort to verify the utility of
space filling experimental designs, a comparison of nine space filling and two classical
experimental designs is performed in Chapter 5 (see Section 5.4 in particular) to address this
third research question. The eleven experimental designs are compared on the basis of their
capability to produce accurate kriging metamodels for the testbed of six engineering problems
used in this dissertation. For the sample sizes investigated in this study, it was observed that the
space filling experimental designs yielded more accurate kriging models in the larger design
spaces (3 and 4 variables) while the classical experimental designs (CCDs) performed well in
the two dimensional design space for the reasons discussed at the end of Section 5.4.4. Prior
to this investigation, few researchers had compared their experimental designs against one
another, or to classical designs for that matter. As such, the findings in the kriging/DOE study in
Chapter 5 represent unique contributions from the research. A summary of the research
contributions is offered next.

The contributions offered in this dissertation are introduced in Section 1.3.2 and realized
throughout the dissertation. As stated at the beginning of Chapter 1, the primary contribution
from this work is embodied in the Product Platform Concept Exploration Method which
provides a method to identify, model, and synthesize scalable product platforms for a product
family. Additional contributions include the following:

• A procedure for identifying scale factors for a product platform, see Sections 3.1.1 and
3.1.2.
• An algorithm to build, validate, and use a kriging model, see Section 2.4.2, Chapters 4
and 5, and Appendix A.
• A preliminary comparison of the predictive capability of secondorder response
surfaces and kriging models in the design of a rocket nozzle, see Section 4.2.
• An algorithm for generating minimax Latin hypercube designs, see Section 2.4.3 and
Appendix C.
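A basic randomized Latin hypercube sample can be generated as follows. This is a generic sketch of the sampling idea only, not the minimax-optimized algorithm of Appendix C:

```python
import random

# Sketch: a random Latin hypercube design with n points in k dimensions on
# [0, 1)^k. Each dimension is split into n equal strata and each stratum
# is sampled exactly once; a minimax (or maximin) design would further
# optimize the pairing of strata across dimensions, which is omitted here.

def latin_hypercube(n, k, seed=0):
    rng = random.Random(seed)
    columns = []
    for _ in range(k):
        strata = list(range(n))
        rng.shuffle(strata)                      # pair strata randomly
        columns.append([(s + rng.random()) / n for s in strata])
    return [[columns[j][i] for j in range(k)] for i in range(n)]

design = latin_hypercube(n=8, k=3)
```

Each column of the resulting design occupies every stratum exactly once, which is the one-dimensional projection property that makes Latin hypercubes attractive for deterministic computer experiments.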
For the work in a dissertation to constitute a contribution, it must be of sufficient
worth to be either an addition to the fundamental knowledge of the field or a new and better
interpretation of the facts already known. The contributions associated with kriging represent a
new interpretation of facts already known. Kriging has been around since the 1960s (see, e.g.,
Cressie, 1993; Matheron, 1963) when it was developed originally for mining and geostatistics
applications; however, it has received limited attention in the engineering design community until
recently. The kriging algorithm presented is not totally unique to this dissertation; however, the
use of a simulated annealing algorithm (see Appendix A for more details on its use in the
maximum likelihood estimation for the kriging metamodels) to find the “best” kriging model is.
Moreover, the comparison of the accuracy of different correlation functions on the resulting
kriging model had never been performed in such depth. Likewise, the comparison of space
filling and classical experimental designs represents a new and better interpretation of facts already
known because such an extensive study has never been undertaken. With the exception of the
minimax Latin hypercube design, the experimental designs investigated in this dissertation are the
result of years of research work by statisticians and mathematicians. The minimax Latin
hypercube design, however, represents an addition to the fundamental knowledge of the field of
experimental design.
The contributions made in the area of product family design, specifically the method of
platform scaling embodied in the PPCEM, represent an addition to the fundamental knowledge of
the field. While other product family design strategies and methods have been slowly evolving,
the investigation of a method for platform scaling is previously unrecorded. The incorporation of
the market segmentation grid into the engineering design process provides a new interpretation
of facts already known, demonstrating how the market segmentation grid becomes a useful
attention directing tool for identifying platform leveraging strategies in product family design and, with a
little engineering knowledge, appropriate scale factors for the intended scalable platform. In this
regard, the concept of scale factors in product family design and extending robust design to
product family and product platform design is unique to this dissertation as are the NCI and PDI
measures for product family noncommonality and performance deviation. The measures are
not of significant value in and of themselves; however, the product variety tradeoff studies which
these indices make possible provide significant insight into the tradeoffs of product family design.
Taken together, the resulting Product Platform Concept Exploration Method for designing
scalable product platforms for a product family provides an addition to the fundamental
knowledge of the nascent field of product family design. However, the PPCEM is by no means
a panacea for product platform and product family design nor is it without its limitations.
Toward this end, a critical evaluation of the work is offered in the next section, followed by
recommendations for future work.

This section comprises the confessional portion of the dissertation wherein the research
itself is critically evaluated. Already the PPCEM has been critically evaluated as it pertains to
the two example problems in Chapters 6 and 7, see Sections 6.5 and 7.6.5. In this section, the
research as a whole is critically evaluated, beginning with the following question: to what types
of problems can the PPCEM be applied to design a
product family based on a common scalable product platform? There are two basic
requirements which must be met in order for the PPCEM to be applicable to the design of a
scalable product platform. First, the concept of scalability must be exploitable within the
product family; exploited in the sense that having one or more scale factors provides a means to
realize a variety of performance requirements while also facilitating the manufacturing process.
For instance, in the electric motor example in Chapter 6, the motor could have just as easily
been scaled in the radial dimension as it was in the axial direction (i.e., stack length) to achieve
the necessary torque requirements; however, the underlying assumption in the choice of stack
length as the scale factor is that it can be exploited from both a technical sense and a
manufacturing sense. As Lehnerd (1987) alludes to in his article on Black & Decker and their
universal motor platform, by varying only the stack length of the motor, all of the motors—
ranging from 60 Watts to 660 Watts—could be produced on the same machine simply by
stacking more laminations onto the field and armature. Had the radius of the motor been scaled
instead of the stack length, different machines and tooling configurations would have been
required to produce the family of motors since varying the radius of the motors is more than a
stacking operation. Consequently, it is very important that one or more scale factors be
identified for the product family and that they be capable of being exploited from both a
technical standpoint and a manufacturing standpoint in order for the PPCEM to yield
useful results.
The second consideration when applying the PPCEM is that the performance of the
product family must be able to be mathematically modeled, simulated, or quantified in order for the
PPCEM to be employed. It would be extremely difficult, if not impossible, for the PPCEM to
be utilized to design a common scalable automotive body platform based solely on aesthetic
considerations for instance. Consider the examples discussed in Section 1.1.1; to which of
these examples could the PPCEM be applied and why (or why not)? For the sake of
discussion, the assessment of each example is summarized below.
Sony: Walkman - No. Their platform strategy involves modular design and standardization of
components; few, if any, scaling issues are present within the product family.

Nippondenso: Panel Meters - No. They employ a combinatoric strategy to realize the
necessary product variety based on a few well-designed, standardized parts; few, if any, scaling
issues are present.

Lutron: Lighting Control Systems - No. Same reasoning as Nippondenso.

Black & Decker: Universal Motor - Yes. The platform is scaled around the stack length of the
motor, and an attempt was made to recreate their family of motors as the initial “proof of
concept” for the PPCEM in Chapter 6.

Canon: Copiers - Not really. The majority of copier design involves modular design of
components and assemblies; however, some scaling issues may arise to accommodate different
print volumes, paper sizes, etc.

Rolls Royce: RTM322 Engine - Yes, in some aspects. The RTM322 was scaled to create a
new product platform, but modularity of engine components facilitated vertical scaling of the
platform to upgrade and derate the engine.
As stated in Section 1.1.2, the types of problems to which the PPCEM is readily
applicable (given that the previous two conditions regarding scalability and quantifiability are
met) typically involve parametric or variant design. The fact that the PPCEM is intended
primarily for parametric or variant design raises another important issue, namely, successful
implementation of the PPCEM assumes that the basic concept or configuration on which the
product platform is being based is good for the entire product family. In order for the
PPCEM to be employed, a good underlying concept or configuration must have already been
established in order to obtain the full benefit of the method. In the GAA example in Chapter 7,
for instance, if the three blade, high wing position, retractable landing gear configuration had not
been a suitable concept for the two, four, and six seater aircraft, then no matter what parameter
settings were obtained from using the PPCEM, the performance of the family of aircraft would
have been poor regardless because the underlying concept was not good for all three aircraft.
An attempt to identify a good configuration for the family of GAA is discussed in (Simpson,
1995), but such a concept is assumed to already exist in this work.
Incorporation of the conceptual and configurational design of the product family along with the
parametric scaling of the product platform is a fertile area for future work.
It also is important to note that the PPCEM is intended to generate
options for common product platforms which can be scaled into an appropriate product family.
The PPCEM is not necessarily intended to be used to evaluate these options or select one of
them. The idea behind the product platform portfolio—the output from applying the PPCEM—
is to maintain design freedom by keeping open a variety of options for satisfying the design
requirements for as long as possible. As the product platform design progresses into the
detailed stages of design, this design freedom is reduced; however, during the early stages of the
design process, formulating and answering a variety of "what if" type questions and examining a
wide variety of design scenarios is important to the product platform design process.
Meanwhile, the NCI and PDI measures introduced in Section 3.1.5 and employed in
Section 7.6.4 represent an attempt to provide a means to evaluate different product platforms
and their respective product families. Ultimately, the noncommonality of a set of parameters
should be translated into increased manufacturing and inventory costs, and performance
deviations into losses in customer sales; however, this is extremely difficult to accomplish
without sufficient
industry input. Modeling the process and manufacturing aspects of product platforms and
product families is another fertile research area which has yet to be explored.
As far as the scale factors themselves go, the concept of a scale factor—while
discussed in Sections 2.3.2 and 3.1.2—is still not fully understood. In the motor example in
Chapter 6, for instance, the scale factor was the stack length of the motor, whose mean and
standard deviation were treated much like design variables. Meanwhile, in the GAA example in
Chapter 7, the scale factor was the number of passengers which was treated as a design
parameter which varied from two to six, i.e., its permissible range of values was known a priori
based on the intended leveraging strategy. In any event, when metamodels are to be utilized
within the PPCEM, an initial range for each scale factor is necessary in order to construct these
metamodels. This follows in the same manner that a permissible range of any noise factor is
expected to be known before robust design principles can be applied to a problem (cf.,
Phadke, 1989). It is important to examine the concept of scale factors further, finding more
examples of scaled product platforms to understand the manner in which they have been scaled
and, more importantly, how those scale factors are identified during the design process.
This brings to light another shortcoming of the PPCEM, namely, the use of the market
segmentation grid to “identify” scale factors around which the product platform is leveraged
within a product family. As stated in Section 2.2.1, the market segmentation grid is only an
attention directing tool and considerable engineering “knowhow” and problem insight are
required before a successful platform leveraging strategy can be identified. Then, only after a
suitable platform leveraging strategy is identified, can engineers hope to find (and be able to
exploit) scaling opportunities within the product family to realize the necessary product variety.
The market segmentation grid is the end result of this process and is really only useful for
mapping the resulting platform leveraging strategy. The two examples used in this
dissertation trivialize this process when in reality it is extremely difficult, if not impossible, to
identify one or more scaling factors which can be exploited within a product family. Developing
tools and methods to facilitate the process of identifying scale factors is one potential avenue for
further investigation.
Part of understanding scale factors better involves understanding their effect on product
performance and how scale factors can be used effectively to satisfy a wide variety of customer
requirements. If scale factors induce too much variability in product performance, then it might
not be possible to apply the PPCEM to develop a common product platform which does not
significantly compromise the performance of the product family over the range of interest. In
such a case, it might be necessary to “split” the design space into two or more product
platforms and corresponding product families rather than compromise product performance and
quality by having one single product platform which is scaled over the entire range of
performance. The work in (Chang and Ward, 1995; Lucas, 1994; Rangarajan, 1998; Seshu,
1998) further investigates and discusses these types of issues. Lucas (1994), in particular,
presents interesting remarks on how to resolve these types of issues using concepts from robust
design.
Turning to specific implementation issues within the PPCEM, it may not have been
sufficiently clear that kriging, while used within the PPCEM, is not integral to it, since it is not
the only metamodeling technique which can be employed within the PPCEM.
Response surfaces, neural nets, radial basis functions, etc. are all viable metamodeling options
for use in engineering design and with the PPCEM. The extensive literature review of
metamodeling applications in engineering design in (Simpson, et al., 1997b) supports this. The
primary requirement of any metamodel used in engineering design is that it be sufficiently
accurate for the task at hand.
The investigations into kriging in this dissertation are primarily intended to shed light on
alternative metamodeling techniques which offer some advantages over the response surface
models which are typically employed. The case for investigating alternatives to response surfaces has
been made in Section 2.4.1 and is also discussed in (Simpson, et al., 1998; Simpson, et al.,
1997b). The objective in this research is not to prove that kriging metamodels are better than
response surface models; rather, it is to demonstrate that kriging metamodels are a viable
alternative.
Similarly, the use of space filling experimental designs as opposed to classical designs
is not mandated by this research. The investigation served to gain a better understanding of
the different sampling strategies which exist and the associated advantages and disadvantages of
each. If one experimental design type had proven superior in every example, then perhaps only
that design should be considered in the future. However, that was not the case, and the results
of this study are by no means generalizable to all types of engineering design problems. Very
few engineering problems, for instance, involve only two to four variables; moreover, the availability of
codes to generate these space filling designs, their computational expense, and the nature
of the underlying analyses are just a few of the key factors that influence the decision of how to
sample a design space efficiently and effectively. Recommendations for future work in the areas
of experimental design and kriging are discussed in more detail in the next section.
The comparison of experimental designs in this dissertation is by no means complete, nor is it intended to be. Obviously, a wider variety of problems should be
considered in order to obtain more generalizable recommendations. Additional space filling and
classical experimental designs which have not been considered include the following:
• Classical Experimental Designs: fractional factorial designs and small central composite
designs (see, e.g., Box and Draper, 1987); D-optimal designs (see, e.g., Box and
Draper, 1971; Giunta, et al., 1994; Mitchell, 1974; St. John and Draper, 1975); I-, A-,
E-, and G-optimal designs (see, e.g., Hardin and Sloane, 1993; Myers and
Montgomery, 1995); minimum bias designs (see, e.g., Myers and Montgomery, 1995;
Venter and Haftka, 1997); and other hybrid designs (see, e.g., Giovannitti-Jensen and
Myers, 1989; Myers and Montgomery, 1995).
• Space Filling Experimental Designs: median Latin hypercubes (see, e.g., Kalagnanam
and Diwekar, 1997; McKay, et al., 1979); minimax and maximin designs (Johnson, et
al., 1990); scrambled nets (Koehler and Owen, 1996); orthogonal arrays of different
strengths (Owen, 1992); maximum entropy designs (Currin, et al., 1991; Shewry and
Wynn, 1987; Shewry and Wynn, 1988); and factorial hypercube designs (Salagame
and Barton, 1997).
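As an illustration of one of the space filling designs listed above, the sketch below constructs a median (centered) Latin hypercube: each variable's range is divided into n equal bins, and every bin is sampled exactly once, at its midpoint, in a random order. This is a generic construction, not the specific generators cited above.

```python
import random

def median_latin_hypercube(n, k, seed=0):
    """Generate an n-point median (centered) Latin hypercube in [0, 1]^k.

    Each of the k variables is split into n equal bins; every bin is
    sampled exactly once, at its midpoint, with the order permuted
    independently per variable.
    """
    rng = random.Random(seed)
    columns = []
    for _ in range(k):
        levels = [(i + 0.5) / n for i in range(n)]  # the n bin midpoints
        rng.shuffle(levels)
        columns.append(levels)
    return [tuple(col[row] for col in columns) for row in range(n)]

design = median_latin_hypercube(n=5, k=2)
# The Latin property: every column uses each bin midpoint exactly once.
for j in range(2):
    assert sorted(p[j] for p in design) == [(i + 0.5) / 5 for i in range(5)]
```

The centered placement is what distinguishes the median Latin hypercube from a random Latin hypercube, in which a point is drawn uniformly within each bin.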
In addition to the current testbed of problems, larger problems also should be investigated because very few
engineering problems have only two to four variables. However, problems with larger dimensional
design spaces (i.e., more design variables) invoke new complications. For instance, many of
the generators used to create the space filling experimental designs become computationally
expensive in and of themselves for large numbers of factors. For example, the simulated
annealing algorithm for generating maximin Latin hypercube designs (Morris and Mitchell, 1992;
1995) becomes extremely slow even for four factor designs with as few as 25 sample points, as
discussed in Section 5.1. Moreover, fractional factorial based central composite designs are
available for problems with five or more factors. Hence, larger problems require different
approaches to experimental design than those studied here.
As for the minimax Latin hypercube design, which is unique to this dissertation, the
genetic algorithm employed to generate these designs needs further study to develop
a better understanding of its workings and to learn the best combination of parameters for
its use, namely, population size, number of permissible generations, mutation rates, and
termination criteria. Also, as it stands right now, the current design criterion (minimize the
maximum distance between sample points and prediction points) does not yield a unique
design for a given sample size and number of design variables. Developing and implementing an
optimization criterion such as that proposed by Mitchell and Morris (1995) for their maximin
Latin hypercube designs could improve the effectiveness of the minimax Latin hypercubes.
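The minimax criterion itself is straightforward to approximate; the sketch below evaluates it on a uniform grid of "prediction points" and uses a naive random-restart search in place of the genetic algorithm discussed above (whose population size, mutation rates, and other parameters are not reproduced here).

```python
import itertools
import math
import random

def minimax_criterion(design, grid_per_dim=11):
    """Approximate the minimax criterion: the largest distance from any
    prediction point (a uniform grid over [0, 1]^k here) to its nearest
    sample point.  Smaller values indicate better space coverage."""
    k = len(design[0])
    axis = [i / (grid_per_dim - 1) for i in range(grid_per_dim)]
    worst = 0.0
    for g in itertools.product(axis, repeat=k):
        worst = max(worst, min(math.dist(g, p) for p in design))
    return worst

def random_lhd(n, k, rng):
    """A random median Latin hypercube: each column is a shuffled set of
    the n bin midpoints, so every bin of every variable is used once."""
    cols = []
    for _ in range(k):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(i + 0.5) / n for i in perm])
    return [tuple(c[r] for c in cols) for r in range(n)]

# Naive random-restart search standing in for the genetic algorithm:
# keep the Latin hypercube with the smallest minimax criterion value.
rng = random.Random(1)
best = min((random_lhd(9, 2, rng) for _ in range(200)), key=minimax_criterion)
print(round(minimax_criterion(best), 3))
```

Note that, as the text observes, many different nine-point hypercubes achieve nearly the same criterion value, which is precisely the non-uniqueness issue raised above.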
As for kriging, only kriging metamodels which employ an underlying constant for the
global portion of the model have been investigated in this work. In general, f(x) in Equation
2.14 could be taken as a linear or quadratic model instead of a constant, which may permit more
accurate kriging approximations; however, the problem of having a sufficient number of samples
to estimate all of the unknown coefficients in f(x) resurfaces. A preliminary investigation of such
an approach is documented in (Giunta, et al., 1998); they find that minimal improvement in
accuracy results from the more elaborate global models.
Meanwhile, the power of kriging lies in its capability to interpolate accurately a wide
range of linear and nonlinear functions. An iterative or sequential strategy which takes
advantage of this may prove useful provided the kriging models can be fit and validated quickly
from one iteration to the next. Consequently, trust region based approaches which incorporate
kriging metamodels warrant further investigation (see, e.g., Alexandrov, et al., 1997; Booker, et
al., 1996; Booker, et al., 1995; Cox and John, 1995; Dennis and Torczon, 1996; Osio and
Amon, 1996; Schonlau, et al., 1997).
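To make the constant-trend kriging model and the structure of its correlation matrix concrete, the sketch below implements an ordinary kriging predictor with a Gaussian correlation function, factoring the symmetric positive definite matrix R by Cholesky decomposition rather than forming a general inverse. The one-dimensional test function and the value of the correlation parameter theta are chosen arbitrarily for illustration and are not taken from the dissertation's examples.

```python
import numpy as np

def fit_ordinary_kriging(X, y, theta=10.0):
    """Ordinary kriging: a constant global term beta plus a Gaussian-
    correlated local deviation (f(x) constant in Equation 2.14).  R is
    symmetric positive definite, so a Cholesky factor replaces a
    general matrix inverse."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    R = np.exp(-theta * d2) + 1e-10 * np.eye(len(X))  # jitter keeps R SPD
    L = np.linalg.cholesky(R)

    def solve(b):
        # R^{-1} b via the Cholesky factor (a dedicated triangular
        # solver would exploit the structure of L even further).
        return np.linalg.solve(L.T, np.linalg.solve(L, b))

    ones = np.ones(len(X))
    beta = (ones @ solve(y)) / (ones @ solve(ones))  # GLS estimate of beta
    return X, y, theta, beta, solve

def predict(model, x):
    X, y, theta, beta, solve = model
    r = np.exp(-theta * ((X - x) ** 2).sum(axis=-1))  # correlations to x
    return beta + r @ solve(y - beta * np.ones(len(X)))

# One-dimensional illustration on an arbitrary nonlinear test function.
X = np.array([[0.0], [0.3], [0.6], [1.0]])
y = np.sin(2.0 * np.pi * X[:, 0])
model = fit_ordinary_kriging(X, y)
# Kriging interpolates: predictions at the sample sites reproduce y.
assert np.allclose([predict(model, xi) for xi in X], y, atol=1e-6)
```

The interpolation check at the end is the property emphasized in the text; replacing the constant beta with a polynomial trend would change only the generalized least squares step.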
Finally, alternative optimization algorithms for finding the “best” kriging model also must
be investigated for use with larger problems. The simulated annealing algorithm currently
employed to fit the kriging models (see Appendix A) becomes extremely inefficient for problems
with more than eight variables and approximately 180 sample points. Moreover, the matrix
inversion routines in the current prediction software do not take full advantage of the properties
of the correlation matrix, R, in kriging, which is always symmetric and positive definite. Several
matrix decomposition and inversion algorithms have been developed to take advantage of these
properties and should be incorporated into the software.
The concept of scalability and scalable product platforms has provided excellent
inroads into product family and product platform design, marrying current research efforts in
Decision-Based Design, the Robust Concept Exploration Method, and robust design with tools
from marketing/management science. The end result is the Product Platform Concept
Exploration Method, which has been demonstrated by means of two examples: the design of a
family of universal motors and the design of a family of General Aviation aircraft. While it has
been shown that the PPCEM is effective at producing a family of products based on a scaled
product platform, many opportunities for extending this work remain.
Furthermore, much in the same way that the product platform provides a platform for
leveraging within a product family, the Product Platform Concept Exploration Method provides a
platform for leveraging future work in product family and product platform design (see Figure
8.2). The different types of systems can be classified on the vertical axis of a market
segmentation grid and different characteristics of product platform design on the horizontal axis.
The use of the PPCEM to design scalable product platforms for a variety of systems then can
be plotted on this market segmentation grid as illustrated in Figure 8.2 for the two examples in
this dissertation. Perhaps through the addition of different “Processors” to the PPCEM,
additional capabilities could be developed within the framework of the PPCEM to design
modular platforms or facilitate product family redesign around a common platform, for instance.
[Figure: a market segmentation grid spanning simple systems (the universal motor) to complex systems (the GAA).]
Figure 8.2 The PPCEM as a Platform for Other Platform Design Methods
Several avenues of future work have also been mentioned during the critical analysis in
Section 8.3. In addition to these potential research areas, additional verification and extensions
of the PPCEM are offered in the following sections as they tie to current research within the
Systems Realization Laboratory at the Georgia Institute of Technology. These sections have been co-written with colleagues who
are planning to pursue (or are currently pursuing) the research discussed. A summary of those
providing input for this section and their standing within the Systems Realization Laboratory is
provided below.
8.4.3 Additional Verification of the PPCEM and Kriging Metamodels through the
Concurrent Design of an Engine Lubrication System
The objective in the Ford Engine Design Project is to develop and improve engine
lubrication system models to support advanced concurrent powertrain design and development
(cf., Rangarajan, 1998). As part of this work, robust design specifications are sought which are
capable of satisfying a wide variety of torque and power requirements for different automobile
engines. After developing a better understanding of the engine lubrication system and its
components, potential scaling opportunities within the engine lubrication systems components
can be identified and exploited using the PPCEM to develop a robust and common platform
design for the valves, pistons, bearings, etc. This platform then can be instantiated quickly using
minimal additional analysis for different classes of vehicles (e.g., automobiles, trucks, and vans)
in an effort to maintain better economies of scale across a wide variety of automobile makes and
models.
In addition to exploring scaling opportunities within the engine lubrication system
components, the use of kriging metamodels for building surrogate approximations of the
associated complex fluid dynamics analyses also can be investigated. Currently, second-order
response surfaces are used extensively during the design process; however, the complex
analyses for friction losses, power losses, etc. cannot be modeled well by response surfaces
over a large region of the design space, thus limiting the search for good solutions. Building
accurate global approximations of these analyses using kriging metamodels may yield additional
insight into the complexities of the design space, allowing better solutions to be identified. The
utility of kriging for partitioning and screening large systems also can be examined in the
context of the engine lubrication system since a large number of factors (approximately 20) currently are
being utilized, which would push the limits of the kriging metamodeling software (i.e., fitting the
model, matrix inversion, etc.). Finally, additional metamodeling techniques such as neural
networks (see, e.g., Cheng and Titterington, 1994; Hajela and Berke, 1992; Rumelhart, et al.,
1994; Widrow, et al., 1994) also can be compared to kriging given the size and complexity of
the problem.
The need to customize products for target markets encourages the
proliferation of options and model derivatives, which leads to increased tooling cost and production line
complexity. At first glance, it may appear that automotive platforms are prime examples for
product variety design research. However, in a recent study, Siddique, et al. (1998) identified
significant differences between the variety characteristics of automotive platforms and some of
the examples that other researchers have studied (e.g., the Sony Walkman family). For
example, the majority of product family design research is applicable to products that are
modular with respect to functions as discussed in Section 2.2.3. The automotive platform, on
the other hand, is not modular because the platform accomplishes one function as a whole. As
a result, many product family design approaches do not readily apply; however, careful
commonization of platforms can still be used to increase product variety while reducing the
number of components between different models and the product line complexity.
Developing a common platform requires a robust platform that can support all of the
requirements for different car models and also a common assembly process that can support
these variations. For the automotive industry, platform requirements come from packaging
constraints (underhood, passenger, etc.), safety/crash requirements, size of the vehicle, styling,
and other requirements/regulations. Cars in similar classes have similar types of requirements
(except, perhaps, for styling); as such, the underbodies of similar cars have the potential to be
commonized. Toward this end, a method for the configuration design of common product
platforms is to be developed, extending the parameter design capabilities of the PPCEM for
designing scalable product platforms. As discussed in (Siddique, 1998; Siddique and Rosen, 1998),
using configuration design methods the underlying common core for different platforms can be
identified along with the required variations. This information then can be used to increase the
commonality of the product platform and determine how to isolate the variability in specific
modules.
A common assembly process is desired so that the same assembly line can be used to produce all of the (minor) platform
derivatives. Using the same component loading sequences, tooling sequences, etc. provides
some of the requirements when developing a common assembly process (cf., Nevins and
Whitney, 1989; Whitney, 1993). Other requirements that need to be considered specifically for
automobile platforms include common locators, weld lines, transfer points, etc. Hence, it is
important to consider the product platform and its assembly process concurrently.
8.4.5 Integrated Product and Process Design of Product Families and Mass
Customized Goods
Mass customization, i.e., the manufacture of customized products with the efficiency of
mass production, has been heralded as a source of competitive advantage and possibly the next
world-class manufacturing paradigm. Although the
marketplace is rapidly moving towards mass customization, very little work has been done on
formalizing an integrated product and process development method that would enable
companies to practice mass customization in a systematic and efficacious manner. For example,
the PPCEM provides a method to develop a common product platform which can be scaled to
provide the necessary variety for a product family; however, its focus is solely on the product
itself. Meanwhile, research in agile and mass customized manufacturing
systems (see, e.g., Abair, 1995; Anderson and Pine, 1997; Chinnaiah, et al., 1998; Dhumal, et
al., 1996; Hormozi, 1994; Richards, 1996) focuses primarily on developing cost effective
manufacturing systems to realize a wide variety of products. Integrating the two fields of
research has received little attention in the context of designing families of products. In future work, consideration
should be given to the integration of product design, production system design, and organization
design. Research issues of interest include:
• Principles of product and process development for mass customized production,
including systems to support the required information transfer and group decision making.
• Design for disaggregated production, i.e., decentralized supply chains and production
systems for the growing global economy.
An initial investigation into the concurrent modeling of product and process for the design of
product families has been performed, in which the integrated product and process design of a family of absorption chillers for a variety of
capacities is presented. In related work, game-theoretic models of product and process design
have been implemented (see, Hernandez, 1998) to facilitate the formulation and solution of such
an approach, providing a foundation for the future integration of the product and process design of a
family of products.
8.4.6 Product Family Mappings and “Ideality” Metrics
A common problem in designing assemble-to-order systems, such as those found in the
telecommunications industry, is (1) that several solution paths exist to satisfy a given set of
customer requirements using available components and (2) that when customers ask for new
functional capabilities, it is difficult to determine how this functionality can be created and
incorporated. Consequently, mappings between customer requirements and the product family
must be established for the purpose of identifying the most appropriate solution strategy given
the specific design and customer requirements. The NCI and PDI measures presented in this
dissertation are a first step toward such metrics;
however, these measures cannot be used in "real-time" by designers to guide the product
platform development process. Therefore, the objective is to survey further the existing
metrics and then to:
3. define useful "real-time" metrics to guide engineering design and improve the product
family architecture,
4. map new functionality and products into the product family, and
5. identify areas of greatest improvement.
Possible metrics include those relating to system flexibility,
complexity, upgradability, etc., in addition to improving current metrics for commonality,
modularity, etc., for "real-time" use by designers. The end result will be an efficient process for
designing assemble-to-order systems, thereby replacing the expensive and time consuming
processes currently employed.
Several benefits accrue from developing families of
products, one of which is the ability to reuse and remanufacture components and modules from
one product to the next (cf., Alexander, 1993; Paula, 1997; Rothwell and Gardiner, 1990;
Sanderson and Uzumeri, 1995). Product reuse is the act of reclaiming products (or parts of
products) from a previous use and remanufacturing them for another use (where the second use
may or may not be the same as the original). Product reuse is both economically and
environmentally beneficial since:
• previously used products are diverted from landfill or other means of disposal, and
• all of the energy, emissions, and financial resources involved in creating the geometric
form of components are reduced.
A model has been developed to assess the effects
of product design characteristics, product development strategies, and external factors on the
value of reuse and remanufacture over time. The model can be used to assess the potential
value of product remanufacture for an OEM (original equipment manufacturer) which integrates
the reclaiming and reuse of products into its existing production system. The model allows one to specify:
• product design characteristics of each product model, (e.g., the number of components
and required disassembly sequence),
• product development decisions over time (e.g., the level of product variety, the rate of
product evolution, and the degree of component standardization across product variety
and evolution), and
• external business factors which affect reclaiming and remanufacturing (e.g., the cost of
labor and the retirement distribution of used products over time).
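A toy version of the cost/benefit accounting implied by these inputs is sketched below; all part names, costs, and retirement quantities are hypothetical and stand in for the far richer model described above.

```python
# Illustrative sketch only: net remanufacturing value per period, given
# hypothetical part economics and a hypothetical retirement schedule.

def remanufacture_value(parts, retirements):
    """Net value per period = retired units x (avoided new-part cost
    - reclaim/remanufacture cost), summed over parts worth reusing."""
    per_unit = sum(p["new_cost"] - p["reman_cost"]
                   for p in parts if p["reman_cost"] < p["new_cost"])
    return [round(units * per_unit, 2) for units in retirements]

parts = [
    {"name": "motor", "new_cost": 40.0, "reman_cost": 12.0},  # worth reusing
    {"name": "casing", "new_cost": 5.0, "reman_cost": 6.0},   # not worth it
]
retired_units = [100, 250, 400]  # hypothetical retirements per period
print(remanufacture_value(parts, retired_units))  # [2800.0, 7000.0, 11200.0]
```

Even this toy version shows the decision structure of the model: parts whose remanufacturing cost exceeds the cost of a new part drop out of the reclaiming plan automatically.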
The model then is used to determine which products to reclaim, which components to recycle
and remanufacture, and the resulting costs and benefits of these actions over time. Thus, it
provides an analysis tool to assess the potential value of reuse and remanufacturing on the
development of product families based on common product platforms, providing additional cost
and benefit insight for platform design decisions.
In closing this dissertation, a quote by T.S. Eliot found in the introductory section of
this work is revisited:
"We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time."
— T.S. Eliot
The PPCEM is not an end in itself; rather, it provides a stepping stone for future research work
in this nascent field of engineering design. For it is only at the end of this dissertation that the
problems and difficulties associated with product family and product platform design are truly
understood and appreciated. And now that we understand them, either for the first time or in
greater depth, new paths can be explored and new methods can be developed which continue
to advance the state-of-the-art in product family and product platform design. It is the hope of
the author that the PPCEM enjoys the same success as the RCEM, providing a foundation on
which future research can be established in the same manner that this work builds upon the
RCEM.
REFERENCES
1982, The Concise Oxford Dictionary, Oxford University Press, Oxford, UK.
Abair, R., 1995, October 22-27, "Agile Manufacturing: This Is Not Just Repackaging of
Material Requirements Planning and Just-In-Time," 38th American Production and
Inventory Control Society (APICS) International Conference and Exhibition,
Orlando, FL, APICS, pp. 196-198.
Alexander, B., 1993, June 14-16, "Kodak Fun Saver Camera Recycling," Society of Plastics
Engineers Recycling Conference - Survival Tactics thru the '90's, Chicago, IL, pp.
207-212.
Alexandrov, N., Dennis, J. E., Jr., Lewis, R. M. and Torczon, V., 1997, "A Trust Region
Framework for Managing the Use of Approximation Models in Optimization,"
NASA/CR-201745, ICASE Report No. 97-50, Institute for Computer Applications in
Science and Engineering (ICASE), NASA Langley Research Center, Hampton, VA.
Anderson, D. M. and Pine, B. J., II, 1997, Agile Product Development for Mass
Customization, Irwin, Chicago, IL.
Balling, R. J. and Clark, D. T., 1992, September 21-23, "A Flexible Approximation Model for
Use with Optimization," 4th AIAA/USAF/NASA/OAI Symposium on
Multidisciplinary Analysis and Optimization, Cleveland, OH, AIAA, Vol. 2, pp.
886-894. AIAA-92-4801-CP.
Barton, R. R., 1992, December 13-16, "Metamodels for Simulation Input-Output Relations,"
Proceedings of the 1992 Winter Simulation Conference (Swain, J. J., Goldsman,
D., et al., eds.), Arlington, VA, IEEE, pp. 289-299.
Barton, R. R., 1994, December 11-14, "Metamodeling: A State of the Art Review,"
Proceedings of the 1994 Winter Simulation Conference (Tew, J. D., Manivannan,
S., et al., eds.), Lake Buena Vista, FL, IEEE, pp. 237-244.
Booker, A. J., 1996, "Case Studies in Design and Analysis of Computer Experiments,"
Proceedings of the Section on Physical and Engineering Sciences, American
Statistical Association.
Booker, A. J., Conn, A. R., Dennis, J. E., Frank, P. D., Serafini, D., Torczon, V. and Trosset,
M., 1996, "Multi-Level Design Optimization: A Boeing/IBM/Rice Collaborative
Project," 1996 Final Report, ISSTECH-96-031, The Boeing Company, Seattle, WA.
Booker, A. J., Conn, A. R., Dennis, J. E., Frank, P. D., Trosset, M. and Torczon, V., 1995,
"Global Modeling for Optimization: Boeing/IBM/Rice Collaborative Project," 1995
Final Report, ISSTECH-95-032, The Boeing Company, Seattle, WA.
Box, G. E. P. and Behnken, D. W., 1960, "Some New Three Level Designs for the Study of
Quantitative Variables," Technometrics, Vol. 2, No. 4, pp. 455-475; "Errata," Vol. 3,
No. 4, p. 576.
Box, G. E. P. and Draper, N. R., 1987, Empirical Model Building and Response Surfaces,
John Wiley & Sons, New York.
Box, M. J. and Draper, N. R., 1971, "Factorial Designs, the X'X Criterion, and Some Related
Matters," Technometrics, Vol. 13, No. 4 (November), pp. 731-742.
Bras, B. A. and Mistree, F., 1991, "Designing Design Processes in Decision-Based Concurrent
Engineering," SAE Transactions, Journal of Materials & Manufacturing, SAE
International, Warrendale, PA, pp. 451-458.
Byrne, D. M. and Taguchi, S., 1987, "The Taguchi Approach to Parameter Design," Quality
Progress, December, pp. 19-26.
Chaloner, K. and Verdinelli, I., 1995, "Bayesian Experimental Design: A Review," Statistical
Science, Vol. 10, No. 3, pp. 273-304.
Chambers, J. M., Freeny, A. E. and Heiberger, R. M., 1992, "Chapter 5: Analysis of Variance;
Designed Experiments," Statistical Models in S (Chambers, J. M. and Hastie, T. J.,
eds.), Wadsworth & Brooks/Cole, Pacific Grove, CA, pp. 145-193.
Chang, T.-S. and Ward, A. C., 1995, September 17-20, "Design-in-Modularity with
Conceptual Robustness," Advances in Design Automation (Azarm, S., Dutta, D., et
al., eds.), Boston, MA, ASME, Vol. 82-1, pp. 493-500.
Chang, T.-S., Ward, A. C., Lee, J. and Jacox, E. H., 1994, November 6-11, "Distributed
Design with Conceptual Robustness: A Procedure Based on Taguchi's Parameter
Design," Concurrent Product Design Conference (Gadh, R., ed.), Chicago, IL,
ASME, Vol. 74, pp. 19-29.
Chen, W., Rosen, D., Allen, J. K. and Mistree, F., 1994, "Modularity and the Independence of
Functional Requirements in Designing Complex Systems," Concurrent Product Design
(Gadh, R., ed.), ASME, Vol. 74, pp. 31-38.
Chen, W., 1995, "A Robust Concept Exploration Method for Configuring Complex Systems,"
Ph.D. Dissertation, G. W. Woodruff School of Mechanical Engineering, Georgia
Institute of Technology, Atlanta, GA.
Chen, W., Allen, J. K., Mavris, D. and Mistree, F., 1996a, "A Concept Exploration Method
for Determining Robust Top-Level Specifications," Engineering Optimization, Vol.
26, No. 2, pp. 137-158.
Chen, W., Allen, J. K., Tsui, K.-L. and Mistree, F., 1996b, "A Procedure for Robust Design:
Minimizing Variations Caused by Noise and Control Factors," Journal of Mechanical
Design, Vol. 118, No. 4, pp. 478-485.
Chen, W., Simpson, T. W., Allen, J. K. and Mistree, F., 1996c, August 18-22, "Use of Design
Capability Indices to Satisfy a Ranged Set of Design Requirements," Advances in
Design Automation (Dutta, D., ed.), Irvine, CA, ASME, Paper No. 96-DETC/DAC-1090.
Chen, W., Allen, J. K., Schrage, D. P. and Mistree, F., 1997, "Statistical Experimentation
Methods for Achieving Affordable Concurrent Systems Design," AIAA Journal, Vol.
35, No. 5, pp. 893-900.
Chen, W., Allen, J. K. and Mistree, F., 1997, "A Robust Concept Exploration Method for
Enhancing Productivity in Concurrent Systems Design," Concurrent Engineering:
Research and Applications, Vol. 5, No. 3, pp. 203-217.
Cheng, B. and Titterington, D. M., 1994, "Neural Networks: A Review from a Statistical
Perspective," Statistical Science, Vol. 9, No. 1, pp. 2-54.
Chinnaiah, P. S. S., Kamarthi, S. V. and Cullinane, T. P., 1998, "Characterization and Analysis
of Mass-Customized Production Systems," International Journal of Agile
Manufacturing, under review.
Clark, K. B. and Wheelwright, S. C., 1993, Managing New Product and Process
Development, Free Press, New York.
Cogdell, J. R., 1996, Foundations of Electrical Engineering, Prentice Hall, Upper Saddle
River, NJ.
Collier, D. A., 1981, "The Measurement and Operating Benefits of Component Part
Commonality," Decision Sciences, Vol. 12, No. 1, pp. 85-96.
Collier, D. A., 1982, "Aggregate Safety Stock Levels and Component Part Commonality,"
Management Science, Vol. 28, No. 11, pp. 1296-1303.
Cox, D. D. and John, S., 1995, March 13-16, "SDO: A Statistical Method for Global
Optimization," Proceedings of the ICASE/NASA Langley Workshop on
Multidisciplinary Optimization (Alexandrov, N. M. and Hussaini, M. Y., eds.),
Hampton, VA, SIAM, pp. 315-329.
Cressie, N. A. C., 1993, Statistics for Spatial Data, Revised Edition, John Wiley & Sons,
New York.
Currin, C., Mitchell, T., Morris, M. and Ylvisaker, D., 1991, "Bayesian Prediction of
Deterministic Functions, With Applications to the Design and Analysis of Computer
Experiments," Journal of the American Statistical Association, Vol. 86, No. 416,
pp. 953-963.
Davis, S. M., 1987, Future Perfect, Addison-Wesley Publishing Company, Reading, MA.
Dennis, J. E. and Torczon, V., 1995, March 13-16, "Managing Approximation Models in
Optimization," Proceedings of the ICASE/NASA Langley Workshop on
Multidisciplinary Design Optimization (Alexandrov, N. M. and Hussaini, M. Y.,
eds.), Hampton, VA, SIAM, pp. 330-347.
Dennis, J. E., Jr. and Torczon, V., 1996, September 4-6, "Approximation Model Management
for Optimization," 6th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary
Analysis and Optimization, Bellevue, WA, AIAA, Vol. 2, pp. 1044-1046. AIAA-96-4099-CP.
Dhumal, A., Dhawan, R., Kona, A. and Soni, A. H., 1996, August 18-22, "Reconfigurable
System Analysis for Agile Manufacturing," 5th ASME Flexible Assembly Conference
(Soni, A., ed.), Irvine, CA, ASME, Paper No. 96-DETC/FAS-1367.
DiCamillo, G. T., 1988, "Winning Turnaround Strategies at Black & Decker," Journal of
Business Strategy, Vol. 9, No. 2, pp. 30-33.
Diwekar, U. M., 1995, "Hammersley Sampling Sequence (HSS) Manual," Engineering &
Public Policy Department, Carnegie Mellon University, Pittsburgh, PA.
Eggert, R. J. and Mayne, R. W., 1993, "Probabilistic Optimal Design Using Successive
Surrogate Probability Density Functions," Journal of Mechanical Design, Vol. 115,
No. 3, pp. 385-391.
Erens, F. and Breuls, P., 1995, "Structuring Product Families in the Development Process,"
Proceedings of ASI'95, Lisbon, Portugal.
Erens, F. and Verhulst, K., 1997, "Architectures for Product Families," Computers in
Industry, Vol. 33, pp. 165-178.
Erens, F. J. and Hegge, H. M. H., 1994, "Manufacturing and Sales Coordination for Product
Variety," International Journal of Production Economics, Vol. 37, No. 1, pp. 83-99.
Erens, F., 1997, "Synthesis of Variety: Developing Product Families," Ph.D. Dissertation,
University of Technology, Eindhoven, The Netherlands.
Fang, K.-T. and Wang, Y., 1994, Number-theoretic Methods in Statistics, Chapman & Hall,
New York.
Finger, S. and Dixon, J. R., 1989a, "A Review of Research in Mechanical Engineering Design.
Part 1: Descriptive, Prescriptive, and Computer-Based Models of Design Processes,"
Research in Engineering Design, Vol. 1, pp. 51-67.
Finger, S. and Dixon, J. R., 1989b, "A Review of Research in Mechanical Engineering Design.
Part 2: Representations, Analysis, and Design for the Life Cycle," Research in
Engineering Design, Vol. 1, pp. 121-137.
Fujita, K. and Ishii, K., 1997, September 14-17, "Task Structuring Toward Computational
Approaches to Product Variety Design," Advances in Design Automation (Dutta, D.,
ed.), Sacramento, CA, ASME, Paper No. DETC97/DAC-3766.
G.S. Electric, 1997, "Why Universal Motors Turn On the Appliance Industry,"
http://www.gselectric.com/electric/univers4.htm.
Giunta, A., Watson, L. T. and Koehler, J., 1998, September 24, "A Comparison of
Approximation Modeling Techniques: Polynomial Versus Interpolating Models," 7th
AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis &
Optimization, St. Louis, MI, AIAA, AIAA984758.
Goffe, W. L., Ferrier, G. D. and Rogers, J., 1994, "Global Optimization of Statistical Functions
with Simulated Annealing," Journal of Econometrics, Vol. 60, No. 12, pp. 65100.
Source code is available at http://netlib2.cs.utk.edu/opt.
Hagemann, G., Schley, C.-A., Odintsov, E. and Sobatchkine, A., 1996, July, "Nozzle
Flowfield Analysis with Particular Regard to 3D-Plug-Cluster Configurations," AIAA-96-2954.
Hajela, P. and Berke, L., 1992, "Neural Networks in Structural Analysis and Design: An
Overview," Computing Systems in Engineering, Vol. 3, No. 14, pp. 525538.
Hardin, R. H. and Sloane, N. J. A., 1993, "A New Approach to the Construction of Optimal
Designs," Journal of Statistical Planning and Inference, Vol. 37, pp. 339369.
Hernandez, G., 1998, "A Probabilistic-Based Design Approach with Game Theoretical
Representations of the Enterprise Design Process," M.S. Thesis, G. W. Woodruff
School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA.
Hernandez, G., Simpson, T. W., Allen, J. K., Bascaran, E., Avila, L. F. and Salinas, F., 1998,
September 13-16, "Robust Design of Product Families for Make-to-Order Systems,"
Advances in Design Automation Conference, Atlanta, GA, ASME, DETC98/DAC-5595.
Hollins, B. and Pugh, S., 1990, Successful Product Design, Butterworths, Boston, MA.
Hubka, V. and Eder, W. E., 1988, Theory of Technical Systems: A Total Concept Theory
for Engineering Design, Springer, New York.
Hubka, V. and Eder, W. E., 1996, Design Science: Introduction to the Needs, Scope and
Organization of Engineering Design Knowledge, Springer, New York.
Iacobellis, S. F., Larson, V. R. and Burry, R. V., 1967, December, "Liquid-Propellant Rocket
Engines: Their Status and Future," Journal of Spacecraft and Rockets, Vol. 4, pp.
1569-1580.
Ignizio, J. P., 1985, Introduction to Linear Goal Programming, Sage University Papers,
Beverly Hills, CA.
Ignizio, J. P., 1990, An Introduction to Expert Systems: The Methodology and its
Implementation, McGraw-Hill, New York.
Ignizio, J. P., Wyskida, R. M. and Wilhelm, M. R., 1972, "A Rationale for Heuristic Program
Selection and Evaluation," Vol. 4, No. 1, pp. 16-19.
Iman, R. J. and Shortencarier, M. J., 1984, "A FORTRAN77 Program and User's Guide for
Generation of Latin Hypercube and Random Samples for Use with Computer Models,"
NUREG/CR-3624, SAND83-2365, Sandia National Laboratories, Albuquerque, NM.
Jacobson, G. and Hillkirk, J., 1986, Xerox: American Samurai, Macmillan Publishing
Company, New York.
Johnson, M. E., Moore, L. M. and Ylvisaker, D., 1990, "Minimax and Maximin Distance
Designs," Journal of Statistical Planning and Inference, Vol. 26, No. 2, pp. 131
148.
Johnson, N. L., Kotz, S. and Pearn, W. L., 1992, "Flexible Process Capability Indices,"
Institute of Statistics Mimeo Series, University of North Carolina, Chapel Hill, NC.
Journel, A. G. and Huijbregts, C. J., 1978, Mining Geostatistics, Academic Press, New
York.
Kalagnanam, J. R. and Diwekar, U. M., 1997, "An Efficient Sampling Technique for Off-Line
Quality Control," Technometrics, Vol. 39, No. 3, pp. 308-319.
Kannan, B. K. and Kramer, S. N., 1994, "An Augmented Lagrange Multiplier Based Method
for Mixed Integer Discrete Continuous Optimization and Its Application to Mechanical
Design," Journal of Mechanical Design, Vol. 116, No. 2, pp. 405411.
Kleijnen, J. P. C., 1987, Statistical Tools for Simulation Practitioners, Marcel Dekker,
New York.
Kobe, G., 1997, "Platforms - GM's Seven Platform Global Strategy," Automotive Industries,
Vol. 177, p. 50.
Koch, P. N., 1997, "Hierarchical Modeling and Robust Synthesis for the Preliminary Design of
Large Scale, Complex Systems," Ph.D. Dissertation, G. W. Woodruff School of
Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA.
Koch, P. N., Allen, J. K., Mistree, F. and Mavris, D., 1997, September 14-17, "The Problem
of Size in Robust Design," Advances in Design Automation, Sacramento, CA,
ASME, Paper No. DETC97/DAC-3983.
Koch, P. N., Mavris, D., Allen, J. K. and Mistree, F., 1998, September 13-16, "Modeling
Noise in Approximation-Based Robust Design: A Comparison and Critical Discussion,"
Advances in Design Automation, Atlanta, GA, ASME, DETC98/DAC-5588.
Korte, J. J., Salas, A. O., Dunn, H. J., Alexandrov, N. M., Follett, W. W., Orient, G. E. and
Hadid, A. H., 1997, "Multidisciplinary Approach to Aerospike Nozzle Design," NASA
TM-110326, NASA Langley Research Center, Hampton, VA.
Kota, S. and Sethuraman, K., 1998, September 13-16, "Managing Variety in Product Families
Through Design for Commonality," Design Theory and Methodology - DTM'98,
Atlanta, GA, ASME, DETC98/DTM-5651.
Lee, H. L. and Billington, C., 1994, "Designing Products and Processes for Postponement,"
Management of Design: Engineering and Management Perspective (Dasu, S. and
Eastman, C., eds.), Kluwer Academic Publishers, Boston, MA, pp. 105-122.
Lee, H. L. and Tang, C. S., 1997, "Modeling the Costs and Benefits of Delayed Product
Differentiation," Management Science, Vol. 43, No. 1, pp. 4053.
Lee, H. L., Billington, C. and Carter, B., 1993, "Hewlett-Packard Gains Control of Inventory
and Service through Design for Localization," Interfaces, Vol. 23, No. 4, pp. 1-11.
Lehnerd, A. P., 1987, "Revitalizing the Manufacture and Design of Mature Global Products,"
Technology and Global Industry: Companies and Nations in the World Economy
(Guile, B. R. and Brooks, H., eds.), National Academy Press, Washington, D.C., pp.
49-64.
Lewis, K., Lucas, T. and Mistree, F., 1994, September 7-9, "A Decision Based Approach to
Developing Ranged Top-Level Aircraft Specifications: A Conceptual Exposition," 5th
AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, Panama City, FL, Vol. 1, pp. 465-481.
Lewis, R. M., 1996, "A Trust Region Framework for Managing Approximation Models in
Engineering Optimization," 6th AIAA/USAF/NASA/ISSMO Symposium on
Multidisciplinary Analysis and Optimization, Bellevue, WA, AIAA, Vol. 2, pp.
1053-1055. AIAA-96-4101-CP.
Li, H.-L. and Chou, C.-T., 1994, "A Global Approach for Nonlinear Mixed Discrete
Programming in Design Optimization," Engineering Optimization, Vol. 22, No. 2, pp.
109-122.
Lin, S., 1975, "Heuristic Programming as an Aid to Network Design," Networks, Vol. 5, No.
1, pp. 33-43.
Lucas, J. M., 1976, "Which Response Surface Design is Best," Technometrics, Vol. 18, No.
4, pp. 411-417.
Lucas, J. M., 1994, "Using Response Surface Methodology to Achieve a Robust Process,"
Journal of Quality Technology, Vol. 26, No. 4, pp. 248-260.
Martin, M. and Ishii, K., 1996, August 18-22, "Design for Variety: A Methodology for
Understanding the Costs of Product Proliferation," Design Theory and Methodology -
DTM'96 (Wood, K., ed.), Irvine, CA, ASME, Paper No. 96-DETC/DTM-1610.
Martin, M. V. and Ishii, K., 1997, September 14-17, "Design for Variety: Development of
Complexity Indices and Design Charts," Advances in Design Automation (Dutta, D.,
ed.), Sacramento, CA, ASME, Paper No. DETC97/DFM-4359.
Mather, H., 1995, October 22-27, "Product Variety - Friend or Foe?," Proceedings of the
1995 38th American Production & Inventory Control Society International
Conference and Exhibition, Orlando, FL, APICS, pp. 378-381.
Matheron, G., 1963, "Principles of Geostatistics," Economic Geology, Vol. 58, pp. 1246-1266.
Mavris, D. N., Bandte, O. and Schrage, D. P., 1995, May, "Economic Uncertainty Assessment
of an HSCT Using a Combined Design of Experiments/Monte Carlo Simulation
Approach," 17th Annual Conference of International Society of Parametric
Analysts, San Diego, CA.
Mavris, D., Bandte, O. and Schrage, D., 1996, September 4-6, "Application of Probabilistic
Methods for the Determination of an Economically Robust HSCT Configuration," 6th
AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, Bellevue, WA, AIAA, Vol. 2, pp. 968-978. AIAA-96-4090-CP.
McCullers, L. A., 1993, "Flight Optimization System, User's Guide, Version 5.7," NASA
Langley Research Center, Hampton, VA.
McDermott, C. M. and Stock, G. N., 1994, "The Use of Common Parts and Designs in High-Tech
Industries: A Strategic Approach," Production and Inventory Management
Journal, Vol. 35, No. 3, pp. 65-68.
McKay, A., Erens, F. and Bloor, M. S., 1996, "Relating Product Definition and Product
Variety," Research in Engineering Design, Vol. 8, No. 2, pp. 6380.
McKay, M. D., Beckman, R. J. and Conover, W. J., 1979, "A Comparison of Three Methods
for Selecting Values of Input Variables in the Analysis of Output from a Computer
Code," Technometrics, Vol. 21, No. 2, pp. 239-245.