
The current issue and full text archive of this journal is available at www.emeraldinsight.com/2040-4166.htm

Improving process improvement: executing the analyze and improve phases of DMAIC better

Pathik Mandal
Indian Statistical Institute, Kolkata, India

Abstract
Purpose – This paper aims to highlight that a define, measure, analyze, improve, and control
(DMAIC) project should be carried out keeping the broader business goal of achieving continuous
improvement in mind and that a design of experiment (DOE) based improvement approach should be
preferred to achieve this goal.
Design/methodology/approach – “Ease of control” of the improved process and “gain in process
knowledge” from a DMAIC study are identified as two measures for judging the contribution of a
DMAIC project towards continuous improvement. Various improvement approaches are classified
into seven groups and the likely impact of each of these seven approaches on the above two quality
measures are discussed.
Findings – The improvement approach adopted during the improve phase is partially determined by
the nature of the root cause(s) – type X or type y. A y-type root cause leads to the adoption of the
“innovation-prioritization” approach, which is very popular but has many limitations. Accordingly, an
“analysis strategy” is proposed for efficient identification of the X-type root causes.
Practical implications – The above findings suggest that one should try to identify as many
X-type root causes as possible. However, in case of service and transactional processes one finds it
difficult to do so. Much more research is necessary in the area of service process design before the path
of continuous improvement of such processes can be embarked on effectively.
Originality/value – It is expected that an awareness of the broader goal of continuous improvement,
the classification of the end states of the analyze phase, the proposed “analysis strategy” and the
practical guidelines provided for selecting an appropriate improvement approach will be helpful in
executing the analyze and improve phases of DMAIC better.
Keywords DMAIC project quality measures, Analysis strategy, Improvement approaches,
Improve phase road map, Service and transactional process improvement, Continuous improvement,
Process analysis
Paper type Research paper

1. Introduction
In Six Sigma, process improvement projects are carried out following the well-known
define-measure-analyze-improve-control (DMAIC) or define-measure-analyze-
design-verify (DMADV) disciplines (Harry and Schroeder, 2000). The discipline of
DMAIC is used to improve an existing process/product, while DMADV is used for
designing a new process/product. Many organizations all over the world have derived
significant financial benefits from effective deployment of Six Sigma and many of these
are well documented by Harry and Schroeder (2000). However, our experience of guiding
and evaluating many Six Sigma projects suggests that the manner in which process
improvement is achieved in many DMAIC studies is far from satisfactory. For example,
the design of experiment (DOE) is supposed to be the primary tool to be used during the
improve phase of DMAIC. But only very few DMAIC studies make use of DOE for
process optimization. Moreover, even when DOE is used, the application is usually a
low-order full factorial experiment. This scenario is more prevalent in the case of service
process improvement (Table III). In fact, there are Six Sigma Black Belt courses targeted
towards service, information technology and related industries that do not include DOE
as a subject in their course curricula! These service providers perhaps believe that
DOE, as a Six Sigma tool, may not be useful to their customers. Such an assumption is
obviously not true, since we already have many DOE success stories in
non-manufacturing set-ups (Antony et al., 2011; Almquist and Wyner, 2001). Our
experience also suggests that there are both opportunities and challenges in using DOE
for improving non-manufacturing processes (Section 7).

[International Journal of Lean Six Sigma, Vol. 3 No. 3, 2012, pp. 231-250. © Emerald Group Publishing Limited, ISSN 2040-4166. DOI 10.1108/20401461211282727]
It must however be emphasized that the absence of DOE per se is not of much
concern. There can be cases where the use of DOE may not be required or may not be
feasible even if desired. But the widespread neglect and failure to harness the power of
DOE in DMAIC projects is definitely a cause for major concern. Here an attempt has been
made to address this concern by redefining the larger goals of a process improvement
exercise and illustrating the role of DOE and the related concepts in achieving these
goals. It is argued that the judicious use of DOE can facilitate continuous process
improvement. The pitfalls of a non-DOE based approach for improving a process and
the safeguards needed to avoid these pitfalls are also discussed in detail.

Overview
A brief summary of the rest of the material is given below in a question-answer format.
How do we measure the quality of a DMAIC project? It is argued that the quality of a
DMAIC project should be judged based on its contribution towards continuous
improvement. Accordingly, “quantum gain in process knowledge” and “ease of control
of the improved process” are proposed as the two important DMAIC project quality
measures (Section 2).
What is the present status of DMAIC projects with respect to the above two measures?
It is observed that in a large majority of cases, DMAIC projects are not focused on
achieving the above two goals. Some evidence from Indian industries is also provided
(Table III) in support of this observation.
What do we mean by executing a DMAIC study better? It means improving the
present status so that one remains in the path of continuous improvement (as judged
by the above two measures). All the five phases of DMAIC play an important role in
achieving this goal. But our scope here is limited to the two most critical phases, i.e. the
analyze and the improve phase.
How to execute the analyze phase better? The related issues are discussed in Sections
3 and 4. In Section 3, we discuss the various elements of process analysis as a
foundation for Section 4 (and also for Section 5). The proposed “analysis strategy”
(Section 4) is expected to lead us to the desirable end state of the analyze phase efficiently.
How to execute the improve phase better? Various improvement approaches are
classified in seven groups (Section 5). It is argued that the approach of robust design
using a dynamic characteristic should be the most preferred approach for process
improvement. The innovation-prioritization approach, which is most commonly used
in practice, particularly for improving non-manufacturing processes has many pitfalls
and hence should be avoided wherever possible. The “risk analysis” step of the
improve phase (Section 6) becomes critical whenever this latter approach is used.
Apart from the above, the challenges in adopting the robust design approach for
improving service and transactional processes are discussed in Section 7. The main
conclusions of the study are summarized in Section 8, which also indicates the possible
directions of future research for improving service processes.

2. DMAIC project quality measures


The amount of savings generated by a project is obviously important for both the
employer and the project team. But the amount of savings generated can hardly be used
to judge the quality of a DMAIC project. The spirit of Six Sigma itself tells us that we
ought to focus on the process! A careful analysis of the DMAIC process reveals that
the five steps of DMAIC are not independent. Each step has a carryover effect on the
subsequent steps. Keeping this in mind we shall take “ease of control” of the improved
process as our first DMAIC process measure. The ease of control of the improved process
is important since this determines our ability to hold on to the gains. However, it is also
important to realize that the ease of control is determined less by the control solutions
proposed than by the way an improvement is sought to be achieved. For example, it may
be both very effective and easy to control temperature (a process input variable) by
installing a temperature controller. But this will not minimize the overall process control
load since the functioning of the controller itself needs to be controlled. In particular,
there can be failures of the controller. Therefore, it will be best to make the process robust
enough so that there is no need to control the temperature. The ease of control is
particularly important in case of service and transactional processes. It is not uncommon
to find that we come back to the original process after several rounds of improvement!
This brings us to the important issue of foundation for continuous improvement and for
this it is necessary to look even beyond the control phase. We believe that the process
knowledge gained during an improvement cycle and the stability of the current process
provide an appropriate foundation for the next improvement cycle. So we shall take
“quantum gain in process knowledge” as our second DMAIC process measure. Note that
the knowledge gained from a process improvement study will be particularly valuable, if it
is portable. The greater the portability, say from one process to another, the greater will be
its utility. To illustrate, consider the results of three different case studies shown in
Figure 1. It is seen that the temperature of an object being deformed has great impact on the
robustness of a wide range of deformation processes – backward extrusion of zinc callot
producing dry cell cans, forward extrusion of soap granules to form soap billets, and hot
rolling of steel coils. In fact, the knowledge gained from the behavior of the zinc extrusion
process was used later to improve the other two processes. Of course, the only basis for
successful utilization of prior experience in this case was just a broad understanding of the
deformation process, so the latter two attempts might well not have succeeded. This
suggests the need for methods which can generate process knowledge having objective
basis of its portability, i.e. we need methods that will allow us to judge the portability of the
optimal solution along a predefined path. This is discussed later (Section 5) under
experimentation using dynamic characteristic (improvement approach A4).
To summarize, “ease of control” for sustainable improvement and “quantum gain in
process knowledge” for continuous improvement are two important DMAIC project
quality measures. It is extremely important to keep these two measures in mind while
selecting the improvement approach and making a variety of other decisions in course
of the DMAIC journey.
[Figure 1. Effect of processing temperature on robustness of three deformation processes: (a) backward extrusion of zinc alloy; (b) forward extrusion of soap granules; (c) hot rolling of steel. The response axes are average weight (gm), average bottom thickness (mm) and average coil diameter (mm); the temperature axes are chute, extrusion and soaking temperature (°C). The process setting in each case remained the same during the period of data collection.]

3. Elements of process analysis


Types of factors
The process model Y = f(X), where the Xs are the variables/factors affecting the process
response Y, is at the heart of Six Sigma. However, not all Xs are alike. It is important
to recognize the various types of Xs and their roles in a given process in order to have a
deeper understanding of the process. Here we shall define only the five types of factors
that will be sufficient for process analysis. However, for successful experimentation it
may be necessary to include other types of factors in the experiment. See Taguchi (1987)
for a detailed discussion of the various types of factors used in experiments.
Control factor. The factors which are controlled during operation at their best levels
specified by the designer are called control factors.
Signal factor. The factors whose levels are varied by the user/experimenter to
achieve or specify one or more process targets are called signal factors. The user can be
the user of a product or of a process. If the output characteristic has more than one
target then it is called a dynamic characteristic. For example, the turning radius of a
car is a dynamic characteristic and the steering angle is the corresponding signal factor.
The characteristics which are not dynamic are called non-dynamic or static
characteristics. The non-dynamic characteristics are further classified into various
categories such as nominal-the-best (NTB), larger-the-better and smaller-the-better type.
A signal factor is also used to control an NTB type of characteristic. For example, the draft
of rolling (a signal factor) is varied to control the thickness (an NTB type of characteristic)
of the sheets obtained after rolling.
Note that there are processes in which the signal factor (e.g. the diameter of a
forging die) may be treated as a control factor during operation but the factor can be
varied during experimentation for specifying various targets. Such a factor will be
called a “potential signal factor”.
Adjustment factor. The factors used by the designer/experimenter to adjust either
the process mean (in case of a NTB type of characteristic) or the sensitivity of the
signal (in case of a dynamic characteristic) are called adjustment factors. For example,
the gear ratio of an automobile steering system may be used as an adjustment factor
for achieving the desired level of sensitivity of the signal generated by the steering
angle. Note that the factor used by the designer/experimenter as an adjustment factor
is always treated as a control factor during operation. Thus, from an operational point
of view, we can only talk of a “potential adjustment factor”, i.e. a special type of control
factor that is suitable for process adjustment.
Block factor. These are factors for which it is not possible to operate the process at
their best levels, even if such levels exist. This is because the variation of these factors
is an inherent feature of the process. For example, the shift of production, the
identification code of a smelter and the molding line used to prepare the mould are all
block factors related to a casting process.
Noise factor. The set of all uncontrolled factors responsible for variation in process
response will be called noise factors. Ideally the set of noise factors together is expected
to have a random impact on the process response and this idealization is used to judge
the stability and capability of a process.
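The idealization of random noise is what justifies routine stability and capability assessment. As a minimal illustration (the data and specification limits below are hypothetical, not from the paper), Cp and Cpk of a stable process can be estimated as follows:

```python
import statistics

def capability(data, lsl, usl):
    """Estimate Cp and Cpk from a sample of a stable process.

    Valid only when the noise factors act randomly, i.e. the
    process is in statistical control.
    """
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical weights (gm) of cast balls, spec limits 99-103 gm
weights = [100.8, 101.2, 100.5, 101.0, 100.9, 101.3, 100.7, 101.1]
cp, cpk = capability(weights, lsl=99.0, usl=103.0)
```

If the noise is not random (e.g. a systematic cavity-to-cavity difference), these indices overstate what the process can actually hold.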
An example of the above five types of factors belonging to a particular process is
given in Table I. Further details on the classification of factors and the roles of signal
and adjustment factors are available elsewhere (Mandal, 2011).

Table I. Classification of factors affecting the performance of a multi-cavity injection molding process

  Signal factor:       mould dimension (b)
  Adjustment factor:   holding pressure (b)
  Control factors:     melt temperature, mould temperature, injection velocity, type of material, holding time, type of feed system, etc.
  Block factor:        cavity location
  Noise factors (a):   small variations of the control factors, environmental condition, operator skill, etc.

Notes: (a) In case of a dynamic characteristic experiment, the cavity location and order of production may be treated as systematic noise factors. (b) If, during operation, the holding pressure is used for adjusting part dimension to the target, then it becomes an operational signal factor; however, for the purpose of experimentation, the mould dimension should be used as the signal factor. Experimental results and operational convenience may suggest that either holding pressure (most likely) or some other control factor be used as an adjustment factor.

Types of root causes
The root cause of a problem is defined as the cause which, when removed, prevents a
recurrence of the problem. For the purpose of our discussion it will be useful
to distinguish between two types of root causes: type X and type y. A root cause will be
3,3 called of type X if it refers to the non-optimality of an individual factor and the optimality
can be achieved by taking direct action on the factor (usually following a known
procedure). Note that since the control action is direct, it is expected to be very effective
and also will not involve any changes in the basic system design. On the other hand, a
y-type root cause refers to the occurrence or non-occurrence of an event. Thus, the action
to remove the root cause will necessarily be indirect in nature. However, the action may
or may not involve any changes in the system design. For example, the delay in getting
the signature of an intermediate authority may be a y-type root cause for the delay in
making payments. A possible action to remove the root cause may be to send a reminder
at an appropriate time. Such a control action does not involve any change in the system
design. However, in order to remove the root cause, one may also think of changing the
system design so that there will be no need for the signature. It is obvious that the
reminder may not be effective while the other solution, i.e. to modify the system design
will remove the root cause with certainty but will also involve the risk of affecting other
critical to quality characteristics. This is the general feature of removing y-type root
causes. There will usually be several options to remove the cause but the
solutions will differ in terms of their effectiveness, cost of implementation and the
risks involved.
However, so far as the y-type root causes related to human error are concerned, the
approach of “fool proofing” (Shingo, 1986) is likely to provide an effective, economic, as
well as risk-free solution, particularly in case of manufacturing problems.
Note that the concept of a y-type root cause is just an extension of the “big Y – little y”
framework, where the big Y refers to a business goal and the little y to the corresponding
project goal. The X and y type root causes lead us to the Y = f(X, y) framework, where Y
is the project Y.

End states of the analyze phase


The analyze phase prepares the foundation for the improve phase. At the end of the
analyze phase one may be at any one of the following three states: (S1) the root cause(s)
of the problem are established; (S2) the important factors, out of which at least one
is a control factor, are identified but the effects of all the factors are yet to be fully
established; and (S3) the theoretical process model is identified/developed and its
suitability for process optimization is established. The analytical/simulation model
available at S3 may be semi-empirical in nature, but cannot be a regression type pure
empirical model. In case an adequate regression type model is available at the end of
the analyze phase, then the end state will be designated as S2 so that the possibility of
experimentation during the improve phase can be examined.
In the above classification, if the state S1 consists of too many (say > 4) y-type root
causes or the state S2 consists of too few (say < 3) un-established X-type root causes
then the process characterization may be considered as inadequate. In order to
minimize/avoid the risk associated with the removal of too many y-type root causes,
we may either think of breaking the problem into several sub-problems or explore the
possibility of redesigning the process completely.
It is obvious that we would always like to be at S3 since this will make the
optimization task easier, particularly when field experimentation is difficult. However,
this does not mean that the state S3 is the most desirable state in the sense that it will
provide the best possible result. In fact, the state S2 may provide a better optimum than
S3 if the assumptions made while developing the theoretical process model are not
satisfied. Nevertheless, the importance of a theoretical model can hardly be
overemphasized and the project team should explore the possibility of using any
theoretical model related to the process that is readily available.
To illustrate, consider the problem of high measurement error while measuring the
concentration of a solution using a spectrophotometer. It is well known that the
fundamental model underlying the measurement process is Beer's law, which may
be written as log(P0/P) = A = εcl, where P0 is the monochromatic radiant power
incident on the medium (sample solution), P is the monochromatic radiant power
transmitted by the medium, ε is the molar absorption coefficient, c is the concentration of
the solution, l is the absorption path length and A is called the absorbance. However,
arriving at S3 with such a simple process model as Beer's law may not be adequate
for optimizing the measurement process. The effects of a much larger number of root
causes related to the five parameters of the model may need to be examined, which may
be well beyond the scope of any established theory. Thus, the end state of the analyze
phase will be either S1 or S2. Note that although the fundamental model is inadequate for
process optimization, it can still be used to conduct a dynamic characteristic experiment
during the improve phase using A = βc as the ideal function of the process, where β is
called the measurement sensitivity (see Section 5 for further discussion of dynamic
characteristic experiments).
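As a sketch of how the ideal function A = βc can be used, the measurement sensitivity β may be estimated from calibration data by least squares through the origin (β̂ = Σ cA / Σ c²). The calibration readings below are hypothetical, not from the paper:

```python
def fit_sensitivity(conc, absorbance):
    """Least-squares slope through the origin for the ideal
    function A = beta * c (Beer's law with fixed path length)."""
    num = sum(c * a for c, a in zip(conc, absorbance))
    den = sum(c * c for c in conc)
    return num / den

# Hypothetical calibration: concentrations (mol/L) and absorbances
c = [0.01, 0.02, 0.03, 0.04]
a = [0.11, 0.20, 0.31, 0.40]
beta = fit_sensitivity(c, a)  # estimated measurement sensitivity
```

The closer the residuals about the fitted line are to pure random noise, the more robust the measurement process is with respect to this ideal function.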

4. Analysis strategy
Having defined the elements of process analysis in the previous section, we now
propose an “analysis strategy” (Figure 2) that is expected to take us to either S1 or S2 in
the best possible manner, i.e. either the root causes or the important factors, a few of
which may have hitherto been ignored, are identified efficiently. The state S3 is not
considered here since it requires a completely different approach. It may be noted from
Figure 2 that the proposed strategy is just an ordering of the five types of factors
discussed in Section 3. The rationale behind the proposed ordering is explained below
with the help of case examples.

[Figure 2. Analysis strategy: (1) (potential) signal factor; (2) potential adjustment factor; (3) block factor; (4) control factor; (5) noise factor.]
In practice we encounter many problems having a single root cause. If any such problem
results from the unwanted variation of the signal factor, it should be easy to identify and
correct such problems. To illustrate, consider a casting process where about 22 percent
of the balls it produced were getting rejected due to lower weight than specified. Now,
since porosity and blow holes are known to be very common defects in aluminium
castings, the concerned persons held a strong belief that the lower weight was due to the
high level of porosity and/or blow holes present in the castings. However, the rejection
level remained the same despite many efforts to reduce the level of porosity and blow
holes. At this stage a systematic study was initiated. The study revealed that the lower
weight was not due to the presence of porosity/blow holes as suspected but because the
diameter of two of the ten cavities was smaller than the rest! In other words, the
unwanted variation in the potential signal factor (diameter of the mould cavity) was
found to be the root cause. The corrective action, of course, was straightforward. Note
that thinking in terms of porosity amounts to thinking in terms of control and noise
factors, which are at the bottom of the analysis strategy (Figure 2). Had we begun our
search at the beginning, i.e. with the signal factor, the problem would not have lasted
for long.
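The comparison that cracked the casting problem can be sketched as a simple group-by-cavity analysis, with cavity location treated as a block factor. The weights below are hypothetical; the logic (flag cavities whose average falls well below the overall average) is the point:

```python
from collections import defaultdict

def suspect_cavities(records, tolerance):
    """Group ball weights by cavity and flag cavities whose average
    falls below the overall average by more than `tolerance`."""
    by_cavity = defaultdict(list)
    for cavity, weight in records:
        by_cavity[cavity].append(weight)
    overall = sum(w for _, w in records) / len(records)
    return sorted(
        c for c, ws in by_cavity.items()
        if sum(ws) / len(ws) < overall - tolerance
    )

# Hypothetical data: (cavity, weight in gm); cavities 4 and 5 undersized
records = [(1, 101.0), (1, 100.9), (2, 101.1), (2, 101.0),
           (3, 100.8), (3, 101.2), (4, 98.1), (4, 98.3),
           (5, 98.0), (5, 98.2)]
print(suspect_cavities(records, tolerance=1.0))  # → [4, 5]
```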
Let us now consider another example (iodization of edible salt) to highlight the
importance of potential adjustment factors. In this process, an iodine solution is
injected into a centrifuge where it gets mixed with the brine solution. It is important to
control the variation of iodine content in the salt to minimize consumption of iodine. In
practice, the iodine content is controlled by varying the flow rate of the iodine solution
(the signal factor). It is easy to see that the concentration of the iodine solution can be
used as an adjustment factor. In this particular case since the adjustment error was
found to be the major source of variation, the concentration of the iodine solution was
reduced and this reduced the process variation considerably.
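Why reducing the concentration helps can be seen from simple error propagation: with dosage = flow × concentration, a given flow-adjustment error is transmitted to the dosage in proportion to the concentration. A simulation sketch with hypothetical numbers:

```python
import random
import statistics

random.seed(1)

def dosage_sd(target, concentration, flow_sd, n=10000):
    """Std dev of iodine dosage when the nominal flow rate is set
    to hit `target` but carries a random adjustment error."""
    nominal_flow = target / concentration
    doses = [(nominal_flow + random.gauss(0, flow_sd)) * concentration
             for _ in range(n)]
    return statistics.stdev(doses)

# Same target dosage and the same flow-adjustment error, two concentrations
high = dosage_sd(target=50.0, concentration=10.0, flow_sd=0.2)
low = dosage_sd(target=50.0, concentration=5.0, flow_sd=0.2)
# Halving the concentration roughly halves the transmitted variation
```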
The block factors have been placed ahead of the control factors primarily because of
their ability to throw up important but unknown control factors. For example, the clue
to the solution of the casting problem discussed above was obtained by treating the
cavity location as a block factor and comparing the average weight of ten balls from
each cavity. In general, the variations of control factors are usually monitored under
routine process checks. So if the main effect of any control factor is responsible for a
problem, it usually gets identified and corrected. Any problem arising out of the
interactions among the control factors is however difficult to solve.
The implication of placing the block factors ahead of the control factors is that the
cause-and-effect diagram (if desired) should be constructed only after studying the
block factors and not at the beginning of the analyze phase. Note that “brainstorming”
is a tool for promoting creative thinking, while identification of signal and adjustment
factors calls for critical thinking and problem formulation skills. Note further that a
good cause-and-effect diagram should contain a large number of potential control
factors, identified through brainstorming or otherwise. An analysis of the block factors
first will facilitate this task. Of course, some important but not so obvious block factors
may also be identified through brainstorming. In that case, these should be analyzed
first before moving on to the control factors.
The noise factors have been placed last for the obvious reason of a poor cost-benefit ratio.
However, if the approach of robust design is adopted then selection of the appropriate
noise conditions becomes as important as any other aspect of experimentation.
5. Improvement approaches
Depending on the status (S1-S3) of the problem at the end of the analyze phase, the
nature of the process and our ability to innovate, we may adopt any one or a hybrid
version of the seven basic improvement approaches as shown in Figure 3. The
rationale behind such a classification of the improvement approaches is explained in
Figure 4. Here we will not discuss the methodological aspects of the approaches.
However, the concepts and the special features of each of the seven approaches are
discussed below in brief.

Obvious solution (A1) and innovation-prioritization approach (A2)


It is seen from Figure 3 that the approaches A1 and A2 originate from the state S1. The
obvious solution approach may be used when the root cause is of type X and a type y
root cause leads us to the innovation-prioritization approach. Since the solution
alternatives in case of a y-type root cause are unknown, the possible solutions are first
identified usually through brainstorming and then the best solution is selected, as
objectively as possible, using a tool such as the prioritization matrix.
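For readers unfamiliar with the tool, a prioritization matrix simply scores each candidate solution against weighted criteria and ranks the totals. A minimal sketch (the criteria, weights and scores below are hypothetical):

```python
def prioritize(solutions, weights):
    """Rank candidate solutions by weighted criterion scores.

    `solutions` maps a solution name to its per-criterion scores;
    `weights` gives the relative importance of each criterion.
    """
    totals = {
        name: sum(w * s for w, s in zip(weights, scores))
        for name, scores in solutions.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Criteria: effectiveness, implementation cost (higher = cheaper), risk (higher = safer)
weights = [0.5, 0.3, 0.2]
solutions = {
    "send automatic reminder": [3, 5, 5],
    "remove signature step":   [5, 2, 2],
}
ranking = prioritize(solutions, weights)  # best-scoring solution first
```

The objectivity of the result is, of course, only as good as the scores fed in, which is why validation with data remains essential.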
Our previous casting problem was solved using the “obvious solution” approach.
The two undersized cavities were blocked till a new pattern became available. It should
however be noted that in this approach, it is the solution which is obvious but the
identification of the root cause may not be so at all.

[Figure 3. Seven improvement approaches (A1-A7) corresponding to three end states (S1-S3) of the analyze phase. The flowchart routes from the end states (S1: root cause(s) established; S2: important factors shortlisted; S3: theoretical Y = f(X) established) through questions on the nature of the root cause(s) (type X or type y), the feasibility of experimentation, and the type of characteristic (static or dynamic), to the approaches. Notes: A1 – obvious solution; A2 – innovation-prioritization; A3 – experimentation (static characteristic); A4 – experimentation (dynamic characteristic); A5 – non-experimental straightforward data analysis; A6 – non-experimental innovative approach; A7 – quantitative optimization.]
[Figure 4. Classification of the seven improvement approaches (A1-A7) from a methodological perspective. The tree branches on the nature of process analysis (theoretical/semi-empirical → A7; empirical otherwise), on whether the optimal solution is known (→ A1) or unknown, and on the nature of the process data: experimental data from simple comparative experiments (→ A2) or from other experiments with a static (→ A3) or dynamic (→ A4) characteristic; non-experimental (happenstance) data analyzed in a straightforward (→ A5) or innovative (→ A6) manner. Note: usually the comparison is made with the existing process condition; situations requiring multiple comparisons (for comparing more than two solutions) are rare.]

The innovation-prioritization approach (A2) is very popular among problem-solving
teams, whether the forum is quality circle or Six Sigma. It is true that this
approach has great potential to make significant process improvement since the process
changes may be made at the system design level. However, the approach has its
limitations too. It has already been noted that there are risks associated
with making changes in the system design and the risk-reward ratio may not be
favorable if:
• there are too many y-type root causes to be removed;
• the proposed changes are not directly related to the root cause; and
• the cost of validating the effectiveness of the solution itself is very high.

Further, so far as the ease of control and the gain in process knowledge are concerned,
these will depend on the nature of the solution and the manner in which the solution is
obtained. But more often than not the solution is likely to score poorly in these respects.
Apart from the inherent weaknesses as above, our experience suggests that the
“innovation-prioritization” approach is not used properly in many cases. The most
commonly observed flaw is that neither the genuineness of the root cause nor the
effectiveness of the proposed solution is properly validated with data. For example,
the root cause is arrived at only through a mental why-why analysis starting from a
symptom of the problem or from the significance of a block factor. Note also that if the
final solution consists of several (non-validated) process modifications aimed at
eliminating one or more root causes then implementing all the changes together is
particularly problematic. This is because the solution set may work as a whole but we
cannot say anything about the effectiveness of the individual solutions/modifications.
An effective way to validate several solutions will be to conduct an experiment using
the individual solutions as factors.
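Treating the individual solutions as two-level (off/on) factors makes their separate effects estimable. For two candidate solutions this is a 2^2 experiment; a sketch with hypothetical responses:

```python
def main_effects(runs):
    """Estimate main effects from a 2^2 experiment.

    `runs` maps the (solution_A_on, solution_B_on) setting to the
    observed response (e.g. payment delay in days).
    """
    effect_a = (runs[(1, 0)] + runs[(1, 1)] - runs[(0, 0)] - runs[(0, 1)]) / 2
    effect_b = (runs[(0, 1)] + runs[(1, 1)] - runs[(0, 0)] - runs[(1, 0)]) / 2
    return effect_a, effect_b

# Hypothetical responses: delay (days) with each solution off/on
runs = {(0, 0): 10.0, (1, 0): 7.0, (0, 1): 9.5, (1, 1): 6.0}
a, b = main_effects(runs)  # negative effect = that solution reduces delay
```

Here the experiment would show that solution A carries almost all of the improvement, information that implementing both changes together can never provide.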

Experimentation using static characteristic (A3)


A large number of excellent books on DOE (Montgomery, 1984; Taguchi, 1987) are
readily available in the market. So we shall not discuss the principles and methodologies
of DOE here. However, from the perspective of this article a few comments on the issue of
interaction will be worthwhile. Traditionally the issue of interaction is dealt with from
the standpoint of cost of experimentation. But this issue is also related to our two quality
measures identified earlier. The criterion of “ease of control” demands that the optimized
process be robust enough. But the presence of large interactions indicates otherwise. So
instead of estimating the interactions, our efforts should be directed towards arriving at
a process condition where the interaction effect of the factors is minimal. Note further
that suspecting the presence of interactions does not require any process knowledge
since interactions are almost always present. Suspecting the presence of a few selected
interactions requires some process understanding and, of course, this understanding is
enriched further when we know how to get rid of them. Thus, the use of the sliding-level
technique for eliminating interactions becomes highly desirable, although Hamada and
Wu (1995) have criticized its use on the ground that the interactions may not be
eliminated completely. That may very well happen but should not be of much concern
provided the gain in process knowledge and the resulting simplification of the effect
curves are sufficient to choose the direction for further process improvement.
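A toy numeric sketch may help convey the sliding-level idea. The response function below is invented; the point is only that recoding the levels of factor B relative to factor A can make the 2x2 interaction contrast vanish.

```python
# A toy illustration of the sliding-level idea. The response function is
# invented: the best level of factor B shifts with factor A (a strong A x B
# interaction). Recoding B relative to A removes the interaction.

def response(a, b):
    # Invented process: optimum at b = 2 + 0.5 * a, so the effect of b
    # depends on the level of a.
    return 10.0 - (b - (2.0 + 0.5 * a)) ** 2

def contrast_2x2(y11, y10, y01, y00):
    """Classical 2x2 interaction contrast."""
    return (y11 - y10 - y01 + y00) / 2

# Raw coding: the same two b levels (1 and 3) are used at both a levels.
raw = contrast_2x2(response(4, 3), response(4, 1), response(0, 3), response(0, 1))

# Sliding levels: at each a level, b is set at 2 + 0.5 * a plus/minus an offset.
def slid(a, offset):
    return response(a, 2.0 + 0.5 * a + offset)

slid_contrast = contrast_2x2(slid(4, 1), slid(4, -1), slid(0, 1), slid(0, -1))

print(f"interaction contrast, raw coding:     {raw:+.2f}")
print(f"interaction contrast, sliding levels: {slid_contrast:+.2f}")
```

In the raw coding the contrast is large; with the slid levels it is zero, and the effect curves of the recoded factors simplify accordingly.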

Experimentation using dynamic characteristic (A4)


This is perhaps the most effective experimental approach for process improvement.
The details of this approach are available in Phadke (1989) and Taguchi et al. (2000,
2005). Here we shall only highlight its importance with the help of an example.
Consider an injection molding process producing the plastic bodies of electrical
connectors, where we are facing the problem of warpage in a particular type of
connector. How should we approach the problem? Note that the occurrence of
warpage even in a single type of connector implies that the process is not robust
enough. It is just a matter of time before this lack of robustness is manifested in some
form (not necessarily warpage) in some other connector. So we must not lose sight of
this bigger picture while selecting an appropriate approach for process improvement.
But how should we proceed to make an overall improvement of the injection molding
process? Assume that a connector of a given shape and size has ten important
dimensions. Apart from meeting the specified dimensions and the strength
requirement, each connector should also be free from defects like cracks, dents and
warpage. It is obvious that if the ten die dimensions, along with the others (like molding
temperature, holding time and cooling conditions), are selected as control factors for
optimizing the 14 or more responses through appropriate trade-offs, then it will be a very
difficult task indeed. Moreover, even if the process is optimized successfully for a given
connector, the same exercise needs to be repeated for all the other types of connectors
that the company makes. Thus, the process optimization task becomes huge. An
effective and efficient solution to the above problem is possible using the dynamic
characteristic approach (A4), i.e. by using the die dimension as a signal factor and the
corresponding part dimension as the process response.
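As a hedged numerical sketch of the dynamic-characteristic calculation, the code below fits the ideal function y = beta * M (part dimension proportional to die dimension) and computes the standard dynamic S/N ratio, 10 log10(beta^2 / MSE). The die and part dimensions are invented for illustration.

```python
import math

# Invented data for the injection molding example: the die dimension M is the
# signal factor and the molded part dimension y is the response. The ideal
# function is y = beta * M (through the origin).
signal = [10.0, 10.0, 20.0, 20.0, 30.0, 30.0]      # die dimensions (mm)
part   = [9.82, 9.86, 19.70, 19.64, 29.52, 29.46]  # part dimensions (mm)

# Least-squares slope for a zero-intercept model: beta = sum(M*y) / sum(M^2)
beta = sum(m * y for m, y in zip(signal, part)) / sum(m * m for m in signal)

# Error variance around the fitted ideal function
residuals = [y - beta * m for m, y in zip(signal, part)]
mse = sum(r * r for r in residuals) / (len(residuals) - 1)

# Dynamic S/N ratio (larger is better)
sn_db = 10 * math.log10(beta ** 2 / mse)

print(f"beta = {beta:.4f}, S/N = {sn_db:.1f} dB")
```

In the experiment proper, this S/N ratio would be computed for each control-factor combination, and a single optimization would then cover every connector made with the same process, whatever its nominal dimensions.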

Non-experimental straightforward (A5) and innovative (A6) data analytic approach


The non-experimental straightforward approach (A5) may be described in terms of the
tools used in it, such as multiple regression analysis (Draper and Smith, 1996),
discriminant analysis and other multivariate methods (Timm, 2002). So far as the
innovative data analytic approach (A6) is concerned, it is very difficult to specify what
this approach will entail since it may not be possible to arrive at a consensus on
whether a particular approach is sufficiently innovative or not. Accordingly, instead of
discussing the approach itself, we shall pose below a problem in general terms. Such a
problem is frequently encountered in practice. In our opinion, any data based approach
that can efficiently solve such problems will qualify as A6.
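To make the A5 route concrete, here is a minimal ordinary least-squares sketch using only the Python standard library. The data are invented and generated exactly from a known linear model, so the fit recovers the coefficients; real process data would of course carry noise and require residual diagnostics.

```python
# A minimal sketch of the A5 approach: fitting a multiple regression model
# Y = b0 + b1*X1 + b2*X2 to process data via the normal equations.
# The data and variable names are invented for illustration.

def solve(a, b):
    """Solve the linear system a x = b by Gauss-Jordan elimination with pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_ols(x_rows, y):
    """Ordinary least squares via the normal equations X'X b = X'y."""
    rows = [[1.0] + list(r) for r in x_rows]          # add intercept column
    p = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    return solve(xtx, xty)

# Invented data: Y generated exactly as 2 + 0.5*X1 - 1.5*X2, so the fit is exact.
x = [(1, 0), (2, 1), (3, 1), (4, 2), (5, 3), (6, 2)]
y = [2 + 0.5 * x1 - 1.5 * x2 for x1, x2 in x]
b0, b1, b2 = fit_ols(x, y)
print(f"Y = {b0:.2f} + {b1:.2f}*X1 + {b2:.2f}*X2")
```

In practice one would use dedicated statistical software and check prediction error, residual behavior and multicollinearity before acting on such a model, as required by the A5 guidelines in Table II.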

A difficult problem
Consider a highly unstable continuous chemical process, which needs to be improved.
The improvement team takes a 30,000 ft. view of the process but still finds the process to
be unstable. But the team somehow estimates the approximate sigma level of the process
and then proceeds to the analyze phase. In this phase, quite a few control factors are
identified easily, since these are monitored on a regular basis. However, the team finds it
extremely difficult to establish the effects of these control factors. They also observe that
the levels of some of these factors are adjusted frequently in response to the unstable
behavior of the process. Although the standard operating limits are available, these
limits are sometimes violated in the course of such adjustments. More surprisingly, in
most cases, such violations apparently had no impact on the process performance.
Occasionally, however, the process performance did improve after the adjustments.
Note that the status of the project at the end of the analyze phase is S2, i.e. the
important factors are short listed but their effects are yet to be fully established. Also,
the feasibility of field experimentation is ruled out given the nature of the process. So
the team tries to develop a regression model based on the process data collected from
past QC records. Some live data are also collected by observing the process. However,
the team fails to develop any useful model. But it becomes clear to the team that there
are strong interactions among the Xs, the noise factors included. What should the team
do next? The team needs to be truly innovative to tackle the interactions
among the Xs.
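A small simulated illustration hints at what the team is up against. In the invented process below, Y depends almost entirely on the product X1*X2, so neither X correlates with Y on its own, while the derived variable X1*X2 does; an additive main-effects model would find "nothing important" here.

```python
import random

# Invented illustration: the response depends almost entirely on the product
# X1*X2, so neither X alone correlates with Y, while the derived variable
# X1*X2 does. This is the kind of pattern a main-effects-only analysis misses.
random.seed(1)
xs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
ys = [5.0 + 4.0 * x1 * x2 + random.gauss(0, 0.2) for x1, x2 in xs]

def corr(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

r1 = corr([x1 for x1, _ in xs], ys)         # X1 vs Y: near zero
r2 = corr([x2 for _, x2 in xs], ys)         # X2 vs Y: near zero
r12 = corr([x1 * x2 for x1, x2 in xs], ys)  # X1*X2 vs Y: strong

print(f"corr(X1, Y) = {r1:+.2f}, corr(X2, Y) = {r2:+.2f}, "
      f"corr(X1*X2, Y) = {r12:+.2f}")
```

The innovative step is precisely to guess, from process understanding, which derived variables (products, ratios, lags) to construct and test.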

Optimization using quantitative optimization techniques (A7)


The process model developed from theoretical considerations may be either analytic or
suitable for simulation. Once an analytical model becomes available, the same can
be used to optimize the process employing a variety of optimization techniques
(Taguchi and Wu, 1985; Hou, 1998; Lehman et al., 2004). But whatever may be the
technique chosen for optimization, it is important to keep in mind that "minimizing the
variation from target is not the sole design objective. The reduction of quality loss
should be balanced with the increase, if any, of production and operating costs." This
comment also applies to any technique chosen under A3-A6.
The simulation models, the discrete event simulation models in particular, are
especially useful when the reduction of process cycle time is the primary project
objective. The “proceedings of the winter conferences” contain many applications of
simulation for system design and improvement. For example, see McCarthy and
Stauffer (2001), Rivera and Marovich (2001), Huschka et al. (2008) and Villarreal et al.
(2008). The last article is on optimization of an injection molding process using
simulation. It will be interesting to compare this approach with the dynamic
characteristic approach (A4) described earlier.
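A minimal sketch in the spirit of the cycle-time applications cited above: a single-server process with exponential arrivals and service times. All rates are invented; a real study would model the actual process steps and validate the model against observed data.

```python
import random

# A single-server process sketch: jobs arrive at rate lam and are served FIFO
# at rate mu. Mean cycle time (wait + service) is estimated by simulation.
# All rates are invented for illustration.
random.seed(42)

def simulate(lam, mu, n_jobs):
    """Return the mean cycle time over n_jobs jobs (exponential arrivals/service)."""
    t = 0.0               # arrival clock
    server_free_at = 0.0  # time the server next becomes idle
    total_cycle = 0.0
    for _ in range(n_jobs):
        t += random.expovariate(lam)          # next arrival time
        start = max(t, server_free_at)        # wait if the server is busy
        server_free_at = start + random.expovariate(mu)
        total_cycle += server_free_at - t     # departure minus arrival
    return total_cycle / n_jobs

# For an M/M/1 queue, theory gives mean cycle time 1 / (mu - lam) = 4.0 here,
# so the simulation can be checked against a known result.
mean_ct = simulate(1.0, 1.25, 200000)
print(f"simulated mean cycle time: {mean_ct:.2f}")
```

Once such a model is trusted, candidate process changes (extra capacity, batching rules, re-sequenced activities) can be compared in the computer before anything is changed on the floor.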
The methods of operations research (OR) (Jensen and Bard, 2003; Hillier and
Lieberman, 2005) are also classified under this category. Usually, OR problems are
not solved following DMAIC, which is best suited for solving cause-unknown types of
problems. But there is nothing in principle to disqualify such problems being taken up as
DMAIC projects. Tang et al. (2007) have also argued for inclusion of OR techniques in the
black-belt curricula. However, it is important to recognize that there is a difference in the
mindset of a quality practitioner and that of an OR practitioner. Both are looking for
variations and constraints. But an OR practitioner usually does not have any issue with
variations and constraints so long as these are incorporated into the model. However,
“a quality practitioner always finds variations and constraints to be discomforting,
except of course those which are either specified by the customer or originate at his or her
end.” This healthy feature of quality can be incorporated in an OR solution by modifying
the role of the sensitivity analysis. Traditionally, the sensitivity analysis is performed to
provide confidence in the results obtained. This is important. But from the quality point
of view, what is more important is to know which violations of the model provide the
maximum opportunity for further improvement. Such a change will facilitate
integration of “quality” and “OR” whereby the two can complement each other better.
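The modified role of sensitivity analysis can be sketched on a two-variable product-mix LP (the numbers below echo the classic introductory OR illustration; the "resources" are hypothetical). Relaxing each constraint in turn shows which part of the model offers the most improvement opportunity, the shadow-price idea from LP duality.

```python
from itertools import combinations

def solve_lp(obj, cons, tol=1e-9):
    """Tiny 2-variable LP maximizer: enumerate all feasible constraint-pair vertices."""
    best = None
    for (a1, b1), (a2, b2) in combinations(cons, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < tol:
            continue  # parallel constraint boundaries
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(a[0] * x + a[1] * y <= b + tol for a, b in cons):
            val = obj[0] * x + obj[1] * y
            if best is None or val > best:
                best = val
    return best

obj = (3.0, 5.0)            # maximize 3x + 5y
cons = [((1.0, 0.0), 4.0),  # x <= 4        (resource 1)
        ((0.0, 2.0), 12.0), # 2y <= 12      (resource 2)
        ((3.0, 2.0), 18.0), # 3x + 2y <= 18 (resource 3)
        ((-1.0, 0.0), 0.0), # x >= 0
        ((0.0, -1.0), 0.0)] # y >= 0

base = solve_lp(obj, cons)
print(f"optimal value: {base:.1f}")

# Sensitivity: relax each resource constraint by one unit and re-solve.
gains = []
for i in range(3):
    relaxed = [((a, b + 1.0) if j == i else (a, b)) for j, (a, b) in enumerate(cons)]
    gains.append(solve_lp(obj, relaxed) - base)
    print(f"relaxing constraint {i + 1} by 1 unit gains {gains[-1]:.2f}")
```

The constraint whose relaxation yields the largest gain is where a quality practitioner would look for the next improvement project, rather than treating the constraints as fixed.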

Selecting an improvement approach


It is apparent from the above discussion that it is neither possible to order the seven
approaches in terms of their effectiveness nor possible to prescribe a single approach
for every situation. The selection of a particular approach is largely determined by the
nature of the process (chaotic or not, experimentation feasible or not) and the
availability of data. However, there will be some flexibility in selecting an appropriate
approach for a given situation. Table II provides a summary of the guidelines for
selection of an appropriate approach.

Status of A1-A7 in practice


Table III gives the distribution of the improvement approaches used in 17 DMAIC projects
undertaken in various Indian industries and presented in a Six Sigma project competition
(ISI-QCI, 2007). It may be seen from Table III that, out of the 17 projects, the
innovation-prioritization approach (A2) was used in about 65 percent of the cases. Thus,
although we have noted that the use of this approach should, as far as possible, be avoided,
Table II. Guidelines for selecting an appropriate improvement approach

Preferred approach          Likely scenario (after the analyze phase)

A1: obvious solution        Usually a single root cause. The root cause is the deviation of the level
                            of an existing control or signal factor from its known optimal value

A2: innovation-             Not more than four "type Y" root causes
prioritization              The root causes are usually the smallest possible y, identified through
                            repeated Pareto analysis. The X-type root causes are either unknown or
                            assumed to be beyond the control of the improvement team
                            Proposed solutions are not perceived as too risky. The technique of
                            "fool proofing" may be applicable for generating the solutions
                            Effectiveness of the solutions can be verified within a reasonable
                            time frame

A3: experimentation         Process is not chaotic
(static characteristic)     At least one of the root causes is of type X. Usually 3-8 X-type root
                            causes (control factors). Noise factors, found to be important during the
                            analyze phase, may have to be upgraded to control factors
                            Experimentation is feasible
                            The approach A2 can also be extended to become A3. The simplest
                            possible experiment here will be of 2^k type, where the existing condition
                            and its modification are the two levels of each of the k proposed
                            modifications

A4: experimentation         3-8 control factors are shortlisted during the analyze phase
(dynamic characteristic)    A signal factor has been identified for the purpose of experimentation
                            Always the preferred approach whenever experimentation is feasible,
                            the cost of experimentation is not high and the above two conditions
                            are met
                            Particularly suitable for cases where the signal factor avoids the need
                            to optimize many similar processes
                            Perhaps the approach A3 has failed to provide the optimal solution

A5: non-DOE                 Process is not chaotic
(straightforward data       Experimentation is either not feasible or is too costly
analysis)                   High-quality and sufficient data of the form (Y1, Y2, ..., X1, X2, ...)
                            are either available in the existing database or can be collected from
                            the process
                            At least one of the Xs is a control factor
                            The empirical model developed has sufficiently low prediction error

A6: non-DOE (innovative)    See the subsection titled "A difficult problem" under Section 5

A7: quantitative            The process is complex. Many factors affect process performance
optimization                Approaches A5 and A6 have not been successful
                            Perhaps the one-factor-at-a-time experiments conducted in the past also
                            have not yielded the desired result
                            Multifactor experimentation is either not feasible or is perceived
                            as too risky
                            The process may be already too strained, so that a better process
                            understanding is necessary for evaluating the risk of making too many
                            process changes simultaneously
                            It has been possible to develop an analytic or a simulation model of the
                            process based on theoretical considerations
                            A process model in the form of a black box may be available,
                            so computer experimentation is feasible
the approach is found to be very popular among practitioners. This is perhaps because of
the convenience of its use and the high visibility of the innovations. In contrast, the
DOE-based approach A3 (static characteristic) was adopted in only about 30 percent of
the cases. Moreover, the advanced approaches like the dynamic characteristic approach
(A4), the innovative data analytic approach (A6) and the quantitative optimization
approach (A7) were not used at all. It is likely that factors like the abundance of
low-hanging fruit, huge pressure to meet project deadlines and savings expectations,
lack of proper training and the comparatively greater logistical complexities in using
DOE resulted in such a skewed distribution. But whatever be the reason, the results are
definitely not comforting, even though the projects generated substantial savings for
the companies.

6. Improve phase road map


The improve phase road map is shown in Figure 5, where the decision steps are
omitted for the sake of simplicity. Here we shall discuss only the risk analysis step of
the road map. The other steps are fairly straightforward.
In practice, usually the “failure mode and effect analysis (FMEA)” is used as a tool
for risk assessment. Unfortunately, in most cases the FMEA table is updated
mechanically only to satisfy the quality management system requirement. Performing
such rituals is obviously a wasteful activity. Instead it will be useful to evaluate the
risk associated with the proposed solution by asking questions such as the following:
have we cut corners anywhere to compromise on safety? Are we cheating our
customers? Is the new standard operating practice a bitter pill for the employees
concerned? Does the proposed solution make the process more complex? Have we
understood the full implications of the change? Will the improvement last long enough?
Obviously the project team cannot answer the above questions in an unbiased manner.
Making a fair assessment will be particularly difficult when the proposed solution
appears to be highly innovative or is a pet solution of someone important. It is common
for the persons concerned to nurture pet solutions while living with a chronic problem
for a long time.
The Six Sigma initiative provides an appropriate opportunity to push such solutions. So it
is essential that the risk assessment be performed by an independent team. The Six Sigma
Champion has to play a critical role in deciding when to trigger such referrals.

7. Improving service and transactional processes


The two most significant means by which a large number of service and transactional
process improvements have been achieved in the past are:

Table III. Distribution of improvement approaches in 17 DMAIC projects carried out in
Indian industries and presented in a Six Sigma project competition

                                                    Type of process
Analyze phase     Improvement
status(a)         approach(b)      Manufacturing   Transactional   Service   Total

S1                A2               5               2               4         11
                  A3               1               –               –         5
S2                A3               4               –               –
                  A5               1               –               –         1

Notes: (a) S1 – root cause(s) established; S2 – important factors short listed;
(b) A2 – innovation-prioritization approach; A3 – experimentation using static
characteristics; A5 – non-experimental straightforward data analytic approach
Figure 5. Improve phase road map: A1-A7 → Find optimal solution → Compare expected
benefits and project goals → Conduct confirmatory trials → Set operating tolerances →
Assess risks → Pilot solution → Develop implementation plan → Exit to control phase

(1) redesigning the activity sequence, including elimination of the non-value added
activities; and
(2) the use of information technology.

Note that both the above actions involve making changes in the system design.
Therefore, one needs to be careful if such actions are identified following the
innovation-prioritization approach (A2).
It should also be recognized that the extent of systemic changes that may be made is
not unlimited. The amount of capital needed or other constraints may come in the way
of making the desired changes. In general, the system design needs to be followed up by
parameter and tolerance design to realize the full potential of a given system. This is a
standard practice in the case of manufacturing process design, and there is no reason
why the same cannot be adopted in the case of service and transactional processes.
Unfortunately, we do not find many applications of service process improvement
using the approaches A3-A6. This is primarily due to the failure to identify important
control factors and the difficulty of field experimentation. In particular, case studies
on service process improvement following the dynamic characteristic approach (A4)
appear to be non-existent. This is not only because the approach A4 requires a
comparatively larger number of trials but also due to the fact that it requires the use of a
signal factor. As an example of a signal factor of a transactional process, let us consider
the data entry process of a marketing research organization. For this process, one may
consider the time allotted for data entry as the signal factor and the number of entries
made in the schedules within the allotted time as the process response. However, note
that the process response here is a path variable (as opposed to being a state variable). So
it is necessary to verify that the effect of the path variation is not excessive and can be
considered as random. If not, the path needs to be controlled so that its effect becomes
negligible. Fortunately, for the data entry process, the time taken to fill up a schedule was
found to be directly proportional to the number of entries made in the schedule for
various types of audits. Also, there was no obvious sign of the presence of a significant
systematic component. Thus, a linear relationship between the time allotted and the
number of entries made within the allotted time can safely be considered as the ideal
function for the data entry process. It is obvious from the above discussion that the
identification of a proper signal factor for a service process is not easy. Nevertheless, it
will be worthwhile to make the effort and adopt the approach A4, wherever feasible.
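The proportionality check described above can be sketched as follows. The schedule data are invented, but the logic (a zero-intercept slope, plus a crude check that the residuals carry no trend in the signal) mirrors the verification step.

```python
# Sketch of the verification step for the data entry example: checking that
# data-entry time is roughly proportional to the number of entries (the ideal
# function), with no obvious systematic component. The observations are invented.
entries = [20, 20, 40, 40, 60, 60, 80, 80]                 # entries per schedule
minutes = [10.2, 9.7, 19.8, 20.5, 30.4, 29.1, 40.3, 39.6]  # time taken

# Zero-intercept slope: minutes per entry
beta = sum(e * m for e, m in zip(entries, minutes)) / sum(e * e for e in entries)
residuals = [m - beta * e for e, m in zip(entries, minutes)]

# Crude check for a systematic component: correlation of residuals with signal
n = len(entries)
me, mr = sum(entries) / n, sum(residuals) / n
num = sum((e - me) * (r - mr) for e, r in zip(entries, residuals))
den = (sum((e - me) ** 2 for e in entries)
       * sum((r - mr) ** 2 for r in residuals)) ** 0.5
lin_corr = num / den  # near zero -> no obvious trend in the residuals

print(f"minutes per entry = {beta:.3f}; residual-vs-signal corr = {lin_corr:+.2f}")
```

If the residual-vs-signal correlation (or a residual plot) showed a clear pattern, the path would need to be controlled before the linear ideal function could be trusted.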
Let us now examine the generic factors involved in a service process design. Das
and Canel (2006) provide a list of the same. Modifying this list a little, the general model
for a service process may be written as:

Y = f(Batching; Activity sequence; Process layout; Extent of customer contact;
Behavior at customer contact; Environment dimensions; Environment aesthetics;
Process load; Process capacity; Marketing strategies; Customer communication;
Process control schemes; Human factors; Use of information technology;
Quality of hardware; Local culture;
Organizational factors like employee welfare and inter-departmental issues)
Now, if we consider service quality dimensions like empathy and assurance as Y, then
the above function may not be of much help. However, if we consider Y to be a variable
like cycle time, then the above general model may be useful for identifying important
control factors. In particular, the extent of customer contact, originally proposed by
Chase (1978, 1981), appears to be a very important control factor. The point we are
trying to make here is that the technical content of a service process is indeed low
compared to that of a manufacturing process, due to our limited understanding of the
former. But even so, a blind adoption of the innovation-prioritization approach (A2) may
not be warranted in many cases of service process improvement. Even if we use A2,
efforts should be made to identify alternative solutions which can be characterized by
ordinal-type control factors having at least three levels, and to test the effectiveness of
these solutions to gain a greater insight into the process. Otherwise, there is always the
danger of the solution being changed or rendered ineffective due to process tinkering in
response to some process problem or the other.
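The suggestion to test solution alternatives as levels of an ordinal control factor can be sketched with a one-way ANOVA on cycle time. The three contact-level alternatives and all observations below are invented for illustration.

```python
# Invented illustration: three alternative service designs, ordered by extent
# of customer contact, are treated as levels of one ordinal control factor and
# compared on cycle time with a one-way ANOVA F statistic.
groups = {
    "low contact":    [41.0, 39.5, 42.3, 40.1],
    "medium contact": [36.2, 37.8, 35.9, 36.6],
    "high contact":   [33.1, 34.0, 32.4, 33.8],
}

all_obs = [x for g in groups.values() for x in g]
grand = sum(all_obs) / len(all_obs)

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups.values())
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups.values())

df_between = len(groups) - 1
df_within = len(all_obs) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {f_stat:.1f}")  # compare with an F table
```

A large F value here would indicate a genuine, reproducible factor effect rather than an isolated fix, which is exactly the kind of portable process knowledge argued for above.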

8. Conclusion
Any breakthrough study should be considered as a contribution towards the journey of
“continuous improvement”. Accordingly, we propose the following two as important
IJLSS DMAIC project quality measures: “quantum gain in (portable) process knowledge” and
3,3 “ease of control” of the improved process. The project team should keep these two
criteria in mind while selecting the improvement approach and making many other
decisions in course of the DMAIC journey.
A process can be improved in many ways. We have classified the various process
improvement approaches into seven classes (A1-A7). Apart from the nature of the
process, the suitability of a particular approach for a given situation is also determined
by the nature of the problem and the availability of data. However, there will be many
occasions where more than one approach may be suitable for improving the process.
It is argued that, wherever possible, the approach (A4) of robust design using a signal
factor should be selected, since it has the greatest potential to have a significant positive
impact on the two project quality measures stated above. The approaches A7 (computer
experimentation), A3 (field experimentation using a static characteristic) and
A6 (innovative data analysis) can also be used as substitutes for A4.
It is obvious that in order to be able to use the above approaches, it is necessary to
analyze the process properly. The proposed “analysis strategy” is expected to facilitate
adoption of the approaches A3, A4 and A6.
However, it may not be easy to adopt an approach like A4 for improving a service
or a transactional process. Identification of an appropriate signal factor is likely to be a
very challenging task in such cases. Further research is necessary in the broader area
of “service process design” for improving the technical content of a service process and
thereby promoting the use of an advanced approach like A4 for its improvement.
Another problem that is commonly encountered in improving a service or a
transactional process is that many of the factors identified are psychological in nature.
A great deal of further research is necessary to understand the role of the psychological
factors affecting the performance of a service/transactional process. For example, one
may find (through a suitable experiment) that sending a reminder after five days is
optimal for reducing the delay in receiving a reply or a payment against an invoice.
However, it is important to realize that today's optimal of five days may not remain
optimal the next year. It is necessary to have a clear understanding of the psychology
of an “accounts receivable” process so that the improvements made are sustainable.
As Deming (1993) says, “[. . .] there is no knowledge without theory [. . .] without theory
you have nothing to revise, nothing to learn from [. . .]”.

References
Almquist, E. and Wyner, G. (2001), “Boost your marketing ROI with experimental design”,
Harvard Business Review, Vol. 79 No. 9, pp. 135-41.
Antony, J., Coleman, S., Montgomery, D.C., Anderson, M.J. and Silvestrini, R.T. (2011), “Design of
experiments for non-manufacturing processes: benefits, challenges and some examples”,
Journal of Engineering Manufacture, Vol. 225 No. 11, pp. 2078-87.
Chase, R.B. (1978), “Where does the customer fit in a service operation?”, Harvard Business
Review, Vol. 56 No. 6, pp. 137-42.
Chase, R.B. (1981), “The customer contact approach to services: theoretical bases and practical
extensions”, Operations Research, Vol. 29 No. 4, pp. 698-706.
Das, S.R. and Canel, C. (2006), “Designing service processes: a design factor based process
model”, International Journal of Services Technology and Management, Vol. 7 No. 1,
pp. 85-107.
Deming, W.E. (1993), The New Economics – For Industry, Government and Education,
MIT Center for Advanced Engineering Study, Cambridge, MA.
Draper, N.R. and Smith, H. (1996), Applied Regression Analysis, Wiley, New York, NY.
Hamada, M. and Wu, C.F.J. (1995), “The treatment of related experimental factors by sliding
levels”, Journal of Quality Technology, Vol. 27 No. 1, pp. 45-55.
Harry, M. and Schroeder, R. (2000), Six Sigma: The Breakthrough Management Strategy
Revolutionizing the World’s Top Corporations, Currency/Doubleday, New York, NY.
Hillier, F.S. and Lieberman, G.J. (2005), Introduction to Operations Research, McGraw-Hill,
New York, NY.
Hou, X. (1998), “Robust parameter design with monotone loss function”, Statistica Sinica, Vol. 8
No. 1, pp. 87-100.
Huschka, T.R., Denton, B.T., Narr, B.J. and Thompson, A. (2008), “Using simulation in the
implementation of an outpatient procedure center”, Proceedings of the 2008 Winter
Simulation Conference, The Society for Computer Simulation International, San Diego, CA.
ISI-QCI (2007), Guidebook for Six Sigma Implementation with Real Time Applications, Indian
Statistical Institute, Bengaluru.
Jensen, P.A. and Bard, J.F. (2003), Operations Research Models and Methods, Wiley,
New York, NY.
Lehman, J.S., Santner, T.J. and Notz, W.I. (2004), “Designing computer experiments to determine
robust control variables”, Statistica Sinica, Vol. 14 No. 2, pp. 571-590.
McCarthy, B.M. and Stauffer, R. (2001), “Enhancing six sigma through simulation with iGrafx
process for six sigma”, Proceedings of the 2001 Winter Simulation Conference,
The Society for Computer Simulation International, San Diego, CA.
Mandal, P. (2011), “Factor classification tree, roles of signal and adjustment factors and robust
design with nonlinear signal”, in Handa, S.S., Umashankar, C. and Chakraborty, A.K. (Eds),
Proceedings of the International Congress on Productivity, Quality, Reliability, Optimization
and Modeling, Vol. I, Allied Publishers, New Delhi, pp. 669-85.
Montgomery, D.C. (1984), Design and Analysis of Experiments, Wiley, New York, NY.
Phadke, M.S. (1989), Quality Engineering Using Robust Design, Prentice-Hall,
Englewood Cliffs, NJ.
Rivera, A. and Marovich, J. (2001), “Use of six sigma to optimize cordis sales administration and
order and revenue management process”, Proceedings of the 2001 Winter Simulation
Conference, The Society for Computer Simulation International, San Diego, CA.
Shingo, S. (1986), Mistake Proofing for Operators Learning Package: Zero Quality Control: Source
Inspection and the Poka-yoke System, Productivity Press, Madison, NY.
Taguchi, G. (1987), System of Experimental Design, Vols 1 and 2, Kraus International
Publications, White Plains, NY.
Taguchi, G. and Wu, Y. (1985), Introduction to Off-line Quality Control, Central Japan Quality
Control Association, Nagoya.
Taguchi, G., Chowdhury, S. and Taguchi, S. (2000), Robust Engineering, McGraw-Hill,
New York, NY.
Taguchi, G., Chowdhury, S. and Wu, Y. (2005), Taguchi’s Quality Engineering Handbook,
Wiley, Hoboken, NJ.
Tang, L.C., Goh, T.N., Lam, S.W. and Zhang, C.W. (2007), “Fortification of six sigma: expanding
the DMAIC tool set”, Quality and Reliability Engineering International, Vol. 23 No. 1,
pp. 3-18.
Timm, N.H. (2002), Applied Multivariate Analysis, Springer, New York, NY.
Villarreal, M.G., Mulyana, R., Castro, J.M. and Cabrera-Rios, M. (2008), “Simulation modeling
applied to injection molding”, Proceedings of the 2008 Winter Simulation Conference, The
Society for Computer Simulation International, San Diego, CA.

Further reading
Taguchi, G. (1993), “Robust technology development”, in Kuo, W. (Ed.), Quality Through
Engineering, Elsevier, Amsterdam.

Corresponding author
Pathik Mandal can be contacted at: pathik58@hotmail.com
