

Forum

A dynamic approach to operations management: An alternative to static optimization*

Abstract

In today's globally competitive manufacturing environment, many firms are compelled to rapidly improve and evolve their operations. But traditional formal analysis of operations management is static, emphasizing optimization in a steady-state world. We propose an alternative dynamic approach to analyzing operations management. Our approach deals explicitly with four elements not considered by most static approaches: knowledge, learning, contingencies, and problem solving. In studying each of these elements in detail, emphasis shifts from improving efficiency assuming complete technological knowledge, to deliberately enhancing rates of improvement and of adaptation to new situations. Robotic assembly of watches is discussed in detail as an example of a process that ought to fit the static approach, i.e., be managed for static efficiency. In fact, we find that the process is managed dynamically. We propose several ways of applying traditional modeling tools to dynamic issues. Considerable further research will be needed to develop models for dynamic situations that are as powerful as traditional models are for static situations.

Advances in technology and changes in the nature of competition have affected the structure of manufacturing and service operations. Yet, we have seen no concomitant change in the paradigms for modeling and managing operations, despite acknowledged dissatisfaction with them. The paradigm that remains in use

can be crudely characterized as "static optimization of a known objective function, subject to known and stationary constraints". This paradigm holds manufacturing separate from knowledge-creating activities, such as product design, process design, and scientific research. Thus, research on R&D and engineering has traditionally been a separate subdiscipline from research on manufacturing.

We propose a dynamic approach that explicitly treats important elements of modern manufacturing operations that are ignored by static paradigms - specifically, knowledge, learning, problem solving, and contingencies. We show that static and dynamic perspectives are effective in different manufacturing contexts and offer evidence that the domain in

0925-5273/92/$05.00 © 1992 Elsevier Science Publishers B.V. All rights reserved.

which a dynamic approach is applicable is growing. It is no longer sufficient to ask: How can we make this operation more efficient at its existing tasks? Research must also be directed at: How can we become better at recognizing and dealing with contingencies, learning from their resolution, and accumulating a broader base of knowledge? Some managers are asking this question, but the use of dynamic and knowledge-oriented approaches remains an ad hoc pursuit, without much theoretical foundation.
We begin by articulating some of the implicit assumptions in operations management research. Section 3 describes our suggested alternative, a dynamic approach that emphasizes the importance of knowledge in production systems. Section 4 discusses the design and operation of an automated assembly line. We show that traditional static models of assembly lines ignore key activities of managers and workers, which center on dynamic rather than static issues. In Section 5, we consider other applications of a dynamic approach. We conclude, in Section 6, with a reexamination of the comparison between static and dynamic approaches, and a discussion of the research that will be needed to build a useful and rigorous dynamic view of manufacturing.

A single paper cannot fully establish the validity or usefulness of a dynamic approach. We emphasize operations research, the manufacturing domain, and the academic study of operations management. Management practice in some industries is well ahead of research in incorporating dynamic issues into day-to-day manufacturing activities and is the key motivation for seeking new academic paradigms. We discuss some practical techniques in the penultimate Section of the paper, but defer a general dynamic analysis of present-day operations management.

2. Operations management in a static world

It is useful to review standard production and operations management before discussing alternatives. When he coined the term "scientific management" in the 1890s, Frederick W. Taylor [1] started the field of industrial engineering. The assumptions, models, and thought patterns of Taylorism, so influential in the tremendously successful development of American mass production, persist to this day. One of those assumptions was that there exists one best way to undertake each task.
The principal assumptions of the static paradigm are that production technology is known, that labor's role is solely to perform procedures, that the environment is known and stationary, that inputs are homogeneous, and that there is a single, known goal. Working under these assumptions, the manager or analyst selects the type, amount, or use of assets. Lower-level management or workers, such as manufacturing engineers, then implement the decision. Choice and execution are separate stages, and neither feeds back to the other. The job of the manager is once-and-for-all decision making, as opposed to incremental problem solving or ongoing learning.
Taylor was himself an avid experimenter, with a strong belief in the importance of knowledge about technological processes. For example, his model of cutting speeds versus tool wear, developed through exhaustive experimentation, is still used today. However, in Taylor's paradigm, knowledge was developed off-line by specialists and then passed in a one-way information flow to workers. The role of feedback (actual performance compared with expected performance) was limited to a punitive one, not as a source of knowledge. This view that R&D can and should be completely separate from execution is a crucial difference between dynamic and static approaches.
We will now review each of the major implicit assumptions in the static operations management paradigm. We do not claim that managers or modelers defend the literal truth of these assumptions, but lack of appropriate dynamic frameworks leads them to make decisions and build models as if the assumptions were approximately correct.

Assumption 1: Known production technology

This assumption states that manufacturing knowledge is complete. Each realistically possible production technique is known, well defined, and fixed over time. This includes hardware (equipment), operating procedures, and other aspects of the technology. In situations involving a new plant or new equipment, the relevant knowledge is assumed to be available from vendors for a price. Knowledge about the technology is thus not a key issue, and learning is not a goal. Design of production methods is hence a "choice of technique" problem: which of the known available machines and procedures should be used? This technique becomes the optimal way to produce. The only role of learning in this view of the world is as training - the transfer of procedures from manuals or instructors to employees who are taking on an unfamiliar task (e.g., new assembly line workers who need to be taught procedures). There is no need for organizational learning or research.

Assumption 2: Labor's role is solely to perform procedures

This assumption holds that the task of each worker is to carry out assigned, fully specified procedures in response to unambiguous signals from machines, other workers, and the environment. By "procedure" we mean a well-defined set of actions, analogous to a computer program. Procedures, like computer programs, may contain some conditional instructions, but all contingencies are assumed to have been anticipated and appropriate responses specified. Management's role is to specify these procedures (the "one best way") and monitor their execution. Since tasks are well defined, performance can be monitored simply by looking over the worker's shoulder, i.e., by observing inputs and whether operator behaviour follows the correct procedures. Learning and modification occur external to the production unit, with new procedures communicated back to it.

This assumption is powerful and useful when the necessary tasks have been reduced to appropriate procedures. However, reduction to procedure is not desirable for some environments and tasks characterized by large and important contingencies, high complexity, or ambiguity about key variables and their relationships. This includes many aspects of product design and problem solving.

Assumption 3: Known and stationary environment
This assumption holds that the environment, like the technology, is known and stationary. Usually it is static (i.e., deterministic and constant). If not static, it is stationary (i.e., drawn from a stationary probability distribution with known parameters). For example, when capital investment choices are being made, demand and factor costs are assumed to be known for the life of the equipment. So is the nature and performance of the production technology. At the extreme of this assumption, product markets, input markets, workers, and machines are all deterministic and unchanging. Choice is easy under such a strong assumption: just optimize for the current environment. If it is not deterministic, the environment is assumed to be stationary.

Assumption 4: Homogeneous inputs

Factors of production such as labor, raw materials, machinery, and energy are assumed to be homogeneous, with exogenously determined standardized characteristics, and available in complete markets. The markets are usually assumed to be efficient.

Assumption 5: Known goal

This assumption holds that the purpose or goal (objective function) of the organization is known and well defined, and is uniform throughout the organization. Typically this goal is profit maximization or a temporary subgoal such as maximizing output.

These five assumptions lead to a consistent view of a world that may be complex, but is fully specified. The task of operations management research is static optimization: to select the best way to produce despite complexity. The task of managers is to select a rigid procedure for workers, then monitor and ensure their compliance.

This static paradigm has no place for explicit consideration of dynamic issues such as knowledge, learning, or problem solving. It deals with contingencies only by invoking predetermined conditional procedures and by using stationary stochastic models which treat the contingencies as exogenous.

Previous Literature

We are not the first to argue that standard academic approaches to manufacturing management are deficient for the modern environment. The criticisms in Ackoff's article, "The Future of Operations Research is Past" [2], evoked a variety of responses [3]. One of Ackoff's criticisms called for a dynamic approach (p. 98):

The structure and the parameters of problematic situations continuously change, particularly in turbulent environments. Because optimal solutions are very seldom made adaptive to such changes, their optimality is generally of short duration. ... With the accelerating rate of technological and social change dramatized by Alvin Toffler and others, the expected life of optimal solutions and the problems to which they apply can be expected to become increasingly negative.

For these reasons there is a greater need for decision-making systems that can learn and adapt quickly and effectively in rapidly changing situations than there is for systems that produce optimal solutions that deteriorate with change.

In their critical examination of the standard economic theory of cost and production, Murnane and Nelson [4] list assumptions implicit in the economic theory of production functions. Their list is very similar to our statement of the assumptions of the static paradigm of operations management. Murnane and Nelson hold that these assumptions, and therefore the standard theory of production, do not work for education and other sectors where techniques are poorly articulated and idiosyncratic. They theorize that other factors, including experimentation and creative problem solving, are important for explaining the success of different teachers. Nelson [5] and Nelson and Winter [6] also discuss difficulties with the standard economic theory of production, and propose an evolutionary model.

3. A dynamic approach

One approach to dealing with dynamic elements of the manufacturing environment is to construct models that relax the five assumptions of the static paradigm one at a time. For example, decision tree models can be used to handle uncertain and non-stationary environments. Another approach is to reject formal modeling and quantification in favor of intuition. A third approach, which we propose here, is to develop new concepts for dealing with critical issues that are not treated by static approaches.
3.1. Elements of a dynamic approach

For several years, we have been pursuing an alternative view of operations that takes account of four elements critical for effectively analyzing dynamic manufacturing situations [7]. This approach aims at managing and improving operations in a world with a high rate of change, where fully proceduralized methods are widely practiced but are incomplete and subject to continuous improvement.

Knowledge. Knowledge is an explicit input to the production process. Knowledge of the best ways to produce is always incomplete. Firms with more knowledge about products,

processes, and environment have, in effect, better production technology. Relevant knowledge includes how production should be done, the sources and types of common contingencies, how effective problem solving and learning can be done, and what further learning is needed. Knowledge, unlike most inputs to production, is not consumed by use. Neither is it automatically generated by experience. Managing, exploiting, and augmenting a firm's stock of useful knowledge are key operating tasks.
Learning. Because knowledge is a key element of processes, a foundation of competitive
advantage, and is always incomplete, organizational learning is a key task. Learning must
be interwoven with production, not be conducted entirely in laboratories and other nonproduction facilities. Manufacturing facilities
should be designed and operated to enhance
the rate of learning. A variety of methods for
effective learning exist.
Contingencies. Contingencies arise due to gaps in knowledge about the internal and external worlds. We have a contingency when a realized event does not match an anticipated event. Both favorable and unfavorable contingencies occur. Contingencies should be considered explicitly during the design of processes and operating methods, as well as during start-up and ongoing operations.

Problem solving. Contingencies define problems. A fundamental task in production operations is to identify and solve the problems that lead to contingencies. Problem solving that is neglected or performed ad hoc will reduce performance in the short run and retard learning over the long run.
Figure 1 contrasts static and dynamic approaches to contingencies. Gaps in knowledge give rise to contingencies, which give rise to the opportunity for problem solving. In a static paradigm, contingencies are patched or worked around. In a dynamic approach, contingencies lead to deeper problem solving (root cause analysis) which investigates why a contingency occurred, how to better predict and detect it, and how to prevent it (if undesirable) or make it more frequent (if desirable). Such problem solving is a form of learning. When properly fed back to change actions, the new knowledge alters future events and capabilities in ways favorable to a firm.
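The contrast in Fig. 1 can be sketched as two handler loops. The event names and handlers below are hypothetical, purely to illustrate the structural difference: only the dynamic loop leaves knowledge behind for future occurrences.

```python
# Hypothetical sketch of the two contingency-handling loops of Fig. 1.
# The contingency names and workarounds are illustrative, not from the paper.

def handle_static(contingency, workarounds):
    """Static paradigm: patch or work around the contingency and move on.
    Nothing is retained, so the same contingency recurs at the same rate."""
    return workarounds.get(contingency, "halt and call maintenance")

def handle_dynamic(contingency, knowledge_base):
    """Dynamic approach: treat the contingency as a signal of a knowledge gap.
    Root-cause analysis adds to the knowledge base ("learn and remember"),
    changing how future events unfold ("take long-term actions")."""
    root_cause = f"investigated cause of {contingency}"  # placeholder analysis
    knowledge_base[contingency] = root_cause             # learn and remember
    return root_cause

kb = {}
handle_static("misfeed", {"misfeed": "clear jam, restart"})
handle_dynamic("misfeed", kb)
assert "misfeed" in kb  # the dynamic loop leaves new knowledge behind
```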
3.2. Stages of knowledge

New knowledge drives both progress in old technologies and the development of new technologies [8]. Since knowledge is so important, we have developed methods to analyze and even model the extent of a firm's knowledge. We postulate eight stages of knowledge that describe how much a firm knows about a process (Fig. 2). Knowledge progresses over time from Stage 1 towards Stage 8 [32,34]. The firm's ability to avoid or respond to contingencies becomes progressively higher with each stage.
Process knowledge can be decomposed into knowledge about primary and secondary variables. Primary variables have a significant impact relative to an allowed tolerance, secondary variables a moderate or negligible impact. For example, in the assembly of a child's plastic toy by a robot arm, thermal distortion and vibration of the arm are secondary variables. The design and operation of the line can ignore them. But for the assembly of a watch mechanism, the same conditions are probably primary variables. In addition, a secondary variable may change and become a primary variable. An example is a production process that suffers a loss of precision when moved to a different site because of humidity.
Knowledge about process variables is very uneven. If we are capable of recognizing good output, without any sense of how to obtain it, we are at Stage 1. We move to Stage 2 when we begin to recognize variables, and to Stage 3 when we begin to perceive their relevance. We have Stage 4 knowledge when we can measure a variable, and Stage 5 knowledge when we can exert local control over it. When we understand how local changes in a variable affect output, we have arrived at Stage 6, and when we develop a complete understanding of how primary variables affect output within allowed tolerances, but subject to controlling secondary variables, we are at Stage 7. Stage 8 means complete knowledge of a process for all values of all relevant variables. A firm's ability to effectively respond to or avoid contingencies improves progressively with each successive stage.

Fig. 1. Static versus dynamic approach to contingencies. (In the static approach, an observed contingency is patched, with no long-term learning; in the dynamic approach, it also triggers learning and remembering, and long-term actions.)
The stage of knowledge has a critical effect
on the degree to which a process should be
automated. A purely procedural process executes as an algorithm, with zero judgement or
human intervention. At the other extreme,
pure expertise, individuals carry out tasks
without the use of any repetitive procedures;
each execution is completely novel. Figure 3
shows the relationship between degree of procedure and stage of knowledge.
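The eight stages can be captured as an ordered scale, and the Fig. 3 relationship as a monotone mapping from stage of knowledge to degree of procedure. The following is a minimal sketch; the linear mapping is our illustrative assumption, not a relationship stated in the paper.

```python
from enum import IntEnum

class StageOfKnowledge(IntEnum):
    """Eight stages of knowledge about a process (Fig. 2)."""
    RECOGNIZE_GOOD_OUTPUT = 1   # can tell good output, not how to obtain it
    RECOGNIZE_VARIABLES   = 2   # aware of variables, not their relevance
    DISCRIMINATE          = 3   # can tell which variables are relevant
    MEASURE               = 4   # can measure primary variables
    CONTROL_LOCALLY       = 5   # can control primary variables locally
    CHARACTERIZE          = 6   # know how local changes affect output
    KNOW_WHY              = 7   # full primary-variable model, secondaries controlled
    COMPLETE              = 8   # complete procedural knowledge

def suggested_degree_of_procedure(stage: StageOfKnowledge) -> float:
    """Illustrative monotone mapping (our assumption): fraction of a task that
    is safe to run as fixed procedure, rising from pure expertise at Stage 1
    to full proceduralization at Stage 8."""
    return (stage - 1) / 7
```

Under this mapping, Stage 1 yields 0.0 (pure expertise) and Stage 8 yields 1.0 (pure procedure), with intermediate stages run as a mix of the two.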
Unfortunately, different portions of a single

production process are often at different stages


of knowledge. Consider product design as
a production process [33]. The ~~u~~sisportion of design is, for many technologies, sufficiently understood to make automated simulations quite accurate, while the knowledge
needed for synthesis remains at a much earlier
stage. When internal (process) and external
(product) changes render some knowledge obsolete, individual variables will be differentially
affected. Learning by the firm pushes toward
higher stages. Thus, process variables progress
through the stages of knowledge unevenly over
time.
Such backsliding can be seen in many areas. Competitive pressure and customer needs force vendors to put most technology on the market long before Stage 7 knowledge is reached for all portions of a process. Thus, in many industries, new knowledge about a process allows a momentary advance to a higher stage of knowledge, but carries the seeds of change that force a regression to earlier stages. Learning becomes a continuous task of the design and manufacturing operations. Management and control methods must be able to manage effectively at multiple stages of knowledge.

Fig. 2. Stages of knowledge about production. Primary development: 1. Can recognize good output (but don't know how to obtain it). 2. Can recognize variables (but don't know which are relevant). 3. Can discriminate among variables. 4. Can measure primary variables. 5. Can control primary variables locally. Secondary development: 6. Can recognize and discriminate secondary variables. 7. Can control secondary variables. Complete proceduralization: 8. Possess complete procedural knowledge.

Fig. 3. Degree of procedure versus stage of knowledge.


4. An example: static and dynamic approaches to automated watch assembly

This Section presents a detailed discussion of an assembly line process. It illustrates the differences between static and dynamic approaches in a practical context and suggests how operations research methods might be developed to handle some of the dynamic variables and new problems raised by a dynamic approach. We have deliberately chosen a process that appears to fit the static assumptions well: high volume assembly of wrist watch mechanical movements by robots on an assembly line. Yet, as we will show, many tasks crucial to competitive survival are highly dynamic and cannot be described solely on the basis of static issues.

Fig. 4. Assembly workstation on the automated watch assembly line.

4.1. Description of the assembly line

The automated watch assembly line consists of about 50 modular stations performing 150 operations. (This description is a simplification derived from observations of several lines of large Japanese watch companies made in 1986 and 1987.) The line is adaptable to a wide variety of models (250 models in production in any month), high volume (a pallet of 100 watches every 600 seconds), and high precision (2 micron accuracy for critical operations). Each of the 50 stations is configured with a pick and place robot, an assembly robot with parts feeders, an inspection station, and a rotating carrier (Fig. 4). All stations are modularly configured. Parts feeders and arms of the assembly robots are idiosyncratic, designed for the particular task to be performed at a specific station.
The sequence of events in each module is as follows. The pick and place robot retrieves a carrier containing a partially assembled device and places it in the empty slot on the index table, which rotates the carrier under the robot. The robot adds new components to the assembly, and the table rotates the carrier to an inspection station and then back within reach of the pick and place robot, which either returns it to the pallet of 100 devices or deposits it in the reject chute. This sequence is repeated for all 100 devices on the pallet, after which the pallet moves to the next station on the assembly line. As the conveyor line is asynchronous, the next station need not be physically adjacent to the prior station. The conveyor forms a buffer of pallets in process.
Each assembly robot at each station contains the tooling and software required for a particular operation. Most of the inspection stations are physically similar, but use software appropriate to the detection of specific task objectives. Stations are programmed such that all tasks take a fixed amount of time, less than or equal to six seconds. The modular assembly unit is functionally similar to a person on an assembly line, but faster and more repeatable.

4.2. Static approach to the design of assembly lines

Assembly line models in the OR literature cover either design or operation. Design is the problem of determining line layout and deciding which tasks to assign to each robot station. Baybars [9] surveys exact algorithms for simple assembly line balancing. The principal assumptions are: all input parameters are known with certainty; a task cannot be processed in arbitrary sequence due to technological requirements; and all tasks must be performed. Other constraints may be formulated as a mathematical program. The objective function is usually either to minimize the number of stations along the line, keeping the cycle time the same, or, equivalently, to minimize the cycle time for a fixed number of stations. The static assumptions in Section 2 are implicit in the models.
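As a concrete instance of the balancing problem just described (minimize the number of stations for a fixed cycle time, subject to precedence), here is a greedy longest-task heuristic. It is a sketch with made-up task data, not one of the exact algorithms Baybars surveys, and it assumes every task fits within the cycle time.

```python
# Greedy sketch of simple assembly line balancing, type 1:
# minimize stations for a fixed cycle time, respecting precedence.
# Task data below are hypothetical.

def balance(tasks, precedence, cycle_time):
    """tasks: {name: duration}; precedence: {name: set of predecessors}."""
    stations, assigned = [], set()
    while len(assigned) < len(tasks):
        station, load = [], 0.0
        while True:
            # candidates: unassigned tasks whose predecessors are all done
            # and which still fit in the remaining cycle time
            ready = [t for t in tasks
                     if t not in assigned
                     and precedence.get(t, set()) <= assigned
                     and load + tasks[t] <= cycle_time]
            if not ready:
                break
            pick = max(ready, key=lambda t: tasks[t])  # longest task first
            station.append(pick)
            assigned.add(pick)
            load += tasks[pick]
        if not station:
            raise ValueError("a task exceeds the cycle time")
        stations.append(station)
    return stations

tasks = {"a": 4, "b": 3, "c": 5, "d": 2}
prec = {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
layout = balance(tasks, prec, cycle_time=6)  # three stations for these data
```

An exact formulation would search the full assignment space (e.g., by integer programming); the greedy pass only illustrates the structure of the problem.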
Assembly lines used to produce two or more models of the same product may do so in sequential batches (the multi-model case) or intermixed (the mixed-models case). For the latter, see Dar-El [10], Dar-El and Cother [11], Macaskill [12], and Thomopoulos [13]. Constraints might include: zoning restrictions that limit the grouping of tasks at one station [14,15]; tasks that must be done at a particular station; parallel stations [16]; other forms of positional restrictions; buffer stocks and other generalities such as feeder lines or parallel subassembly lines [17].

The optimization problem for all of these formulations is one of search through a large space to find optimal task assignments and line layouts. The search is carried out only once, during the line design phase. The line layout and balancing problem is usually modeled deterministically, exceptions being Silverman and Carter [18] and the articles cited therein.

We now examine, from a static perspective, contingencies in the minute by minute operation of the line. Contingencies are introduced in two related groups of models: those concerned with buffering between stations, and those concerned with inspection and rework. Typically, each machine is assumed to have a stationary and independent distribution of failure probabilities. A failure is assumed to be visible, and to shut the machine down completely. With no buffers between stations, a failure or slowdown at one station would shut down the entire line. Therefore buffers are inserted, and the optimization goal is to set their size and location. Models for buffer sizing optimize the cost of buffer inventory versus the value of lost production time when other stations are starved or blocked. Even determining the frequency of blocking is difficult [19]. Failure probabilities and repair times are assumed to be stationary and outside the control of management. (Some models of repair, especially simulation models, explicitly model repair people as a finite resource. Hence, time to repair depends on how many people have been assigned, a management decision. However, the parameters of these models are still assumed to be exogenous.)

The other common contingency in static models is rework or scrapping of units. If an operation is performed incorrectly, a subsequent inspection will detect the problem and the unit can be routed to a rework station. Optimal placement of inspection stations and the optimal capacity and buffering for rework are determined by dynamic programming and
queuing models of work under repair. Again, analysis is by static economic optimization incorporating variables such as the costs of rework, scrapping, idle repair capacity, and inadequate capacity. Probabilities of flawed operation of machines or test stations are assumed to be exogenous, stationary, and not subject to controlled learning. Furthermore, the models neglect the roles of rework and testing as likely sources of information, noise, and confusion [20].
Thus, though the static paradigm accepts contingencies as important to line design, the only response it allows is to work around them. All probabilities are constant and exogenous, the goal of static models being to optimize static efficiency despite the presence of contingencies. Contingencies are treated neither as sources of information for learning nor as targets for improvement.
4.4. Performance of the watch line

By the standards of the static paradigm, performance of the assembly line was excellent. Each machine had an average uptime of about 99.8%, with a mean time to repair of 4 minutes and a standard deviation of repair of 2 minutes. A single pallet (100 watches, 10 minutes of work) buffer between each station would be sufficient to prevent starvation of any station. A one pallet buffer for each of the 50 stations would total less than a day of work-in-process inventory, which is minimal by any standards. Similarly, the cost of scrap plus rework was about 1% of total material cost. Problems were concentrated in about 5 of the 50 stations; an approximate model would be 2 watches rejected per thousand per station, or one watch rejected every 50 minutes. A single operator could visit each station once an hour to rework or scrap the defective movements.
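These figures are easy to verify from the quantities quoted above (pallet size, cycle time, repair statistics, and reject rate); the data are from the text, the arithmetic is ours.

```python
# Back-of-the-envelope check of the watch line's performance figures.

pallet_size = 100        # watches per pallet
cycle_s = 600            # seconds between pallets, i.e. 10 minutes of work
throughput_per_min = pallet_size / (cycle_s / 60)     # 10 watches per minute

# Repair: mean 4 min, std 2 min. A one-pallet (10-minute) buffer covers
# repairs out to mean + 3 sigma, so starvation of downstream stations is rare.
repair_mean, repair_std = 4, 2
buffer_min = pallet_size / throughput_per_min          # 10 minutes of work
assert buffer_min >= repair_mean + 3 * repair_std

# WIP: one pallet buffered at each of the 50 stations.
wip_watches = 50 * pallet_size                         # 5,000 watches
wip_minutes = wip_watches / throughput_per_min         # 500 min, under a day

# Rejects: 2 per thousand per problem station, at 10 watches per minute,
# gives one reject roughly every 50 minutes at such a station.
rejects_per_min = (2 / 1000) * throughput_per_min
minutes_per_reject = 1 / rejects_per_min               # 50 minutes
```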
Viewed in the static paradigm, this automated line is remarkably free of contingencies, and those that do arise can be accommodated by one or two people. The line is also very flexible (250 models on a line at one time, with more than half of these new each year). Finally, it has low costs. As the line is run 24 hours per day, 5 days a week, the robots, which are mostly standard models with few axes, are used intensively. Capital cost is well below $0.25 per watch. In terms of the static paradigm, the line is already very efficient.
4.5. Dynamic approach to the watch line

Given such performance, why bother with a dynamic approach? Primarily because, despite its excellent performance, the line was continually evolving and process improvement was a high priority. For example, actual equipment turned over frequently, with 40% of the machines being replaced by better ones each year. In addition, as the locus of planned learning shifted from one station to another, the tasks assigned to the existing machines were reconfigured weekly. This dynamic evolution had been going on continuously for more than five years, since the first robotic line was built. About 60% of the labor cost of the entire plant (both direct and indirect labor) was associated with process improvement.
Paradoxically, it is precisely the efficiency and programmed nature of automation that causes competitive pressure. Fully proceduralized methods can be easily copied by competitors, and the watch company has several strong competitors. Unless it constantly creates new products or new process innovations embodying new knowledge, the firm will not be able to generate profits beyond economic rent. Given the rapid evolution of the world watch market, with thousands of new watch styles each year, including ten or more new families requiring new tooling and new programming, the firm would quickly lose market share if it failed to innovate. Consequently, the production line makes a number of product families, each in a different stage of the product life cycle. The most crucial period is early in the life of the family.
We observed one new watch family from ramp-up to full-scale assembly. A critical component of the new watch is stamped and formed from sheet metal, then heat treated and fed directly to the assembly station. Of the 19 specified dimensions of the part, three, all involving bending the sheet metal at an angle, were considered critical. Due to springback effects, it was very difficult to meet the required tolerances for these operations.
In early production of the watch, frequent
rejects occurred during assembly of this
component, indicating one or more out-oftolerance conditions. As part of their problem
solving, operators took a sample of 1,000 of
these components and measured each of the
3 critical dimensions on each part. The results
were plotted and the mean, standard deviation, probability of rejection, and coefficient
of performance (defined as the ratio of the
measured standard deviation times 6, to the
allowed tolerance band) calculated. This allowed them to compare actual process precision to what is required. For two of the three
dimensions it was well above one, suggesting
that process precision is adequate. (Both of
these dimensions were highty shifted to one
edge of the tolerance band, however, indicating that the stamping die needed to be
changed.)
For the third dimension, the coefficient of
performance was about 1.0, with a process
standard deviation of 27 microns and an
allowed tolerance of 160 microns. This was
unacceptable.
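The operators' calculation can be sketched as follows. This is a minimal illustration in Python with simulated measurements (the actual data are not given in the text, beyond the 160-micron tolerance band and 27-micron standard deviation); the coefficient of performance is computed here as tolerance band over six standard deviations, the form consistent with the numerical example.

```python
import math
import random

def process_stats(measurements, lower, upper):
    """Summarize a sample against a tolerance band [lower, upper]:
    mean, standard deviation, coefficient of performance
    Cp = tolerance band / (6 * sigma), and the probability of rejection
    assuming the dimension is normally distributed."""
    n = len(measurements)
    mean = sum(measurements) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in measurements) / (n - 1))
    cp = (upper - lower) / (6 * sigma)
    # Normal tail probabilities outside the tolerance band.
    cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    p_reject = cdf((lower - mean) / sigma) + 1 - cdf((upper - mean) / sigma)
    return mean, sigma, cp, p_reject

# Simulated data matching the text's numbers: sigma of about 27 microns
# against a 160-micron band gives a coefficient close to 1.0.
random.seed(1)
sample = [random.gauss(0.0, 27.0) for _ in range(1000)]
mean, sigma, cp, p_reject = process_stats(sample, -80.0, 80.0)
print(round(cp, 2), round(p_reject, 4))
```

A coefficient near 1.0 means nearly the whole tolerance band is consumed by six standard deviations of process variation, leaving no margin for mean shifts.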
One solution, commonly employed in the
static world, would be to inspect and weed out
all bad parts. This is often the easiest, fastest to
implement, and cheapest solution, since it
requires no further experiments or changes.
Numerous existing OR models deal with
optimal frequency of inspection for weeding
out.
A dynamic approach rejects such weeding
out for a variety of reasons, the most immediate being that the module where this part was
inserted would thereafter be unstable and
problematic, requiring a large buffer and frequent human intervention to deal with contingencies. Over the long run, this solution would
impede further development of new watches,
since all would have to confront the same
problem. In contrast, a root cause solution to
this problem would add to the base of knowledge available for later products.

One solution considered was to reduce the process standard deviation. Another was to
redesign the watch mechanism to make the
acceptable tolerances broader. For other types
of problems, a solution might be to alter the
assembly station to make it more forgiving.
That is not an option here, as this dimension is
important not just to watch assembly but to
watch performance.
Problem solving activity like this also occurred after ramp-up of a model. New equipment
was continually being installed, and small improvements in module speed or conformance
were frequent targets of experimentation. In
addition, it occasionally happened that a previously well behaved module developed problems for some watch models, as indicated by
the size and composition of its reject pile. Continual effort was made to detect and focus
attention on such problems, and then to solve
and learn from them.
One might think that eventually all 50 modules of the watch line would reach Stage 7 of
knowledge, where all primary variables are
well understood and all secondary variables
are adequately controlled. If they did, the process could be optimized and stabilized, and the
learning resources (60% of the people plus
some surplus equipment) removed. However,
the business environment of the firm does not
allow this. Competition pushes toward smaller
watches with tighter tolerances. With increased precision, less is known about how
some of the control variables affect results, and
variables that were previously secondary become primary. This is evident when old recipes
no longer work. Thus, the process is pushed
back to Stage 5 for some modules, and the
cycle repeats.
4.6. Dynamic approach to the design of
assembly-lines

Our dynamic approach takes an opposite tack from the static paradigm, regarding all contingencies as endogenous and key foci during design and subsequent management of the line. Much of the management of the line focused on learning, i.e., on deliberate improvement in the operating behavior of the line, brought about by new knowledge. Primarily because line layout is dynamic, and
evolves in response to the location and status
of various contingencies, the design of the line
does not fit the patterns predicted by static
models. At all times, a few of the modules of
the line are undergoing
intensive work. This
may take the form of experiments to speed up
the station or improve its conformance,
the
introduction
of a new piece of specialized
equipment, or even the replacement of an entire module. In other cases, it fits the previous
example, in which the fabrication process for
one part is being intensively evaluated, and
interactions between fabrication and assembly
are under study.
At almost no time is this line well balanced in the static sense. A few of the stations
are run at the six second cycle time, but many
of the other robotic operations can be done in
less than six seconds. The conformance quality
of the module interacts with its speed because
of the micron tolerances used to fit parts together, so that higher speeds can sometimes be
achieved only at the risk of increasing the need
for retries. To speed up a module, its robot arm
must be accelerated, causing vibration, which,
if not damped,
will be greater than two
microns. Various methods for reducing the
vibration are known, but must be applied and
tested on a case by case basis.
Alternate
modules were sometimes introduced into the process flow during these periods of experimentation
so that the module
under study could be isolated. At other times,
the in-process
buffer on both sides of the
module was increased to allow more latitude
for contingencies
without affecting the rest of
the process. Following a period of intensive
investigation,
experimentation,
and change,
buffer size might be kept larger than normal
for a few days while the frequency and nature
of contingencies
was carefully tracked.
4.7. Relevant models for a dynamic approach

Useful models can be developed to describe dynamic activities. In some cases, existing classes of models are relevant, with changes to
capture certain variables. In other cases, entirely new models are needed. We see three
categories
of models as useful and feasible
to develop from existing operations
tools/
research methods: attention-focusing mechanisms, problem-solving strategies, and physical/economic models of processes.
Attention-focusing
mechanisms. JIT is an
attention-focusing
mechanism
designed
to
highlight contingencies as soon as they appear.
The extreme form of JIT is kanban control
with no buffers. With this system, when a machine breaks down or there are quality problems, the assembly line comes to a halt. For
instance, if the percentage
defective at each
station in the watch line was 1% (or when
uptime for each machine was only 99%),
a kanban production
system with no buffers
would yield a system uptime of less than
5%. Clearly, pure kanban forms of JIT are
applicable only to processes under very tight
control.
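The fragility of an unbuffered kanban line follows from simple multiplication of independent station uptimes. A minimal sketch (the station counts below are illustrative assumptions; the text does not state how many elementary operations the watch line comprises, though its sub-5% figure is consistent with roughly 300 serial operations at 99% uptime each):

```python
def serial_uptime(p_station, n_stations):
    """Fraction of time an unbuffered serial line runs when each of
    n_stations is independently up with probability p_station: the
    line moves only when every station is up simultaneously."""
    return p_station ** n_stations

# 99% uptime per station collapses quickly as stations are added.
for n in (10, 50, 100, 300):
    print(n, round(serial_uptime(0.99, n), 3))
```

This is why the text concludes that pure kanban forms of JIT suit only processes already under very tight control: the whole-line uptime decays exponentially in the number of serial operations.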
A related attention-focusing
mechanism is
the E-lot system of manufacture
[21], which
locates small lots of buffer inventories at specific points on the line. When a contingency
arises, a component from the buffer lot is used
to maintain the normal flow and the rejected
item is shifted to the E-lot for analysis. The
existence and use of the E-lots is the attentionfocusing mechanism. The size of the lot is set to
bring the system to a halt for systematic problems but keep it moving for random errors.
The size of the E-lots can be reduced to focus
management
attention on a specific problem.
The most significant difference between E-lots
and traditional inventory systems is that E-lot
inventories, used specifically to control contingent conditions,
are the exception
rather
than the rule. As problems are permanently
resolved and contingencies
are reduced, the
E-lot system approaches a kanban system.
In the watch line, E-lot buffers are maintained at the five stations where problems
occur, and rejected lots are analyzed for defects. When systematic errors occur, the E-lot
inventories are used up before they are replenished, bringing the system to a halt. In the systems we studied, E-lots ranged between
1 unit and 100 units.
Such a system raises a host of questions:
Where should E-lots be located? What should
be the size of the inventories? How should they
be replenished? When should the size of the
inventories be reduced? These questions can be
modeled by applying OR tools to dynamic
activities.
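Some of these questions can be explored with a toy simulation of a single E-lot buffer. This is an illustrative sketch, not the model of [21], and the parameter values (a buffer of 5, replenishment every 100 cycles) are assumptions:

```python
import random

def cycles_until_halt(e_lot_size, p_defect, replenish_every,
                      max_cycles=100000, seed=0):
    """Simulate one E-lot buffer: each cycle a defective component
    (probability p_defect) consumes one unit of the buffer, and the
    buffer is topped back up to e_lot_size every replenish_every
    cycles.  Returns the cycle at which the buffer is exhausted
    (the line halts), or max_cycles if it never empties."""
    rng = random.Random(seed)
    stock = e_lot_size
    for t in range(1, max_cycles + 1):
        if rng.random() < p_defect:
            stock -= 1
            if stock < 0:
                return t  # buffer exhausted: halt forces attention
        if t % replenish_every == 0:
            stock = e_lot_size
    return max_cycles

# Random errors (1% defects) rarely exhaust a modest buffer between
# replenishments; a systematic problem (20% defects) halts the line fast.
print(cycles_until_halt(5, 0.01, 100))
print(cycles_until_halt(5, 0.20, 100))
```

The simulation exhibits the design property described in the text: the lot size acts as a filter that lets random errors pass while converting systematic errors into a line stoppage.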
Problem-solving strategies. The process of
assembly on the watch line is divided into 50
modules. Processes are better understood in
some modules than in others. A module in
which a robot repeatedly executes an algorithmic procedure flawlessly may be at Stage 7,
while processes in other operations, such as
bonding components with an adhesive, may be
at much lower stages. Environmental conditions may change and cause a usually reliable
Stage 5 process to fail. The process of fault
diagnosis in a rework station might be only at
Stage 3, requiring pure expertise.
When a system detects a number of problems, a decision must be made where to allocate limited resources. Fault tree analysis
provides one modeling construct for laying out
contingencies and their possible causes. This
can be complemented by statistical techniques
and industrial engineering methods of analysis
and by experiments constructed to identify
process functions more precisely. In the future, operations research models capable of examining a system as a whole could develop problem-solving strategies in the large. In formulating the assembly-line problem at the
ramp-up stage, for example, the objective
when assigning tasks to stations would be to
gather the most information on effective process parameters for each operation, not to minimize initial cycle time.
To be most effective, experiments should
be tied to models of physical phenomena
and related to the economics of production.
For instance, in punch press models, the
physics of deformation and control should
be studied. One can construct sequential
Bayesian models of the value of information
from experiments and the economics of process change.
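One minimal form such a sequential Bayesian model might take is a beta-binomial update of the belief about a process defect rate, where the value of a proposed experiment can then be judged by how much it is expected to shrink the posterior uncertainty. This sketch is illustrative, not drawn from the paper, and the trial counts are invented:

```python
def beta_update(alpha, beta, defects, trials):
    """Sequential Bayesian update of a Beta(alpha, beta) belief about
    a process defect rate after observing `defects` in `trials` parts."""
    return alpha + defects, beta + (trials - defects)

def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

def beta_var(alpha, beta):
    return alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))

# Start with a vague Beta(1, 1) belief, then fold in two trial runs.
a, b = 1.0, 1.0
a, b = beta_update(a, b, defects=3, trials=50)
a, b = beta_update(a, b, defects=1, trials=50)
print(round(beta_mean(a, b), 3), beta_var(a, b) < beta_var(1.0, 1.0))
```

Comparing the posterior variance before and after a candidate experiment, against its cost, is the economic trade-off the text alludes to.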

Which problem-solving strategy to use depends on how much one knows about the
problem already. Each stage of knowledge requires a different kind of experiment with its
own economics. In the out-of-tolerance problem described above, for instance, the variance
of the dimension falls with heat treatment,
while the mean shifts one way or the other. The
physics of why and how variance falls is not
well understood; this effect is at a low stage of
knowledge. The reductions seem to be related
to the shape, thickness, and material composition of the component, and to the heat treatment process. Beyond this, not much is known.
The process of improvement in control is thus
one of rudimentary controlled trials. Expertise
and judgment enable us to assess similarities
and differences between components, relate
control parameters to output, and decide what
kinds of experiments to conduct.
By contrast, the stamping process before
heat treatment is well understood. The relationship between process parameters and process variance is sufficiently well understood
that statistical relationships can be built and,
for simple shapes, functional relationships estimated. Thus, the problem solving methods
should depend on what we already know
about the processes we are attempting to improve [22].
Physical/economic models of processes. The
physical behavior of a process is critical to its
operation and economics. For physical operations at Stage 7 of knowledge, a model can be
written to predict the occurrence of endogenous contingencies. For example, engineers in
the watch line used computer-aided engineering (CAE) to simulate vibration at the end of
a robot arm at different speeds and accelerations, using different tool designs and damping
methods. This allowed them to design the robot procedures. Had knowledge of a robot
only been at Stage 6, pure simulation would
not have been adequate, but a combination of
experiments, algebraic models, and simulations could still have found and described key
relationships quite effectively, providing guidance for the design and software of the assembly robot.

The goal of such work is to develop a science
of manufacturing
methods. We believe that the
necessary tools are now available to do this. It
is no longer necessary to use pure expertise to
design manufacturing
methods, as if we had
only a Stage 4 knowledge.
So far, such
modeling has been done most extensively by
domain specific engineers during product design, such as stress and vibration calculations
for airplanes and hard disk drives. CAE tools
are only beginning to be used for manufacturing engineering, but it is already possible to
incorporate
operations
research
methods
directly into the CAE tools, for example to
conduct searches for lower cost or higher performance configurations.
A few fields, such as
chemical engineering, have already begun to
use OR methods.

5. Applications of a dynamic approach


We now illustrate how a dynamic approach
can illuminate
diverse manufacturing
phenomena.
5.1. Evolution of process control

A recent historical analysis of process control in machine tool-based industries found six
epochs, each characterized
by an intellectual
shift and the development
of an entire new
system of manufacture,
spanning machines,
the nature of work, and the organization
[23].
The first three epochs involved increased
mechanization:
substitution
of capital
for
human labor; progress through economies of
scale; and increasing mechanical constraint to
increase precision and control despite higher
energy intensity. The last three epochs reversed these trends, fostering increased versatility, substitution
of intelligence (both human
and machine) for capital, and economies
of
scope. Today, machines are increasingly used
as extensions of the human mind, and both
human and machine discretion and versatility
are growing.

There is a correspondence
between the six
epochs of process control and the eight stages
of knowledge. In the first three epochs, as in
the first five stages of knowledge, the emphasis
is on identifying,
differentiating,
measuring,
and gaining localized control of a process. In
the last three epochs of process control, and in
the latter stages of knowledge, system developers study and gain control of process contingencies until they are able to extract general
principles and technologies that can be applied
in a variety of domains [23, p. 90]. Problem
solving and development
of new knowledge
assets such as software and parts descriptions
become dominant activities. Thus key activities shift from static to dynamic tasks.
5.2. Development of flexible manufacturing systems

An analysis of flexible manufacturing systems (FMSs) in Japanese and U.S. machine
tool companies found them to be markedly
different in ability to run untended, in versatility (number of different parts made), and in
effective metal cutting time [24]. For example,
none of the U.S. systems was able to run untended, while 18 of the 60 Japanese systems
had been developed to control process contingencies to the extent that they could run untended. This required Stage 7 knowledge.
The differences are traceable in part to system development
practices.
In the United
States, system development
projects
were
treated as a one-time activity, at the end of
which the development
team was disbanded.
The workers and engineers who ran the system were told not to experiment and lacked
the expertise to fix even known bugs. This is
consistent with the static paradigms separation of knowledge from work, but is highly
inappropriate
in a CIM environment.
In contrast, the successful Japanese efforts kept the
original teams small and required them to remain with the system until it attained 90%
uptime. In fact, Japanese developers took on
day-to-day operational management, and continued to innovate and develop new applications for the systems. Learning was integrated with routine activities, most of which could be
reliably performed
by the machines themselves.
5.3. Comparison with the experience curve model of learning

The standard economic model of learning is the experience curve, also known as the manufacturing progress function or learning curve [25, 26]. Experience
curve models
state
that improvement
(usually measured as cost
reduction)
is an inevitable
by-product
of
normal production,
and that manufacturing
performance
improves
with
the log of
cumulative volume produced. Various studies
have used empirical data to estimate the
slope of the experience
curve in different
industries.
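The experience-curve relationship just described can be written cost(V) = c1 * V^(-b), so the slope b is recoverable by least squares in log-log space. A sketch of such an estimation, using synthetic data on an assumed 80% curve (20% cost reduction per doubling of cumulative volume):

```python
import math

def fit_experience_curve(volumes, costs):
    """Least-squares fit of log(cost) = log(c1) - b * log(volume).
    Returns (c1, b); 1 - 2**(-b) is the fractional cost reduction
    per doubling of cumulative volume."""
    xs = [math.log(v) for v in volumes]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope

# Synthetic data lying exactly on an 80% curve.
volumes = [1, 2, 4, 8, 16, 32]
costs = [100.0 * v ** math.log2(0.8) for v in volumes]
c1, b = fit_experience_curve(volumes, costs)
print(round(c1, 1), round(1 - 2 ** (-b), 2))
```

Real plant data, of course, scatter around the fitted line, and the critique that follows is precisely that the fitted slope should not be treated as a fixed constant.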
Although the experience curve concept is
useful for diverse purposes, including setting
performance
targets and prices, much of the
existing literature
on experience
curves is
seriously incomplete and misleading. It implicitly treats the rate of learning as beyond
the control
of management.
This follows
from the assumption
that the slope of the
experience
curve
is fixed, approximately
constant over time, and constant across all
firms in an industry. Although Alchian's early
article provided extensive evidence that all of
these assertions
are wrong, subsequent
research has not followed up on this aspect of his
findings.³
Models constructed
within the dynamic
paradigm suggest that there are many ways for
management to directly and indirectly alter the
rate of learning. For example, Bohn [27] documents very different process variability levels
in different plants and shows how this will
retard learning. In this model, the key independent variable is the cumulative number of experiments performed, rather than cumulative
production
volume.

³ Dutton and Thomas [31], whose survey of more than 200 learning curve studies shows that the rate of learning is far from fixed, are notable exceptions.

Analysis of the learning process in electronics ramp-ups suggests that four forces determine the effectiveness of experimental learning in producing useful knowledge [22]:
- information turnaround time (the time needed to design, execute and analyze a single experiment),
- signal-to-noise ratio of the experiment (which is affected by process variability, how the experiment is conducted, and sample size, among other things),
- cost per experiment, both direct and indirect, and
- fidelity of the experimental environment to the true process.
All four are affected by management
of learning per se, and also by management
of the
routine
manufacturing
process.
Thus two
plants using the same process can exhibit different abilities to learn.
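A stylized way to combine the four forces into a single learning-rate expression (an illustrative assumption of ours, not a model from [22]; the log2(1 + SNR) information yield is borrowed by analogy with a noisy channel, and all parameter values are invented):

```python
import math

def learning_rate(turnaround_days, snr, cost_per_expt,
                  budget_per_day, fidelity):
    """Illustrative model: useful knowledge gained per day is the
    number of experiments the plant can run per day, limited both by
    turnaround time and by budget, times the information yield of
    each experiment, here taken as fidelity * log2(1 + SNR)."""
    expts_per_day = min(1.0 / turnaround_days, budget_per_day / cost_per_expt)
    return expts_per_day * fidelity * math.log2(1 + snr)

# Shorter turnaround and a cleaner (higher SNR) process both raise the
# rate; a low-fidelity environment scales everything down.
fast_clean = learning_rate(1.0, 8.0, 100.0, 500.0, fidelity=0.9)
slow_noisy = learning_rate(5.0, 2.0, 100.0, 500.0, fidelity=0.9)
print(fast_clean > slow_noisy)
```

Even this crude formulation shows why two plants running the same process can learn at very different rates: turnaround, noise, cost, and fidelity are all management choices.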

6. Conclusions
Taylor believed that learning through experimentation
was crucial. But in his world, it
was to be done off-line, by specialized personnel, usually in a lab or pilot line. Information
fed back from normal manufacturing
was used
to reward high output but not as a source of
new knowledge. Thus the activities of execution and knowledge creation were both present, but highly separated. This separation is
found today in static approaches
which assume progress comes from outside the manufacturing plant, from vendors, research labs,
and development
efforts. Within manufacturing, knowledge is assumed complete. Longterm competitive success will go to the firms
that improve
the fastest over a sustained
period. In principle they can do this by (1)
purchasing
outside knowledge, (2) intensive
R&D outside manufacturing,
and (3) learning
within existing manufacturing.
We will look at
each in turn.
With world equipment
markets becoming
global, any knowledge embodied in purchased
equipment or software is widely available to competitors. As well-understood procedural
tasks are increasingly turned over to machines
and software available from vendors, competitive advantage lies more and more in expertise
based tasks such as design and process
improvement
beyond the original capability of
purchased equipment. This is strengthened
by
the trend toward flexibility in automation.
In
short, economic
rents are today available
mainly from knowledge which extends what is
directly available from vendors.
In principle,
development
of knowledge
within the firm might be done within separate
research
and
development
organizations.
When is it effective to completely
separate
learning from manufacturing,
as static approaches do? The difficulty with such separation is that non-manufacturing
environments have inherently low fidelity for some key
issues. Fidelity is the similarity between the
location where learning occurs and the manufacturing floor where it is used. For variables
at stages six or seven of knowledge, an artificial
environment
(pilot line) can be created with
adequate fidelity. Experiments
run in such an
environment
can be extrapolated
to predict
accurately what will happen in actual manufacturing. But pilot lines cannot have complete
fidelity for important issues at early stages of
knowledge.
For example,
when tolerances
tighten, subtle previously
unimportant
disturbances on the manufacturing
floor may become significant and have to be analyzed in
order to devise countermeasures.
Furthermore, interactions
among people, machines,
and materials in high volume manufacturing
cannot
be duplicated
realistically
in pilot
lines.
Therefore, fully effective learning for process
improvement
must use information
from the
manufacturing
process itself. Speed is also
a factor, since it is time consuming and expensive to create and run pilot lines. Thus, for
learning about certain issues, it is both more
effective and more economical
to use the
manufacturing
process as the laboratory
for
experiment and observation. These arguments
apply also to research environments,
which
have even lower fidelity.

In short, firms that maintain a rigid separation between learning in R&D and execution in manufacturing,
with a one-way flow of
knowledge and information between them, reduce their ability to learn and their rate of
improvement.
Competition
and other external
factors force continual change in the technology of manufacture
and the products offered
by firms. Schumpeter
[28] argued this more
than 40 years ago, and recent competition
from the Far East has painfully exacerbated
the pressure for such change. We argue that
the production
situation is dynamic; therefore
knowledge is key to competitive
success and
must be explicitly considered
in operations
management.
Managers
in some industries
have known and responded to this for decades.
However the formal analytical tools and concepts available to them have their roots in
Taylor's separation of knowledge from execution, and are thereby limited in dealing with
dynamic issues.
A protected monopolist
might be able to
ignore dynamic issues and maintain a strictly
static approach. Historically,
though, monopolies that have assumed they were immune to
competition
have eventually been supplanted.

Usefulness of operations research in the dynamic world

We have argued that static models of operations are useful only for those problems for
which relevant knowledge
is at or close to
Stage 7, and that this is only a subset of manufacturing in general. Nevertheless, the various
standard tools of operations research are still
usable, even when static paradigms and models
may not apply. We can divide problems into
three categories, depending on what types of
tools and variables are suitable.
(1) Problems that can be handled using standard tools applied to stationary variables;
that is, problems that fit the static paradigm. Efficient execution in stationary situations is still necessary, even if it is not
sufficient for success in most industries. As
technology,
competition,
and operating practices progress dynamically, new problems amenable to stationary analysis arise.
The advent of Flexible Manufacturing Systems, for example, has posed a variety of
interesting new scheduling problems. Yet,
it can be dangerous to ignore the dynamic
aspects of such problems. Schedules should
be designed to enhance the rate of learning,
not just to reduce immediate costs.
(2) Problems that can be handled using standard tools, but applied to dynamic variables and issues. The dynamic paradigm
covers many problems that, although they
remain to be researched, seem amenable to
existing operations
research methods.
Search theory and dynamic programming,
for example, have wide application in conducting sequences of experiments over
long periods of time. Examples of this class
of problem were discussed in the preceding
section.
(3) Problems attended by dynamic issues that
are not fully amenable to standard tools.
We see many dynamic problems that are
empirically important but cannot be fully
modeled. For example, just-in-time inventory control seems to have an impact on
the pressure experienced by workers and
managers to do root cause problem solving. Both the success and the value of JIT
are crucially tied to whether problems are
solved superficially, once-and-for-all (removal of the root cause), or not at all. Yet
this issue may be mediated by psychological factors we do not yet know how to
model. Suri and deTreville [29] model
some relevant effects of JIT, but they are
unable to model the underlying driving
forces. Another example is the management of product design, which occupies an
increasingly important role in modern
manufacturing,
but which has so far
proved quite hard to model.
It is our hope that this paper will stimulate
new research, both into the development of
new tools and models, and into the application
of known tools to dynamic problems. We
believe that framing situations explicitly in
a dynamic paradigm will yield powerful practi-

cal and theoretical insights. Although dozens


of practical books contribute to effective management of dynamic situations [20, 30], with
some exceptions operations management research has not yet contributed enough.
Acknowledgements

Our thanks to many colleagues for vital discussions and comments, and especially to
John Bishop, Kim Clark, Uday Karmarkar,
Richard Rosenbloom, Don Rosenfield, Earl
Sasser, Gordon Shirley, and Michael Watkins.
John Simon provided invaluable editorial assistance. Research funding was provided over
many years by the Division of Research, Harvard Business School. Finally, we are grateful
for the penetrating comments of our anonymous referees. We alone are responsible for
remaining errors, omissions, and bad writing.
References
[1] Taylor, F.W., 1947. Scientific Management (Comprising Shop Management, The Principles of Scientific Management, and Testimony Before the Special House Committee). Harper & Brothers, New York, condensed.
[2] Ackoff, R.L., 1979. The future of operational research is past. J. Oper. Res. Soc., 30: 93-104.
[3] Dando, M.R. and Bennett, P.G., 1981. A Kuhnian crisis in management science. J. Oper. Res. Soc., 32: 91-103.
[4] Murnane, R. and Nelson, R.R., 1984. Production and innovation when techniques are tacit: The case of education. J. Econ. Behav. Organization, 5: 353-373.
[5] Nelson, R.R., 1980. Production sets, technological knowledge, and R&D: Fragile and overworked constructs for analysis of productivity growth? Innovation Technol. Progr., 70: 62-67.
[6] Nelson, R.R. and Winter, S.G., 1982. An Evolutionary Theory of Economic Change. Belknap Press, Cambridge, MA.
[7] Jaikumar, R. and Bohn, R., 1984. Production management: A dynamic approach. Working paper 9-784-066, Harvard Business School, Boston.
[8] Sahal, D., 1985. Patterns of Technological Innovation. Addison-Wesley, Reading, MA.
[9] Baybars, I., 1985. On currently practiced formulations of the assembly line balance problem. J. Oper. Manage., 5: 449-453.
[10] Dar-El, E.M., 1978. Mixed-model assembly line sequencing problems. Omega, 6: 317-322.
[11] Dar-El, E.M. and Cother, R.F., 1975. Assembly line sequencing for model mix. Int. J. Prod. Res., 13: 463-477.
[12] Macaskill, J.L.C., 1972. Production line balances for mixed model lines. Manage. Sci., 19: 423-434.
[13] Thomopoulos, N.T., 1967. Line balancing - sequencing for mixed model assembly. Manage. Sci., 14: B59-B75.
[14] Mitchell, J., 1957. Computational procedure for balancing zoned assembly lines. Research report 6-94801-l-R3, Westinghouse Research Laboratories, Pittsburgh.
[15] Tonge, F.M., 1961. A Heuristic Program for Assembly Line Balancing. Prentice-Hall, Englewood Cliffs, NJ.
[16] Pinto, P.A., Dannenbring, D.G. and Khumawala, B.M., 1981. Branch and bound and heuristic procedures for assembly line balancing with parallel stations. Int. J. Prod. Res., 19: 565-576.
[17] Nanda, R. and Scher, J.M., 1976. Non-parallelability constraints in assembly lines with overlapping work stations. AIIE Trans., 8: 343-349.
[18] Silverman, F.N. and Carter, J.C., 1986. A cost-based methodology for stochastic line balancing with intermittent line stoppages. Manage. Sci., 32: 455-463.
[19] Suri, R. and Diehl, G.W., 1986. A variable buffer-size model and its use in analyzing closed queuing networks with blocking. Manage. Sci., 32: 206-224.
[20] Slade, B.N. and Mohindra, R., 1985. Winning the Productivity Race. D.C. Heath & Co., Lexington, MA.
[21] Jaikumar, R., 1988. Contingent control of synchronous lines: A theory of JIT. Working paper 88-061, Harvard Business School, Boston.
[22] Bohn, R.E., 1987. Learning by Experimentation in Manufacturing. Working paper 88-001, Harvard Business School, Boston.
[23] Jaikumar, R., 1988. From filing and fitting to flexible manufacturing: A study in the evolution of process control. Working paper 88-045, Harvard Business School, Boston.
[24] Jaikumar, R., 1986. Postindustrial manufacturing. Harvard Business Rev., 64: 69-76.
[25] Dutton, J.M., Thomas, A. and Butler, J.E., 1984. The history of progress functions as managerial technology. Business History Rev., 58: 204-233.
[26] Alchian, A., 1963. Reliability of progress curves in airframe production. Econometrica, 31: 679-693.
[27] Bohn, R.E., 1991. Noise and Learning in Semiconductor Manufacture.
[28] Schumpeter, J.A., 1939. Business Cycles: A Theoretical, Historical, and Statistical Analysis of the Capitalist Process. McGraw-Hill, New York.
[29] Suri, R. and deTreville, S., 1986. Getting from Just in Case to Just in Time: Insights from a simple model. J. Oper. Manage., 6: 295-304.
[30] Schonberger, R.J., 1982. Japanese Manufacturing Techniques. Free Press, New York.
[31] Dutton, J.M. and Thomas, A., 1984. Treating progress functions as a managerial opportunity. Acad. Manage. Rev., 9: 235-247.
[32] Bohn, R.E., 1986. An informal note on knowledge and how to manage it. Case 9-686-132, Harvard Business School, Boston.
[33] Bohn, R.E. and Jaikumar, R., 1986. The development of intelligent systems for industrial use: An empirical investigation. In: R. Rosenbloom (Ed.), Research on Technological Innovation, Management and Policy, Vol. 3. JAI Press, Greenwich, CT, pp. 213-262.
[34] Jaikumar, R. and Bohn, R.E., 1986. The development of intelligent systems for industrial use: A conceptual framework. In: R. Rosenbloom (Ed.), Research on Technological Innovation, Management and Policy, Vol. 3. JAI Press, Greenwich, CT, pp. 169-211.