
An Overview of Operations Research

---- Its History, Today and Tomorrow


Xugang Ye¹, S.A. Awoniyi¹, R.N. Braswell¹, Lei Ji², Weixuan Xu²
¹ Department of Industrial Engineering, Florida State University
² Institute of Management Science, Chinese Academy of Sciences
Operations Research (OR) is a relatively new discipline. Whereas 70 years ago it would have been possible to study mathematics, physics or engineering (for example) at university, it would not have been possible to study OR; indeed, the term did not exist then. It was only in the late 1930s that operations research began in a systematic fashion, and it started in the UK as a military subject, with pioneering research consisting mainly of studies on radar. It is also possible, however, to trace many of the concepts and methods of operations research back to activities occurring before World War II. Examples with a direct lineage to modern OR include Erlang's work on congestion in the Danish telephone exchange, a clear predecessor of queuing theory, and Lanchester's theory of combat, conceived during World War I but still the subject of research today.

Looking back over the development and progress of operations research, one can distinguish three periods. As noted above, the first covered the remarkable contributions to military successes, including the victory of the Royal Air Force. The second began with the enthusiasm for applying OR to civil problems after the war and was highlighted by the entry of OR into universities and academic establishments across the world. The third period finds operations research engaged in activities with names like large-scale dynamic system analysis and real-time optimization.
1. OR History

Wartime Origin

In the story of the introduction of radar into air combat, the group formed by Fighter Command to address such problems was named the Operational Research Section. In their reports there was little theory and much practice. When Coastal Command, Anti-Aircraft Command and the other organizations in the hard-pressed British defense forces sought to recruit operational research sections, they had a list of scientists from which to choose. As it turned out, the heterogeneous mixture of scientific specialties proved very suitable for the varied problems presented to the OR sections, and it established the interdisciplinary stamp that has been an important feature of operations research ever since. The United States Navy and the U.S. Army Air Force established a number of operational research sections soon after their entry into the war, and in general the composition of the teams and the types of analysis undertaken were similar to those of the British. We cannot forget one of the outstanding accomplishments: the demonstration that shipping losses to submarines could be reduced by increasing the size of escorted convoys, and the improvement in the lethality of aircraft attacks on U-boats achieved by reducing the setting on the depth charges.
In retrospect it does seem that the circumstances of World War II were especially favorable for the birth of operations research. The urgency of the problems, the availability of a structured system for reporting data, and the willingness to experiment are unlikely to be found in many peacetime situations. Nevertheless, the peacetime developments surpassed the gloomy predictions of Bronowski. From its military beginnings, OR successfully diffused into, and took root in, government departments, industrial organizations, commercial enterprises and academic establishments across the world.

Establishments

Initial efforts to establish OR in the government sphere were faltering, but the groups established in the newly nationalized coal and steel industries flourished. Early studies included the use of statistical techniques, queuing theory, inventory models, and simulation (by hand). Of course, many of the problems could be satisfactorily solved without sophisticated techniques at all; the solution was apparent once the situation had been appraised and the data gathered. Simulation stood out as the technique that proved most valuable in diverse industrial situations. However, most commentators agreed that it was not the techniques but their conduct by scientists applying the scientific method that defined OR.

Much excitement was generated by two subsequent developments. First, efficient algorithms were developed in the U.S. for the solution of linear programming problems. In 1947, G. B. Dantzig introduced his linear programming model and the simplex method, an important milestone in the history of OR. As computer power grew, it gradually became possible to tackle a wide variety of large-scale linear programming problems arising from resource allocation, transportation and production scheduling. Second, the development of critical path analysis and its use on the U.S. Polaris construction project led to its obligatory application on large governmental construction projects on both sides of the Atlantic.
OR was at its height in the 1960s, with industrial OR groups flourishing. Growing interest in the commercial and service sectors was gradually taken up by the UK civil departments of government, and there were energetic attempts to apply the techniques to governmental planning and budgeting. Blackett was one of the four founders of the Operational Research Club in 1948, which was later renamed the Operational Research Society. It was the first OR society in the world, with the Operations Research Society of America following in 1952. Blackett was also the first author to publish a paper in the Operational Research Quarterly, the first OR journal in the world to be published. The first issue was 15 pages long, and Blackett's paper was less than four pages of what we today call A5 paper. By the sixties both the Society and the journal had become established and respected worldwide. The Society remained essentially an open club of people who wanted to be associated together. Whether or not this was the right strategy to adopt, it had a profound effect on the nature of the Society, its activities, and its membership. In the sixties OR started to be established in university engineering, economics and business administration departments. Later it would become established in mathematics and management departments and business schools. The first dedicated chair in OR was established at Lancaster University in 1964, and the first OR course in the U.S. was offered at MIT. OR was thus becoming established as an academic discipline as well as a professional activity.
A signpost of the development of operations research is the Lanchester Prize, awarded annually for a book or paper making a significant contribution to the advancement of the state of the art of OR. Since the publication must be in the English language, the coverage is less than global; and since the prize is awarded for a single publication, it does not allow for a long program of research reported at intervals. Nevertheless, the winning contributions do represent landmarks in the progress of operational research. The first Lanchester Prize was awarded to the paper "Traffic Delays at Toll Booths" by Leslie C. Edie in 1956. The Franz Edelman Award is another signpost of the development of operations research. Unlike the Lanchester Prize, the purpose of the Franz Edelman Award is to call out, recognize, and reward outstanding examples of management science and operations research in practice. The prize is awarded for implemented work, not for a submitted paper or for the presentation describing the work. The client organization that used the winning work receives a prize citation; the authors of the winning work receive a cash award. In 1972, the first award went to the Pillsbury Corporation and its author Richard A. Condon. Each year since 1975, the Institute for Operations Research and the Management Sciences (INFORMS) has awarded the John von Neumann Theory Prize to a scholar who has made outstanding contributions to the theory of operations research and management science. The award acknowledges extraordinary contributions within these fields that have stood the test of time. For one of the most influential results of the 20th century, G. B. Dantzig received the first honor as the father of linear programming.

OR Theories

As the core part of the post-war development of OR, mathematical programming won its wings from the 1960s onward as a result of the fast development of computers and computation technology. Fletcher and Reeves published their conjugate gradient method for unconstrained nonlinear programming in 1964; Davidon, Fletcher and Powell proposed the famous DFP algorithm (Davidon 1959; Fletcher and Powell 1963); and in 1970 Broyden, Fletcher, Goldfarb and Shanno developed the famous BFGS method. In constrained nonlinear programming, Farkas developed fundamental theorems underlying what later became famous as the Kuhn-Tucker (K-T, now usually KKT) conditions, named after W. Karush (1939), H. Kuhn (1951), and A. Tucker (1951). Thanks to the work of Biggs, Han, Powell, Gill, Fletcher, Schittkowski and Yuan in the 1980s, SQP methods and trust-region methods stand as the state of the art in constrained nonlinear programming.
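
To fix notation, the K-T conditions just mentioned take the following standard modern form: for minimizing f(x) subject to g_i(x) <= 0 and h_j(x) = 0 (all functions differentiable), a constraint-qualified local minimum x* admits multipliers mu and lambda such that

    \nabla f(x^*) + \sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0,
    g_i(x^*) \le 0, \qquad h_j(x^*) = 0,
    \mu_i \ge 0, \qquad \mu_i \, g_i(x^*) = 0 \quad \text{for all } i.

The SQP methods named above can be read as Newton-type iterations applied to this system of conditions.
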
R. Gomory's papers on integer programming and cutting plane methods in the 1950s emerged as an important effort to attack combinatorial optimization problems. Also in the 1950s, G. B. Dantzig, D. R. Fulkerson and S. M. Johnson nurtured the scintillating scheme of decomposition, branch and bound, and polyhedral combinatorics. Complexity theory and NP-completeness appeared early in the works of J. Edmonds and R. Karp in 1965, and a complete theory of computational complexity took shape in the books of S. A. Cook, Osman, Reeves, E. L. Lawler and C. H. Papadimitriou from the late 1970s to the early 1990s. Starting in the 1970s, Glover, Darwin Klingman, Alan J. Hoffman, Philip Wolfe, Egon Balas, Ellis L. Johnson and Manfred W. Padberg revolutionized the field of integer programming and network optimization. They developed new data structures, algorithms and computer implementations that vastly improved the speed and size capabilities of network codes. In the best traditions of OR/MS, they pioneered many important applications including routing, packing, sequencing, assignment and pattern cutting, and played a major role in disseminating this technology to practitioners.
Starting in the late 1970s, metaheuristic frameworks such as tabu search, genetic algorithms and simulated annealing have had an enormous impact on our ability to solve hard combinatorial problems. In many areas, ranging from scheduling to financial planning to training neural networks, metaheuristics have solved, or quickly obtained high-quality solutions for, problems that were too difficult to tackle by other methods. By the 1970s, it was also hard to conceive what OR would be like without either the indirect, philosophical impact, or the direct invocation, of the methods Richard Ernest Bellman pioneered as dynamic programming.
Related to optimization, linear or nonlinear, continuous or discrete, is game theory, which originated before the end of the war with Von Neumann and Morgenstern in the context of economic competition, but developed to a high state of mathematical sophistication in the 1950s with the distinctive and complementary contributions of John Forbes Nash and Carlton Edward Lemke. As a young graduate student at Princeton, J. F. Nash conceived the idea of non-cooperative equilibrium in multi-person games and went on to prove a general existence theorem for this solution concept. His proofs (1950, 1951) are beautiful applications of the topological fixed-point theorems of Brouwer and Kakutani. Nash's equilibrium proofs were non-constructive, and for many years it seemed that the nonlinearity of the problem would prevent the actual numerical solution of any but the simplest non-cooperative games. The breakthrough came in 1964 with an ingenious algorithm for the bimatrix case (i.e., finite, two-player games) devised by Carlton Lemke and J. T. Howson, Jr. It provided both a constructive existence proof and a practical means of calculation.
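
As a minimal illustration of the equilibrium concept (the payoff matrices below are invented; note also that the Lemke-Howson algorithm finds mixed equilibria, while this sketch merely enumerates pure ones), a brute-force check for pure-strategy Nash equilibria of a bimatrix game takes a few lines:

    import numpy as np

    def pure_nash_equilibria(A, B):
        """All pure-strategy Nash equilibria (i, j) of a bimatrix game.
        A[i, j] is the row player's payoff and B[i, j] the column
        player's when row plays i and column plays j; a profile is an
        equilibrium when neither player gains by deviating alone."""
        out = []
        for i in range(A.shape[0]):
            for j in range(A.shape[1]):
                if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                    out.append((i, j))
        return out

    # Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
    A = np.array([[-1, -3],
                  [ 0, -2]])
    B = A.T  # symmetric game
    print(pure_nash_equilibria(A, B))  # [(1, 1)]: mutual defection
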
Another body of OR theory addressing competition, and originating early in World War I, is Lanchester's Theory of Combat. Considerable effort has been expended to apply it to real situations, and to develop it beyond its original scope, but most studies appear to have been struggles to adapt data to fit the theory, rather than the reverse. A completely different approach to combat or competition has been to simulate the real process in a war (or business) game, in which the judgment of players determines the actions. The description and simulation of real processes by mathematical models has become so common in operational research as to be almost synonymous with OR in some people's minds. It provides a powerful means of testing the effects of prospective alterations without the cost of a real experiment.
The mathematical theory of queues has
occupied a great deal of time and ingenuity of
OR workers. In 1909, Agner Krarup Erlang, a
Danish engineer who worked for the
Copenhagen Telephone Exchange, published
his first paper on the subject. Although
Erlang's model is a simple one, the
mathematics underlying today's complex
telephone networks is still based on his work.
The first use of the term "queuing system"
occurred in 1951 in an article by David G.
Kendall in the Journal of the Royal Statistical
Society. In 1953, David G. Kendall introduced
the A/B/C type queuing notation. The book
"Queues, Inventory, and Maintenance" by
Philip M. (McCord) Morse, was published in
1958 and is considered the first textbook on
queuing. Frank Haight introduced the concepts
of balking, reneging, and parallel queues in
1958. Also in 1958, H. White and L. S. Christie considered server breakdowns. The proof of
Little's formula was published in 1961. Lajos
Takacs effectively applied combinatorial
methods to queuing theory in 1962. Marvin
Mandelbaum and Benjamin Avi-Itzhak
introduced the concept of the split-and-match
(a.k.a. fork-join) queue in 1968. The use of
queuing for computer performance evaluation
began around 1970, with IBM's historical
contributions to performance modeling. Harry
Kosten used the supplementary variables
technique in 1973 to analyze queues. Percy
Brill developed the level crossing method in
1975. Ronald W. Wolff proved and popularized
the PASTA principle in 1982. Richard Larson
developed a "queue inference engine" in 1990.
Gelenbe introduced the concept of negative
customers in 1991. The best known textbooks
in queuing theory are those by Don Gross and
Carl Harris (1998, 1985, 1974), Leonard
Kleinrock (1975), and Cooper (1981). The journal
Queuing Systems started publication in 1986.
Queuing processes are found in many areas of intensely practical concern, to which operations research has been applied with considerable success. These include the management of inventories; arrangements for maintenance, repair, and replacement; and the scheduling of industrial processes.
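
To make the notation above concrete: in Kendall's A/B/C scheme, Erlang's simplest single-server case is the M/M/1 queue (Poisson arrivals at rate \lambda, exponential service at rate \mu, one server), whose steady-state behavior for utilization \rho = \lambda/\mu < 1 is summarized by

    L = \frac{\rho}{1 - \rho}, \qquad W = \frac{L}{\lambda} = \frac{1}{\mu - \lambda},

where L is the mean number of customers in the system and W the mean time spent in it; the identity L = \lambda W is exactly Little's formula mentioned above.
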
The success of queuing theory could not have come about without the development of applied probability theory. In a parallel retrospect, however, applied probability theory runs through much more of the modeling and theoretical work in OR. A variety of stochastic processes, Markov processes in particular, have received a great deal of attention. In the late 1940s, scientists at Los Alamos National Laboratory programmed their early computers to create random combinations of known variables to simulate the range of possible nuclear-explosion results. They nicknamed the program Monte Carlo, after that city's famous roulette wheels, and used it to find patterns that would let them plot the probability of different outcomes. Thanks to the constructive work of S. M. Ulam and J. von Neumann from the late 1940s to the early 1950s, Monte Carlo simulation methods were developed for obtaining numerical solutions to problems that are too complicated to solve analytically. Nicholas Metropolis also made important contributions to the development of such methods.
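
The idea is easy to convey with a toy sketch (estimating pi; this example is purely illustrative and, of course, not one of the Los Alamos models):

    import random

    def monte_carlo_pi(n_samples: int, seed: int = 42) -> float:
        """Estimate pi by sampling points uniformly in the unit square
        and counting the fraction that lands inside the quarter disk."""
        rng = random.Random(seed)
        inside = 0
        for _ in range(n_samples):
            x, y = rng.random(), rng.random()
            if x * x + y * y <= 1.0:
                inside += 1
        return 4.0 * inside / n_samples

    # The error shrinks like O(1/sqrt(n)) regardless of dimension,
    # which is what makes the method attractive for complex models.
    for n in (1_000, 100_000, 10_000_000):
        print(n, monte_carlo_pi(n))
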
Closely related to probability and statistics is the development of decision theory. There is room for disagreement as to whether decision theory belongs to OR. However, pioneering work by W. Edwards, R. Luce, H. Raiffa, R. Howard and R. Keeney from the 1950s onward paved the way toward establishing measures of effectiveness, theories of value, utility and the behavior of groups of people with different interests, and the logic of choice with multiple objectives.
Recalling Flood's address on the subject "New Operations Research Potential" at the tenth anniversary meeting of ORSA in 1962: OR is not only a development in applied science and mathematics that promises to shed light on complex operations and systems problems; it is also a companion development that interacts with very active fields like human factors, economics, cybernetics, information technology (IT), computer science, artificial intelligence, management science, systems engineering, and other scientific approaches to understanding, designing and operating complex man-machine systems. Today, we can confirm Flood's comment with pride. Many of today's large systems analyses are being undertaken by teams that are ever more interdisciplinary, and methods and concepts are integrated from different sciences to a great extent. For OR, this represents the change from individual to group, from workshop to organization, from academic to business.

2. OR Today

Software

Today, faced with such far-flung business issues, an unsophisticated executive will likely exclaim: "Have the computer find the answer!" Unfortunately, straightforward computer methods fail dismally on these problems. Take the warehouse example: suppose a company has 100 possible warehouse locations and wants to find the best 10. Further, suppose that a specific set of warehouse locations can be evaluated in a single clock cycle of a fast PC. It would still take about two hours to check all possibilities. That is not so bad. But when the number of chosen warehouses is increased to 15, the solution time goes up enormously, and starting with 200 possible locations would increase the processing time more than a thousandfold. These problems cannot be solved with just more computing power, but OR can provide the needed capability.
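
The arithmetic behind these figures is easy to reproduce (taking "a single clock cycle of a fast PC" to mean roughly 2.4e9 evaluations per second):

    from math import comb

    CLOCK_HZ = 2.4e9  # assumed evaluation rate: one per clock cycle

    def brute_force_hours(n_sites: int, n_chosen: int) -> float:
        """Hours needed to evaluate every way of picking n_chosen
        warehouse locations out of n_sites, one per clock cycle."""
        return comb(n_sites, n_chosen) / CLOCK_HZ / 3600

    print(brute_force_hours(100, 10))  # about 2 hours
    print(brute_force_hours(100, 15))  # about 29,000 hours: over 3 years
    print(brute_force_hours(200, 10))  # about 2,600 hours: ~1,300x the base case
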
Today, operations research has become a collection of techniques, based on mathematics and other scientific approaches, for modeling and solving real business problems. Linear programming, which revolutionized the field 50 years ago, lets business planners find solutions to problems that involve hundreds of thousands of decision variables and an equal number of constraints on those decisions. In a linear programming model, all of the system constraints and objectives are linear functions. While the underlying algorithms have been known for decades, it is only in the last 15 years that we have had effective software implementations of the simplex algorithm and interior point methods.
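
As a small illustration of the linear programming pattern (the two-product data here are invented), such a model can be stated and solved with the open-source SciPy solver:

    from scipy.optimize import linprog

    # Hypothetical production mix: profits 30 and 50 per unit;
    # machine time 2*x1 + 4*x2 <= 100, labor 3*x1 + 2*x2 <= 90.
    # linprog minimizes, so the profits are negated to maximize.
    c = [-30, -50]
    A_ub = [[2, 4],
            [3, 2]]
    b_ub = [100, 90]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    print(res.x)     # optimal quantities: [20, 15]
    print(-res.fun)  # maximum profit: 1350

Real planning models differ mainly in scale: the same linear structure, but with hundreds of thousands of variables and constraints.
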
Today, we are glad to see optimization software for constraint-programming systems that can handle much more complicated restrictions on decisions than before. We are also glad to see that, for more complicated systems, heuristic or self-learning methods are implemented to find near-optimal solutions in a reasonable amount of time. Every year, academic and business researchers improve OR methods, making them even more useful. Better and more available software has brought OR into many companies via packaged applications such as Microsoft Excel, which includes a user-friendly optimization solver, and Matlab's optimization toolbox, which provides command-line optimization subroutines. Other applications include optimization tools embedded in analytic, decision-support, ERP, and financial-planning software.

Applications

Manufacturers were early adopters of optimization techniques to design their factory floors, schedule production lines, and ensure that products reach customers in as short a time as possible. For example, a midsize manufacturer spent millions of dollars on an ERP implementation and eliminated common problems such as missing orders and insufficient quantities of raw materials. But the factory consistently ran behind, with lead times approaching four weeks. This happened even after the company invested in new machines to automate the production line. The shop consisted of a mix of old and new machines, each with its own operating characteristics. Every morning, the ERP system generated a list of products to be made that day. Human schedulers, generally experienced machine operators, assigned products to machines and sequenced the products to minimize changeover times.
One key issue was whether to use the new
machines for large or small product runs. The
operators argued that the automated-
changeover process on the new machines
meant that time between products was
shorter, so they assigned small product runs
to the new machines. This increased the
number of automated changeovers and, not coincidentally, minimized operator effort. The
alternative was to use the new machines,
which were inherently faster, for longer jobs
to minimize the speedup-slowdown effects of
changing products. Which approach was
actually more efficient?
Answering this question was far beyond the capability of the ERP system, which offered only rudimentary scheduling analysis. Instead, an OR model was built that contained the key factors in the situation, including changeover times, speedup/slowdown effects, and job-mix effects. It was calibrated against measured system performance and was then used to optimize schedules over many days' worth of data. The result was unequivocal: the operators had been making the wrong scheduling decision. Assigning large product runs to the new machines was much better, gaining almost 20% more throughput from the system. With this optimal-scheduling tool providing guidance, product-manufacturing lead time was cut drastically.
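
The underlying trade-off can be captured in a back-of-the-envelope model (all run sizes, speeds and changeover times below are invented for illustration; the real model also calibrated job-mix effects):

    def finish_time(runs, speed, changeover):
        """Minutes for one machine to process its runs: processing
        time plus one changeover before each run."""
        return sum(units / speed for units in runs) + changeover * len(runs)

    def makespan(new_runs, old_runs):
        # New machine: twice as fast, with quick automated changeovers.
        return max(finish_time(new_runs, speed=2.0, changeover=5),
                   finish_time(old_runs, speed=1.0, changeover=20))

    large = [600] * 4   # four long production runs (units)
    small = [50] * 12   # twelve short runs

    print(makespan(new_runs=small, old_runs=large))  # operators' policy: 2480.0
    print(makespan(new_runs=large, old_runs=small))  # alternative: 1220.0

Even this crude sketch shows why the operators' intuition could mislead: the fast machines earn their keep on the work that contains the most processing time, not the most changeovers.
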
Airlines have also profited greatly from OR in
scheduling complex aspects of their
operations. Airlines have been heavy users of
OR since the mid-1980s when innovative
models reduced crew and airplane costs by
clever scheduling within hub-and-spoke-style
networks. The resulting schedules saved
airlines millions of dollars per year while
increasing compliance with Federal Aviation
Administration (FAA) and union regulations.
A few years ago, Continental Airlines, together with CALEB Technologies, founded by Gang Yu of the University of Texas at Austin, began building disaster-recovery models. They posed questions such as: if Chicago's O'Hare Airport were shut down for a day due to a snowstorm, how could Continental get back on schedule quickly? Such models are generally difficult to solve because of their size and the complexity of the issues involved. For Continental, contingency plans had to accommodate 1,400 daily flights, 5,000 pilots, and 9,000 flight attendants, and had to meet a confusing mix of FAA regulations and union contracts. A model that contains these details is enormous and far beyond the capabilities of a spreadsheet optimizer. The resulting system lets Continental react to adverse weather quickly and economically, while minimizing the adverse effects on passengers. This system
easily saves millions of dollars per year while
increasing customer-service responsiveness in
difficult situations. The model proved its worth
in the days after 9/11. Never before had
airlines experienced such massive disruption
to their planned operations. Continental's
model, developed for snowstorms, worked
equally well in handling the federally
mandated airport closures. Continental was
the first airline to resume normal operations
after the government gave airlines permission
to resume flights.
Sales departments have also used OR
effectively. Television network NBC faced
myriad issues when trying to develop sales
plans for its advertisers. Creating a sales plan
requires a complicated balance of network and
client needs. Furthermore, NBC wanted to
price its ad spots to maximize revenue. Under
the existing method, where salespeople
created ad hoc sales plans, advertising space
during parts of the TV schedule went unsold
while other parts were oversold. By the time
prices could be readjusted, vast parts of the
schedule were already fixed, resulting in
massive amounts of lost revenue.
Again, an OR approach was clearly called for.
The model had two primary aspects. The first
involved developing an ad campaign that met
the budget and audience goals of each
advertiser within the set inventory of spots
available for sale. The second was to identify
opportunities for price changes in time to
affect those advertising campaigns. If a
particular spot proved too expensive for the
audience delivered, the price for that spot
could be cut dynamically. In contrast, the price
of a popular spot could be raised to extract
more revenue from the advertiser. The results
were advertising campaigns that better met
advertisers' goals while allowing the network
to sell more available spots at a higher price.
NBC increased its advertising revenue by more
than $200 million in the first four years of
using this system while increasing the
productivity of its sales force, which no longer
had to spend time painstakingly putting
together sales plans. Most importantly, NBC
saw an uptick in advertiser satisfaction.

Trends

The biggest trend in OR over the last few years has been ubiquity. Optimization
techniques are increasingly common in all
aspects of business. Twenty years ago, these
were used to route oil tankers worth millions
of dollars. Now they're used to route
limousines worth tens of thousands. This is
partly a matter of software availability.
Twenty years ago, the only software available
was highly specialized and designed for the
mathematical elite. Now, optimization
software is embedded in every spreadsheet
package sold and in supply-chain management
systems. There's now a better general
understanding of the underlying modeling
principles. Previously, almost all OR models
involved some sort of manufacturing or
distribution system. Now, models are used to plan medical testing, schedule sports leagues, plan agricultural harvesting, and support countless other applications. The benefits of OR are all around us, even if they are usually invisible.
Another important trend in OR is increased
emphasis on flexibility. The airline-crew
schedules generated in the mid-1980s saved a
lot of money, but were fragile: Any change or
disruption in the schedule had a cascading
effect on the rest of the system. As the
Continental Airlines example shows, airlines
and others are working to create models that
can adapt to changing requirements.
Optimization methods are also allowing people
to better handle uncertainty—a common
element in business. Traditional OR models
assume the data is known and fixed.
Techniques such as scenario optimization and
stochastic programming relax this assumption,
creating models that adequately include
option value in the face of uncertainty.
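
A minimal sketch of the scenario idea (the demands, probabilities and prices below are invented): instead of planning against one fixed demand forecast, pick the decision that maximizes expected profit across scenarios.

    # Scenario-based stocking decision (newsvendor style).
    PRICE, COST = 10.0, 8.0
    scenarios = [(60, 0.3), (100, 0.5), (160, 0.2)]  # (demand, probability)

    def expected_profit(order_qty: int) -> float:
        return sum(p * (PRICE * min(order_qty, d) - COST * order_qty)
                   for d, p in scenarios)

    best = max(range(60, 161), key=expected_profit)
    print(best, expected_profit(best))  # 60 120.0

Planning against the single expected-demand forecast of 100 units would earn only 80 in expectation; the scenario view reveals that, at these margins, stocking for the upside does not pay.
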
OR is merging with related fields in artificial
intelligence and computer science to allow
even more complex planning analysis.
Methods such as constraint programming, ant-
system optimization, and metaheuristic search
have come out of artificial intelligence in the
last decade and are being integrated with
traditional OR approaches. The combination of
the sophisticated mathematics of OR and the
flexibility and creativity of these new
approaches is creating fast, flexible systems
for solving difficult practical problems.
Business optimization has never been as
pressing a goal as it is today. The drive for
operational excellence is causing all types of
companies to embrace OR to improve business
processes, manufacturing systems, and
partner-collaboration efforts. Without OR, IT can provide data and information but no clear sense of how that information should be applied. OR lets business professionals use that information to drive razor-sharp decision making and optimal business outcomes.
Today, as a business-technology executive,
you won't likely embark on an operations-
research project on your own. Instead, your
role is to find situations where OR could be
applied to solve an intractable business
problem, evaluate the possible approaches
and software solutions, and deploy the
systems.

3. OR Tomorrow

Real Time Optimization

Tomorrow, we seek the application of advanced methods of mathematical optimization, and special-purpose heuristics, to combinations of real-time and historical data, in order to update business plans and make decisions that are communicated in real time for the purpose of optimizing business objectives. Often these plans and decisions involve the allocation of scarce resources. Due to dramatic growth in the parallel processing power of computers, advances in software architectures, the availability of robust software libraries, and increases in the availability of real-time digital data and fast, high-bandwidth communications capability, we now have an unprecedented opportunity to analyze and model complex real-time systems with increasing accuracy and precision. Tomorrow, companies will engage in ever more competitive e-business. The rapid proliferation of e-commerce is providing extensive real-time data (demand, supply, prices). In some industries, such as airlines, railroads and delivery, companies are developing the infrastructure and technology components required to support real-time optimization. These companies now have an opportunity to take the lead in exploiting dynamic management and to change the business rules in their industries.
To achieve the benefits of real-time optimization, the algorithms, embodied as proven and trusted software implementations, must be applied to up-to-date, accurate data, with intuitive interfaces providing visualization and the ability to interact with both the input data and the solutions. To reach this state, however, the analysis engines must be robust, and configuring business processes that use analytic engines together with other tools must be trivially easy.
Tomorrow, new business models will be enabled through analytics that allow real-time operational decision making, explicit consideration of uncertainty, and increased efficiencies attainable through aggregation and late binding of resources. To achieve this goal, algorithm design must also migrate toward robust, rather than strictly "optimal", solutions, and we must develop models and methods for determining robust plans and, more importantly, models and methods for updating plans in response to changes and disruptions in business information. In addition, significant research is required to more fully understand the advantages, and limitations, of new computing architectures, such as loosely coupled distributed clusters (grids) and massively parallel servers, in the area of mathematical optimization.
Tomorrow, the emphasis in mathematical modeling and optimization will shift from exploring clever and easily solvable, but brittle, models and algorithms toward the development of new models for robust optimization, including definitions of "robustness" and new methods for dealing with multiple objective functions and ranges on constraint values. As a simple example, consider the problem of computing a shortest path in a graph. Computing a shortest path is quite simple; the problem becomes much harder, however, if we wish to compute a "robust" path such that the (possibly detoured) path will remain short even if the lengths of up to k edges change while we traverse the path. Solutions that are easy to "repair" or update as data changes, along with fast algorithms for performing the updates, will be required to support pervasive use of analytics for real-time optimization. The design of "incremental algorithms" that make use of an initial solution in the original setting to reach a solution in the updated setting is essential to guarantee reaction to changes in a timely manner.
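
For the nominal version of the example, a standard Dijkstra computation is a few lines; a robust variant would, on top of this, have to guard the chosen path against changes to up to k edge lengths. A minimal sketch (the graph data is invented):

    import heapq

    def dijkstra(graph, source):
        """Shortest-path distances from source in a graph given as
        {node: [(neighbor, edge_length), ...]}, lengths nonnegative."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in graph.get(u, ()):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    graph = {"s": [("a", 2), ("b", 5)],
             "a": [("b", 1), ("t", 6)],
             "b": [("t", 2)]}
    print(dijkstra(graph, "s"))  # {'s': 0.0, 'a': 2.0, 'b': 3.0, 't': 5.0}

An incremental algorithm in the sense described above would reuse these labels when a few edge lengths change, rather than rerunning the search from scratch.
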
Challenges

The challenges of realizing real-time optimization fall into several categories. The first is the definition, dissemination, maintenance and synchronization of mathematical models representing the underlying problems to be solved. The second is managing the data flows through the models as they collect and incorporate real-time data into the modeling framework and feed the computed results back as new real-time inputs. The third is developing algorithms for maintaining a model state that represents a high-quality solution while remaining cognizant of the constraints imposed by data-transfer limitations and the dynamic nature of the models themselves.
The definition, dissemination, maintenance and synchronization of mathematical models will begin with the definition of standard representations of the static and structural parts of models and their interfaces to the collection of models into which they deliver information, as well as the message and event frameworks into which they must fit. Due to the asynchronous nature of real-time optimization, models will act as agents collecting information, processing it and pushing it on to their subscribers. Existing standards and commercial software tools should provide a starting point for the necessary standardization, but a significant amount of additional work is required, including extensions to deal with ranges of data, compact descriptions of alternative solutions, and stochastic data. Additional standards are also required to specify how different tools (both current and yet to be developed) interact with one another.
Successfully managing the data flows collected and generated by the analytic engines and the applications that use them will be critical to the viability of any large real-time optimization system, as growth in the number of participants may produce tremendous growth in the volume of information. Huge packages of time-sensitive data collected and generated must be handled through distributed, open database management systems. This could involve exploiting the property of many algorithms that checking a solution can (possibly) be done much faster, and with much less data, than computing a new solution. Additional approaches that exploit the mathematical properties of models and algorithms should also be developed.
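
The shortest-path setting sketched earlier illustrates this checking property: with positive edge lengths, verifying that a table of distance labels is optimal takes one pass over the edges, whereas recomputing the labels requires a full search. A minimal sketch:

    def certifies_distances(graph, source, dist):
        """Check distance labels in one pass over the edges: no edge
        may still relax a label, and (with positive edge lengths)
        every non-source node needs an incoming tight edge."""
        if dist.get(source) != 0.0:
            return False
        justified = {source}
        for u, edges in graph.items():
            for v, w in edges:
                if dist[u] + w < dist[v]:
                    return False          # an edge could still improve v
                if dist[u] + w == dist[v]:
                    justified.add(v)      # edge (u, v) explains v's label
        return justified == set(dist)

    graph = {"s": [("a", 2), ("b", 5)],
             "a": [("b", 1), ("t", 6)],
             "b": [("t", 2)]}
    labels = {"s": 0.0, "a": 2.0, "b": 3.0, "t": 5.0}
    print(certifies_distances(graph, "s", labels))  # True, in one edge pass
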
The most important issue in realizing real-time optimization is the necessity of developing hierarchical, incremental, and robust algorithms. Real-time optimization in many cases requires the modeling of complex decision systems that are hierarchical in nature. This will make us revisit classical decomposition algorithms, as well as develop new algorithms for handling decomposition with asynchronous messaging and dynamic collections of subproblems. Algorithms that can capitalize on an existing solution (or near-solution) and improve it using mature polynomial algorithms or fast heuristics will dominate most parts of the real-time optimization model. The requirement that solution-finding schemes be implementable, and that solution processing be swift and stable, will require the development of approaches that produce good robust, but not necessarily optimal, solutions. In fact, given the dynamic nature of real-time optimization, the lifetime of a solution may be very short, especially when it is fragile.
Perhaps no event has had as deep an impact on the revision of today's operations research as the September 11th terrorist attacks. Never before did OR face a challenge as big as bringing the affected airlines back on schedule as quickly as possible at the least possible cost. Tomorrow's OR will more and more encounter deviations of observed situations from assumed situations. Traditional methods with statistical quantification of uncertainty must give way to a new methodology, known as disruption management, that dynamically adjusts the hierarchically structured models and submodels to adapt to real-time data updates as well as changes in constraints and objectives. Again, flexible, robust and heuristics-intensive algorithms will be called for to produce favored online recovery or replanning solutions and to counter the snowball effect that appears extensively in all kinds of highly interrelated complex systems.
4. Conclusion

In retrospect, people's aspiration for perfection found expression in the theory and practice of OR. It studies how to describe and attain what is better or best, once one knows how to measure and alter what is good or bad. We are proud of what OR has done; we are confident in what OR will do. Yesterday, OR helped us win the war, send men to the moon, and build the prosperous era of the new economy. Today, in a highly changeable world, seeing off the departing chariot of the industrial economy, living with the prosperity of the golden time of the information era, and saluting the deep impact of the advent of the biotechnology economy, OR helps us find solutions under uncertainty, plan with sparse resources, and minimize the losses from disaster. In the coming years, high-tech integrated OR will increasingly look low-tech as we shift from deterministic to robust, from offline to real time, from parts to integration, and from local to global. Looking into the future, we expect, through painstaking endeavor, that OR may arm us with the tools to help slow environmental degradation, buy time to make a global transition to new energy, and, more ambitiously, send men to Mars. Yesterday, OR came to us as a duteous solver; today, OR is ubiquitous; tomorrow, OR will be with us, visible or invisible.
References

[1] A. G. De Kok and S. C. Graves, Handbooks in Operations Research and Management Science, Elsevier Science Ltd, 752 pages, 2003.
[2] R. L. Ackoff, "Optimization + objectivity = opt out," European Journal of Operational Research, 1: 1-7, 1977.
[3] R. L. Ackoff, "President's Symposium: OR, a Post Mortem," Operations Research, Vol. 35, pp. 471-474, 1987.
[4] Baruch Schieber, Continual Optimization and the Future of Operations Research, IBM research report, 2004.
[5] P. M. S. Blackett, Studies of War, Nuclear and Conventional, Hill and Wang, New York, 1962.
[6] L. A. Brothers, "Operations Analysis in the United States Air Force," Journal of the Operations Research Society of America, 2(1), 1954.
[7] Jens Clausen, Jesper Hansen, Jesper Larsen and Allan Larsen, "Disruption Management," OR/MS Today, October 2001, pp. 40-43.
[8] D. J. DeTombe (Ed.), IFORS '96, the International Federation of Operational Research Societies, July 1996, Vancouver, Canada. Special Session: Methods and Tools for Analyzing Complex Societal Policy Problems, Volume 1, Delft: Delft University of Technology.
[9] Gang Yu and Panos Kouvelis, Robust Discrete Optimization and Its Applications, Kluwer Academic Publishers, 356 pages, 1996.
[10] C. J. Hitch and R. N. McKean, The Economics of Defense in the Nuclear Age, Harvard University Press, Cambridge, 1960.
[11] Peter Horner, "The Sabre Story," OR/MS Today, Vol. 27, No. 3, pp. 46-47, 2000.
[12] J. P. Brans, Operational Research '81, North-Holland, 1981.
[13] J. P. Brans, Operational Research '84, North-Holland, 1984.
[14] K. Brian Haley, "War and Peace, the First Twenty-Five Years of OR in Great Britain," Operations Research, Vol. 50, No. 1, January-February 2002.
[15] K. B. Haley, Operational Research '78, North-Holland, 1979.
[16] G. A. Kent, "On Analysis," Air University Review, 18(4), 1967.
[17] P. M. Morse and G. E. Kimball, Methods of Operations Research, Peninsula Publishing, Los Altos, CA, 1970.
[18] Nguyen Van Thoai, Panos M. Pardalos and Reiner Horst, Introduction to Global Optimization, paperback, 2001.
[19] C. R. Reeves (Ed.), Modern Heuristic Techniques for Combinatorial Problems, Oxford: Blackwell Scientific Publications, 1993.
[20] Richard Bronson and Govindasami Naadimuthu, Schaum's Outline of Operations Research, McGraw-Hill, 456 pages, 1997.
[21] Ronald L. Rardin, Optimization in Operations Research, Prentice Hall, 919 pages, 1997.
[22] Tivoli Decision Support V2.1: New Features and Performance Optimization, IBM Redbooks, paperback, 2000.
[23] W. X. Xing and J. X. Xie, Modern Methods in Optimization, Beijing: Tsinghua, 2000.
[24] W. X. Xing and J. X. Xie, Network Optimization, Beijing: Tsinghua, 2000.
[25] Y. Xia, M. Yang, B. Golany, S. Gilbert and G. Yu, "Real-Time Disruption Management in a Two-Stage Production and Inventory System," IIE Transactions, Vol. 36, No. 1, pp. 1-15, 2004.
[26] Y. X. Yuan and W. Y. Sun, Theory and Method of Optimization, Chinese Academy of Sciences, 2001.
[27] G. Yu, Operations Research in the Airline Industry, Kluwer Academic Publishers, Boston, MA, 460 pages, 1997.
[28] H. J. Zimmermann, "An Application-Oriented View of Modeling Uncertainty," European Journal of Operational Research, pp. 190-198, 2000.
