Levent V. Orman
To cite this article: Levent V. Orman (1998) A Model Management Approach to Business
Process Reengineering, Journal of Management Information Systems, 15:1, 187-212, DOI:
10.1080/07421222.1998.11518202
Download by: [Monash University Library] Date: 18 March 2016, At: 23:00
A Model Management Approach to
Business Process Reengineering
LEVENT V. ORMAN
Information Systems, and Acta Informatica. His current research interests are data
quality and integrity, business process reengineering, and electronic commerce.
KEY WORDS AND PHRASES: business process reengineering, decision models, hierarchies, model complexity, model management, organization design.
Journal of Management Information Systems / Summer 1998, Vol. 15, No. 1, pp. 187-212.
© 1998 M. E. Sharpe, Inc.
0742-1222 / 1998 $9.50 + 0.00.
are needed to take full advantage of the technology. The controversy includes both
macro- and micro-level changes. At the macro level, the most salient issue is the
change in the degree of centralization of decision making, with related questions about
the depth and shape of organizational hierarchies. At the micro level, the most salient
issue is job definition and content, with related questions about communication
patterns, employees' job satisfaction, and skill requirements.
There is a remarkable degree of disagreement on the impact of IT on organizations
in all of these areas. IT may be expected to increase centralization because it increases the information-processing capacity of managers, allowing them to centralize more decisions [35, 42, 43]. IT may also be expected to decrease centralization because it reduces the cost of communication and coordination and allows decisions to be delegated [6, 19, 31, 42]. IT may be expected to decrease the depth of organizational
organizational processes. The objectives are to provide insight into the structure of
business processes and to provide guidance and analytical tools for the reengineering
efforts. In the process, a number of organizational issues are explained and quantified,
including the significance of hierarchies, how they concentrate power on top, and how
they may create employee alienation at the bottom, the need for business process
reengineering after the introduction of IT, the exact conditions under which IT may
and should lead to more or less centralized structures, and the exact location and nature
of those structural changes. An information processing-decision making paradigm of
organizations is adopted [15, 17, 38, 40]. Organizational processes are viewed as
collections of decision models within the general framework of organizational infor-
mation processing. Each decision model is identified by a type of decision which is
its output, and contains a sequence of information-processing tasks [26]. The information-processing tasks are the smallest identifiable units of analysis, and their
optimum arrangement is the critical design variable determining the efficiency of the
resulting structures. The structures will be evaluated in terms of the cost of information processing and the cost of communication among tasks [21]. Both criteria are heavily influenced by the arrangement of tasks, since those arrangements determine what tasks
need to communicate with each other, the direction and the content of communication,
and the possible sharing of tasks among models.
decision theory, where costs are assumed to be given by a marketplace, and the
cost-based analysis of model management we employ. To some extent they are
substitutes since there is a considerable overlap in analysis, but they are also comple-
mentary since neither costs nor utility can be ignored in a complete analysis. Costs are
especially critical in a structural analysis since there is a more intuitive and direct
relationship between costs and structure, and the costs cannot be assumed to be given by the marketplace, since there is often no market internal to the organization to determine the price of information. As a matter of fact, the cost of producing and disseminating
information appears to be the critical factor behind most organizational structure,
where utility plays a more restricted role of providing a threshold to determine what
information is worth producing and maintaining [15].
Each decision task t1 is characterized by two parameter types: the cost c1, and the selectivities s1 for each input variable v of task t1. The cost of a task is intuitively defined
as the resources used to execute the task, such as the value of the decision maker's
time for a typical decision task, or the value of the computer time for automated
decision tasks. Formally, the cost of a decision task is defined by the complexity
function of the implementing algorithm, which is a function of the size of its input
variables. Complexity functions are used widely in algorithm analysis [32], and they
are positive nondecreasing functions of the sizes of input variables, measuring the
amount of (computing) resources used to execute the algorithm. The size of the input variable v of a task t1 is called the search space of the task t1 with respect to the variable v. Tasks facing large search spaces are more costly than others given the same
complexity function due to the nondecreasing characteristic of complexity functions.
Similarly, the tasks with higher-order complexity functions are more costly than
others, given the same search space. The cost of a task is also referred to as its "complexity." The second type of parameter is the selectivity of a task t1 for each input variable v, written s1 when the variable involved is obvious. Selectivity is defined as the remaining search space after the execution of the task as a percentage of the initial search space, or, alternatively, 1 minus the fractional reduction in search space effected by the execution of the task.
Example 1
A commercial bank has a simple decision model with two tasks for processing
mortgage loan applications. Task t1 enforces the constraint that requires the applicant to be a homeowner. Task t2 involves the constraint requiring the applicant to have an annual income over 30K. Those who satisfy both constraints are approved for a loan.

Mortgage loan decision model
t1: homeowner constraint
t2: salary > 30K constraint
The complexity function for both t1 and t2 is given as cx, where c is a constant and the input variable x is the daily number of applicants. Assuming 100 applicants per day, 60 percent of whom are homeowners and 20 percent of whom have a salary > 30K, the resulting parameters are c1 = c2 = 100c, s1 = 0.60, and s2 = 0.20.
The definition of complexity function implies that the cost of a task is dependent on
the search spaces of its input variables, and the reduction in any search space achieved
by executing a task influences the cost of subsequent tasks. In decision theoretic terms,
reduction in uncertainty achieved by executing a task reduces the uncertainty faced
by subsequent tasks. Such interaction among the tasks implies that some sequences of
tasks are better than others, and (at least) one sequence is the optimum for each model.
The structure of a single model is determined solely by the sequence of its tasks, and
the optimum structure is the sequence that minimizes the total cost of the model. The
extension to multiple interacting models is introduced in the next section, leading to
more complex hierarchical structures.
Example 2
Case 1

Given the mortgage loan decision model of example 1, with two tasks t1, t2, costs c1 = c2 = 100c, and selectivities s1 = 0.60 and s2 = 0.20, the cost of the sequence (t1, t2) is c1 + s1c2 = 100c + 60c = 160c, but the cost of (t2, t1) is c2 + s2c1 = 100c + 20c = 120c. Different sequences lead to different costs, since the execution of a task reduces the search space for the subsequent tasks (e.g., if an applicant does not meet the homeowner test, then there is no need to check his or her salary). Clearly (t2, t1) is the optimum sequence, with cost 120c as opposed to 160c. Intuitively, (t2, t1) has lower cost since t2 is a more stringent test, and the costs are the same.
Case 2
Now assume that t2 is more costly to test than t1, with c2 = 400c and c1 = 100c, possibly because of the need to verify salaries, whereas home ownership is a public record. Now the cost of (t1, t2) is c1 + s1c2 = 100c + 240c = 340c, and the cost of (t2, t1) is c2 + s2c1 = 400c + 20c = 420c. Clearly now, (t1, t2) is optimal, demonstrating the critical importance of task complexity in structure. In general, the cost of a sequence ti1, ..., tin is

Σj=1..n cij(Vj-1),

where Vj is the set of search spaces obtained from Vj-1 by substituting sij·v for each variable v, and V0 is the initial search space for each variable v.
Case 3
Similarly, now further assume that s1 = 0.90. Now the cost of (t1, t2) is c1 + s1c2 = 100c + 360c = 460c, clearly higher than the cost of (t2, t1), whose cost remains 420c. (t2, t1) is once again the optimal solution, demonstrating the critical importance of selectivity in determining structure.
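The sequence-cost computations in cases 1 through 3 can be sketched in a few lines of code. This is our illustration, not the paper's; linear complexity cx and independent selectivities are assumed, and costs are expressed in units of the constant c:

```python
def sequence_cost(order, costs, selectivities):
    """Cost of executing tasks in `order`, in units of the constant c.

    costs[t]: the task's cost over the full search space (100 means 100c).
    selectivities[t]: fraction of the search space remaining after t.
    Linear complexity and independent selectivities are assumed.
    """
    remaining = 1.0   # fraction of the initial search space still to process
    total = 0.0
    for t in order:
        total += remaining * costs[t]   # task t faces the reduced space
        remaining *= selectivities[t]   # and shrinks it for later tasks
    return total

sel = {"t1": 0.60, "t2": 0.20}

# Case 1: equal costs -> the more stringent test t2 should go first.
costs = {"t1": 100, "t2": 100}
print(round(sequence_cost(["t1", "t2"], costs, sel)))   # 160, i.e., 160c
print(round(sequence_cost(["t2", "t1"], costs, sel)))   # 120, i.e., 120c

# Case 2: c2 = 400c -> the cheap test t1 should go first instead.
costs = {"t1": 100, "t2": 400}
print(round(sequence_cost(["t1", "t2"], costs, sel)))   # 340
print(round(sequence_cost(["t2", "t1"], costs, sel)))   # 420
```

The same function evaluates case 3 by setting s1 = 0.90, reproducing the 460c versus 420c comparison.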
So far, the selectivities of tasks were assumed independent of the structure, when in
fact selectivity of a task may depend on the uncertainty it faces and, hence, what tasks
have already been executed. Borrowing Bayesian terminology, sij will refer to the joint selectivity of tasks i, j (i.e., the selectivity achieved by executing tasks i and j); si/j will refer to the selectivity of task i given that task j has already been executed. If the selectivities are independent, sij = si sj follows from the laws of probabilistic independence. Similarly, sij = si/j sj follows from the definition of conditional selectivity. It is important to note that independence of selectivities does not imply that the tasks are independent. Task independence in common parlance implies no common variables, and that means selectivity = 1 for all tasks that do not involve the variable in question.
Example 3
An extended mortgage loan decision model is given with three tasks:
Case 1

Assuming independence of the selectivities, with c1 = c2 = c3 = 100c, s1 = 0.60, s2 = 0.20, and s3 = 0.70, the optimum sequence is (t2, t1, t3) with total cost of c2 + s2c1 + s12c3 = 100c + 20c + 12c = 132c.

Case 2

Now assume interdependence of s1 and s2, since those with salary > 30K are more likely to be homeowners. Assuming s12 = 0.15 ≠ 0.20 × 0.60, the optimum sequence is (t2, t3, t1) with total cost of c2 + s2c3 + s23c1 = 100c + 20c + 14c = 134c, since s23 = 0.20 × 0.70 = 0.14. The second-best sequence is (t2, t1, t3) with total cost of c2 + s2c1 + s12c3 = 100c + 20c + 15c = 135c, since s12 = 0.15. The cost of sequence (t2, t1, t3) has increased from 132c to 135c due to the interdependence between s1 and s2. Clearly, the interaction among the tasks is a critical variable in determining structure.
Example 4
A retailer has a simple decision model with three tasks to make an advertising decision in a given region:
The decision rule is to advertise if all three constraints are satisfied. The complexity
of each task is cx, where x is the number of regions.
Case 1
Given 100 regions, c1 = c2 = c3 = 100c. The probability of t1, t2, or t3 being satisfied in a region is s1, s2, or s3, respectively: s1 = 0.60, s2 = 0.20, s3 = 0.70. Assuming independence of tasks, the optimum sequence is (t2, t1, t3) with total cost of c2 + s2c1 + s12c3 = 100c + 20c + 12c = 132c.
Case 2
By changing the cost to c2 = 400c, the optimum sequence becomes (t1, t2, t3) with total cost of c1 + s1c2 + s12c3 = 100c + 240c + 12c = 352c.
Case 3
Assuming interdependence of s1 and s2 with s12 = 0.15 ≠ 0.20 × 0.60, the optimum sequence is (t2, t3, t1) with total cost of c2 + s2c3 + s23c1 = 100c + 20c + 14c = 134c, as in the previous example.
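For small models such as these, the optimum ordering can be verified by brute-force search over all permutations. The sketch below is ours, not the paper's; joint selectivities are supplied explicitly for interdependent task sets, and the product rule (independence) is used otherwise:

```python
from itertools import permutations

def optimum_sequence(costs, sel, joint=None):
    """Return (order, cost) minimizing total cost over all task orderings.

    costs[t]: task cost on the full search space, in units of c.
    sel[t]: individual selectivity of task t.
    joint: optional {frozenset(tasks): joint selectivity} overriding the
           independence assumption for interdependent task sets.
    """
    joint = joint or {}

    def remaining(executed):
        # Fraction of the search space left after `executed` have run.
        key = frozenset(executed)
        if key in joint:
            return joint[key]
        prod = 1.0
        for t in executed:
            prod *= sel[t]
        return prod

    best_order, best_cost = None, float("inf")
    for order in permutations(costs):
        done, total = [], 0.0
        for t in order:
            total += remaining(done) * costs[t]
            done.append(t)
        if total < best_cost:
            best_order, best_cost = order, total
    return best_order, best_cost

# Example 4, case 3: s12 = 0.15 is given jointly; other selectivities multiply.
costs = {"t1": 100, "t2": 100, "t3": 100}
sel = {"t1": 0.60, "t2": 0.20, "t3": 0.70}
joint = {frozenset({"t1", "t2"}): 0.15}
order, cost = optimum_sequence(costs, sel, joint)
print(order, round(cost))  # ('t2', 't3', 't1') 134
```

Exhaustive search is exponential in the number of tasks, of course; it is shown here only to make the examples checkable.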
Some variables are created within tasks rather than being input from the environ-
ment. The task that creates a variable is called a "source," and the task that uses such
a variable is called a "target." Such derived variables are assigned an arbitrarily large initial size M to prevent target tasks from executing before the source task in the optimal solution. Only the source task, with its especially small selectivity, can bring
M down to its realistic size. This special case corresponds to the dependency graphs
of systems analysis [34].
The decision model approach to process design can be generalized to continuous
variables and infinitely large search spaces. So far, search space has been used to
measure the uncertainty faced by a task, and the reduction in search space has been
used as a measure of uncertainty reduction achieved by executing a task. These
measures are intuitive and useful when finite discrete search spaces are involved, but
they fail for continuous or infinitely large search spaces, which are commonly found
in decision models. A more general concept than the size of the search space is
provided by information theory to measure uncertainty, which is the concept of
entropy [16]. Entropy is commonly used to measure the uncertainty related to a random
variable, and can be used to measure the uncertainty faced by a task. Similarly, change
in entropy can be used to measure the change in uncertainty due to the execution of a
task. More importantly, for finite discrete search spaces with no additional information
about the search space, entropy specializes to the size of the search space, justifying our
initial intuitive choice. Given a random variable with a probability distribution p(x), its entropy is defined as −∫ p(x) log(p(x)) dx. Similarly, for a discrete random variable with probabilities pi associated with each distinct outcome i, its entropy is defined as −Σ pi log(pi).
For the loan decision of example 1, a binary predicate (i.e., YES/NO decision) defined on a search space of 100 customers leads to 2^100 possible outcomes. With no additional information, each outcome is equally likely, that is, pi = 1/2^100.
Example 5
The next step is the computation of uncertainty reduction. When computing with search spaces, we have used reduction in search space as a percentage of the initial search space to measure the reduction in uncertainty. This measure fails with entropy, since the initial entropy may be zero or negative. The appropriate measure of uncertainty reduction is the change in entropy due to a task as a percentage of the total change in entropy due to all tasks, since entropy changes can be meaningfully compared, but comparisons between changes and absolute entropies are not very meaningful [16]. Most significantly, this measure of entropy change again specializes to search-space reduction as a percentage of initial search space, with the assumption that initial search spaces are much larger than the final search spaces remaining after all the tasks have been executed. Selectivity is still defined as 1 minus the fractional reduction in uncertainty.
Example 6
The product pricing model of example 5 leads to the following selectivities by each
task:
Total change in entropy = 1 − (−1) = 2.
Selectivity s12 of t1, t2 = 0,
where the selectivities are consistent with the intuition about uncertainty reduction.
In summary, entropy reduction as a measure of uncertainty reduction specializes to
search-space reduction when:
a. The search space is finite and discrete.
b. No prior information about the search space is available.
c. The initial search space is much larger than the final search space remaining
after the execution of all tasks (or, alternatively, the final search space is null). We will continue to use search space as a measure of uncertainty, because of its intuitive and practical value, whenever these assumptions can be reasonably expected to hold.
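The specialization in condition (b) can be illustrated in code (our sketch, not from the paper): for a uniform distribution over a finite search space, entropy collapses to the logarithm of the search-space size, so search-space size is a faithful uncertainty measure when no prior information is available.

```python
import math

def entropy(probs):
    """Shannon entropy -sum p_i * log2(p_i) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# With no prior information every outcome is equally likely, so entropy
# equals log2 of the search-space size:
print(entropy([1 / 8] * 8))     # 3.0, i.e., log2(8)

# A binary YES/NO predicate over 100 customers has 2**100 equally likely
# outcomes; the corresponding entropy is log2(2**100) = 100 bits.
print(math.log2(2 ** 100))      # 100.0
```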
Multimodel Structures
AN ORGANIZATION CAN BE VIEWED AS A COLLECTION OF INTERACTING decision
models. Structuring each model independently, without considering its interaction
with others, is likely to be suboptimal. Complex organizations containing such
multiple interacting models require a complex architecture involving multilevel, multicomponent structures to minimize information-processing costs [17]. The principal tool of structuring is decomposition. Decomposition is often touted as the basis
of all structure [1]. It has been studied extensively, but there is no consensus about
why and when systems should be decomposed into smaller components. The oldest
theory is information overload. It suggests that organizational activities are decom-
posed into smaller tasks and assigned to different employees to prevent the complexity
of a job from exceeding employees' complexity tolerance [39]. This theory is simple
and quite effective in explaining the cognitive limitations of humans, but it does not
capture the essence of structure. It does not explain the structure often found within
an individual job, nor does it explain extensive structure imposed on relatively idle
components of an organization (e.g., when large processors are assigned to relatively
small tasks in computerized systems).
The second theory proposes a reduction in the complexity of a system by dividing
complex units into small relatively independent units [34, 41]. The reduction is
claimed to follow from the multiplicative effect on complexity of the variables within
a unit, as opposed to the additive effect on complexity when the variables are split into
multiple independent units. This theory elegantly explains the effect of interdependent
variables on complexity in contrast with the effect of independent variables. It further
suggests that complexity could be reduced by splitting interdependent variables into
separate independent units, but only by ignoring their interdependence and producing
a suboptimal solution. That tradeoff between reduction of complexity and suboptimization has not been clearly quantified and is addressed in this paper.
The third theory suggests that processors (human or machine) are more efficient in
repetitive execution of the same task than in switching from task to task, leading to
savings when the tasks are decomposed and processors are specialized [2, 25, 40].
This theory is related to information theory and information economics since it deals
with the information content of each unit. However, the theory acknowledges that,
while saving switching costs, decomposition introduces communication and coordi-
nation costs in all cases except when the tasks are completely independent. Moreover,
specialized processors introduce additional risk associated with processor failure,
since any failure by a specialized processor has an impact on all tasks that share it
[19]. This theory also has difficulty quantifying the tradeoffs, and demonstrating why
and when the switching costs would be so significantly higher than the communication
and failure costs to provide the basis of fundamental structures.
This article adopts a variant of the last two theories and proposes "sharing" as a
fundamental basis for structure. Interacting models of a complex organization have
many common tasks, and avoiding duplication of a common task, for every model
that needs it, can be a significant source of efficiency. However, tasks exist in a
context; they are rarely identical in multiple contexts, which makes sharing difficult.
More specifically, tasks are optimized to fit into the context of a specific model, and shared tasks cannot be optimized as effectively, since they need to serve multiple objectives. This tradeoff between the savings resulting from sharing and the costs resulting from suboptimization of shared tasks is critical in determining optimum structure.
The relationship to structure is through decomposition. Decomposition creates
smaller components that are more readily shareable by more models. The savings
produced by more sharing, and the costs resulting from suboptimization of the shared
components, determine the optimum level of decomposition and the variety of
structures it causes.
Example 7
The commercial bank in example 1 has two decision models associated with each mortgage loan application: one for a short-term loan and one for a long-term loan. Each model is executed for each loan applicant, and each consists of two tasks as shown below:

Short-term loan decision                Long-term loan decision
t1: homeowner constraint                t1: homeowner constraint
t2: salary > 30K constraint             t3: life expectancy > 20 years constraint

The complexity function for t1, t2, and t3 is given as cx, where c is a constant and x is the number of applicants. Assuming 100 applicants per day, 60 percent of whom are homeowners, 20 percent of whom have a salary > 30K, and 30 percent of whom have a life expectancy > 20 years, the resulting parameters are c1 = c2 = c3 = 100c, s1 = 0.60, s2 = 0.20, s3 = 0.30.
Case 1

If the two models were executed independently, the optimum sequences would be (t2, t1) and (t3, t1), respectively, leading to the costs of c2 + s2c1 = 100c + 20c = 120c and c3 + s3c1 = 100c + 30c = 130c, respectively. Total cost = 120c + 130c = 250c.
Case 2
Alternatively, consider the decomposition of these two models into three models, each containing only one task, with t1 shared by the two decisions. To achieve the benefits of sharing, t1 has to be executed first; otherwise t2 and t3 will produce different sets of customers to be processed, and the benefits of sharing will be lost. With decomposition, t1 will be executed only once at a cost of c1 = 100c, and t2 and t3 will each be executed next at a cost of s1c2 = s1c3 = 60c. Total cost = 100c + 60c + 60c = 220c. Clearly, decomposition pays since 220c < 250c.
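Cases 1 and 2 can be checked mechanically. The sketch below is ours, with costs in units of c and linear complexity assumed:

```python
# Parameters of example 7: equal task costs, given selectivities.
c1 = c2 = c3 = 100
s1, s2, s3 = 0.60, 0.20, 0.30

# Case 1: independent execution, each model ordered optimally:
# (t2, t1) for the short-term loan and (t3, t1) for the long-term loan.
independent = (c2 + s2 * c1) + (c3 + s3 * c1)

# Case 2: decompose so the common task t1 runs once, first; t2 and t3
# then face the reduced search space s1 * x.
shared = c1 + s1 * c2 + s1 * c3

print(round(independent), round(shared))  # 250 220 -> decomposition pays
```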
Case 3

Now change the selectivity of t1 to s1 = 0.80. The cost of the undecomposed models remains 250c, but the cost of the decomposed models becomes c1 + s1c2 + s1c3 = 100c + 80c + 80c = 260c. Decomposition no longer pays, since 260c > 250c.

Case 4

Now also change the cost of t1 by reducing it to 50c. The costs of the undecomposed models now are c2 + s2c1 = 100c + 10c = 110c and c3 + s3c1 = 100c + 15c = 115c, respectively, with a total cost of 225c. The cost of the decomposed models is c1 + s1c2 + s1c3 = 50c + 80c + 80c = 210c. Decomposition pays once again, since 210c < 225c.
Example 8
The retailer in example 4 has two decision models, to make an advertising decision and a production-increase decision in each region:

The decision rule is to advertise if t1, t2, t3 are satisfied, and to increase production if t3, t4 are satisfied. c1 = c2 = c3 = c4 = 100c; s1 = 0.60, s2 = 0.20, s3 = 0.72, s4 = 0.30. The critical question is whether to decompose t3 as a separate model to be shared by the two decisions.
Case 1

Assuming independence, the undecomposed models have the optimum sequences (t2, t1, t3) and (t4, t3), respectively, and their costs are c2 + s2c1 + s12c3 = 100c + 20c + 12c = 132c and c4 + s4c3 = 100c + 30c = 130c, respectively, with a total cost of 262c. The decomposed models would lead to the optimum sequences of (t3, t2, t1) and (t3, t4), with a total cost of c3 + s3c2 + s3c4 + s23c1 = 100c + 72c + 72c + 14c = 258c. Clearly, decomposition is optimal.
Case 2
Now assume interdependence of t2 and t3, where s23 = 0.20 ≠ 0.72 × 0.20. The cost of the undecomposed models remains the same at 262c, but the cost of the decomposed models is now c3 + s3c2 + s3c4 + s23c1 = 100c + 72c + 72c + 20c = 264c. Clearly, decomposition is no longer optimal, demonstrating the critical importance of task interdependence in determining structure.
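The flip caused by interdependence can be expressed as a small decision rule. This sketch is ours; task numbering follows example 8, and s12 is assumed independent throughout:

```python
def decomposition_pays(c, s, s23):
    """Compare sharing t3 across the two models against independent runs.

    c, s: per-task costs (in units of c) and selectivities, keyed 1..4.
    s23: joint selectivity of t2 and t3 (0.72 * 0.20 under independence).
    """
    # Undecomposed optima: (t2, t1, t3) for advertising, (t4, t3) for production.
    undecomposed = (c[2] + s[2] * c[1] + s[1] * s[2] * c[3]) + (c[4] + s[4] * c[3])
    # Decomposed: t3 shared and executed first -> (t3, t2, t1) and (t3, t4).
    decomposed = c[3] + s[3] * c[2] + s[3] * c[4] + s23 * c[1]
    return decomposed < undecomposed, round(undecomposed), round(decomposed)

c = {1: 100, 2: 100, 3: 100, 4: 100}
s = {1: 0.60, 2: 0.20, 3: 0.72, 4: 0.30}
print(decomposition_pays(c, s, s23=0.72 * 0.20))  # (True, 262, 258): sharing pays
print(decomposition_pays(c, s, s23=0.20))         # (False, 262, 264): it does not
```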
Example 9
The continuous version of example 8 has the same decision models, where the advertising decision requires the tasks of forecasting demand, determining the advertising cost function, estimating the production cost function, and maximizing yield; and the production decision model requires forecasting demand, estimating the production cost function, applying the short-term capital equipment constraint, and minimizing cost. Past sales and production data are input into both models.

Advertising decision model                 Production decision model
t1: forecast demand                        t1: forecast demand
t2: determine advertising cost function    t3: estimate production cost function
t3: estimate production cost function      t5: apply capital equipment constraint
t4: maximize yield                         t6: minimize cost
Each task imposes a constraint on the decision and reduces uncertainty. Assuming t1 and t3 convert large input spaces to a small number of alternatives, and hence have very small selectivities, and the task complexities are approximately the same, the optimum solution would involve a decomposition and sharing of t1 and t3. Many
organizational processes have such clear decompositions, leading to intuitive and
simple designs without formal analysis.
by E1, ..., En, and the output of M1 can be viewed as a directive, since it restricts the activities of E1, ..., En. M1, in turn, can share a model T with other models M2, ..., Mk. Call T the "top manager" for the same reasons, and possibly continue to create higher-level managers in the same fashion. The resulting multilevel hierarchy is a very
common structure in both automated systems and human organizations. The structure
is constrained to be hierarchical since every model has only one manager whose output
is the starting point for that model's information processing. Each model can have
only one manager, since each model can share only a starting sequence of its tasks.
Sharing from the middle of the sequence will not lead to savings, since the shared task
will face different search spaces in each model due to the execution of different tasks
preceding it, as demonstrated in example 7. (This assumption will be relaxed in the multimodel formulation presented later, leading to nonhierarchical structures.) Obviously, the manager itself can share a starting sublist of its list of tasks, leading to a
multilevel hierarchy so common in structural design. Hierarchies follow directly from
the assumptions that models execute their tasks in a linear sequence and share from
the top of the sequence to realize the benefits of sharing.
instructed by the shared model, he or she could make the decision much more
efficiently by changing the sequence of tasks, and reducing the cost from 160c to
120c. Consequently, the employee is likely to feel restrained by his or her manager and unable to contribute his or her maximum to the organization, leading to alienation. Of course, this narrow specialization limits the employee's vision and prevents him or her from observing the long-term loan decision model; hence, he or she is unable to see and
appreciate the sharing and its benefits to the organization. Sharing, and the
consequent benefits to the organization, put a straitjacket on individual employees
and prevent them from maximizing their own impact on the organization, as they
perceive it.
and the effects may propagate over many models as a result of sharing and the strong
interaction it creates. Example 7 demonstrated in case 3 that decomposition was undesirable when all tasks were equally difficult to execute. However, when one task was (presumably) automated and its cost dropped 50 percent, decomposition became
desirable, making the optimum organizational structure dependent on the cost of
executing each task.
High-level managerial decision models are more likely to be ill-defined, fuzzy models,
with large amounts of external information processed in cursory and intuitive fashion
to focus the organization quickly and efficiently around some well-defined goals.
These are low-selectivity models by virtue of the fact that they receive large amounts
of unorganized, high uncertainty, external information, and produce small search
spaces focused around well-defined internal goals with little uncertainty. The desirability of low-selectivity models at high levels in an organization can be shown easily within the model management paradigm. Example 7 informally demonstrated this proposition, since a task t1 with high selectivity (80 percent) was not shareable, but when its selectivity was reduced (60 percent), it became shareable, which led to a decomposition, which in turn pushed t1 upward in the task sequence and in the structural hierarchy to realize the benefits of sharing. In general, a task t1 is more likely
to be shared with all the consequences listed above, as its selectivity is reduced,
assuming that all complexity functions are monotonically nondecreasing. Given two
models (t2' tl) and (t3' tl) where t2 and t3 correspond to all tasks preceding tl in the two
models respectively, let the complexity functions of tl' t2, and t3 be cl(x), c2(x), and
c3(x) respectively, where x is the initial search space faced by the two models, and let
the selectivities be s1, s2, and s3, respectively. The cost of the undecomposed models is given as:

cu = c2(x) + c3(x) + c1(s2 x) + c1(s3 x),

since t2 and t3 are executed first with the search space x, and t1 is executed next in each model with an appropriately reduced search space. Similarly, the cost of the decomposed model is:

cd = c1(x) + c2(s1 x) + c3(s1 x),

since t1 is executed once with the initial search space, and t2 and t3 are executed next in each model for the reduced search space. The proposition is that cd − cu becomes smaller (i.e., decomposition more desirable) as the selectivity s1 becomes smaller. In other words, (cd − cu) and s1 change in the same direction, or
d(cd − cu)/ds1 = d/ds1 [c1(x) + c2(s1 x) + c3(s1 x) − c2(x) − c3(x) − c1(s2 x) − c1(s3 x)] = x c2′(s1 x) + x c3′(s1 x) ≥ 0.

The derivatives are nonnegative since the complexity functions have been assumed to be nondecreasing.
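The proposition can also be checked numerically. In this sketch of ours, a single linear complexity function c(x) = x is assumed for all three tasks, matching example 7's numbers, and cd − cu is evaluated at several values of s1:

```python
def cd_minus_cu(s1, s2=0.20, s3=0.30, c=lambda x: x, x=100.0):
    """cd - cu for a shared task t1, complexity function c, search space x."""
    cu = c(x) + c(x) + c(s2 * x) + c(s3 * x)   # c2(x) + c3(x) + c1(s2 x) + c1(s3 x)
    cd = c(x) + c(s1 * x) + c(s1 * x)          # c1(x) + c2(s1 x) + c3(s1 x)
    return cd - cu

print(cd_minus_cu(0.80) > 0)   # True: decomposition does not pay (260c vs. 250c)
print(cd_minus_cu(0.20) < 0)   # True: decomposition pays
print(cd_minus_cu(0.20) < cd_minus_cu(0.50) < cd_minus_cu(0.80))  # True: monotone in s1
```

Substituting a higher-order, but still nondecreasing, complexity function for `c` preserves the monotonicity, as the derivative argument predicts.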
reductions for various organizational tasks. Similarly, the business process reengineering literature, although explicit about the need to restructure, has great difficulty prescribing what structural changes may be appropriate following automation and the consequent reduction in various costs. The model management paradigm suggests four possibilities, depending on two factors. Task uncertainty refers to the uncertainty reduction achieved by a task. Task specialization refers to the frequency of occurrence of the task within the organization (i.e., whether it is specific to a small set of decision models or common to
many). High uncertainty-high occurrence tasks such as statistical decision models lead to
increased centralization when their costs are reduced; high uncertainty-low occurrence
tasks such as individualized decision support models lead to decentralization upon cost
reduction; low uncertainty-high occurrence tasks such as communication and coordina-
tion models lead to decentralization; and low uncertainty-low occurrence tasks such as
clerical decision models lead to increased centralization upon cost reduction. The intuition
behind these results can be summed up as a shift in resources and power toward those
models that become more efficient through cost reduction and automation.
Top-level, highly uncertain, and highly common tasks, when they are automated,
concentrate power and decision making in those models. Since they are common
within the organization, there are efficiencies to be gained from sharing these tasks,
and those who execute such tasks are in a position to make major contributions to the
organization by reducing their costs and derive the benefits by centralizing the
common task and consolidating the power resulting from the dependence of many
decision models on the common task. High uncertainty-low occurrence tasks such as
individualized decision support models, on the other hand, lead to decentralization of
structures and decisions, since these models, while gaining power, cannot wield
organization-wide influence because of the specialized nature of their contribution,
and hence contribute to the dispersion of power. This is commonly observed in the
rise of staff functions and consequent dispersion of power in professional organiza-
tions such as universities and hospitals. Low uncertainty-high occurrence tasks also
lead to decentralization under cost reduction, since they push power and decisions
toward lower-level (i.e., low-uncertainty) models. This is commonly observed in
decentralization that results from communication and coordination technologies. Low
uncertainty-low occurrence tasks, on the other hand, increase centralization by
reducing their costs, since they push the power away from themselves to higher levels
in the organization. This is often observed when clerical tasks such as word processing
are automated, often leading to central control of these functions through secretarial
pools and word-processing departments. In sum, high uncertainty refers to top-level
tasks in the organizational hierarchy and low uncertainty to low-level tasks. High
occurrence allows the models to exert organization-wide influence; low occurrence
prevents it by pushing the power away to other models.
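The two-by-two taxonomy above can be summarized in a few lines (a sketch; the tuple keys and string labels are this summary's own shorthand, not the paper's notation):

```python
# Structural outcome of reducing a task's cost, keyed by
# (task uncertainty, task occurrence); labels are illustrative shorthand.
STRUCTURAL_SHIFT = {
    ("high", "high"): "centralization",    # e.g., statistical decision models
    ("high", "low"):  "decentralization",  # e.g., individualized decision support
    ("low",  "high"): "decentralization",  # e.g., communication/coordination models
    ("low",  "low"):  "centralization",    # e.g., clerical models such as word processing
}
```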
In model management terminology, assume that two models are given with tasks (t1, t2) and (t1, t3), cost functions c1(x), c2(x), and c3(x), and selectivities s1, s2, s3 for the tasks t1, t2, t3, respectively. Cost reduction may lead to four distinct outcomes,
depending on uncertainty and task specialization. Generalization to multiple models
with multiple tasks is straightforward by analyzing two models and two tasks at a time.
more likely to exchange positions and lead to structural change than distant tasks in
the sequence. The difference of the two costs is:
cd − cu = c1(x) + c2(s1x) + c3(s1x) − c2(x) − c3(x) − c1(s2x) − c1(s3x).
If we assume positive cost functions with nonzero search spaces, the following four
cases correspond to the four types of tasks experiencing cost reduction:
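Under the additional assumption of linear cost functions ci(y) = ai · y (the text requires only that cost functions be positive), the difference above can be evaluated directly; a negative value favors executing the shared task t1 first:

```python
def cost_difference(a1, a2, a3, s1, s2, s3, x=1.0):
    """Evaluate cd - cu from the formula in the text, assuming linear
    costs ci(y) = ai * y (an assumption of this sketch).

    cd: shared task t1 executed first in both models;
    cu: t1 executed after t2 (model 1) and after t3 (model 2).
    A negative result favors sharing t1 up front."""
    c_d = a1 * x + a2 * s1 * x + a3 * s1 * x
    c_u = a2 * x + a3 * x + a1 * s2 * x + a1 * s3 * x
    return c_d - c_u

# A cheap, highly selective shared task favors centralizing it first:
diff = cost_difference(a1=60, a2=120, a3=120, s1=0.2, s2=0.1, s3=0.1)
```

An expensive shared task with little selectivity flips the sign, favoring the unshared ordering.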
Structural Design
THE STRUCTURAL DESIGN PROBLEM INVOLVES CHOOSING THE OPTIMAL decomposition by balancing the benefits of sharing with the costs of communication (that is,
model suboptimization to accommodate shared models) [9, 18]. To determine the cost
of model suboptimization, we will formulate the problem of finding the optimum
sequence of information processing within a single model, and then formulate the
multimodel problem-that is, balancing the costs of deviating from the optimum
sequence in each model against the obvious benefits of sharing.
A model T with tasks t1, ..., tn is given. Each ti has a complexity function ci and a selectivity si. Extension to multiple variables is straightforward by treating ci and si as
vectors. The selectivity si is defined as 1 − ri, where ri is the percent reduction in the search space achieved by executing the task ti. The complexity of each task ti is assumed to be proportional to its search space, and it is reduced by the joint selectivity of all the tasks that have been executed prior to ti. The cost of information processing for a particular
sequence of tasks is the total cost of executing all the tasks in that sequence. The
single-model problem is to find the optimum sequence of execution for the tasks t1, ..., tn
to minimize the cost of information processing. This problem is a special case of the
sequential decision making problem [24, 26] and can be formulated as a dynamic
programming problem [32, 38]. It is simpler to solve than the general sequential
decision making problem, since the tasks comprising the model (process) are given,
and they all need to be executed to satisfy the model. The only decision is the sequence
of execution that determines the amount of sharing possible. In the general sequential decision making problem, by contrast, the choice of which tasks to execute is itself part of the decision.
Single Model
Given the tasks t1, ..., tn ∈ T; selectivities s1, ..., sn ∈ S; and costs c1, ..., cn ∈ C, where Ti and Si are a set of tasks and the corresponding set of selectivities, respectively. The objective is to find the optimum cost f(Φ), where f(T) = 0 and Φ is the null set. Intuitively, f(Ti) represents the optimum cost of the partial model T − Ti, that is, the minimum remaining cost after the execution of the tasks in Ti. It is equal to the cost of the next task tp + the minimum remaining cost after Ti ∪ tp. The remaining cost after all tasks have been executed is f(T) = 0, and the minimum remaining cost before any task is executed, f(Φ), is the objective function. The problem is computationally
solvable with complexity of O(n!).
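A compact way to see the formulation is a memoized search over executed-task subsets (a sketch, not the paper's code; the initial search space is normalized to x = 1 and costs are assumed linear in the remaining search space). Memoizing on the set of executed tasks prunes the O(n!) orderings to O(2^n · n) subproblems:

```python
from functools import lru_cache

def optimal_sequence(costs, selectivities):
    """Single-model dynamic program: minimize total information-processing
    cost over all execution orders. A task's cost is its base cost scaled
    by the joint selectivity of every task executed before it."""
    n = len(costs)

    @lru_cache(maxsize=None)
    def f(done):  # done = frozenset of already-executed task indices
        if len(done) == n:
            return 0.0, ()          # f(T) = 0: nothing remains to execute
        scale = 1.0                 # joint selectivity of executed tasks
        for j in done:
            scale *= selectivities[j]
        best = None
        for p in range(n):          # choose the next task tp
            if p in done:
                continue
            rest_cost, rest_seq = f(done | {p})
            total = costs[p] * scale + rest_cost
            if best is None or total < best[0]:
                best = (total, (p,) + rest_seq)
        return best

    return f(frozenset())           # f(Phi): minimum cost of the full model
```

For two tasks with costs 60 and 120 and selectivities 0.2 and 0.1, running the cheap, highly selective task first yields 60 + 120 × 0.2 = 84, which the search confirms as optimal.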
Multimodel Formulation
The single-model formulation can be extended in a straightforward fashion to accommodate multiple models.
Given n tasks t1, ..., tn ∈ T, with selectivities s1, ..., sn ∈ S, and respective costs c1, ..., cn ∈ C, distributed over r models M1, ..., Mr, define f(T1, ..., Tr) recursively as the minimum, over the choice of next tasks (t1, ..., tr) with ti ∈ Mi − Ti, of the cost of executing (t1, ..., tr), minus the cost of each duplicate among t1, ..., tr, plus f(T1 ∪ t1, ..., Tr ∪ tr), where f(M1, ..., Mr) = 0 and the optimum solution is given by f(Φ, ..., Φ).
Intuitively, the formulation is similar to the single-model case. f(T1, ..., Tr) represents the optimum cost of the partial models M1 − T1, ..., Mr − Tr, that is, the minimum remaining cost after the execution of T1, ..., Tr. It is equal to the cost of the next step (t1, ..., tr) + the remaining cost after the execution of (T1 ∪ t1, ..., Tr ∪ tr). The tasks that are common to multiple models at the beginning of their sequences are executed only once, leading to the subtraction of their costs in the formulation for every duplicate count. The remaining cost after the execution of all tasks of all models is f(M1, ..., Mr) = 0, and the remaining cost before the execution of any tasks, f(Φ, ..., Φ), is the objective function. The problem is computationally tractable with complexity O((n!)^r).
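For small models the multimodel formulation can be checked by brute force (a sketch with assumed linear costs and x = 1; only tasks shared at the start of both sequences are charged once, following the duplicate-subtraction rule above):

```python
from itertools import permutations

def best_shared_plan(model1, model2, costs, sel):
    """Enumerate all orderings of two models' tasks; a common prefix of
    the two sequences is executed once, so its cost is charged only to
    the first model. Returns (total cost, sequence1, sequence2)."""
    def seq_cost(seq, skip=frozenset()):
        total, scale = 0.0, 1.0
        for t in seq:
            if t not in skip:          # duplicate of an already-shared task
                total += costs[t] * scale
            scale *= sel[t]            # selectivity applies either way
        return total

    best = None
    for p1 in permutations(model1):
        for p2 in permutations(model2):
            k = 0                      # length of the shared leading run
            while k < min(len(p1), len(p2)) and p1[k] == p2[k]:
                k += 1
            total = seq_cost(p1) + seq_cost(p2, skip=frozenset(p1[:k]))
            if best is None or total < best[0]:
                best = (total, p1, p2)
    return best
```

With a shared task 0 (cost 120, selectivity 0.1) and model-specific tasks 1 and 2 (cost 60, selectivity 0.2 each), sharing task 0 up front gives 120 + 6 + 6 = 132, versus 168 for the best unshared orderings.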
. . .
t5: receive invoice
t6: audit invoice
t7: pay
t3, t4, and t5 can be combined into a single task t0, since their sequence is fixed and not
controllable. The vendor selection in a department takes on average 60 minutes of
search through vendor files and eliminates 80 percent of all vendors as inappropriate
for the item needed. RFQ forms are more complex. They take about 120 minutes to
fill, duplicate, and send to the selected vendors. An expected 90 percent of vendors
are eliminated through the bidding process since they submit clearly noncompetitive
bids or do not respond to the RFQ at all because they are unable to fill the order at this
time. Selecting the best vendor, preparing the purchase order, and receiving the
Case 1
The dynamic programming formulation returns the sequence t1, t2, t0, t6, t7 as the optimum solution with the cost 60 + 24 + 1.2 + 0.6 + 0.5 = 86.3.
Case 2
Now consider two separate departments executing the identical tasks, except that
selecting vendors (t1) is specific to each department, since the relevant vendors are different for each department. The dynamic programming formulation of the multimodel problem returns the sequence t2, t1, t0, t6, t7 as the optimum sequence in each department with t2 shared, corresponding to the centralization of the bidding process. The total cost is 120 + 2 × (6 + 1.2 + 0.6 + 0.5) = 136.6, clearly less than the decentralized bidding solution with the cost 2 × 86.3 = 172.6.
Case 3
departments, 2 × (30 + 24 + 1.2 + 0.6 + 0.5) = 112.6, which is clearly less than the cost of the centralized bidding solution of case 2, which is now 130.6.
Case 4
Consider the implementation of EDI technology to link the company with its suppliers, largely automating the request for quotations and the bidding process, reducing its cost to 60. This reduces the cost of the decentralized bidding process of case 3 for the two departments to 2 × (30 + 12 + 1.2 + 0.6 + 0.5) = 88.6.
However, the optimum solution changes again to the centralization of the bidding
process. The optimum sequence for each department is again t2, t1, t0, t6, t7, with t2 centralized and shared. The total cost for the two departments is 60 + 2 × (3 + 1.2 + 0.6 + 0.5) = 70.6, which is clearly less than the cost of the decentralized solution, 88.6.
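The totals quoted across the four cases can be rechecked in a few lines (per-step costs in minutes, exactly as quoted in the text; the case 3 setup, apparently a halving of the vendor-selection cost from 60 to 30, is inferred from the quoted figures):

```python
# Case 1: decentralized bidding, one department.
case1 = 60 + 24 + 1.2 + 0.6 + 0.5
# Case 2: centralized bidding (t2 shared), two departments; compare 2 x case 1.
case2 = 120 + 2 * (6 + 1.2 + 0.6 + 0.5)
# Case 3: vendor selection apparently halved (60 -> 30); decentralized wins.
case3_dec = 2 * (30 + 24 + 1.2 + 0.6 + 0.5)
case3_cen = 120 + 2 * (3 + 1.2 + 0.6 + 0.5)
# Case 4: EDI halves the bidding cost (120 -> 60); centralization wins again.
case4_dec = 2 * (30 + 12 + 1.2 + 0.6 + 0.5)
case4_cen = 60 + 2 * (3 + 1.2 + 0.6 + 0.5)
```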
Conclusions
THERE IS CONSIDERABLE EVIDENCE THAT IT HAS A SIGNIFICANT IMPACT on organi-
zational structure, although the exact nature and characteristics of that impact are not
well understood. On the prescriptive side, IT creates significant opportunities for
increased efficiency through business process reengineering, although again the
precise nature and the direction of reengineering efforts are not well established. The
decision model paradigm of organizations provides an effective methodology to
describe and quantify the organizational impact of IT, and to direct and specify the
appropriate reengineering efforts to maximize the benefits from IT. Decision models
provide a simplified analytical model of organizations as decision-making entities,
and organizational structure as the arrangement of decision models and the decision
tasks comprising them. An analytical model quantifies a variety of organizational
issues, including the need for business process reengineering after the introduction of
IT, and the exact conditions under which IT may lead to more or less centralized
structures. Reoptimization of business processes following the introduction of IT is
formulated as a dynamic programming problem and shown to be computationally
tractable.
Like all analytical models, the formulation emphasizes some characteristics of tasks
that are heavily influenced by IT, but assumes away some other characteristics.
Behavioral characteristics such as employee skills, motivation, incentives, and resis-
tance are not part of the mathematical model, although they can significantly affect
the feasibility of transition from the current structure to a more efficient structure
envisioned by BPR. The emphasis in this paper is on the determination of the efficient
structures, but not on the transition strategies from the current state to the more
efficient state. Effective transition strategies are left for future work.
A major assumption in the model is the view of organizations as collections of
decision models that process information and reduce uncertainty. This information-
processing view of organizations has been extensively studied in the literature, and its
REFERENCES