Georgiy M. Levchuk
Yuri N. Levchuk
Jie Luo
Fang Tu
Krishna R. Pattipati
Abstract
This paper presents a library of algorithms to solve a broad range of optimization problems arising
in the normative design of organizations to execute a specific mission. The use of specific
optimization algorithms for different phases of the design process leads to an efficient matching
between the mission structure and that of an organization and its resources/constraints.
This library of algorithms forms the core of our design software environment for synthesizing
organizations that are congruent with their missions. It allows an analyst to obtain an acceptable
trade-off among multiple objectives and constraints, as well as between computational complexity
and solution efficiency (desired degree of sub-optimality).
1. Introduction
1.1 Motivation
The optimal organizational design problem is one of finding both the optimal organizational
structure (e.g., decision hierarchy, allocation of resources and functions to decision-makers
(DMs), communication structure, etc.) and strategy (allocation of tasks to DMs, sequence of task
execution, etc.) that allow the organization to achieve superior performance while conducting a
specific mission ([Levchuk et al., 1999a]). Over the years, research in organizational decision-
making has demonstrated that there exists a strong functional dependency between the specific
structure of a mission environment and the concomitant optimal organizational design.
Subsequently, it has been concluded that the optimality of an organizational design ultimately
depends on the actual mission parameters (and organizational constraints). This premise led to
the application of systems engineering techniques to the design of human teams. This approach
advocates the use of normative algorithms for optimizing human team performance (e.g., [Pete et
al., 1993, 1998], [Levchuk et al., 1996, 1997, 1999a,b]).
________________________________________________________
* This work was supported by the Office of Naval Research under contracts #N00014-93-1-0793, #N00014-98-1-0465 and
#N00014-00-1-0101
1.2 Related Research
When modeling a complex mission and designing the corresponding organization, the variety of
mission dimensions (e.g., functional, geographical, terrain), together with the required depth of
model granularity, determine the complexity of the design process. Our mission modeling and
organizational design methodology allows one to overcome the computational complexity by
synthesizing an organizational structure via an iterative solution of a sequence of smaller and well-
defined optimization problems ([Levchuk et al., 1997]). The above methodology was used to
specify an organizational design software environment, outlined in [Levchuk et al., 1999b], to
assist a user in representing complex missions and synthesizing the organizations. The component
structure of our software environment allows an analyst to mix and match different optimization
algorithms at different stages of the design process.
Our mission modeling and three-phase iterative organizational design process, first proposed in
[Levchuk et al., 1997] and later enhanced in [Levchuk et al., 1998], are graphically represented in
Figure 1.
Figure 1. The three-phase design process: mission modeling (mission dependency graph), task-resource allocation (assignment), DM-resource allocation (assignment), and the DM-DM organization hierarchy (command hierarchy).
The 3-phase design process of Figure 1 solves three distinct optimization sub-problems:
On-line Adaptation Phase. In case of an asset or a decision node failure, the application of a
branch-and-bound method to the task-resource allocation-preference matrix generates the next
best assignments (the new task-resource allocation strategy). This method provides a quick and
efficient search for adaptation options. The dynamic scheduling accounts for on-line changes
without having to completely resolve the problem. If the newly obtained task-resource assignment
matrix violates the organizational constraints, Phases II and III of the algorithm are used to
generate the new organizational structure. In this case, Phase II is completed in an evolutionary
mode (platform clusters are obtained by regrouping the old platform groups, rather than
generating entirely new ones from scratch). Finally, if the process of generating a feasible
organizational structure fails, the mission must be aborted (see [Levchuk et al., 1998] for details).
Recently, we have begun the process of implementing our modeling and design methodology in
software to fully automate the organizational design process, while allowing for iterative user-
defined modifications at various stages of the design process. To assist an analyst, our software
environment is designed to display the metrics of organizational performance, characterize the
attainment of mission objectives, and specify the workload distribution across the organizational
elements of interest.
Our modeling and design environment ([Shlapak et al., 2000]) includes the following seven key
components (Fig.2):
Figure 2. The software environment: Resource Capability Definition; Asset Description; DM Profiler; Translation of Different Data Structures; Mission Modeling (Task Decomposition, Task Dependency Graph, Task Parameters); Algorithm Testing; Other Organizational Design Algorithms; and Optimization Components (Performance Criteria/Measures, Dynamic Simulation, Optimization Results).
After the Performance Criteria/Measures component is used to define objective functions for the
design process, the last three components of our software environment (Schedule Generation,
Resource Allocation, and Hierarchy Construction) allow an analyst to perform a step-by-step
design of the organizational structure, while implementing, if desired, the user-defined design
modifications at various stages of the design process to adjust the metrics of organizational
performance (e.g., weights on objective function, workload distribution, etc.). These design
optimization components present a step-by-step visualization of our organizational design
process. Specifically, the Schedule Generation component produces the task-resource allocation
schedule that corresponds to Phase I of our organizational design algorithm. The Resource
Allocation component (Phase II) defines DM functionality by grouping platforms and provides a
balance between internal and external coordination. Finally, the Hierarchy Construction
component (Phase III) derives organizational hierarchy to minimize the workload due to indirect
external coordination induced by the hierarchy structure.
In general, the three sub-problems of Schedule Generation, Resource Allocation, and Hierarchy
Construction are NP-hard (optimal algorithms take exponential time). Thus, efficient (near-optimal)
heuristics need to be explored to effectively solve large-scale organizational design
problems. The modular structure of our software environment allows one to apply different
algorithms (both optimal and heuristic) at different stages of the design process to handle the
complexity of a specific problem at hand. The iterative application of the corresponding
algorithms allows us to simultaneously optimize multiple performance criteria, subject to an
acceptable trade-off among design objectives.
The organizational structure, an outcome of the design process, prescribes the relationships
among the organizational entities by specifying:
• Task-resource schedule;
• DM-resource access/allocation;
• DM organizational hierarchy;
• Inter-DM coordination structure.
The organizational structure defines each individual DM’s capabilities (by assigning each DM a
share of the information and resources) and specifies the rules that regulate inter-DM
coordination. The organizational structure, together with a set of thresholds constraining a DM
workload, determines the boundaries of the space of feasible organizational strategies (i.e., all
feasible DM-task-resource assignments and coordination strategies), from which the organization
can choose a particular strategy for implementation. The feasible strategy space delimits the
strategy adaptations that an organization can undertake without major structural
reconfigurations.
3. Library of Optimization Algorithms
In this section, we present the library of optimization algorithms used in our organizational design
software environment. The library is constantly evolving and new algorithms and performance
measures are being added to enlarge the scope of applicability of our software environment.
3.1 Scheduling
Scheduling concerns the allocation of limited resources to tasks over time. The resources and
tasks may take many forms. The resources may be platforms, human teams, surveillance assets,
information sources, etc. The tasks may be landings or take-offs, evaluations or executions,
operational or informational. They can be aggregated or independent, defensive or offensive. Each
task may have a different priority level and opportunity window.
Scheduling is a decision-making process whose goal is to optimize one or more objectives. The
objectives may take many forms. One possible objective is the minimization of mission
completion time; another is the minimization of task deadline violations.
The scheduling phase of the organizational design process can be generally described as follows.
A set of tasks with specified processing times, resource requirements, locations and precedence
relations among them need to be executed by a given set of platforms with specified resource
capabilities, ranges of operation and velocities. Resource requirements and resource capabilities
are represented via vectors of the same length with each entry corresponding to a particular
resource type. Tasks are assigned to groups of platforms in such a way that, for each such
assignment, the task's resource requirement vector is component-wise less than or equal to
the aggregated resource capability of the group of platforms assigned to it. The task can begin to
be processed only when all its predecessors are completed and all platforms from the group
assigned to it arrive at its location. A resource can process only one task at a time. Platforms are
to be routed among the tasks so that the overall completion time (called Mission Completion
Time –the completion time of the last task) is minimized.
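The component-wise capability test described above can be sketched as follows; the platform identifiers and vectors below are illustrative, not taken from the paper's example:

```python
def aggregated_capability(group, capability):
    """Component-wise sum of the capability vectors of the platforms in `group`."""
    length = len(next(iter(capability.values())))
    return [sum(capability[m][l] for m in group) for l in range(length)]

def can_process(requirement, group, capability):
    """True iff the group's aggregated capability covers the task's requirement
    vector component-wise (the assignment rule described above)."""
    agg = aggregated_capability(group, capability)
    return all(r <= c for r, c in zip(requirement, agg))

# Two platforms over three resource types (illustrative numbers).
capability = {1: [2, 0, 1], 2: [1, 3, 0]}
print(can_process([2, 2, 1], {1, 2}, capability))  # -> True:  [3,3,1] covers [2,2,1]
print(can_process([2, 2, 1], {2}, capability))     # -> False: platform 2 alone is insufficient
```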
3.1.2 Example
A joint group of Navy and Marine Forces is assigned to complete a military mission that includes
capturing a seaport and airport to allow for the introduction of follow-on forces. There are two
suitable landing beaches designated "North" and "South", with a road leading from the North
Beach to the seaport, and another road leading from the South Beach to the airport. From
intelligence sources, the approximate concentration of the hostile forces is known, and counter-
strikes are anticipated. The commander devises a plan for the mission that includes the completion
of tasks shown in Figure 3. The following 8 resource requirements/capabilities are modeled:
AAW (Anti-Air Warfare), ASUW (Anti-Surface Warfare), ASW (Anti-Submarine Warfare), GASLT
(Ground Assault), FIRE (Firing Squad), ARM (Armory), MINE (Mine Clearing), DES
(Destroyer). In Figure 4, mission tasks, the assets (platforms) available for operation, resource
requirement vector for each task, resource capability vector for each platform and other relevant
parameters are presented.
Figure 3. Mission task graph for the CVBG and ARG forces, from START to END: TAKE N. BCH and TAKE S. BCH; CLEAR SAMs*; clearing encounters on the seaport and airport roads (SMine, GMINE, TANK); TAKE HILL; Resupply PORT No. and PORT So.; Defend N. BCH and Defend S. BCH (ARTY, FROG, Helos); BLOW BRIDGE (GTL*); TAKE PORT and TAKE A/P. Aggregated defend and encounter tasks have possible subtasks (Silk*, Air(S)*, Sea(Pb)*, Sea(Sub)); mission tasks that must be done are known in advance. An asterisk (*) indicates tasks that must be distinguished from neutral (or decoy) counterparts.
The scheduling problem arising in organizational design generalizes a large set of well-known
problems. When there exists only one platform, it is related to the Traveling Salesman Problem
(TSP) and its extensions (such as the Time-dependent TSP, the TSP with precedence relations, etc.;
for a review, see [Lawler et al., 1985]; for the latest results, see [Mingozzi et al., 1997], [Zweig et al.,
1995], [Fischetti et al., 1997], [Franca, 1995]). When any platform can process any task, the
problem simplifies to the Multiple TSP with precedence relations. If, in addition, the processing of a
task can be separated in time among different platforms, our problem is related to the Vehicle
Routing Problem and its extensions (for a review, see [Malandraki et al., 1992], [Golden et al.,
1988]; for the latest results, see [Fisher et al., 1994], [Dumas et al., 1995], [Taillard et al., 1997]).
Another related useful problem is the Dial-a-Ride problem (see [Madsen et al., 1995]). In the case
when travel times among task locations are much smaller than the task processing times (and
therefore can be ignored), the problem reduces to a Multiprocessor Scheduling problem with
Precedence Constraints (for review, see [El-Rewini, 1994], [Cheng et al., 1990], for recent
studies see [Chan, 1998], [Van De Velde, 1993], [Baruah, 1998]). For a review of general
scheduling problems, see [Pinedo, 1995], [El-Rewini, 1994].
Other variations of problem formulation are possible. For example, there may exist a delay
between processing of two tasks on the same platform (“adjustment” delay). The opposite of this
situation is when the delay occurs only when tasks are processed on different platforms
(communication delays) with no delay for processing by the same platform. This has relevance in
Multiprocessor Scheduling with inter-processor communication delays (see [Baruah, 1998],
[Selvakumar, 1994]). Another variation is the existence of time windows for processing each task
(that is, the earliest start times, called release times, and the latest end-times, called deadlines,
define opportunity windows for tasks). In this case, the objective function involves the
minimization of earliness-tardiness penalties (that is, the penalties resulting from processing tasks
outside of their time-windows). In our problem, we assume that task-processing times are fixed.
In real life, situations may arise when task-processing times depend on the amount of resources
allocated to them. The objective then is to achieve a tradeoff between processing tasks as fast as
possible and using as few resources as possible [Turek et al., 1992]. Another complication is that
a task can begin to be processed when the assigned platforms are within a specified distance of
this task (depending on the task and the range of the platform). In this case, the problem assumes
the form of the shortest covering path problem (see [Current, 1994]). Other realistic constraints,
such as the ability of tasks to move during the mission and platforms having expendable resources
(such as fuel, firepower, supplies, etc.), can be included.
All of these instances of our scheduling problem are proven to be NP-hard, meaning that no
polynomial-time algorithms are known for finding their optimal solutions. Therefore, research in this
area has primarily focused on the development of near-optimal algorithms and local search
techniques.
3.1.4 Mathematical Formulation of the Scheduling Problem
The scheduling problem associated with Phase I of our 3-phase organizational design process
is defined by the following parameters:
Assignment variables:

w_im = 1, if platform m is assigned to task i; 0, otherwise.

Traversing variables:

x_ijm = 1, if platform m is assigned to process task j after processing task i; 0, otherwise.

s_i = start time of task i.
Y = mission completion time (time when the last task is completed).
The problem constraints can be formulated as follows. Task i can be assigned to a platform m
only if platform m travels to i directly from some other task j (including the depot task 0) and
travels from task i to some other task. The traveling of platform m is described by the variables
x_ijm. A platform can arrive at a task location (leave a task location) only once. Note that
x_iim = 0 for i = 1,..,N (except for x_00m, which can be 1 if the platform is idle during the entire
mission). Therefore, the following constraints on the problem variables (called assignment
constraints) are introduced:

∑_{j=0}^{N} x_ijm = ∑_{j=0}^{N} x_jim = w_im,  i = 0,..,N; m = 1,..,K
If task i must precede task j (that is, p_ij = 0), then task j can begin to be processed only after task i
is completed, that is

s_i + t_i ≤ s_j

This is true for all predecessors of task j. Also, if any platform m travels directly from task i to
task j (that is, x_ijm = 1), then task j can begin to be processed only after task i is completed plus the
span of time needed for platform m to travel from i to j (this travel time is equal to d_ij/v_m), that is

s_i + t_i + d_ij/v_m ≤ s_j

Combining these together, and noting that T > s_i + t_i for any i, we obtain the following constraints:

s_i − s_j + x_ijm · (d_ij/v_m + p_ij · T) ≤ p_ij · T − t_i

These constraints also eliminate cycling. When p_ij = 1 and x_ijm = 0, the precedence constraints are
redundant.
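A quick numerical check of the combined constraint, in the form s_i − s_j + x_ijm·(d_ij/v_m + p_ij·T) ≤ p_ij·T − t_i, illustrates its three regimes (the numeric values are illustrative only):

```python
def precedence_ok(s_i, s_j, t_i, d_ij, v_m, x_ijm, p_ij, T):
    """Evaluate the combined precedence/routing constraint
    s_i - s_j + x_ijm*(d_ij/v_m + p_ij*T) <= p_ij*T - t_i."""
    return s_i - s_j + x_ijm * (d_ij / v_m + p_ij * T) <= p_ij * T - t_i

T = 1000.0
# p_ij = 0 (i precedes j): forces s_j >= s_i + t_i.
print(precedence_ok(0, 10, 10, 0, 1, 0, 0, T))   # -> True  (s_j = 10 = s_i + t_i)
print(precedence_ok(0, 5, 10, 0, 1, 0, 0, T))    # -> False (s_j = 5 is too early)
# x_ijm = 1 (platform m travels i -> j): the travel time d_ij/v_m is added.
print(precedence_ok(0, 15, 10, 10, 2, 1, 1, T))  # -> True  (s_j = 15 = 10 + 10/2)
# p_ij = 1 and x_ijm = 0: the constraint is redundant (always satisfied).
print(precedence_ok(0, 0, 10, 0, 1, 0, 1, T))    # -> True
```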
Since the aggregated resource capability vector of a platform group assigned to a task should be
greater than or equal to the task resource requirement vector, we obtain the following constraints
(called resource requirement constraints):

∑_{m=1}^{K} r_ml · w_im ≥ R_il,  i = 1,..,N; l = 1,..,S

These constraints also ensure that at least one platform is assigned to every task. The mission
completion time is equal to the maximum among the completion times of all tasks. It is also not
greater than the solution T obtained by a heuristic algorithm. Therefore, the following constraints
are introduced (called mission completion time constraints):

t_i + s_i ≤ Y ≤ T,  i = 1,..,N
The objective is to minimize the mission completion time. The problem is then formulated as
follows:

min Y

subject to

∑_{j=0}^{N} x_ijm − w_im = 0,  i = 0,..,N; m = 1,..,K
∑_{j=0}^{N} x_jim − w_im = 0,  i = 0,..,N; m = 1,..,K
s_i − s_j + x_ijm · (d_ij/v_m + p_ij · T) ≤ p_ij · T − t_i,  i, j = 1,..,N; m = 1,..,K
∑_{m=1}^{K} r_ml · w_im ≥ R_il,  i = 1,..,N; l = 1,..,S
s_i − Y ≤ −t_i,  i = 1,..,N
w_im, x_ijm ∈ {0,1}
This is a mixed-binary (i.e., containing continuous and binary variables) linear programming
(MLP) problem, which is NP-hard. Moreover, even relaxing the constraints on the binary
variables w_im, x_ijm (that is, making them real numbers in the [0,1] range) produces a linear
programming (LP) problem with K(N+1)² + N + 1 variables, 2K(N+1) equality constraints, and
KN(N−1) + S(N+1) inequality constraints. This creates a “curse of dimensionality” and makes it
hard to find solutions to even average-sized and relaxed scheduling problems.
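To make the growth concrete, the sketch below evaluates these counts; even a 20-task, 20-platform, 8-resource-type mission of the size used in the example already yields thousands of LP variables and constraints:

```python
def lp_size(N, K, S):
    """Variable and constraint counts of the relaxed LP, as functions of
    N tasks, K platforms and S resource types (formulas from the text)."""
    variables = K * (N + 1) ** 2 + N + 1
    equalities = 2 * K * (N + 1)
    inequalities = K * N * (N - 1) + S * (N + 1)
    return variables, equalities, inequalities

print(lp_size(20, 20, 8))  # -> (8841, 840, 7768)
```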
The optimal algorithms are based on the mixed-binary linear programming formulation described
in the previous section. For more information on solving integer (binary) linear programming
problems, see [Wosley, 1998], [Fang et al., 1993], [Nemhauser, 1988], [Bertsekas, 1997]. The
primary computational methods for solving mixed-integer programming problems optimally
include the branch-and-bound algorithm, dynamic programming, column generation, and
decomposition algorithms. The dynamic programming formulation for this problem is equivalent
to the branch-and-bound algorithm with the following branching rule: the nth level of the branch-
and-bound tree corresponds to the assignment of n tasks.
Define a state (M, LT_1,..,LT_K, f_1,..,f_K), where M ⊂ {1,..,N}, LT_j is the task last processed by
platform j, LT_j ∈ {0} ∪ M, and f_j is its completion time. We associate with our problem a state
space Φ of states (M, LT_1,..,LT_K, f_1,..,f_K), where each state represents a feasible schedule of the
tasks from set M on platforms 1,..,K such that the last task to be processed on platform j is LT_j,
and it is completed at time f_j. The state space Φ can be decomposed as Φ = Φ_1 ∪ … ∪ Φ_N, where

Φ_m = {(M, LT_1,..,LT_K, f_1,..,f_K) ∈ Φ : |M| = m}
When a new task j is assigned to the platform group {i_1,..,i_q}, the state is propagated to a new state with

M' = M ∪ {j}

LT'_i = LT_i, if i ∉ {i_1,..,i_q};  LT'_i = j, if i ∈ {i_1,..,i_q}

f'_i = f_i, if i ∉ {i_1,..,i_q};  otherwise

f'_i = t_j + max( max_{z∈IN(j)} f_z ,  max_{z∈{i_1,..,i_q}} ( f_z + d_{LT_z, j}/v_z ) )

that is, task j starts once all its predecessor tasks are completed (the first term, where f_z denotes the completion time of predecessor task z) and all platforms of the group have arrived at its location (the second term).
The state space can be reduced by using the following two dominance and bounding tests.
Test 1: Dominance.
A state ( M , LT1 ,.., LTK , f1 ,.., f K ) is said to dominate state ( M ' , LT '1 ,.., LT ' K , f '1 ,.., f ' K ) if
f j ≤ f ' j for each j=1,..,K. Clearly, the state ( M ' , LT '1 ,.., LT ' K , f '1 ,.., f ' K ) can be discarded if
such a state ( M , LT1 ,.., LTK , f1 ,.., f K ) exists.
Test 2: Bounding.
Let lb{( M , LT1 ,.., LTK , f1 ,.., f K )} denote a lower bound on the solution of the scheduling
problem given that the assignments from state ( M , LT1 ,.., LTK , f1 ,.., f K ) are fixed. It can also be
considered as the solution to the scheduling problem with tasks { j : j ∉ M } and each platform m
becoming available at time f m . If this lower bound is greater than T (which is an upper bound on
the optimal completion time), then this state can be discarded.
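Test 1 can be sketched directly; a state is represented as an (M, LT, f) triple, and (as an assumption beyond the text, which compares only the completion times) dominance is checked here between states sharing the same task set and last-task vector:

```python
def dominates(sa, sb):
    """Test 1: state A dominates state B when every platform finishes no later.
    Assumption beyond the text: the states must share the same task set M and
    the same last-task vector LT for the comparison to be meaningful."""
    Ma, LTa, fa = sa
    Mb, LTb, fb = sb
    return Ma == Mb and LTa == LTb and all(x <= y for x, y in zip(fa, fb))

a = ({1, 2, 3}, (1, 3, 2), [30.0, 40.0, 20.0])
b = ({1, 2, 3}, (1, 3, 2), [35.0, 40.0, 25.0])
print(dominates(a, b))  # -> True: state b can be discarded
```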
Example (continued).
In Figure 5, a state (M, LT_1,..,LT_K, f_1,..,f_K) ∈ Φ_9 and its possible propagation are shown.
Figure 5. Propagation of a state in Φ_9 (M = {1,..,9}) to Φ_10 (M = {1,..,9,17}). The remaining tasks are {10,..,20}, of which tasks 10, 11, 12, 13 and 17 can be assigned. Task T17 is considered: out of the 129 possible platform groups for T17, the 2-platform group {P10, P17} is selected, and each candidate propagation must pass the feasible-group check, the dominance test and the bounding test before the new state is retained (in the new state, platforms P10 and P17 have LT = 17 and f = 45.4).
Different bounds can be used in Test 2. It was found that the LP relaxation provides a bound close
to the optimal solution (although the variables at which it is attained are not binary). In addition,
relaxation techniques such as Lagrangian relaxation (described in [Levchuk et al., 2000]) are
used. Graphically, the dynamic programming algorithm is illustrated in Figure 6. This version of
dynamic programming is equivalent to a breadth-first search in a branch-and-bound tree. It should
be noted that dense precedence structures, as well as tight lower and upper bounds, substantially
reduce the search space.
3.1.6 Sub-optimal Algorithms
The dynamic list scheduling (DLS) heuristic has two main parts. The following notation (together
with the notation from Section 3.1.1) is used throughout this section.
Critical Path Algorithm (CP). Critical paths CP(i) are calculated for each task given the task
precedence graph and the task processing times. In the list scheduling algorithm, a task from
READY is selected with the largest CP(i). When ties occur, the task with the largest number of direct
successors is chosen (or ties are broken arbitrarily). Priority values are set as P(i) = CP(i).
Level Assignment Algorithm (LA). Levels are defined for each task based on the task precedence
graph in a sequential manner. All predecessors of a task can be located only on lower levels (no
task can have a direct successor in the same or lower level). The LA algorithm assigns tasks level
by level. In the scheduling algorithm, a task from READY is chosen with the smallest level. When
ties occur, the task with the largest CP(i) is selected. Priority values are set as

P(i) = max_j {l(j)} − l(i)
Weighted Length Algorithm (WL). As described in [Shirazi et al., 1990], the following coefficient
is used to select the tasks:

WL(i) = CP(i) + max_{j∈OUT(i)} CP(j) + ( ∑_{j∈OUT(i)} CP(j) ) / ( max_{j∈OUT(i)} CP(j) )

While scheduling, the task with the largest WL(i) is selected. If ties occur, the task with the largest
CP(i) is chosen (or ties are broken arbitrarily). Priority values are set as P(i) = WL(i).
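All three priority rules rely on the critical-path values CP(i). A minimal sketch of their computation over the precedence DAG (longest chain of processing times starting at each task; the toy graph is illustrative):

```python
def critical_paths(t, out_edges):
    """CP(i): longest chain of processing times starting at task i, computed
    over the precedence DAG in reverse topological order."""
    order, seen = [], set()

    def visit(i):  # DFS post-order lists successors before predecessors
        if i in seen:
            return
        seen.add(i)
        for j in out_edges.get(i, []):
            visit(j)
        order.append(i)

    for i in t:
        visit(i)
    CP = {}
    for i in order:  # every successor of i is already in CP
        CP[i] = t[i] + max((CP[j] for j in out_edges.get(i, [])), default=0)
    return CP

# Toy precedence graph 1 -> {2, 3}, 2 -> 4 with processing times t.
print(critical_paths({1: 5, 2: 3, 3: 8, 4: 2}, {1: [2, 3], 2: [4]}))
# -> {4: 2, 2: 5, 3: 8, 1: 13}
```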
In Part 1 of the DLS algorithm, an assignment is considered whenever a task (or a group of tasks)
is completed. At that time all the platforms processing the completed task become free (enter
FREE set). All the tasks for which this task was the last processed predecessor become ready
(enter the READY set). Then, if there exists a task in the READY set such that the aggregated capability
vector of the FREE set is component-wise greater than or equal to this task's requirement vector, an
assignment can be made. Otherwise, the next completion time is considered.
In Part 2 of the DLS algorithm, we select a group of platforms to allocate to a task selected for
processing in Part 1. The idea is to select platforms such that the amount of resources that are
consumed by the task selected in Part 1 should affect the processing of other tasks in the READY
set as little as possible. In addition, we want to choose the “closest” platforms in that the selected
group of platforms can arrive at this task’s location the fastest so as to minimize the completion
time of the selected task. Each platform is assigned a coefficient and assignments are made in
ascending order of these coefficients. The following coefficients were used (if task i is selected for
processing in Part 1):
V_1(m) = B(m,i) / (BR(m) − B(m,i))

V_2(m) = s_{l(m)} + t_{l(m)} + d_{l(m),i}/v_m

V_3(m) = s_{l(m)} + t_{l(m)} + d_{l(m),i}/v_m + B(m,i) / (BR(m) − B(m,i))
Here, s_j is the starting time of task j (with s_0 = 0 and t_0 = 0). After an initial group of platforms is
found, it is then pruned by eliminating platforms from this group in descending order of these
coefficients. The final group (which is irreducible) is allocated to task i and is denoted as G(i).
When the platforms are assigned, the starting time for the selected task i is computed as

s_i = max( f, max_{m∈G(i)} ( s_{l(m)} + t_{l(m)} + d_{l(m),i}/v_m ) )
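A sketch of the three coefficients; the quantities B(m,i), BR(m) and l(m) come from the paper's Section 3.1.1 notation, which is not reproduced here, so the sketch treats them as given lookups (assumed here to be the resources of platform m consumed by task i, the resources of m demanded in total, and the last task platform m processed):

```python
def preference_coefficients(m, i, B, BR, s, t, d, v, l):
    """V1, V2, V3 for platform m and task i. B, BR, s, t, d, v, l are lookups
    standing in for B(m,i), BR(m), s_j, t_j, d_{j,i}, v_m and l(m) of the text."""
    v1 = B[m][i] / (BR[m] - B[m][i])               # resource-contention term
    v2 = s[l[m]] + t[l[m]] + d[(l[m], i)] / v[m]   # earliest arrival of m at task i
    return v1, v2, v1 + v2                         # V3 = V2 + V1

# Illustrative numbers: platform 1, task 8, starting from the depot task 0.
print(preference_coefficients(1, 8, {1: {8: 2.0}}, {1: 6.0},
                              {0: 0.0}, {0: 0.0}, {(0, 8): 10.0}, {1: 5.0}, {1: 0}))
# -> (0.5, 2.0, 2.5)
```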
DLS Algorithm.

Step 1 (Update FREE and READY Sets). Let f be the smallest completion time in FT and FG the corresponding group of completed tasks; set FT ← FT \ {f}. Then:

FREE ← FREE ∪ G(FG)
for each i ∈ FG
    for each j ∈ OUT(i)
        nIn(j) ← nIn(j) − 1
        if nIn(j) = 0
            READY ← READY ∪ {j}
        end if
    end for
end for

Step 2 (Feasibility Check). If for every task i ∈ READY there exists a resource type l such that
∑_{m∈FREE} r_ml < R_il, GO TO Step 1 (consider the next completion time); else GO TO Step 3.

Step 3 (Task Selection). Select the task i ∈ READY with the highest priority P(i) and set
READY ← READY \ {i}.

Step 4 (Platform Group Selection). Build the group TG by repeatedly moving the platform with the
smallest coefficient from FREE1 (a working copy of FREE) into TG:

FREE1 ← FREE; TG ← ∅
do
    n = arg min_{m∈FREE1} {V_2(m)}
    FREE1 ← FREE1 \ {n}
    TG ← TG ∪ {n}
while ∃ l : ∑_{m∈TG} r_ml < R_il

Platform Group Pruning. Eliminate redundant platforms from TG in descending order of the coefficients:

TG1 ← TG
while TG1 ≠ ∅
    n = arg max_{m∈TG1} {V_2(m)}
    TG1 ← TG1 \ {n}
    if ∑_{m∈TG\{n}} r_ml ≥ R_il, ∀ l = 1,..,S
        TG ← TG \ {n}
    end if
end while

Step 5 (Update Completion Times). Set G(i) ← TG, compute the starting time s_i of task i as above,
insert the completion time s_i + t_i into FT, and GO TO Step 1.
(Flowchart of the DLS algorithm: Initialization → Step 3: Task Selection → Step 4: Platform Group Selection → Platform Group Pruning → Step 5: Update Completion Times, with “NO” branches returning to the next completion time.)
Example (continued).
Consider the steps of the DLS algorithm, with the scheduling results shown as a Gantt chart in Fig. 8.

Figure 8. Gantt chart for platform scheduling and allocation (platforms P1–P20 versus time, 0–90 time units).
The new schedule is shown in Figure 10 and the next completion time to be considered is 40 time
units corresponding to the completion time of task 9. By completing task 9, task 13 becomes
available for processing. Note that, although the current mission processing time is 90.1, we are
“working inside” the mission.
The final scheduling results obtained by DLS are shown in Figure 11.
Platform     Coefficient
P1           1.7
P4           0.2
P5           0.3
P7           0.2
P9           0.1
P10          0.5
P11          0.1
P12          0.1
P13          0.0
P17          0.4
P18          0.2
P19          0.2
P20          0.2

Figure 9. Platforms that can be used in processing task 8 and their preference coefficients.
Figure 10. Gantt chart for platform scheduling and allocation after task T8 is assigned to platforms P9, P11, P13 and P18.
Figure 11. Final Gantt chart of the schedule produced by DLS (platforms P1–P20 versus time; mission completion time 135.14 time units).
The DLS algorithm of subsection 3.1.5.1 produces sub-optimal solutions. It is expected that the
sequence with which the tasks are assigned according to DLS is near-optimal. Suppose that the
sequence of scheduling obtained from DLS is i1 ,.., i N . Then, the following algorithm is used to
improve the scheduling results.
for n = 1,..,N−1
    Select j ∈ {n+1,..,N} such that the scheduling sequence i_1,..,i_{n−1}, i_j, i_{n+1},..,i_{j−1}, i_n, i_{j+1},..,i_N
    is feasible and the schedule obtained using the platform allocation from the DLS algorithm is the shortest one.
    Then i_1,..,i_N ← i_1,..,i_{n−1}, i_j, i_{n+1},..,i_{j−1}, i_n, i_{j+1},..,i_N (permute tasks i_n and i_j in the
    scheduling sequence).
end for
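The pairwise-interchange improvement can be sketched as follows; `evaluate` stands in for re-running the DLS platform allocation on a candidate sequence and returning its mission completion time, or None if the sequence is infeasible (a hypothetical callback, not part of the paper):

```python
def improve(sequence, evaluate):
    """One pass of pairwise interchange: for each position n, try swapping task
    i_n with each later task i_j and keep the best feasible improving swap."""
    seq = list(sequence)
    for n in range(len(seq) - 1):
        best_len, best_j = evaluate(seq), None
        for j in range(n + 1, len(seq)):
            cand = list(seq)
            cand[n], cand[j] = cand[j], cand[n]
            length = evaluate(cand)
            if length is not None and (best_len is None or length < best_len):
                best_len, best_j = length, j
        if best_j is not None:
            seq[n], seq[best_j] = seq[best_j], seq[n]
    return seq

# Toy evaluator (illustrative only): position-weighted cost, so the loop has
# something to minimize; it favors putting large tasks first.
toy = lambda s: sum((k + 1) * v for k, v in enumerate(s))
print(improve([1, 2, 3], toy))  # -> [3, 2, 1]
```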
Analyzing the scheduling results and the resource requirement/capability data for Example 1 shows
that the schedule in Fig. 11 is optimal. This follows from the fact that tasks 1 and 2 must both be
processed on platform 2. Therefore, the fastest way to process these tasks is such that their finish
times are 30 and 90.14 time units (since platform 2 must travel from one of them to the other).
When the finish time of task 2 is 90.14, the earliest finish time for task 16 is 135.14 time units (which
is equal to the mission completion time in Fig. 11). When the finish time of task 1 is 90.14, the
earliest finish time for task 15 is equal to the mission completion time of Fig. 11, making it
impossible to create a shorter schedule.
3.2 Resource Allocation

Given the data from Phase I, platforms are clustered into groups to be assigned to DMs. The
objective is to minimize the DM coordination workload associated with the DM-platform-task
assignment. The workload is defined as a weighted sum of the internal and direct one-to-one
external coordination, as well as the task workload.
A DM n is assigned to a task i if and only if it is assigned to some platform that was assigned to
process this task (this information was obtained in Phase I). Therefore, the following constraints
are introduced (called DM assignment constraints):

dt_ni ≥ w_im · dp_nm,  ∀ m = 1,..,K

The inequality becomes tight for some platform m; that is, we would have
dt_ni = max_{m=1,..,K} w_im · dp_nm after optimization (which is exactly the definition of the
variable dt_ni).
Two DMs n and m coordinate through task i if and only if they are both assigned this task. This
means that ddt_nmi = min(dt_ni, dt_mi). Therefore, the following constraints are introduced (called
DM external coordination constraints):

ddt_nmi ≥ dt_ni + dt_mi − 1

The right-hand side is equal to 1 if and only if dt_ni = dt_mi = 1. Whenever this is not true, DMs n
and m do not coordinate over task i and the variable ddt_nmi = 0 (the right-hand side is ≤ 0).
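Because the problem minimizes the coordination workload, the optimizer pushes each ddt_nmi to its smallest feasible value; a quick exhaustive check confirms that the linear constraint then reproduces the min(·,·) behavior on all binary inputs:

```python
from itertools import product

# The smallest ddt in {0,1} satisfying ddt >= dt_n + dt_m - 1 equals min(dt_n, dt_m).
for dt_n, dt_m in product([0, 1], repeat=2):
    ddt = max(0, dt_n + dt_m - 1)   # smallest feasible value under the constraint
    assert ddt == min(dt_n, dt_m)
print("linearization matches min() on all four binary cases")
```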
The number of tasks assigned to a DM n is equal to ∑_{i=1}^{N} dt_ni, and the internal workload of
DM n is determined by the number of platforms assigned to it, ∑_{m=1}^{K} dp_nm. The following
threshold constraints are introduced:

∑_{i=1}^{N} dt_ni ≤ B_T

∑_{m=1}^{K} dp_nm ≤ B_I

∑_{z=1,z≠n}^{D} ∑_{i=1}^{N} ddt_nzi ≤ B_E

The maximal weighted coordination workload is
max_{n=1,..,D} [ W_I · ∑_{m=1}^{K} dp_nm + W_E · ∑_{z=1,z≠n}^{D} ∑_{i=1}^{N} ddt_nzi ]. Therefore,

C_W ≥ W_I · ∑_{m=1}^{K} dp_nm + W_E · ∑_{z=1,z≠n}^{D} ∑_{i=1}^{N} ddt_nzi,  ∀ n = 1,..,D
The objective of Phase II is to minimize C_W. This results in a binary linear programming problem:

min C_W

subject to

dt_ni ≥ w_im · dp_nm,  m = 1,..,K; n = 1,..,D; i = 1,..,N
ddt_nmi ≥ dt_ni + dt_mi − 1,  n, m = 1,..,D; i = 1,..,N
∑_{i=1}^{N} dt_ni ≤ B_T,  n = 1,..,D
∑_{m=1}^{K} dp_nm ≤ B_I,  n = 1,..,D
∑_{z=1,z≠n}^{D} ∑_{i=1}^{N} ddt_nzi ≤ B_E,  n = 1,..,D
C_W ≥ W_I · ∑_{m=1}^{K} dp_nm + W_E · ∑_{z=1,z≠n}^{D} ∑_{i=1}^{N} ddt_nzi,  n = 1,..,D
dt_ni, dp_nm, ddt_nzi ∈ {0,1}
Note that the variables [dp_nm] determine all the parameters (the other variables and all the
constraints) in the problem. This kind of problem structure makes it easier to apply optimal
algorithms. Again, as in section 3.1.2, optimal algorithms such as dynamic programming and
decomposition algorithms can be used to find the optimal solution.
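To make the constraint logic concrete, the dependent variables can be derived directly from a candidate platform assignment via the defining relations dt_ni = max_m w_im ⋅ dp_nm and ddt_nmi = min(dt_ni, dt_mi). A minimal Python sketch; the dimensions, w, dp, and weights below are illustrative, not taken from the paper's example:

```python
# Derive Phase II dependent variables from a platform-to-DM assignment.
# w[i][m]  = 1 if platform m processes task i (Phase I output)
# dp[n][m] = 1 if platform m is assigned to DM n
# Illustrative data: N=3 tasks, K=3 platforms, D=2 DMs.
w  = [[1, 0, 0],
      [0, 1, 1],
      [1, 0, 1]]
dp = [[1, 1, 0],
      [0, 0, 1]]
N, K, D = 3, 3, 2
WI, WE = 1, 2  # internal / external coordination weights (illustrative)

# dt[n][i] = 1 iff DM n owns some platform that processes task i
dt = [[max(w[i][m] * dp[n][m] for m in range(K)) for i in range(N)]
      for n in range(D)]

# ddt[n][z][i] = 1 iff DMs n and z are both assigned task i
ddt = [[[min(dt[n][i], dt[z][i]) for i in range(N)]
        for z in range(D)] for n in range(D)]

# CW = maximal weighted coordination workload over all DMs
CW = max(WI * sum(dp[n]) +
         WE * sum(ddt[n][z][i] for z in range(D) if z != n
                  for i in range(N))
         for n in range(D))
print(dt, CW)  # dt = [[1, 1, 1], [0, 1, 1]], CW = 6
```

An optimizer then only needs to search over [dp_nm]; everything else follows as above.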
3.2.2 Sub-optimal Algorithm: Hierarchical Clustering
Assume that two DMs, n and m, are assigned the platform sets {n_1,..., n_U} and {m_1,..., m_V},
respectively, with the corresponding internal workloads U and V. We define the assignment
signature vector for each such DM (group of platforms):
Q_n = [q_n, I_n1,..., I_nN]
Here, the variables I_ni are determined as I_ni = max_{m ∈ DM n} w_im. Then U = q_n, V = q_m, and
the external communication between DMs n and m is Σ_{i=1}^{N} min(I_ni, I_mi).
Suppose that two platform groups C_1 = {n_1,..., n_U} and C_2 = {m_1,..., m_V} are to be combined
into a new cluster at the next step of the algorithm. This would produce a decrease in external
coordination for the other DMs (because the coordination with one of the two groups is
eliminated). The decrease is largest when the vectors [I_n1,..., I_nN] and [I_m1,..., I_mN] are
identical and equal to [1,…,1].
Clearly, we want to combine groups that are "close" in this sense (that carry close assignment
signatures). Note that if these vectors have all distinct entries, then the other DMs' external
workloads would not decrease after the two groups are joined. The "closeness" is defined as the
number of 1's in the same positions in the signature vectors [I_n1,..., I_nN] and [I_m1,..., I_mN].
Also, when two groups are combined, the number of platforms (that is, the internal workload) of
the new group is the sum of the two old group sizes. We want a trade-off between maximizing the
"closeness" of the groups and minimizing the new group size. This is done by minimizing a
weighted function using the weights for internal and external coordination. The distance (∗)
between two clusters {n_1,..., n_U} and {m_1,..., m_V} is then defined as this weighted function.
The method of combining clusters in this way is called hierarchical clustering. The algorithm is
as follows.
Step 1. Begin by assigning each platform to a distinct cluster. Define the assignment signature vector of each
cluster m = 1,..,K as Q_m = [1, w_1m,..., w_Nm] (where w_im are the platform-task assignment variables obtained
in phase I). Define the distance between any two clusters as in (∗).
Step 2. Choose the two clusters with minimum distance between them and combine them into a single cluster.
Update the signature vectors and the distance matrix. If two clusters with signature vectors Q_n = [q_n, I_n1,..., I_nN]
and Q_m = [q_m, I_m1,..., I_mN] are joined together, the new cluster has the signature
Q = [q_n + q_m, max(I_n1, I_m1),..., max(I_nN, I_mN)].
Step 3. If the number of clusters is equal to D (the number of available DMs), the algorithm terminates;
otherwise, return to Step 2.
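The clustering steps above can be sketched in Python. Since the distance formula (∗) is not reproduced here, the sketch assumes one plausible form, WI ⋅ (combined group size) − WE ⋅ (closeness); the paper's exact distance may differ:

```python
# Hierarchical clustering of platforms into D clusters (sketch).
# A signature Q = (size, I) where I[i] = 1 iff some platform in the
# cluster processes task i. The distance below is an ASSUMED form:
# WI * (combined size) - WE * (number of matching 1's).
WI, WE = 1, 2

def distance(qa, qb):
    size_a, Ia = qa
    size_b, Ib = qb
    closeness = sum(1 for x, y in zip(Ia, Ib) if x == 1 and y == 1)
    return WI * (size_a + size_b) - WE * closeness

def cluster(w, D):
    """w[i][m] = 1 if platform m processes task i; returns D clusters."""
    K = len(w[0])
    clusters = [[m] for m in range(K)]                       # Step 1
    sigs = [(1, [w[i][m] for i in range(len(w))]) for m in range(K)]
    while len(clusters) > D:                                 # Step 3
        # Step 2: pick the closest pair of clusters and merge them
        a, b = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda p: distance(sigs[p[0]], sigs[p[1]]))
        clusters[a] += clusters.pop(b)
        sb = sigs.pop(b)
        sigs[a] = (sigs[a][0] + sb[0],
                   [max(x, y) for x, y in zip(sigs[a][1], sb[1])])
    return clusters, sigs
```

For example, `cluster([[1,1,0,0],[0,0,1,1],[1,0,0,1]], 2)` groups platforms {0,1} and {2,3}, since those pairs share assigned tasks.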
Example (continued).
Given the results obtained in the scheduling phase, platforms are hierarchically clustered into
D=5 clusters to be assigned to the 5 available DMs. For internal and external coordination
workload weights WI=1 and WE=2, the resulting coordination network is shown in Figure 12. For
workload weights WI=3 and WE=2, the resulting coordination network is a tree, shown in Figure 13.
The DM-platform assignments and cluster signature vectors for these two examples are shown in
Figure 14.
Figure 12. DM-platform allocation and Required Coordination for WI=1, WE=2
Figure 13. DM-platform allocation and Required Coordination for WI=3, WE=2
Figure 14. Clusters, platforms, and signature vectors (the first signature entry is the cluster size).

WI=1, WE=2:
  C1: platforms 1 2 3 16;       signature [4; 1 1 0 0 0 1 0 0 1 1 0 0 0 0 0 0 1 0]
  C2: platforms 4 17 19 20;     signature [4; 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1]
  C3: platforms 5 11 12 18;     signature [4; 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 1 0 0]
  C4: platforms 6 8 10 15;      signature [4; 1 1 0 1 0 0 1 0 0 0 0 0 1 1 0 0 0 0]
  C5: platforms 7 9 13 14;      signature [4; 0 0 1 0 1 0 0 1 0 0 1 1 0 1 0 1 1 0]
WI=3, WE=2:
  C2: platforms 3 5 11 12 13 18; signature [6; 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 1 0 0]
  C3: platforms 4 17 19 20;     signature [4; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1]
  C4: platforms 6 8;            signature [2; 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
  C5: platforms 7 14;           signature [2; 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0]
In the problem formulation, we introduce a dummy node 0 that serves as a single-link root node.
After the optimization is done, it is deleted from the tree while maintaining the tree structure.
The use of "direct" links accounts for the need to structure the hierarchy level by level: direct
links exist only from a higher level to the next lower level. The level structure of the
hierarchy is changed afterwards to place the DM with the smallest workload at the root of the
tree.
The following parameters are used (with the outputs of phase II):
c_mn = required coordination between DMs n and m, given by c_mn = Σ_{i=1}^{N} ddt_mni;
dd_mn = 1 if DMs n and m must communicate, given by dd_mn = max_{i=1,..,N} ddt_mni;
i_n = internal workload of DM n, given by i_n = Σ_{m=1}^{K} dp_nm.
The number of edges in a tree is equal to the number of nodes minus 1. Because we have a dummy
root node, the number of nodes in our graph is D+1. The deletion of the dummy node should not
disconnect the network; since the structure on the nodes 1,..,D should also be a tree, the
following additional constraint is introduced:
Σ_{i,j=1}^{D} x_ij = D − 1
As mentioned earlier, a node at any level (except for the root) has a single connection to a node
in the previous level. This means that there is only one link into each non-root node (for each i
there exists only one j such that x_ji = 1), while the root node does not have any in-links.
Therefore, the following constraints are introduced:
Σ_{j=0}^{D} x_j0 = 0;  Σ_{j=0}^{D} x_ji = 1, i = 1,.., D
If node i is at level l_i and there is a directed edge from i to j (that is, x_ij = 1), then node
j is at level l_i + 1. This is enforced by the level constraints
l_j ≥ l_i + 1 + (x_ij − 1)(D + 1), i, j = 0,.., D
When x_ij = 1, the constraint reduces to l_j ≥ l_i + 1; when x_ij = 0, it is not binding (a node's
level cannot be more than the number of edges, hence the right-hand side is ≤ 0). These
constraints are also "non-cycling", implying that they impose a tree structure on the
organization.
If DMs i and j must coordinate, they are either connected directly or through some other DM k,
and this connection is unique. Therefore, we obtain the following constraints:
x_ij + Σ_{k=1}^{D} z_ijk ≥ dd_ij, i, j = 1,.., D
Whenever z_ijk = 1, there are links between i and k and between j and k (in some direction). We
have an edge between nodes i and k if and only if x_ik + x_ki = 1. Since the level constraints
prohibit having two edges (in opposite directions) between any two nodes, we have the following
relation between the variables x_ij and z_ijk:
x_ik + x_ki + x_jk + x_kj ≥ 2 z_ijk, i, j, k = 1,.., D
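These constraints can be sanity-checked for a candidate hierarchy by treating z_ijk = 1 whenever k is a common neighbor of both i and j. A small illustrative sketch (the function name and data are assumptions, not the paper's):

```python
# Check that every required coordination dd[i][j] is realized either
# by a direct link or through a common neighbor k (the z_ijk case).
# x[i][j] = 1 if there is a directed edge i -> j in the hierarchy.
def coordination_ok(x, dd):
    D = len(x)
    # Undirected adjacency: an edge exists iff x_ij + x_ji = 1
    adj = [[x[i][j] or x[j][i] for j in range(D)] for i in range(D)]
    for i in range(D):
        for j in range(D):
            if dd[i][j]:
                via_k = any(adj[i][k] and adj[j][k]
                            for k in range(D) if k not in (i, j))
                if not (adj[i][j] or via_k):
                    return False  # constraint x_ij + sum_k z_ijk >= dd_ij violated
    return True
```

For the chain 0 → 1 → 2, a required coordination between DMs 0 and 2 is satisfied through DM 1; in a longer chain 0 → 1 → 2 → 3, a requirement between DMs 0 and 3 is not.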
The total external workload of a DM is found by adding the indirect external workload to the
direct external coordination workload e_n obtained in phase II; for DM n it equals
e_n + Σ_{i<j} z_ijn ⋅ c_ij. The objective is to minimize W_MAX, the maximal weighted DM workload.
Combining all constraints, our problem is a linear binary programming problem:
min W_MAX
Σ_{i,j=1}^{D} x_ij = D − 1
Σ_{j=0}^{D} x_j0 = 0;  Σ_{j=0}^{D} x_ji = 1, i = 1,.., D
l_j ≥ l_i + 1 + (x_ij − 1)(D + 1), i, j = 0,.., D
x_ij + Σ_{k=1}^{D} z_ijk ≥ dd_ij, i, j = 1,.., D
x_ik + x_ki + x_jk + x_kj ≥ 2 z_ijk, i, j, k = 1,.., D
W_MAX ≥ W_I ⋅ i_n + W_E ⋅ (e_n + Σ_{i<j} z_ijn ⋅ c_ij), n = 1,.., D
x_ij, z_ijk ∈ {0,1}
When the solution to this problem is found, the "dummy" root node is discarded. Then the node
with the smallest workload (the workload of DM n being calculated as
i_n + e_n + Σ_{i<j} z_ijn ⋅ c_ij) is selected to be the root of the organizational hierarchy;
other choices lead to different organizational structures. The levels are then updated
accordingly.
In this section, we present the optimal algorithm due to [Hu, 1982]. The objective is to minimize
the additional (indirect) coordination introduced by the hierarchy. When two DMs i and j
coordinate (their coordination being equal to c_ij) and an edge (a communication link) between i
and j exists in the hierarchy tree, the coordination is direct and is added to each of the
coordinating DMs; the overall coordination in this case is 2 ⋅ c_ij. When there is no direct
link, the (indirect) coordination is also added to all the DMs on the path between them (there
are (# of edges − 1) nodes on this path). Therefore, the overall coordination is
COM(T) = Σ_{i=1}^{D} Σ_{j=i+1}^{D} c_ij ⋅ (# of edges between i and j in the tree T + 1)
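For any candidate tree, COM(T) can be evaluated directly by counting the edges on each pairwise path; a small illustrative sketch (helper and data are not from the paper):

```python
# Evaluate the coordination objective COM(T) for a candidate tree.
# tree: adjacency list over DM nodes 0..D-1; c[i][j]: pairwise coordination.
from collections import deque

def com(tree, c):
    D = len(tree)
    total = 0
    for i in range(D):
        # BFS distances (edge counts) from node i within the tree
        dist = {i: 0}
        q = deque([i])
        while q:
            u = q.popleft()
            for v in tree[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for j in range(i + 1, D):
            total += c[i][j] * (dist[j] + 1)  # c_ij * (# edges + 1)
    return total
```

On the path 0 - 1 - 2 with c_01 = 1, c_12 = 2, c_02 = 3, this gives 1⋅2 + 2⋅2 + 3⋅3 = 15.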
A tree T that minimizes the function COM(T) is called a Gomory-Hu tree (also called an optimal
coordination tree). The following algorithm computes the Gomory-Hu tree (from [Hu, 1982]).
Initialization. Start with |T| = 1: a tree T containing a single clique which consists of all the nodes of the
original network from Phase II.
Step 1. Select a clique G in T which consists of more than one node of the original network. Disconnect this
clique in T (remove all edges incident to this clique in T). This breaks T into several connected components.
Step 2. Create a residual network by condensing each connected component into one clique (node) and expanding
the selected clique.
Step 3. Pick any two nodes i and j (original nodes) from the selected clique and find a minimum cut (X, X̄) in
the residual network, with i ∈ X, j ∈ X̄.
Note: X (and X̄) consist of condensed cliques of T and of nodes of the original network (from clique G).
Step 4. Create two new cliques G1, G2 in the tree T, replacing the selected clique with them:
G1 = {i ∈ G | i ∈ X}, G2 = {j ∈ G | j ∈ X̄}.
Note that G = G1 ∪ G2. The following edges are created in T between these new cliques and the other (old)
cliques of T. For each clique N ∈ T connected to G in T:
a) if N ∈ X, then an edge between N and G1 is created;
b) if N ∈ X̄, then an edge between N and G2 is created.
Step 5. If all cliques of T contain only single nodes of the original network, STOP; otherwise, go to Step 1.
Graphically, the algorithm is represented in Figure 15.
The complexity of the algorithm is polynomial in the number of nodes of the original network (the
number of DMs). In Step 3, a min-cut algorithm (min-cut = max-flow) is used. Algorithms for min-
cut problems include the Ford-Fulkerson algorithm (which can be exponential in the worst case but
performs well in practice), DMKM, and other more sophisticated algorithms with polynomial
complexity (see [Bertsekas, 1998]). When the tree is found, the node with the smallest overall
workload is placed at the root of this tree.
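The min-cut computation of Step 3 can be done with a BFS-based max-flow (Edmonds-Karp); a self-contained sketch on a capacity matrix (the function and data are illustrative, not the paper's implementation):

```python
# Min cut via Edmonds-Karp max flow: BFS augmenting paths over a
# residual capacity matrix. The cut side X is whatever remains
# reachable from s in the final residual network.
from collections import deque

def min_cut(cap, s, t):
    n = len(cap)
    res = [row[:] for row in cap]          # residual capacities
    while True:
        # BFS for an augmenting path s -> t
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break                          # no augmenting path left
        # bottleneck capacity along the path
        f, v = float("inf"), t
        while v != s:
            f = min(f, res[parent[v]][v])
            v = parent[v]
        # push flow: update residual capacities in both directions
        v = t
        while v != s:
            res[parent[v]][v] -= f
            res[v][parent[v]] += f
            v = parent[v]
    X = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in X and res[u][v] > 0:
                X.add(v)
                q.append(v)
    return X                               # nodes on the s-side of the cut
```

For a network with edges 0→1 (cap 3), 0→2 (2), 1→2 (1), 1→3 (2), 2→3 (3), the minimum 0-3 cut isolates node 0 with capacity 5.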
Example (continued).
The network constructed from the coordination data obtained using workload parameters WI=1,
WE=2 is given in Figure 16.
The step-by-step hierarchy structuring process is shown in Figure 17. In the final step, the node
with minimal workload is chosen to be at the root of the tree. The resulting organizational
structure is shown in Figure 18. (Other choices will result in different organizational
structures.)
Choosing the node with the smallest weighted workload to be placed at the root of the tree is
only one way to structure the organization. Note that the workload parameters WI=3, WE=2 produce
a tree-structured coordination network. Choosing the node with the smallest workload to be the
root results in the hierarchy depicted in Figure 18. On the other hand, choosing DM 3 results in
a hierarchy that can be viewed as more "responsive", because the commander (the root node) has
closer access to the other DMs (Figure 20). Selecting DM 2 to be the root results in an even
better hierarchy, with the commander having direct access to all but one DM (Figure 21).
Figure 20. Organizational hierarchy for WI=3, WE=2 (with DM 3 as the root node).
Figure 21. Organizational hierarchy for WI=3, WE=2 (with DM 2 as the root node).
An alternative is to use a maximal spanning tree algorithm to construct the organizational
hierarchy tree. We obtain the tree T that maximizes Σ_{(i,j)∈E(T)} c_ij, where E(T) denotes the
set of edges of the tree T. This can be done by applying a minimum spanning tree algorithm: the
maximum spanning tree problem with edge weights c_ij transforms into a minimum spanning tree
problem with edge weights a_ij = c_max − c_ij, where c_max = max{c_ij}. Methods for finding the
minimal spanning tree include those of Kruskal, Jarnik-Prim-Dijkstra, and Borůvka (see
[Bertsekas, 1998], [Hu, 1982]).
The algorithm proceeds as follows.
Step 1. Select the edge with maximum coordination that does not create a cycle in the network.
Step 2. If ties occur, select the coordination link connected to the DM with minimal workload.
The idea behind the algorithm is to include the largest coordination links and to place the DMs
with the largest workload at the lowest levels of the hierarchy tree.
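A sketch of Kruskal's algorithm for this maximal spanning tree, processing edges in decreasing order of c_ij (equivalent to increasing order of the transformed weights a_ij = c_max − c_ij); the tie-breaking rule of Step 2 is omitted for brevity, and the data in the usage example is illustrative:

```python
# Maximal spanning tree by Kruskal's algorithm: consider edges in
# decreasing c_ij order, skipping any edge that would close a cycle
# (tracked with a union-find structure over the DM nodes).
def max_spanning_tree(D, edges):
    """edges: list of (c, i, j) triples; returns the list of tree edges."""
    parent = list(range(D))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path compression
            u = parent[u]
        return u
    tree = []
    for c, i, j in sorted(edges, reverse=True):  # largest coordination first
        ri, rj = find(i), find(j)
        if ri != rj:                       # Step 1: skip cycle-closing edges
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

For instance, with edges (c, i, j) = (5,0,1), (4,1,2), (3,0,2), (6,2,3) on four DMs, the tree keeps the three heaviest edges and discards (0,2), which would close a cycle.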
Example (continued).
Constraining the depth of command to be at most 2, for the workload weights WI=1, WE=2, we
obtain the tree shown in Figure 22.
Figure 22. Organizational hierarchy for WI=3, WE=2 using maximal spanning tree.
4. Conclusion
In this paper, we have presented the formulations and algorithms for three distinct phases of our
organizational design process. Strict mathematical problem formulations provide the foundation
for exploring ways to solve these problems with a required degree of optimality and for choosing
specific algorithmic approaches according to the available computational resources. The problems
discussed are NP-hard, but their formulations allow one to introduce near-optimal polynomial-time
algorithms.
Our current efforts are focused on conducting a comparative analysis of various optimization
algorithms for specific design problems and on defining criteria for classifying multi-objective
optimization problems into groups that require a particular optimization sequence. This would
allow us to reduce the solution complexity for large-scale organizational design problems.
Quantifying a set of user-defined performance measures provides the criteria for evaluating an
organizational design. These measures are aggregated to define an objective function for the
design procedure. They also define measures of organizational robustness (i.e., the ability of an
organization to maintain the required level of performance despite variations in its task
environment) and of adaptability (i.e., the ability of an organization to adapt to environmental
changes and functional failures). Developing fast algorithms for real-time analysis of feasible
adaptation options, suggesting suitable forms of adaptation, and determining an appropriate
transition sequence for reconfiguration would provide a computational framework for on-line
adaptation in C2 systems.
5. References
[Baruah, 1998] S.K. Baruah. The Multiprocessor Scheduling of Precedence-constrained Task Systems in the
Presence of Interprocessor Communication Delays. Operations Research, Vol. 46, No. 1, January-February, 1998,
65-72
[Barnhart et al., 1998] C. Barnhart et al. Branch and Price: Column Generation for Solving Huge Integer
Programs. Operations Research, Vol. 46, No. 3, May-June 1998, 316-329
[Bertsekas, 1998] D.P. Bertsekas. Network Optimization: Continuous and Discrete Models. 1998
[Bertsimas, 1997] D. Bertsimas and J.N. Tsitsiklis. Introduction to Linear Optimization. 1997
[D. Bertsekas et al., 1992] D. Bertsekas et al. Data Networks. 1992
[Burns et al., 1993] W.J. Burns and R.T. Clemen. Covariance structure models and influence diagrams.
Management Science, vol. 39, 1993, 816-833
[Chan et al., 1998] L.M.A. Chan et al. Parallel Machine Scheduling, Linear Programming, and Parameter List
Scheduling Heuristics. Operations Research, Vol. 46, No. 5, Sept-Oct 1998, 729-741
[Carley et al., 1995] K.M. Carley and Z. Lin. Organizational Design Suited to High Performance Under Stress.
IEEE Transactions SMC, Vol. 25, 1995, 221-231
[Cheng et al., 1990] T.C.E. Cheng and C.C.S. Sin. A State-of-the-art Review of Parallel-Machine Scheduling
Research. European Journal of Operational Research, 47, 1990, 271-292
[Cheng et al., 1994] T.C.E. Cheng and Z.-L. Chen. Parallel-Machine Scheduling Problems with Earliness and
Tardiness Penalties. Journal of Operations Research Society, Vol. 45, No. 6, 1994, 685-695
[Current et al., 1994] J. Current et al. Efficient Algorithms for Solving the Shortest Covering Path Problem.
Transportation Science, Vol. 28, No. 4, November 1994, 317-325
[Curry et al., 1997] M.L. Curry, K.R. Pattipati, and D.L. Kleinman. Mission Modeling as a Driver for the Design
and Analysis of Organizations. Proceedings of 1997 Command and Control Research and Technology Symposium,
Monterey, CA, June 1997
[Diday et al., 1987] E. Diday et al. Recent Developments in Clustering and Data Analysis. Proceeding of the
Japanese-French Scientific Seminar. March, 1987
[Dumas et al., 1995] Y. Dumas et al. An Optimal Algorithm for the Traveling Salesman Problem with Time
Windows. Operations Research, Vol. 43, No. 2, March-April 1995, 367-371
[El-Rewini et al., 1994] H. El-Rewini et al. Task Scheduling in Parallel and Distributed Systems. 1994
[Franca, 1995] P.M. Franca. The m-Traveling Salesman Problem with Minmax Objective. Transportation Science,
Vol. 29, No. 3, August 1995, 267-275
[Fischetti et al., 1997] M. Fischetti et al. A Branch-and-cut Algorithm for the Symmetric Generalized Traveling
Salesman Problem. Operations Research, Vol. 45, No. 3, May-June 1997, 378-394
[Fisher et al., 1997] M.L. Fisher et al. Vehicle Routing with Time windows: Two Optimization Algorithms.
Operations Research, Vol. 45, No. 3, May-June 1997, 488-492
[Fisher, 1994] M.L. Fisher. Optimal solution of Vehicle Routing Problems using minimum K-trees. Operations
Research, Vol. 42, No. 4, July-August, 1994, 626-642
[Graham et al., 1986] D. Graham and H.L.W. Nuttle. A Comparison of Heuristics for a School Bus Scheduling
Problem. Transportation Research, Part B, Vol. 20, No. 2, 1986, 175-182
[Golden et al., 1988] B.L. Golden and A.A. Assad. Vehicle Routing: Methods and Studies. 1988
[Hoffman et al., 1993] K.L. Hoffman and M Padberg. Solving Airline Crew Scheduling Problems by Branch-and
Cut. Management Science, Vol. 39, No. 6, June 1993, 657-682
[Hu, 1982] T.C. Hu. Combinatorial Algorithms. 1982
[Jain, Dubes, 1988] A.K. Jain and R.C. Dubes. Algorithms for Clustering Data. 1988
[Kempel, 1996] W.G. Kempel, D.L. Kleinman and M.C. Berigan. A2C2 Experiment: Adaptation of the Joint
Scenario and Formalization. Proc. 1996 Command and Control Research and Technology Symposium, Monterey,
CA, 1996.
[Kempel, 1996] W.G. Kempel, S.G. Hutchins, D.L. Kleinman, K. Sengupta, M.C. Berigan and N.A. Smith. Early
Experiences with Experimentation on Dynamic Organizational Structures. Proc. 1996 Command and Control
Research and Technology Symposium, Monterey, CA, 1996.
[Kempel, 1997] W.G. Kempel, J. Drake, D.L. Kleinman, E.E. Entin, and D. Serfaty. Experimental Evaluation of
Alternative and Adaptive Architectures in Command and Control. Proceedings of the 1997 Command and Control
Research and Technology Symposium, Washington, DC, June 1997.
[Kleinman et al., 1996] D.L. Kleinman, P. Young, and G.S. Higgins. The DDD-III: A Tool For Empirical
research in Adaptive Organizations. Proceedings of the 1996 Command and Control Research and Technology
Symposium, Monterey, CA, June 1996.
[Lawler, 1976] E.L. Lawler. Combinatorial Optimization: Network and Matroids. 1976
[Lawler et al, 1985] E.L. Lawler et al. The Traveling Salesman Problem. 1985
[Levchuk et al., 1996] Y.N. Levchuk et al. Design of Congruent Organizational Structures: Theory and
Algorithms. Proceedings of 1996 Command and Control Research and Technology Symposium, Monterey, CA,
June 1996
[Levchuk et al., 1997] Y.N. Levchuk et al. Normative Design of Organizations to Solve a Complex mission:
Theory and Algorithms. Proceedings of the 1997 Command and Control Research and Technology Symposium,
Washington, DC, June 1997
[Levchuk et al., 1998] Y.N. Levchuk, K.R. Pattipati , and D.L. Kleinman. Designing Adaptive Organizations to
Process a Complex Mission: Algorithms and Applications. Proceedings of the 1998 Command & Control Research
& Technology Symposium, NPS, Monterey, CA, June 1998.
[Levchuk et al., 1999a] Y.N. Levchuk, K.R. Pattipati and D.L. Kleinman. Analytic Model Driven Organizational
Design and Experimentation in Adaptive Command and Control. Systems Engineering, Vol. 2, No. 2, 1999.
[Levchuk et al., 1999b] Y.N. Levchuk, Jie Luo, Georgiy M. Levchuk, K.R. Pattipati, and D.L. Kleinman. A Multi-
Functional Software Environment for Modeling Complex Missions and Devising Adaptive Organizations.
Proceedings of the 1999 Command & Control Research & Technology Symposium, NPS, Newport, RI, June 1999.
[Levchuk et al., 2000] G.M. Levchuk, Y.N. Levchuk, Jie Luo, Fang Tu, and K.R. Pattipati. A Library of
Optimization Algorithms for Organizational Design. Dept. of ECE, Univ. of Connecticut, Cyberlab TR-00-102,
Storrs, CT 06269-2157.
[Luenberger, 1984] D.G. Luenberger. Linear and Nonlinear Programming. 1984
[Madsen et al., 1995] O.B.G. Madsen et al. A Heuristic Algorithm for a Dial-a-ride Problem with Time
Windows, Multiple Capacities, and Multiple Objectives. Annals of Operations Research, 60, 1995, 193-208
[Malandraki et al., 1992 ] C. Malandraki and M.S. Daskin. Time Dependent Vehicle Routing Problems:
Formulations, Properties, and Heuristic Algorithms. Transportation Science, Vol. 26, No. 3, August 1992, 185-
199
[Martello, Toth, 1990] S. Martello and P. Toth. Knapsack Problems: Algorithms and Computer Implementations.
1990
[Mingozzi et al., 1997] A. Mingozzi et al. Dynamic Programming Strategies for the Traveling Salesman
Problem with time window and Precedence Constraints. Operations Research, Vol. 45, No. 3, May-June 1997,
365-377
[Nemhauser et al., 1988] G.L. Nemhauser and L.A. Wolsey. Integer and Combinatorial Optimization. 1988
[Papastavrou, 1992] J.D. Papastavrou and M. Athans. On Optimal Distributed Detection Architectures in a
Hypothesis Testing Environment. IEEE Transactions on Automatic Control, Volume 37, 1992, 1154-1169.
[Perdu, Levis, 1997] D.M. Perdu and A.H. Levis. A methodology for Adaptive Command and Control Teams
Design and Evaluation. Proceedings of the 1997 Command and Control Research and Technology Symposium,
Washington, DC, June 1997.
[Pete et al., 1994] A. Pete, D.L. Kleinman, and K.R. Pattipati. Structural congruence of tasks and organizations.
Proceedings of the 1994 Symp. on Command and Control Research and Decision Aids, NPS, Monterey, CA, 1994,
168-175
[Pete et al., 1995] A. Pete, D.L. Kleinman, and K.R. Pattipati. Designing organizations with congruent
structures. Proceedings of the 1995 Symposium on Command and Control Research and Technology, Washington,
DC, 1995
[Pete et al., 1996] A. Pete, K.R. Pattipati, and D.L. Kleinman. Optimization of decision networks in structured
task environments. IEEE Trans. Syst., Man, Cybern., Nov. 1996.
[Pete et al., 1995] A. Pete, D.L. Kleinman, and P.W. Young. Organizational performance of human teams in a
structured task environment. Proceedings of the 1995 Symposium on Command and Control Research and
Technology, Washington, DC, 1995.
[Pete et al., 1998] A. Pete, K.R. Pattipati, D.L. Kleinman, and Y.N. Levchuk. An Overview of Decision Networks
and Organizations. IEEE Trans. Syst., Man, Cybern. May. 1998, pp. 172-192.
[Pinedo, 1995] M. Pinedo. Scheduling: Theory, Algorithms, and Systems. 1995
[Reibman et al., 1987 ] A. Reibman and L.W. Nolte. Design and performance comparison of distributed
detection networks. IEEE Trans. Aerosp. and Electr. Syst., vol. 23, November, 1987, 789-79
[Shachter, 1986] R.D Shachter. Evaluating influence diagrams. Oper. Res., vol. 34, 1986, 871-882
[Shirazi et al., 1990] B. Shirazi et al. Analysis and evaluation of Heuristic methods for static task scheduling. J.
of parallel and distributed computing 10, 1990, 222-232
[Shlapak et al., 2000] Yurij Shlapak, Jie Luo, Georgiy M. Levchuk, Fang Tu, and Krishna R. Pattipati. A Software
Environment for the Design of Organizational Structures. Proceedings of the 2000 Command & Control Research
& Technology Symposium, NPS, Monterey, CA, June 2000.
[Tang et al., 1993] Z.B. Tang, K.R. Pattipati, and D.L. Kleinman. Optimization of Distributed Detection
Networks: Part II. Generalized Tree Structures. IEEE Transactions on Systems, Man and Cybernetics , vol. 23,
1993, 211-221.
[Turek et al., 1992] J. Turek, J. Wolf, K. Pattipati, and P. Yu. Scheduling Parallelizable Tasks: Putting it all on
the Shelf. 1992 ACM Sigmetrics Conference, Newport, R.I., June 1-5, 1992.
[Selvakumar et al., 1994] S. Selvakumar and C.S.R. Murthy. Scheduling Precedence Constrained Task Graphs
with Non-Negligible Intertask Communication onto Multiprocessors. IEEE Transactions on Parallel and
Distributed Systems, Vol. 5, No. 3, March 1994, 328-336
[Solomon, 1987] M.M. Solomon. Algorithms for the Vehicle Routing and Scheduling Problems with Time Window
Constraints. Operations Research, Vol. 35, No. 2, March-April 1987, 254-265
[Taillard et al., 1997] E. Taillard et al. A Tabu Search Heuristic for the Vehicle Routing Problem with Soft Time
Windows. Transportation Science, Vol. 31, No. 2, May 1997, 170-186
[Van de Velde, 1993] S.L. Van de Velde. Duality-Based Algorithms for Scheduling Unrelated Parallel Machines.
ORSA Journal on Computing, Vol. 5, No. 2, Spring 1993, 182-203
[Wolsey, 1998] L.A. Wolsey. Integer Programming. 1998
[Fang, 1993] S. Fang and S. Puthenpura. Linear Optimization and Extensions. Theory and Algorithms. 1993
[Wright, 1998] P.L. Wright and N.J. Ashford. Transportation Engineering: Planning and Design. 1998
[Xu, 1993] J. Xu. Multiprocessor Scheduling of Processes with Release Times, Deadlines, Precedence, and
Exclusion Relations. IEEE Transactions on Software Engineering, Vol. 19, No. 2, Feb 1993, 139-154
[Ying et al., 1997] Ying Jie, Y.N. Levchuk, M.L. Curry, K.R. Pattipati, and D. L. Kleinman. Multi-Functional
Flow Graphs: A New Approach to Mission Monitoring. Proceedings of the 1997 Command and Control Research
and Technology Symposium, Washington, DC, June 1997.
[Zweig, 1995] G. Zweig. An Effective Tour Construction and Improvement Procedure for Traveling Salesman
Problem. Operations Research, Vol. 43, No. 6, November-December 1995, 1049-1057