OPTIMIZATION BY SIMULATED ANNEALING: AN EXPERIMENTAL EVALUATION; PART I, GRAPH PARTITIONING
DAVID S. JOHNSON
AT&T Bell Laboratories, Murray Hill, New Jersey
CECILIA R. ARAGON
University of California, Berkeley, California
LYLE A. McGEOCH
Amherst College, Amherst, Massachusetts
CATHERINE SCHEVON
Johns Hopkins University, Baltimore, Maryland
(Received February 1988; revision received January 1989; accepted February 1989)
In this and two companion papers, we report on an extended empirical study of the simulated annealing approach to
combinatorial optimization proposed by S. Kirkpatrick et al. That study investigated how best to adapt simulated
annealing to particular problems and compared its performance to that of more traditional algorithms. This paper (Part
I) discusses annealing and our parameterized generic implementation of it, describes how we adapted this generic
algorithm to the graph partitioning problem, and reports how well it compared to standard algorithms like the Kernighan-Lin algorithm. (For sparse random graphs, it tended to outperform Kernighan-Lin as the number of vertices became large, even when its much greater running time was taken into account. It did not perform nearly so well, however, on
graphs generated with a built-in geometric structure.) We also discuss how we went about optimizing our implementation,
and describe the effects of changing the various annealing parameters or varying the basic annealing algorithm itself.
Subject classifications: Networks/graphs, heuristics: algorithms for graph partitioning. Simulation, applications: optimization by simulated annealing.
Operations Research, Vol. 37, No. 6, November-December 1989
0030-364X/89/3706-0865 $01.25
© 1989 Operations Research Society of America
Figure 2. Bad but locally optimal partition with respect to pairwise interchange. (The dark and light vertices form the two halves of the partition.)
868 /
JOHNSON ET AL.
[Figure: basic notions — optimization problem, feasible solution, cost, optimal solution, local search, simulated annealing.]
The simulated annealing approach was first developed by physicists, who used it with success on the
Ising spin glass problem (Kirkpatrick, Gelatt and
Vecchi), a combinatorial optimization problem where
the solutions actually are states (in an idealized model
of a physical system), and the cost function is the
amount of (magnetic) energy in a state. In such an
application, it was natural to associate such physical
notions as specific heat and phase transitions with the
simulated annealing process, thus further elaborating the analogy with physical annealing. In proposing that the approach be applied to more traditional combinatorial optimization problems, Kirkpatrick, Gelatt and Vecchi and other authors (e.g., Bonomi and Lutton 1984, 1986; White 1984) have continued to speak of the operation of the algorithm in these physical terms.
Many researchers, however, including the authors
of the current paper, are skeptical about the relevance
of the details of the analogy to the actual performance
of simulated annealing algorithms in practice. As a
consequence of our doubts, we have chosen to view
the parameterized algorithm described in Figure 3
simply as a procedure to be optimized and tested, free
from any underlying assumptions about what the
parameters mean. (We have not, however, gone so far
as to abandon such standard terms as temperature.)
Suggestions for optimizing the performance of simulated annealing that are based on the analogy have
been tested, but only on an equal footing with other
promising ideas.
Although simulated annealing has already proved its economic value in practice, as we mentioned in the Introduction, it remains to be seen whether it is truly as good a general-purpose technique as its first proponents claimed. Like local search, it is widely applicable, even to problems it does not solve very well, and since it tends to find better solutions than local optimization does, its applications should proliferate. There remain, however, certain areas of potential concern. First is the question of running time: many researchers have observed that annealing needs large amounts of computation, and this may price it out of contention for some problems. Second is the question of adaptability: it is not always obvious which local optimization heuristic to build on, and even if one is prepared to devote large amounts of running time to simulated annealing, there is no guarantee that the improvement will justify the cost. Underlying both concerns is a more fundamental observation: local search is not the only approach to combinatorial optimization. For many problems it is hopelessly outclassed by constructive techniques, in which a partial solution is successively augmented until a complete solution is obtained. This, in particular, is how efficiently solvable optimization problems such as the Minimum Spanning Tree and the Assignment Problem are solved, and it is also the design principle behind many heuristics that find near-optimal solutions. Furthermore, even when local search is the method of choice, there are ways to improve on it besides annealing, from sophisticated backtracking schemes to simply running the local optimization algorithm repeatedly from different starting solutions and keeping the best solution found.

1.4. Mathematical Results, With Reservations

Whatever one's practical reservations about simulated annealing, the physical analogy upon which it is based has inspired a body of rigorous mathematical analysis; see, for instance, Anily and Federgruen, Gidas (1985), and Mitra, Romeo and Sangiovanni-Vincentelli. These analyses formalize the annealing process mathematically as a Markov chain, and identify cooling schedules that yield limiting distributions over the space of all solutions in which essentially all the probability is concentrated on the optimal ones. The one analysis that has explicitly bounded the rate at which such distributions can be approached (Sasaki and Hajek 1988) shows that the expected convergence time can be exponential even for a simple matching problem. In any case, these results do not seem
to provide much direct practical guidance for the real-world situation in which one must stop far short of
the limiting distribution, settling for what are hoped
to be near-optimal, rather than optimal solutions. The
Il1athematical results do, however, provide intuitive
support to the suggestion that slower cooling rates
(and, hence, longer running times) may lead to better
solutions, a suggestion that we shall examine in some
detail.
1.5. Claims and Questions
The intent of the experiments reported in this paper and its companions has been to subject simulated annealing to rigorous competitive testing in
domains where sophisticated alternatives already
exist, to obtain a more complete view of its robustness
and strength.
PROBLEM-SPECIFIC
1. What is a solution?
2. What are the neighbors of a solution?
3. What is the cost of a solution?
4. How do we determine an initial solution?
GENERIC
1. How do we determine an initial temperature?
2. How do we determine the cooling ratio r?
3. How do we determine the temperature length L?
4. How do we know when we are frozen?
Figure 4. Questions to be answered when implementing simulated annealing.
READ_INSTANCE()
Reads instance and sets up appropriate data structures;
returns the expected neighborhood size N for a solution
(to be used in determining the initial temperature and the
number L of trials per temperature).
INITIAL_SOLUTION()
Constructs an initial solution So and returns cost (So),
setting the locally-stored variable S equal to So and the
locally-stored variable c to some trivial upper bound on
the optimal feasible solution cost.
PROPOSE_CHANGE()
Chooses a random neighbor S' of the current solution S
and returns the difference cost (S ') - cost (S), saving S'
for possible use by the next subroutine.
CHANGE_SOLUTION()
Replaces S by S' in local memory, updating data structures as appropriate. If S' is a feasible solution and cost(S') is better than c, then sets c = cost(S') and sets the locally stored champion solution S* to S'.
FINAL_SOLUTION()
Modifies the current solution S to obtain a feasible solution S″ (S″ = S if S is already feasible). If cost(S″) is no worse than c, outputs S″; otherwise outputs the champion solution S*.
ITERNUM
The number of annealing runs to be performed with this
set of parameters.
INITPROB
Used in determining an initial temperature for the current
set of runs. Based on an abbreviated trial annealing run, a
temperature is found at which the fraction of accepted
moves is approximately INITPROB, and this is used as the
starting temperature (if the parameter STARTTEMP is set,
this is taken as the starting temperature, and the trial run
is omitted).
TEMPFACTOR
This is a descriptive name for the cooling ratio r of
Figure 3.
SIZEFACTOR
We set the temperature length L to be N*SIZEFACTOR, where N is the expected neighborhood size. We hope to
be able to handle a range of instance sizes with a fixed
value for SIZEFACTOR; temperature length will remain
proportional to the number of neighbors no matter what
the instance size.
MINPERCENT
This is used in testing whether the annealing run is frozen
(and hence, should be terminated). A counter is maintained
that is incremented by one each time a temperature is
completed for which the percentage of accepted moves is
MINPERCENT or less, and is reset to 0 each time a solution
is found that is better than the previous champion. If the
counter ever reaches 5, we declare the process to be
frozen.
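Taken together, the subroutines and parameters above make up the parameterized generic algorithm of Figure 3. The following Python sketch is our paraphrase of that loop, not the paper's code: the callback interface, the fixed starting temperature (standing in for the INITPROB trial run), and all names are our own assumptions.

```python
import math
import random

def anneal(propose, apply_move, current_cost, neighborhood_size,
           starttemp=1.0, tempfactor=0.95, sizefactor=16, minpercent=0.02):
    """Generic simulated annealing loop (a sketch of the paper's scheme).

    propose() -> (move, delta): a random neighbor and its cost difference
    apply_move(move): commit the proposed move
    current_cost(): cost of the current solution
    """
    best = current_cost()
    t = starttemp                 # the paper finds this via a trial run
    frozen = 0                    # consecutive "cold" temperatures seen
    while frozen < 5:             # freezing criterion from the paper
        trials = sizefactor * neighborhood_size   # temperature length L
        accepted = 0
        for _ in range(trials):
            move, delta = propose()
            # Metropolis rule: downhill always, uphill with prob e^(-delta/T)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                apply_move(move)
                accepted += 1
                if current_cost() < best:    # new champion resets the counter
                    best = current_cost()
                    frozen = 0
        if accepted / trials <= minpercent:  # MINPERCENT test per temperature
            frozen += 1
        t *= tempfactor           # geometric cooling by TEMPFACTOR
    return best
```

A toy problem (minimizing x² over the integers, with neighbors x ± 1) is enough to exercise the loop; the graph partitioning instantiation replaces the callbacks with the subroutines described above.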
finds significantly better solutions (for details, see Section 3.3). Implemented using ideas from Fiduccia and Mattheyses, the Kernighan-Lin algorithm is fast in practice (Dunlop and Kernighan 1985), and so it represents a potent competitor for annealing, and one that was ignored by Kirkpatrick, Gelatt and Vecchi when they proposed simulated annealing for graph partitioning. (This omission was partially rectified in later work reporting limited experiments with a nonstandard implementation of Kernighan-Lin.)

This section is organized as follows. In 3.1 we discuss the problem-specific details of our implementation of simulated annealing, and in 3.2 we describe the graphs on which our experiments were performed. Section 3.3 then presents the results of our comparisons of annealing with the Kernighan-Lin algorithm and with local optimization. Our annealing implementation clearly outperforms the local optimization heuristic on which it is based, even when relative running times are taken into account. The comparison with Kernighan-Lin is more problematic, and depends on the structure of the graphs involved.
3.1. Problem-Specific Details
The graph partitioning problem has received extensive attention over the years, with applications to circuit design and elsewhere, and it appeals to researchers as a clean testbed for algorithmic ideas. It was proved NP-complete by Garey, Johnson and Stockmeyer (1976), by which time researchers had become convinced that optimal solutions were out of reach and had, hence, concentrated on methods for finding good but not necessarily optimal solutions.
Although the neighborhood structure for graph partitioning described in Section 2.1 has the advantage of
simplicity, it turns out that better performance can be
obtained through indirection. We shall follow
Kirkpatrick, Gelatt and Vecchi in adopting the following new definitions of solution, neighbor, and cost.
Recall that in the graph partitioning problem we
are given a graph G = (V, E) and are asked to find
that partition V = VI U V2 of V into equal sized sets
that minimizes the number of edges that have endpoints in different sets. For our annealing scheme, a
solution will be any partition V = VI U V2 of the
vertex set (not just a partition into equal sized sets).
Two partitions will be neighbors if one can be obtained
from the other by moving a single vertex from one of
its sets to the other (rather than by exchanging two
vertices, as in Section 2.1). To be specific, if (V1, V2) is a partition and u ∈ V1, then (V1 - {u}, V2 ∪ {u}) and (V1, V2) are neighbors. The cost of a partition (V1, V2) is defined to be

    c(V1, V2) = |{{u, v} ∈ E : u ∈ V1 and v ∈ V2}| + α(|V1| - |V2|)²

where |X| denotes the number of elements in the set X, and α is a parameter called the imbalance factor.
Figure 9. Histograms of the cutsizes found for random graph G500 by Simulated Annealing, Kernighan-Lin, and Local Optimization. (The X-axis corresponds to cutsize, the Y-axis to the number of times that cutsize was encountered.)
The best cutsize found (by Local Optimization, as opposed to Annealing) was 232, worse than the cutsizes found in all but a handful of the annealing runs. Thus this simulated annealing implementation is statistically more powerful than the local optimization heuristic on which it is based, even when running times are taken into account. Somewhat less conclusive is the comparative performance of Annealing and the Kernighan-Lin algorithm. Here the histograms overlap substantially when placed on the same axes, and the best-of-k and other order statistics for one algorithm must be weighed against the corresponding statistics for the other. Annealing is by far the slower of the two, exceeding Kernighan-Lin's time by a large factor of average running time, so ideally we should compare one run of Annealing against the best of many runs of Kernighan-Lin, or k annealing runs against the best of correspondingly more Kernighan-Lin runs. Fortunately, there is a way to derive an estimate of the expected best of k runs without having to repeatedly perform groups of k runs and take the average of the best values found.
[Histogram panels: Simulated Annealing, Local Optimization, and Kernighan-Lin; cutsizes range from 200 to 320.]
Table I
Comparison of Annealing and Kernighan-Lin on G500

    k    Anneal        K-L           K-L
         (Best of k)   (Best of k)   (Best of 100k)
    1    213.32        232.29        214.33
    2    211.66        227.92        213.19
    5    210.27        223.30        212.03
   10    209.53        220.49        211.38
   25    208.76        217.51        210.81
   50    208.20        215.75        210.50
  100    207.59        214.33        210.00
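Columns like "Best of k" in Table I can be estimated from a single batch of independent runs, without repeatedly performing groups of k runs: if the n recorded cutsizes are sorted, the minimum of a uniformly random k-subset equals the i-th smallest with probability C(n-i, k-1)/C(n, k). A sketch of that order-statistics estimator (our formulation; the paper does not give code):

```python
from math import comb

def expected_best_of_k(results, k):
    """Estimate E[min of a random k-subset] from n independent run results.

    For sorted results x_(1) <= ... <= x_(n), the minimum of a random
    k-subset is x_(i) with probability C(n-i, k-1) / C(n, k): the other
    k-1 members must come from the n-i larger values.
    """
    xs = sorted(results)
    n = len(xs)
    return sum(comb(n - 1 - i, k - 1) * x        # 0-based index i
               for i, x in enumerate(xs)) / comb(n, k)
```

For example, expected_best_of_k(cutsizes, 25) estimates a "Best of 25" entry from a single collection of recorded runs.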
Table II
Best Cuts Ever Found for 16 Random Graphs

            Expected Average Degree
  |V|      2.5     5.0    10.0    20.0
  124       13      63     178     449
  250       29     114     357     828
  500       52     219     628   1,744
1,000      102     451   1,367   3,389
Table III
Average Algorithmic Results for 16 Random Graphs (Percent Above Best Cut Ever Found)

                        Expected Average Degree
  |V|   Algorithm      2.5     5.0    10.0    20.0
  124   Local Opt     87.8    24.1     9.5     5.6
        K-L           18.7     6.5     3.1     1.9
        Annealing      4.2     1.9     0.6     0.2
  250   Local Opt    101.4    26.5    11.0     5.5
        K-L           21.9     8.6     4.3     1.9
        Annealing     10.2     1.8     0.8     0.4
  500   Local Opt    102.3    32.9    12.5     4.4
        K-L           23.4    11.5     5.8     2.4
        Annealing     10.0     2.2     0.9     0.5
1,000   Local Opt    106.8    31.2    12.5     6.3
        K-L           22.5    10.8     4.8     2.7
        Annealing      7.4     2.0     0.7     0.4
Table IV
Average Algorithmic Running Times in Seconds for 16 Random Graphs

                        Expected Average Degree
  |V|   Algorithm      2.5     5.0    10.0    20.0
  124   Local Opt      0.1     0.2     0.3     0.8
        K-L            0.8     1.0     1.4     2.6
        Annealing     85.4    82.8    78.1   104.8
  250   Local Opt      0.3     0.4     0.8     1.3
        K-L            1.5     2.0     2.9     4.6
        Annealing    190.6   163.7   186.8   223.3
  500   Local Opt      0.6     0.9     1.5     3.2
        K-L            2.8     3.8     5.7    11.4
        Annealing    379.8   308.9   341.5   432.9
1,000   Local Opt      2.4     3.8     6.9    14.1
        K-L            7.0     8.5    14.9    27.5
        Annealing    729.9   661.2   734.5   853.7
Table VI
Average Algorithmic Running Times in Seconds for 8 Geometric Graphs

  |V|   Algorithm       5      10      20      40
  500   Local Opt      1.0     1.6     3.2     7.2
        K-L            3.4     4.8     7.4    11.1
        Annealing    293.3   306.3   287.2   209.9
1,000   Local Opt      2.2     3.7     7.2    18.0
        K-L            7.6    11.9    18.9    28.7
        Annealing    539.3   563.7   548.7  1038.2
Table VII
Estimated Performance of Algorithms When Equalized for Running Times (Percent Above Best Cut Ever Found)

  |V|   Algorithm       5      10      20      40
  500   Locals       647.5   169.6    11.3     0.0
        K-L's        184.5     3.3     0.0     0.0
        5 Anneals    271.6    70.6    11.3    15.5
1,000   Locals      3217.2   442.7    82.5     6.4
        K-L's        908.9    44.4     1.3     0.0
        5 Anneals   1095.0   137.2    15.7     7.2
These conclusions may not extend to graph partitioning instances that are larger or different in character. (The main experiments that follow were performed on our standard graph G500, with a small selection of other graphs used as a form of validation.) As there were too many parameters for us to investigate all combinations of values, we studied just one at a time, in the hope of isolating their individual effects.
Figure 12. Effect of imbalance factor α on running time in seconds for G500.
[Figure: cutsize as a function of (number of trials)/N.]
[Figures: cutsize as a function of (number of trials)/N, and cutsizes as a function of INITPROB.]
Figure 18. Evolution of cutsize when TEMPFACTOR = 0.6634 and SIZEFACTOR = 128.
terminate a run if there are roughly 5·|V|·SIZEFACTOR consecutive trials without an improvement, and the smaller SIZEFACTOR is, the higher the probability that this might occur prematurely. For this paper, we have chosen to stick with our standard parameter values, rather than risk solution degradation for only a small improvement in running time. In practice, however, one might prefer to gamble on the speedup. In Section 6 we shall discuss how such speedups (in conjunction with others) might affect the basic comparisons of Section 4.
4.4. A Final Time/Quality Tradeoff
Table IX
Dependence of Average Cutsize Found on SIZEFACTOR and TEMPFACTOR

SIZE-                              TEMPFACTOR
FACTOR   0.4401  0.6634  0.8145  0.9025  0.9500  0.9747  0.9873
  0.25      -       -       -       -       -       -     234.8
  0.5       -       -       -       -       -     222.1   230.9
  1         -       -       -       -     235.0   225.2   221.4
  2         -       -       -     230.1   224.9   220.1   217.2
  4         -       -     232.4   225.1   220.3   216.0   214.1
  8         -     229.9   223.8   218.7   215.2   214.2   212.4
 16       228.1   223.3   219.6   215.0   213.6   210.8   209.7
 32       229.5   220.8   215.3   214.7   211.9   211.0     -
 64       219.6   217.0   212.9   211.0   211.4     -       -
128       216.1   213.4   212.3   211.9     -       -       -
256       216.2   212.0   211.3     -       -       -       -
512       215.2   212.6     -       -       -       -       -
1,024     210.6     -       -       -       -       -       -
5. MODIFYING THE GENERIC ALGORITHM
Table X
Results of Experiment Comparing the Effect of Allowing More Time Per Run Versus Performing More Runs for the Geometric Graph of Figure 8 (Cutsize as a Function of Time Available)

              NORMALIZED             TIME AVAILABLE
TEMPFACTOR   RUNNING TIME     25     50    100    200    400
  0.99920       100.0          -      -    51.1   42.7   35.5
  0.99893        53.0          -    55.8   45.8   37.8   32.1
  0.99678        28.5        64.6   53.2   43.5   36.6   30.8
  0.99358        15.4        61.5   55.6   46.2   37.0   30.7
  0.98730         8.6        64.6   55.3   47.1   40.0   33.7
  0.97470         4.9        70.2   62.1   54.6   47.0   40.1
  0.95000         2.9        71.6   63.8   56.3   49.6   43.5
[Figure: cutsizes found when varying CUTOFF versus varying INITPROB.]
[Figure: cutsize versus running time in seconds for adaptive and non-adaptive temperature schedules.]
We also experimented with making the generic algorithm choose its temperatures adaptively, rather than according to a fixed geometric schedule.
Figure 21. Correlation between cutsize and percentage of acceptance for the annealing run
depicted in Figure 13.
Let Δ be the size of an uphill move accepted with probability INITPROB at temperature T0. The ith temperature Ti is then chosen so that the probability pi that an uphill move of size Δ will be accepted equals ((C - i)/C)·INITPROB. (This is guaranteed by setting Ti = -Δ/ln(((C - i)/C)·INITPROB) for i < C, with Ti = 0 thereafter; the solution may still be improving at the final scheduled temperature, in which case we will need additional temperatures to ensure freezing according to our standard criterion of five temperatures without improvement.)

This cooling method is intriguing in that it yields time exposures that approximate straight lines (see Figure 23), further confirming our hypothesis that the shapes of such time exposure curves are determined by the cooling schedule. Based on limited experiments, it appears that this cooling technique is far more sensitive to its starting temperature than is geometric cooling. For T0 = 1.3 (corresponding to INITPROB = 0.4 for the graph G500), we were unable to distinguish the results for this cooling method from those for the geometric method when the parameters were adjusted to equalize running time. In particular, for C = 100 and SIZEFACTOR = 8, the running time for linear probability cooling was roughly the same as that for geometric cooling under our standard parameters (346 seconds versus 330), while averaging a cutsize of 213.5 over 50 runs, compared to 213.3 for the standard parameters. However, if we set T0 = 11.3 (corresponding to INITPROB = 0.9), the average cutsize increased significantly (to 220.4), even if we doubled C to take account of the fact that we were starting at a higher temperature. Recall that under geometric cooling, increasing INITPROB above 0.4, although it led to increased running time, had no significant effect on the cutsizes found.
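The linear probability schedule just described is easy to write down explicitly. The sketch below is ours, not the paper's code; it derives the implied uphill move size Δ from T0 (consistent with the text, where T0 = 1.3 pairs with INITPROB = 0.4 and T0 = 11.3 with 0.9) and generates only the first C temperatures, since any extra temperatures needed to reach freezing depend on the run.

```python
import math

def linear_probability_temps(t0, initprob, c):
    """Temperature sequence for linear probability cooling: T_i is set so
    that an uphill move of size delta, accepted with probability INITPROB
    at T_0, is accepted with probability ((c - i)/c) * INITPROB at T_i,
    i.e. T_i = -delta / ln(((c - i)/c) * INITPROB)."""
    delta = -t0 * math.log(initprob)      # move size implied by T_0
    return [-delta / math.log(((c - i) / c) * initprob) for i in range(c)]
```

Because the acceptance probability falls linearly to zero, the temperatures decrease strictly and approach 0 as i approaches C.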
[Figure: cutsize versus (number of trials)/N under linear probability cooling.]
In the standard annealing process, the neighbor S' to be tested is chosen randomly from the set of all neighbors of the current solution; in the case of graph partitioning, each trial independently picks a candidate vertex for moving from one set to the other. Although this has the virtue of simplicity, there are reasons to think it inefficient. Suppose, for instance, that just one of the N possible moves will yield an acceptable solution; then there is a nonnegligible chance that we will have to perform substantially more than N trials before we encounter that move. It has been suggested that, instead of choosing moves independently at each iteration, we introduce dependence so that each successive block of N moves covers the entire neighborhood, while still maintaining a random flavor, simply by choosing a random permutation of the vertices at the beginning of each block and using it to generate the next N moves.

Using this modification, with no other changes from the standard implementation, the running time was essentially unchanged, and the average cutsize found was 212.5, as compared with 213.3 for the standard algorithm. In view of earlier results on the effect of additional running time, the reduction in cutsize obtained by using permutations to generate moves seems comparable to the one we would obtain by doubling the running time of the standard implementation. This conclusion is further supported by experiments with the geometric graph of Figure 8, where the average cutsize found during 100 trials dropped to 20.4 using permutation generation, with no significant change in running time; reducing SIZEFACTOR from 16 brought the average back up to 22.8, but at a running time below that of the standard implementation. From these limited experiments, permutation-based move generation appears to be a promising modification.
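The block-permutation idea can be sketched as a drop-in replacement for independent move generation (our illustration, not the paper's code):

```python
import random

def permutation_moves(vertices):
    """Generate candidate vertices for annealing moves in blocks: each
    block of N = len(vertices) successive proposals is a fresh random
    permutation, so every vertex is tried exactly once per block,
    rather than being drawn independently at each iteration."""
    while True:
        block = list(vertices)
        random.shuffle(block)      # new random order for each block
        yield from block
```

Each call to PROPOSE_CHANGE would then draw its candidate vertex from this generator instead of sampling uniformly at random.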
Table XI
Average Cutsizes Found for Line, K-L, and Annealing (the Latter Two With and Without Good Starting Solutions), Compared to Best Cuts Ever Found

  |V|   Algorithm          5      10      20      40
  500   (Best Found)       -      26     178     412
        Line             38.1    89.4   328.4   627.8
        K-L              36.1    89.7   221.0   436.3
        Line + K-L       12.5    45.0   200.8   442.8
        Anneal           21.1    65.8   247.0   680.3
        K-L + Anneal     22.2    52.4   192.4   507.4
        Line + Anneal    12.9    48.6   217.0   501.1
1,000   (Best Found)       3      39     222     737
        Line             47.2   123.7   381.8  1128.7
        K-L              70.7   157.5   316.7   857.8
        Line + K-L       15.7    64.6   272.0   825.5
        Anneal           41.2   120.4   355.3   963.8
        K-L + Anneal     38.1    99.9   295.0   833.8
        Line + Anneal    15.8    54.7   262.5   839.6
Table XII
Average Speedups Using Approximate Exponentiation, Permutation Generation, and Smoother Cooling (Running Time Ratios)

            Expected Average Degree
  |V|      2.5     5.0    10.0    20.0
  124     0.29    0.28    0.38    0.40
  250     0.31    0.36    0.35    0.41
  500     0.34    0.39    0.41    0.47
1,000     0.34    0.37    0.39    0.49
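Of the three speedups named in Table XII, approximate exponentiation is the most generally applicable: since exp(-Δ/T) is only compared against a uniform random number in the acceptance test, a cheap approximation of the exponential suffices. The paper's exact approximation scheme is not given here; the table-lookup sketch below is one standard way to realize the idea (the grid size and range are our assumptions):

```python
import math

# Precompute exp(-x) on a grid; acceptance tests can then use a table
# lookup instead of calling exp() on every trial. (Our illustration of
# the idea; not the paper's specific approximation.)
GRID = 0.01
TABLE = [math.exp(-i * GRID) for i in range(2001)]   # covers x in [0, 20]

def approx_exp_neg(x):
    """Cheap approximation of exp(-x) for x >= 0, by nearest-grid lookup;
    values beyond the table are treated as 0 (moves effectively rejected)."""
    i = int(x / GRID + 0.5)
    return TABLE[i] if i < len(TABLE) else 0.0
```

The error introduced is far smaller than the randomness already present in the Metropolis test, which is why such shortcuts leave solution quality essentially unaffected.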
Table XIII
Comparison of K-L and C-K-L With Sped-up Annealing (Percent Above Best Cut Ever Found)

                        Expected Average Degree
  |V|   Algorithm      2.5     5.0    10.0    20.0
  124   K-L's          0.0     0.0     0.0     0.0
        C-K-L's        0.0     0.1     0.0     0.0
        5 Anneals      0.0     0.4     0.1     0.2
  250   K-L's          0.1     0.9     0.5     0.2
        C-K-L's        0.0     0.4     0.6     0.2
        5 Anneals      1.8     0.6     0.3     0.0
  500   K-L's          4.0     3.3     1.1     0.7
        C-K-L's        1.9     2.2     1.2     0.8
        5 Anneals      5.7     0.8     0.2     0.2
1,000   K-L's          5.2     4.5     1.8     1.0
        C-K-L's        2.0     3.5     1.6     1.1
        5 Anneals      3.2     0.8     0.2     0.1
AARTS, E. H. L., AND P. J. M. VAN LAARHOVEN. 1985. A New Polynomial-Time Cooling Schedule. In Proc. IEEE International Conference on Computer-Aided Design (ICCAD 85), pp. 206-208.
SASAKI, G. H., AND B. HAJEK. 1988. The Time Complexity of Maximum Matching by Simulated Annealing. J. Assoc. Comput. Mach. 35, 387-403.
Statistical Mechanics Algorithm for Monte Carlo Optimization. 1982. Physics Today 35:5, 17-19 (May).
VAN LAARHOVEN, P. J. M., AND E. H. L. AARTS. 1987. Simulated Annealing: Theory and Applications. D. Reidel, Dordrecht.