MICHAEL T. HEATH AND WILLIAM A. DICK
Center for Simulation of Advanced Rockets, UIUC
1521-9615/00/$10.00 © 2000 IEEE

Safety and reliability are paramount concerns in rocket motor design because of the enormous cost of typical payloads and, in the case of the Space Shuttle and other manned vehicles, for the crew's safety. In the spring of 1999, for example, a series of three consecutive launch failures collectively cost more than US$3.5 billion. The most notorious launch failure, of course, was the tragic loss of the Space Shuttle Challenger and its seven crew members. Thus, there is ample motivation for improving our understanding of solid rocket motors (SRMs) and the materials and processes on which they are based, as well as the methodology for designing and manufacturing them.

The use of detailed computational simulation in the virtual prototyping of products and devices has heavily influenced some industries—for example, in automobile and aircraft design—but to date, it hasn't made significant inroads in rocket motor design. Reasons for this include the market's relatively small size and the lack of sufficient computational capacity. Traditional design practices in the rocket industry primarily use top-down, often one-dimensional modeling of components and systems based on gross thermomechanical and chemical properties, combined with engineering judgement based on many years of experience, rather than detailed, bottom-up modeling from first principles. Moreover, there has been a tendency to study individual components in isolation, with relatively little emphasis on the often intimate coupling between the various components. For example, SPP1—an industry-standard code for analyzing solid propulsion systems—includes a fairly detailed model of the propellant thermochemistry, but no structural analysis and no detailed model of internal flow.

One of our primary goals at the Center for Simulation of Advanced Rockets (CSAR) is to develop a virtual prototyping tool for SRMs based on detailed modeling and simulation of their principal components and the dynamic interactions among them. Given a design specification—geometry, materials, and so on—we hope to be able to predict the entire system's resulting collective behavior with sufficient fidelity to determine both nominal performance characteristics and potential weaknesses or failures. Such a "response tool" could explore the space of design parameters much more quickly, cheaply, and safely than traditional build-and-test methods. Of course, we must validate such a capability through rigorous and extensive comparison with data for known situations to have confidence in its predictions for unknown situations. Although it is unlikely that simulation will ever totally replace empirical methods, it can potentially reduce the cost of those methods dramatically by identifying the most promising approaches before building actual hardware.

Challenges in rocket simulation
Solid propellant boosters are the "heavy lifters" of the space launch industry. Most of the world's large, multistage launch vehicles—including the Ariane, Delta, Titan, and Space Shuttle—employ two or more SRBs in the initial stage to provide 80% or more of the immense thrust needed to lift a payload in excess of 10,000 pounds off the launch pad and propel it the first few tens of miles above Earth. Beyond this point, subsequent stages—typically liquid-fueled—take over into orbit and beyond.

SRMs are notably simpler than liquid rocket engines.2 The latter have far more moving parts (pumps, valves, and so on) and require storing and handling of liquids that might be cryogenic or potentially hazardous. SRMs, though, have almost no moving parts (often only a gimballed nozzle for thrust vector control), and the composite solid propellant (containing both fuel and oxidizer) forms the combustion chamber. The main disadvantage of SRMs is that once ignited, combustion is essentially uncontrollable: the propellant burns at maximum rate until exhausted. Thus, solid motors are ideal for the initial stages of flight, when raw power is more important than finesse, and then liquid-propellant rockets take over for the portions of flight requiring more delicate maneuvering. Despite their relative simplicity, SRMs are still fiendishly complex in terms of the chemical and thermomechanical processes that take place during firing, as well as the design and manufacturing processes required to make them reliable, safe, and effective.

Figure 1 shows a schematic drawing of a typical SRM—the major parts are indicated along with the types of mathematical models that might be used to represent them. Reality, of course, is considerably more complex than this 2D picture, and we note the following major challenges in achieving our goals.

• The complex behavior of SRMs requires fully 3D modeling to capture the essential physics adequately. Examples include the combustion of composite energetic materials; the turbulent, reactive, multiphase fluid flows in the core and nozzle; the global structural response of the propellant, case, liner, and nozzle; and potential accident scenarios such as pressurized crack propagation, slag ejection, and propellant detonation.
• The coupling between components is strong and nonlinear. For example, the loading due to fluid pressure deforms the solid propellant, which changes the geometry of the fluid flow, which in turn affects pressure, and so on. Similarly, the burn rate increases with pressure and vice versa.
• The geometry is complex and changes dynamically as the rocket consumes propellant. The inclusion of slots and fins, which forms a star-shaped cross-section, enhances the amount of burning surface area. Whatever its initial shape, the propellant surface regresses at a pressure-dependent rate as the propellant burns, and discrete representations of the solid and fluid components, as well as the interface between them, must adapt accordingly.
• The spatial and temporal scales are extremely diverse. For example, some processes occur on microsecond time scales, or less, which are entirely infeasible to resolve throughout a two-minute burn of a 125-foot-long rocket.
• Manufacturing and transportation constraints necessitate the use of numerous joints, including field joints where motor segments are assembled at the launch site. This significantly complicates the geometry and structural response of the motor and introduces potential points of failure.
• Modeling and simulating each component is challenging both methodologically and computationally. Although there is considerable experience in the field in modeling the various rocket motor components, a more fundamental understanding of the constitutive and energetic properties of materials and of the processes they undergo requires much greater detail along with terascale computational capacity.
• Modeling and simulating component coupling is even more demanding because it not only requires still greater computational capacity, but also demands that the corresponding software modules interact in a manner that is physically, mathematically, and numerically correct and consistent. When data are transferred between components, they must honor physical conservation laws, mutually satisfy mathematical boundary conditions, and preserve numerical accuracy, even though the corresponding meshes might differ in structure, resolution, and discretization methodology.
• Integrated, whole-system SRM simulation requires enormous computational capacity, currently available only through massively parallel systems that have thousands of processors. Thus, the software integration framework, mesh generation, numerical algorithms, input/output, and visualization tools necessary to support such simulations must be scalable to thousands of processors.

In September 1997, CSAR embarked on an ambitious plan to tackle these daunting challenges and produce a virtual prototyping tool for SRMs.3 This article is a progress report almost two years into our five-year plan.

Our initial plans seemed audacious, but the substantial resources our sponsor (the US Department of Energy's Accelerated Strategic Computing Initiative program) provided let us assemble a team of over 100 researchers, including roughly 40 faculty, 40 graduate students, and 20 staff (research scientists, programmers, and postdoctoral associates), that represented 10 departments across our university. This diverse group provided the broad expertise needed in combustion, fluid dynamics, structural mechanics, and computer science, but it also presented the additional challenge of coordinating a large collaborative project that cuts across traditional departmental boundaries. We organized our effort along basic disciplinary lines without regard to the academic departments of the participants. This has had the salutary effect of inducing collaboration among faculty and students in a given discipline, such as fluid dynamics, regardless of which department they might occupy. Cross-cutting teams—such as System Integration and Validation and Specification—draw members from all four disciplinary groups and require an additional level of collaboration.

Staged approach
We realized from the outset that a project of this complexity would require a staged approach: we would need to learn to walk before we could run (much less fly). The primary axes of complexity in our problem are physical and geometric (see Figure 2). Physical complexity refers to the detail and sophistication of the physical models employed and the degree of coupling among them. Geometric complexity refers to the dimension of the problem and the degree of detail and fidelity in representing a real SRM. In essence, we wish to move along the diagonal of this diagram over time.

[Figure 2. Development follows this staged approach with increasing complexity in component models and coupling. One axis spans physical complexity (weakly coupled, fully coupled, detailed); the other spans geometrical complexity (1D, 2D, 3D, star grain, joints); the GEN0, GEN1, and GEN2 code families lie along the diagonal.]

In this spirit, we defined three successive generations of integrated rocket simulation codes:
MARCH/APRIL 2000 23
Space Shuttle Reusable Solid Rocket Motor

We chose the Space Shuttle Reusable Solid Rocket Motor as our primary simulation target for a variety of reasons, including its national importance, its public visibility, its fairly typical design, and the availability of detailed specifications and extensive test data. We outline here the basic technical facts about the Space Shuttle RSRM.1,2 Figure A shows a composite drawing of the RSRM.

Height: 126.11 ft
Diameter: 12.16 ft
Weight: 149,275 lb empty; 1,255,415 lb full
Case: High-strength D6AC steel alloy, 0.479 in. to 0.506 in. thick
Nozzle: Aluminum nose-inlet housing and steel exit cone, with carbon-cloth phenolic ablative liners and glass-cloth phenolic insulators. Nozzle is partially submerged and is movable for thrust vector control.
Insulation: Asbestos-silica-filled nitrile butadiene rubber
Propellant (material percent by weight):
  Ammonium perchlorate oxidizer: 70
  Powdered aluminum fuel: 16
  Polybutadiene polymer (PBAN) binder: 12
  Epoxy curative agent: 2
  Ferric oxide burn rate catalyst: trace
Propellant grain: 11-point star-shaped perforation in head end of forward segment, aft-tapered cylindrical perforation in remaining segments. Liquid and solid ingredients are first thoroughly mixed into a thick paste, then curative agent is added before mixture is vacuum-cast into a mold and then cured in a "slow" oven for several days. Consistency of the resulting composite solid is similar to that of a pencil eraser.
Igniter: Solid rocket pyrogen igniter mounted in forward end, 47.5-in. long and containing 134 lb of TP-H1178 propellant
Total launch weight: 4.5 million lb (including two SRBs, external tank, orbiter, and payload)
Maximum thrust: 3,320,000 lb force (each SRB)
Acceleration: Lift-off 1.6 g (maximum 3.0 g)
Launch timeline:
  Liquid engines fire: –6.0 sec
  SRB igniter initiated: 0.0 sec
  Lift-off pressure: 564 psia reached at 0.23 sec
  All exposed propellant ignited: 0.3 sec
  Maximum operating pressure: 914 psia reached at 0.6 sec
  Roll program begins: 10 sec
  Star grain burnout: 21 sec
  Liquid engines throttled down: 30 sec
  Mach 1 reached: 40 sec
  Solid propellant burnout: 111 sec
  SRB separation: 126 sec
  Velocity at separation: 3,100 mph
  Altitude at separation: 25 nmi

References
1. Design Data Book for Space Shuttle Reusable Solid Rocket Motor, Thiokol Space Operations, Publication No. 930480, TRW-16881, Revision A, Brigham City, Utah, 1997.
2. A.J. McDonald, "Return to Flight with the Redesigned Solid Rocket Motor," Proc. AIAA/ASME/SAE/ASEE 25th Joint Propulsion Conf., AIAA Paper No. 89-2404, AIAA Press, Reston, Va., 1989, pp. 1–15.
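A rough consistency check of the sidebar's figures takes only a few lines of arithmetic. Note that the liquid-engine (SSME) sea-level thrust of roughly 375,000 lbf per engine is an outside assumption, not a number given in the sidebar.

```python
# Rough consistency check of the sidebar figures (illustrative only).
# The SSME sea-level thrust (~375,000 lbf per engine) is an assumed
# outside value, not taken from the sidebar.

srb_thrust = 3_320_000.0      # lbf per SRB (sidebar: maximum thrust)
ssme_thrust = 375_000.0       # lbf per liquid engine (assumption)
launch_weight = 4_500_000.0   # lb total vehicle (sidebar)

total_thrust = 2 * srb_thrust + 3 * ssme_thrust

# Fraction of liftoff thrust from the two SRBs: about 86%, consistent
# with the article's claim that SRBs provide "80% or more" of thrust.
srb_fraction = 2 * srb_thrust / total_thrust

# Liftoff thrust-to-weight ratio: about 1.7, so the stack can lift off.
thrust_to_weight = total_thrust / launch_weight

print(round(srb_fraction, 2), round(thrust_to_weight, 2))
```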
• GEN0: 2D ideal rocket with steady-state burning at chamber pressure, power law for propellant regression, Euler equations for fluid flow, a rigid case, linearly elastic propellant, and one-way coupling from fluid to solid. We based its physical parameters on the Space Shuttle reusable solid rocket motor (see sidebar). GEN0 was intended primarily as a warm-up exercise.
• GEN1: Fully 3D whole-system simulation code using relatively simple component models, two-way coupling, and reasonably realistic geometry approximating that of the Space Shuttle RSRM. Star grain of Shuttle RSRM is included, but not joints, inhibitors, or cracks. Solid components include viscoelastic propellant and linearly elastic case. Fluid component is an unsteady, viscous, compressible flow, with a large-eddy simulation turbulence model but with no particles, radiation, or chemical reactions in the flow. Combustion model assumes homogeneous surface burning and pressure-dependent regression rate. There is full, two-way aeroelastic coupling between fluid and solid components. Development of GEN1 was expected to span the first three years of the five-year project.
• GEN2: Fully capable rocket simulation tool with detailed component models, complex component interactions, and support for subscale simulations of accident scenarios such as pressurized crack propagation, slag accumulation and ejection, and potential propellant detonation. GEN2 includes more detailed geometric features, such as joints and inhibitors, and also includes more detailed and accurate models for materials and processes based on separate subscale simulations. GEN2 was expected to span the last three years of the five-year project, overlapping with the final year of GEN1.
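GEN0's power-law propellant regression can be sketched in a few lines. The coefficients below are hypothetical placeholders chosen for illustration, not actual RSRM propellant values.

```python
# Sketch of pressure-dependent power-law propellant regression
# (r = a * p**n, Saint-Robert's law). The coefficients a and n are
# hypothetical placeholders, not real RSRM propellant data.

def burn_rate(p, a=0.05, n=0.35):
    """Surface regression rate at chamber pressure p (power law)."""
    return a * p**n

def regress(web, p, dt):
    """Advance the burned web distance over one time step dt."""
    return web + burn_rate(p) * dt

# The burn rate increases with chamber pressure, which is one side of
# the two-way pressure/burn-rate coupling noted among the challenges.
assert burn_rate(900.0) > burn_rate(600.0)
```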
Figure 3. Solids and fluids codes have different approaches to multicomponent simulation.

Rocsolid: finite element; linear elastodynamics; unstructured hexahedral meshes; ALE treatment of interface regression; implicit time integration; multigrid equation solver; F90, MPI parallelism.

Rocflo: finite volume; unsteady, viscous, compressible flow; block-structured meshes; ALE moving boundaries; explicit time integration; 2nd-order upwind total variation diminishing (TVD) scheme; F90, MPI parallelism.
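The explicit/implicit split between Rocflo and Rocsolid in Figure 3 can be illustrated on a model decay equation, du/dt = -k u. This is a generic sketch of the two integration styles, not code from either solver.

```python
# Generic contrast of explicit vs. implicit time integration on the
# model problem du/dt = -k*u (not code from Rocflo or Rocsolid).

def step_explicit(u, k, dt):
    """Forward Euler: cheap per step, but stable only for k*dt < 2."""
    return u * (1.0 - k * dt)

def step_implicit(u, k, dt):
    """Backward Euler: solves (1 + k*dt) * u_new = u; stable for any dt."""
    return u / (1.0 + k * dt)

# With k*dt = 5, far beyond the explicit stability limit, the implicit
# iterate still decays while the explicit one grows without bound.
k, dt = 10.0, 0.5
ui = ue = 1.0
for _ in range(20):
    ui = step_implicit(ui, k, dt)
    ue = step_explicit(ue, k, dt)
```

This asymmetry is why an implicitly integrated solid can take much larger time steps than an explicitly integrated fluid, which in turn motivates subcycling the fluid between solid steps.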
System integration issues

[Figure 6. Gas temperature computed by Rocflo in the star grain region of the Space Shuttle RSRM near onset of steady burning, visualized by Rocketeer. Values range from 3,364 K (magenta) to 3,392 K (red). Temperature is represented as (a) tint on the interior surface of the propellant and as (b) a series of translucent colored isosurfaces in the interior at a slightly later time. The rocket is cut in half along the lateral axis to improve visibility.]

A number of technical issues arise in building an integrated, multicomponent code such as GEN1. First is the overall integration strategy, where the fundamental choice is between modular and monolithic approaches. In building the GEN1 code, we chose a modular or partitioned approach in that we used separately developed component modules and created an interface to tie them together. In such an approach, separately computed component solutions might require subiterations back and forth between components to attain self-consistency. This approach contrasts with a more monolithic strategy in which all the physical components are incorporated into a single system of equations and all the relevant variables are updated at the same time, thereby obviating the need to iterate to self-consistency. Although it has some theoretical advantages, a monolithic approach impedes separate development and maintenance of individual component modules by specialists in the respective areas. The modular approach not only expedites separate development and maintenance, it also allows swapping of individual modules without replacing the entire code or even affecting the other modules. The modular strategy seemed to offer clear practical advantages in our somewhat dispersed organizational setting, as well as potentially letting users include commercial modules when appropriate.

However, even less coupled approaches are commonly used in practice, in which entirely independent codes interact only offline (often with human intervention), perhaps through exchange of input and output data files. By contrast, in our modular GEN1 code, the component modules are compiled into a single executable code and they exchange data throughout a run with subroutine calls and interprocessor communication.

Another important issue is the physical, mathematical, and geometric description of the interface between components, which in our case includes the jump conditions combustion induces. A careful formulation of the interface boundary conditions is necessary to satisfy the relevant conservation laws for mass and linear momentum, as well as the laws of thermodynamics.

Time-stepping procedures are another significant issue in component integration. Here, the time steps for the fluid are significantly smaller than those for the solid. Thus, we employ a predictor-corrector approach in which the fluid is explicitly stepped forward by several (say, 10) time steps, based on the current geometry the solid determines. The resulting estimate of fluid pressure at the future time is then available for taking an implicit time step for the solid. However, the resulting deformation and velocity of the solid change the fluid's geometry, so the time-stepping of the fluid repeats, and so on, until we attain convergence, which usually requires only a few subiterations. Unless iterated until convergence, this scheme is only first-order accurate, and it is "serial" in that only one component computes at a time. Parallel time-stepping schemes that are second-order accurate without subiterations are possible,6 and we plan to investigate these. But because we map both fluid and solid components onto each processor, our current scheme does not prevent us from utilizing all processors.

As we mentioned briefly earlier, data transfer between disparate meshes of different components is another highly nontrivial issue in component integration. In our approach, we let the meshes differ at the interface in structure, resolution, and discretization methodology, and indeed this is the case in GEN1, because the fluid mesh is block-structured, relatively fine, and based on cell-centered finite volumes, whereas the solid mesh is unstructured, relatively coarse, and based on node-centered finite elements. Although in principle the two interface meshes should abut because they discretize the same surface, in practice we can't assume this because of discretization or rounding errors. Thus, we have developed general mesh association algorithms that efficiently determine which fluid points are associated with each element (facet) of the solid interface mesh;7 we then use the local coordinates of the associated element to interpolate relevant field values in a physically conservative manner.

Yet another thorny issue in component integration is partitioning the component meshes for parallel implementation in distributed memory. The block-structured fluid mesh is relatively easy to partition in a highly regular manner, but the unstructured solid mesh is partitioned by a heuristic approach, currently using Metis, which often yields irregular partitions (visit www-users.cs.umn.edu/~karypis/metis for further information). Moreover, because we partition the two component meshes separately, there is no way to maintain locality at the interface—adjacent partitions across the interface may not be placed on the same or nearby processors. In our current approach, this effect complicates the communication pattern and might increase communication overhead, but it has not been a serious drag on parallel efficiency so far. Nevertheless, we plan to explore more global, coordinated partitioning strategies that will preserve locality and perhaps simplify communication patterns.

Software integration framework
Our overarching goal in CSAR is not only to develop a virtual prototyping tool for SRMs, but also to develop a general software framework and infrastructure to make such integrated, multicomponent simulations much easier. Toward this end, we have initiated research and development efforts in several relevant areas of computer science, including parallel programming environments, performance monitoring and evaluation, parallel input/output, linear solvers, mesh generation and adaptation, and visualization.

Our work in parallel programming environments has focused on creating an adaptive software framework for component integration based on decomposition and encapsulation through objects. This environment provides automatic adaptive load balancing in response to dynamic change or refinement, as well as high-level control of parallel components. It is purposely designed to maintain compatibility with component modules in conventional languages such as Fortran 90, and it provides an automated migration path for the existing parallel MPI code base. This work is in part an extension of the previously developed Charm++ system, which has successfully built a large parallel code for molecular dynamics, NAMD.8 Although this framework is not yet mature enough to serve as the basis for the current GEN1 code, results for pilot implementations of the GEN1 component modules using the framework show great promise.

In performance evaluation, we are leveraging ongoing development of the Pablo performance environment,9 which provides capabilities for dynamic, multilevel monitoring and measurement, real-time adaptive resource control, and intelligent performance visualization and steering of distributed, possibly geographically dispersed, applications. We used these tools to instrument the GEN1 code, view the resulting performance data, and relate it back to the application code's call graph to identify performance bottlenecks. We also updated the popular ParaGraph performance visualization tool10 to display the behavior and performance of parallel programs using MPI, and we used it to analyze the GEN1 component modules.

Parallel I/O is often a bottleneck in large-scale simulations, both in terms of performance and of the programming effort required to manage I/O explicitly. For large-scale rocket simulations, we need good performance for collective I/O across many processors for purposes such as periodically taking snapshots of data arrays for visualization or checkpoint/restart. Because we use geographically dispersed machines, we also need efficient and painless data migration between platforms and back to our home site. Panda11 provides all these services—it runs on top of MPI and uses the standard file systems provided with the platforms we use. Its library offers server-driven collective I/O, support for various types of data distributions, self-tuning of performance, and integrated support for data migration that exploits internal parallelism. We've already incorporated Panda into Rocflo, and are doing the same in Rocsolid. Again, early results are promising, and we plan to use Panda to handle parallel I/O within our main visualization tool, Rocketeer (see sidebar).
Figure 7. The propellant surface model employs two sizes of AP particles embedded in a fuel/binder matrix. Mixture models for this matrix account for the smaller AP particles. (a) 3D flames supported by AP and binder decomposition product gases for this configuration appear at the midline region. (b) The primary flame results from burning the binder/AP mixture.
cross-flow, and they provide a significant source of heat as they burn to form Al oxides. Near-wall turbulence plays an important role in the dispersion of Al droplets, and as a result heat release is volumetrically distributed, although dominant mainly in the near-wall region. Al2O3 is the primary product of combustion, and it appears either as fine powder of micron size or as larger residual particles. The deposition of Al2O3 in the form of slag in the submerged nozzle can adversely affect motor performance.

The individual problems just discussed are themselves examples of fluid-solid interactions, albeit on finer scales, for which we will be able to use the same software integration framework as for the global simulation of the SRM. In this way, we hope to leverage much of our current software development effort in component integration, and to be able to spin off these subscale simulations from the larger-scale system simulation in a highly compatible manner.
[Figure 9. Results obtained from a time-dependent 3D simulation of flow and heat transfer from a spherical droplet in cross-flow. The simulation employs a resolution of 81 × 96 × 32 points along the radial, circumferential, and azimuthal directions at Reynolds number Re = U∞D/ν = 350, based on free-stream velocity (U∞) and droplet diameter (D). At this Reynolds number, flow is unsteady with time-periodic vortex shedding. (a) Azimuthal velocity contours at an instant in time, where we can see an imprint of vortex shedding. (b) Temperature contours, where the approaching flow is hotter (red) than the droplet (blue), which is considered isothermal.]

Acknowledgments
We thank our many colleagues at CSAR for their research contributions to this article. This program is truly a collaborative effort based on the technical strengths of many people. We thank Amit Acharya, Prosenjit Bagchi, S. Balachandar, Dinshaw Balsara, John Buckmaster, Philippe Geubelle, Changyu Hwang, Thomas L. Jackson, and Biing-Horng Liou for their contributions to the "New research directions" section. The CSAR research program is supported by the US Department of Energy through the University of California under subcontract B341494.

"Advanced Solid-Rocket Flow Simulation Program ROCFLO," Proc. 38th AIAA Aerospace Sciences Meeting and Exhibit, AIAA Press, Reston, Va., 2000.
6. C. Farhat and M. Lesoinne, "Two Efficient Staggered Procedures for the Serial and Parallel Solution of Three-Dimensional Nonlinear Transient Aeroelastic Problems," Computer Methods in Applied Mechanics and Eng., Vol. 182, Nos. 3 and 4, 2000.
7. X. Jiao, H. Edelsbrunner, and M.T. Heath, "Mesh Association: Formulation and Algorithms," Proc. Eighth Int'l Meshing Roundtable, Tech. Report 99-2288, Sandia Nat'l Labs., Albuquerque, New Mexico, 1999, pp. 75–82.
8. L. Kale et al., "NAMD2: Greater Scalability for Parallel Molecular Dynamics," J. Computational Physics, Vol. 151, No. 1, May 1999, pp. 283–312.
9. L. DeRose et al., "An Approach to Immersive Performance Visualization of Parallel and Wide-Area Distributed Applications," Proc. Eighth IEEE Symp. High-Performance Distributed Computing, IEEE Computer Soc. Press, Los Alamitos, Calif., 1999, pp. 247–254.
10. M.T. Heath and J.A. Etheridge, "Visualizing the Performance of Parallel Programs," IEEE Software, Vol. 8, No. 5, Sept. 1991, pp. 29–39.
11. Y. Cho et al., "Parallel I/O for Scientific Applications on Heterogeneous Clusters: A Resource-Utilization Approach," Proc. 13th ACM Int'l Conf. Supercomputing, ACM Press, New York, 1999, pp. 253–259.

Michael T. Heath is the director of the Center for Simulation of Advanced Rockets at the University of Illinois, Urbana-Champaign. He is also a professor in the Department of Computer Science, the director of the Computational Science and Engineering Program, and a senior research scientist at the National Center for Supercomputing Applications at the university. His research interests are in numerical analysis—particularly numerical linear algebra and optimization—and in parallel computing. He wrote Scientific Computing: An Introductory Survey (McGraw-Hill, 1997), and has served as editor of several journals in scientific and high-performance computing. He received a BA in mathematics from the University of Kentucky, an MS in mathematics from the University of Tennessee, and a PhD in computer science from Stanford University. Contact him at CSAR, 2262 Digital Computer Lab., 1304 West Springfield Ave., Urbana, IL 61801; m-heath@uiuc.edu; www.csar.uiuc.edu.