MISSION COMPUTERS
Brian Sutterfield, John A. Hoschette, Paul Anton
Lockheed Martin MS2 Tactical Systems
Eagan, Minnesota
[Figure: the birth of mission computing, from a single mission computer to a switched network of multiple mission processors. The expanding role for the mission computer requires increasing processing.]
1.A.4-2
Future Mission Requirements

Future mission computer processing requirements are growing at an exponential rate. Figure 3 depicts a rough order of magnitude estimate of mission computer requirements [2]. Early F-16 mission computer requirements were approximately 6 Million Instructions Per Second (MIPS). The latest generation of the F-16 requires nearly 20 gigaflops of performance, the F-22 requires nearly 10 gigaflops, and the latest 5th-generation fighter, the F-35, peaks at just under 200 gigaflops of required performance from the mission processor.
[Figure 3. Rough order of magnitude estimate of mission computer processing requirements: MIPS/MFLOPS on a logarithmic scale versus time, 1970 to 2010, charting F-16 variants and the F-22 against commercial processors such as the 80386, 80486, Pentium, P4 x86, TI-83 calculator, and Cray-2, with the transition from single-processor to multiple-processor mission computers.]
Technologies Needed for Increasing MC Processing Capabilities

Much of the technology required to meet these surfacing needs exists today in commercial and non-military applications. Focused efforts directed at packaging and ruggedizing these technologies in a systematic approach will yield a quantum leap in processing capability. These technologies are:

- Dense Wave Division Multiplexing (DWDM) Optical Bus Technology
- Field-Programmable Gate Array (FPGA) Computing Technology
- Multi-Core Processors
- Standardized Chassis with Integrated Optical and Electrical Back Plane Technology
- Standard Optical Module Form Factor
- Multi-Level Security Hardware and Software
- Modular Software
- Reconfigurable and Scalable Computing Architectures

DWDM Optical Bus Technology

Shown in Figure 5 is the basic principle of DWDM. Using laser diode technology, 32 channels can be sent over a single optical fiber. The receivers are configured to receive specific wavelengths of light, each of which represents a separate channel. Each transmitter is tunable to a specific transmit frequency that operates independently on the fiber bus. A system can therefore carry multiple technologies on a single fiber, including analog, digital, 2 to 40 Gbps, Fibre Channel, or Ethernet, each operating independently of the others. Fiber optics offers high-speed data transmission over lighter, smaller cable and eliminates the specially tuned bulky cables required in copper wire systems. In addition, fiber optics is naturally resistant to EMI and provides many features that directly support multiple levels of security and data separation.
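As a rough illustration, the one-wavelength-per-channel idea can be modeled in a few lines. The class names, wavelengths, and data rates below are invented for illustration and are not drawn from any particular DWDM product:

```python
# Hypothetical model of a DWDM fiber bus: each wavelength carries an
# independent channel, so unrelated protocols share one physical fiber.

from dataclasses import dataclass

@dataclass(frozen=True)
class Channel:
    wavelength_nm: float   # laser diode transmit wavelength
    protocol: str          # e.g. Fibre Channel, Ethernet, analog video
    rate_gbps: float       # 2 to 40 Gbps per the DWDM description

class FiberBus:
    """One optical fiber multiplexing up to 32 wavelength channels."""
    MAX_CHANNELS = 32

    def __init__(self) -> None:
        self._channels: dict[float, Channel] = {}

    def add_channel(self, ch: Channel) -> None:
        if len(self._channels) >= self.MAX_CHANNELS:
            raise ValueError("fiber already carries 32 channels")
        if ch.wavelength_nm in self._channels:
            raise ValueError("wavelength already in use on this fiber")
        self._channels[ch.wavelength_nm] = ch   # channels never interact

    def aggregate_rate_gbps(self) -> float:
        return sum(c.rate_gbps for c in self._channels.values())

bus = FiberBus()
bus.add_channel(Channel(1550.12, "Fibre Channel", 4.0))
bus.add_channel(Channel(1550.92, "Ethernet", 10.0))
print(bus.aggregate_rate_gbps())   # 14.0
```

The point of the sketch is that mixing protocols requires no coordination between channels: each wavelength is an independent bus, which is what enables the data separation the paper cites for multi-level security.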
evolved, the signal processing domains within the avionics systems were quick to leverage these new capabilities. Signal processing applications typically involve processing large amounts of data; this, together with bounds on network speeds, limits how much of a signal processing algorithm can be parallelized. Multi-core processors and network speeds thus become part of the engineering tradeoff specified by Amdahl's law.

Current general purpose processors have somewhat stabilized around the 3 GHz mark. Avionics systems have already effectively focused on leveraging the multi-core market. Avionics systems are generally a good fit for multi-core processors because they have a generous contingent of requirements that can be satisfied with parallel processing or smaller serialized programs running on several processors. A new challenge arises when typical safety and security partitions must be applied to the multi-core processing paradigm. It is not clear that there currently exists a single cohesive strategy to manage Multiple Levels of Security (MLS), Certification Evidence Assurance Levels (CEAL), or safety certifications (DO-178B, DO-254, SEAL) using multi-core processors. In addition, not all multi-core processors are created equal. Some rely heavily on shared resources (shared cache, for instance), while others can be cleaved under hardware control to act as completely independent entities. Regardless of these issues, there is no doubt that multi-core processors will be a significant part of most, if not all, solutions for the foreseeable future [3]. Development of tools and methods that allow avionics suppliers to easily integrate multi-core processors into their systems will be extremely important as we forge ahead on this new path. Shown in Figure 6 is the multi-core roadmap with the estimated number of cores per module [2].
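The tradeoff Amdahl's law captures can be made concrete with a short sketch. The 90% parallel fraction below is an illustrative assumption, not a figure from the paper:

```python
# Amdahl's law: speedup of a workload on n cores when only a fraction p
# of the work can be parallelized.

def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# A signal processing task assumed to be 90% parallelizable:
for n in (2, 4, 8, 64):
    print(f"{n:3d} cores -> speedup {amdahl_speedup(0.90, n):5.2f}")
```

Even with unlimited cores the speedup is capped at 1/(1-p), here 10x, which is why the serial fraction and network speeds, not core count alone, dominate the engineering tradeoff.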
Standardized Chassis with Integrated Optical and Electrical Back Plane Technology

Standard chassis with standard rack sizes will continue to be required. Use of a standard chassis will allow for ease of aircraft installation and use of standard interfaces. High density packaging of processing modules will be required to maximize processing power in limited space, and this will require liquid cooling. Standardization will allow for swapping of cards and easy upgrading when they become available. Shown in Figure 7 is an approach for high density packaging. The modules are packaged side by side and come in standard sizes and thicknesses. Processor heat is drawn out of the modules along the edges through advanced thermal packaging and into liquid-filled chassis walls. The locking mechanism that holds the modules in place also serves as a heat conduit. The chassis walls contain liquid cooling lines to remove heat. This makes module construction easier, since no liquid actually passes through the module.

The key to high density packaging is module standardization, and integrated electrical and optical backplanes are critically important to bring down the size of the chassis. Mission computer power supplies should have the same form factor as processing modules. Integrated optical and electrical backplanes need to be routed in one high density assembly to keep the chassis size down. Shown in Figure 8 is an integrated electrical and optical back plane.

Standard Optical Module Form Factor

By going to a standard shape and construction technique, the modules will be lower cost and more producible. The module construction is shown on the left of Figure 9. At the center of the module is the cooling core, which carries heat from the module to its edges. Processing cards are mounted on either side of the cooling core. Standard covers are used, and the connector interface at the bottom of the card is standard.
[Figure 9. Standard module construction: processor cards and FPGA mezzanine cards mounted on either side of the module under standard covers, with a standard connector interface. Standard construction techniques yield a standard module for processing.]
customized, supporting software from scratch for each new hardware system is very cost-prohibitive. Many strategies for creating software must be used to support software that is reusable, reconfigurable, adaptable, testable, and maintainable.

Key to these strategies is to build upon existing software solutions. Reusing existing code and creating reusable code is a first step. There is a challenge even in this simple requirement: high performance embedded systems tend to prohibit generalized software solutions, customers don't typically want to support requirements that allow for generalized gold-plated solutions, and the rigorous testing required of avionics systems prohibits unused code. However, re-using code is only a small part of the solution. There is far more cost-effectiveness in leveraging existing software tools: compilers, debuggers, profilers, and a myriad of testing tools. Much of this construction on an existing software support foundation is accomplished through use of open software standards that use commercial solutions. Finally, there is an enormous need to manage the software and tools: systems for keeping track of capabilities, versions, compatibilities, and testing coverage, as well as technical and user documentation. Industry standards such as CMMI and SEI, as well as configuration and data management methods, continue to be refined to support this software development and these software tools.

Even after tackling the issues associated with developing and managing reusable code while leveraging commercial software, there are still cost-prohibitive limits associated with transporting the growing one to ten million lines of code in mission system computers. There is the further need to build systems that support automatic code conversion. This includes:

- Building effective middleware layers. These layers insulate software from hardware and hardware from software. In addition, this middleware will also support reconfigurable software architectures to manage changes in processing needs when hardware fails and when mission system needs change. These middleware layers need to include low-level operating system support (OSS) and library support for interfaces between generic operating systems and generic hardware features (e.g., watchdog timers, standard busses, DMA and encryption interfaces, etc.).
- Developing software using tools that use hardware, firmware, software, and operations models to automate the hosting across multiple processors to balance mission processing loads.
- Including debugging tools with standard interfaces such as fault logging, start-up and run-time testing, and cross-processor, run-time, cluster debugging mechanisms for detecting, identifying, and locating failures (which may be induced by hardware failures, software design faults, interface incompatibilities, or unanticipated events).
- Capturing and logging run-time performance data both for adaptive and off-line performance improvements, as well as for hardware and software failure predictions.
- Providing boot-up systems that allow for complete, secure, field-reloadable software.
- Developing and maintaining a comprehensive set of regression tests to evaluate the accuracy of automated code conversion mechanisms against legacy mission system performance.

Reconfigurable and Scalable Architectures

The processing load of the mission computer nodes (i.e., Radar, Electronic Warfare (EW), or Sensors) varies greatly during the mission. As shown on the left side of Figure 10 in a simple example, if the processing load is sized based on static allocation, the size and power of the mission computer quickly become prohibitively large. The total aggregate processing throughput required in this example using static resource allocation would be 240 gigaflops.

In contrast, a reconfigurable mission computer architecture would alter resource usage during the mission and accomplish the processing with far less capacity. This is shown on the right side of Figure 10, where a reconfigurable architecture would need only 138 gigaflops.
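The sizing contrast above can be sketched in a few lines: static allocation provisions every domain for its individual peak, while a reconfigurable architecture only needs the busiest single phase overall. The per-phase loads below are invented for illustration and are not the actual values behind Figure 10:

```python
# Static vs. reconfigurable sizing of a mission computer.

# gigaflops required by each domain during each mission phase (illustrative)
LOADS = {
    "Radar": {"fly-out": 20, "ingress": 80, "egress": 30, "return": 10},
    "EW":    {"fly-out": 20, "ingress": 10, "egress": 20, "return": 60},
    "Data":  {"fly-out": 20, "ingress": 10, "egress": 40, "return": 20},
}

# Static: provision every domain for its own peak, all simultaneously.
static_gflops = sum(max(phases.values()) for phases in LOADS.values())

# Reconfigurable: provision for the worst-case total across phases.
phase_names = next(iter(LOADS.values())).keys()
dynamic_gflops = max(sum(LOADS[d][p] for d in LOADS) for p in phase_names)

print(static_gflops, dynamic_gflops)   # 180 100
```

Because domain peaks never coincide, the reconfigurable bound (peak of sums) is always at or below the static bound (sum of peaks), which is the mechanism behind the paper's 240 versus 138 gigaflops comparison.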
Figure 10. Comparison of Static vs. Dynamic Resource Allocation and Impact on Processor Size
A reconfigurable mission computer allows for the re-assignment of the node processing elements as the mission demands. This is shown pictorially in Figure 11, where the processing nodes of the mission computer are re-assigned to the various domains (i.e., Radar, EW, and Data) at different stages of the mission. In the fly-out portion of the mission the processing load is fairly balanced. During mission ingress, the demand for radar processing increases significantly while the demand for EW and Data decreases. As the mission transitions to the egress stage, the radar processing load diminishes and unused radar processing nodes are reassigned back to Data. Finally, during the mission return stage, the EW load increases significantly while the Radar load is further reduced, allowing further reassignment of processing resources to EW.
[Figure 11. Reassignment of mission computer processing nodes (uP) among the Radar, EW, and Data domains at different mission stages, shown alongside mission computer utilization (0 to 100 percent) for each domain.]
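One way to sketch this stage-by-stage reassignment is a proportional split of a fixed node pool. The pool size and per-stage demand weights below are invented for illustration and do not reproduce Figure 11:

```python
# Reassigning a fixed pool of processing nodes (uP) among the Radar,
# EW, and Data domains as mission stages change.

NODES = 16  # fixed pool of identical processing nodes (illustrative)

# relative processing demand per domain at each mission stage (illustrative)
DEMAND = {
    "fly-out": {"Radar": 1, "EW": 1, "Data": 1},
    "ingress": {"Radar": 4, "EW": 1, "Data": 1},
    "egress":  {"Radar": 1, "EW": 1, "Data": 3},
    "return":  {"Radar": 1, "EW": 4, "Data": 1},
}

def assign(demand: dict[str, int], nodes: int = NODES) -> dict[str, int]:
    """Split the node pool proportionally to demand (largest remainder)."""
    total = sum(demand.values())
    shares = {d: nodes * w / total for d, w in demand.items()}
    alloc = {d: int(s) for d, s in shares.items()}
    # hand the leftover nodes to the largest fractional remainders
    leftover = nodes - sum(alloc.values())
    for d in sorted(shares, key=lambda d: shares[d] - alloc[d], reverse=True):
        if leftover == 0:
            break
        alloc[d] += 1
        leftover -= 1
    return alloc

for stage, demand in DEMAND.items():
    print(stage, assign(demand))
```

Every stage uses the same 16 nodes; only the partition changes, which is how a reconfigurable architecture covers shifting Radar, EW, and Data peaks without adding hardware.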
Conclusion: The Future Mission Computer

Multi-mission aircraft are placing greatly increasing demands on mission computers. The mission computer's network and architecture continue to grow as new subsystems are added and new processing is required. The mission computer must have a highly flexible and versatile computing architecture to meet the ever-changing mission needs with cost-effective processing resources.

By using a methodical and systematic approach and applying existing technology, the future mission computers will be able to make a quantum leap in processing power (>10 teraflops) and therefore support the growth in processing demands. The future mission computer is shown pictorially in Figure 12. Using optically coupled DWDM data buses, the future mission computers will provide an open architecture with no electrical cross bar switches, supporting data bus rates in excess of 10 gigabits/second. Communication between network nodes will support multi-level security with separated levels of classified processing. Data encryption and decryption will be standard features, allowing for safe and guarded operation.

The computer modules will be a standard shape and size based on FPGA and multi-core processor technology supporting high density packaging. Multi-core, FPGA, and hybrid processors will be supported using a standard optical bus architecture. The architecture will allow dynamic resource allocation based on mission needs, allowing the mission computer to accomplish more processing with less hardware.
[Figure 12. The future mission computer: a standard chassis of two-sided modules built around a cooling core, carrying FPGAs and microprocessors (uP) linked through an optical coupler.]
References

[1] Levis, J., B. Sutterfield, R. Stevens, 2006, Fiber Optic Communication within the F-35 Mission System, Avionics Fiber-Optics and Photonics, 2006 IEEE Conference, pp. 12-13.

[2] Data collected via Google search of open literature.

Biographies

Brian Sutterfield is the Chief Engineer for the Tactical Avionics Group of Lockheed Martin in Eagan, Minnesota. Brian was one of the chief architects for the F-35 Integrated Core Processor Mission Computer. Brian is presently leading the development of next-generation mission computer architectures for multiple programs.
John Hoschette is a staff systems engineer in the Tactical Avionics Group of Lockheed Martin in Eagan, Minnesota. John was one of the original developers of the optical network used in both the F-16 and F-35 aircraft.

Paul Anton was the Program Manager for the F-35 mission computer in the Tactical Avionics Group of Lockheed Martin in Eagan, Minnesota. Under his direction the first F-35 mission computers were successfully built and delivered. Paul is presently the Tactical Avionics Business Development Director.

Email Addresses

Brian Sutterfield: brian.sutterfield@lmco.com
John Hoschette: john.hoschette@lmco.com
Paul Anton: paul.j.anton@lmco.com

27th Digital Avionics Systems Conference
October 26-30, 2008