
FUTURE INTEGRATED MODULAR AVIONICS FOR JET FIGHTER MISSION COMPUTERS
Brian Sutterfield, John A. Hoschette, Paul Anton
Lockheed Martin MS2 Tactical Systems
Eagan, Minnesota

Abstract

Highly flexible and versatile computing architectures are needed to meet the ever-changing jet fighter aircraft mission needs with cost-effective processors. Future avionics mission computers will require a continued commitment to open architectures and modular hardware operating at more than two orders of magnitude greater processing than today's most advanced mission computing solutions. Dense multiplexing busses (optical technology) will be required to interlink multi-core modular processors. Multi-level security will be required to provide safeguards that ensure the integrity of the network. Systems must be architected in a manner that supports open standards while still allowing for ultra high density packaging. Backplanes will contain integrated electrical and optical signals, and progress in modular software development and legacy software reuse will be necessary to maximize the efficiency of integrating new hardware. All technologies required to build the next generation mission computer exist today, and by applying proper focus these technologies can be combined to provide a quantum increase in processing capability with higher reliability and affordable cost.

Introduction

High speed mission computing onboard a modern fighter aircraft is accomplished by the mission computer. The mission computer is networked with other avionics subsystems to aid the pilot in mission planning, mission execution and workload management. As the brains of this linked and integrated avionics system, the mission computer must support a multi-mission aircraft with constantly changing needs to provide the pilot with the best situational awareness possible. Shown in Figure 1 is the advanced multi-mission F-35 fighter. To meet the extreme and ever-changing processing demands of modern fighter aircraft, the mission computer requires high speed throughput, modularity, rapid reconfiguration, and an open architecture.

Figure 1. Advanced F-35 Multi-Mission Jet Fighter

In this paper we present how future integrated modular avionics will meet these needs. The next generation open architecture mission computer will be based on multi-core modular processors and will require high bandwidth, electromagnetic interference (EMI) protected data busses. These data busses will need to support integrated data integrity and multi-level security, and have expansion options that will support multiple current and future protocols. We see needs in the next five to seven years for mission computers requiring hundreds of teraflops of computing performance, versus the hundreds of gigaflops required today. The architecture must be re-configurable to handle high peak processing demands and provide maximum flexibility. Hardware will comply with open standard form factors and support standard chassis interfaces, making modules interchangeable for higher reliability and maintainability. Centralized computing resources and interfaces to avionics subsystems, radar, electronic warfare systems, optical sensors, Global Positioning System (GPS), data storage, Inertial Navigation System (INS), and others will support future growth, degraded modes and reconfiguration. Key technology components to achieve this superior future mission computer have already been built and independently demonstrated in both lab and flight environments.

978-1-4244-2208-1/08/$25.00 ©2008 IEEE.


With the proper focus of these technologies, the next generation of avionic mission computers is easily obtainable within the next few years. The future of jet aircraft mission computers centers on modular, optically networked processors using multi-core technology based on a standard format card and chassis [1].

Evolution of Mission Computing

Figure 2 depicts the evolution of mission computing over its 50-year lifespan. Sensor capability and complexity drive mission demands, which in turn drive the mission computer performance requirements. As requirements increased, specific advances in technology had to be invented and applied to the mission computer to ensure the pilot and the system were capable of meeting the needs of the mission. As early as 1970, engineers realized that a complex mission computer rivaling those used by scientists would be required to provide pilot assistance in managing his or her workload. These early mission computers were simple computers with processing capabilities similar to a hand-held calculator of today. By 1980, mission computers with localized networks were being designed with the intention of networking into other aircraft subsystems and providing additional real-time decision support to the pilot. Industrialized microprocessors, printed wiring boards and single board computers began to be applied to the problem. By 1995, computer processing hardware capability was increasing at such a rate that a common open systems approach began to be applied to the development of the mission computer. Avionics Open System Architectures (AOSA), Optically Networked Electronics (ONE), and VITA standards for avionics were all results of this standardization.

As we look forward past 2010, the demand on mission processors will continue to grow as pilot workload and aircraft missions increase in capability and complexity. We are already seeing mission needs that will require a one to two order of magnitude increase in computing requirements within the next 10 years. Specific examples of this are the desire to connect the pilot and aircraft to the Global Information Grid (GIG) and to link multiple aircraft to provide local, theater and global situational awareness during each mission. As the commercial market struggles to maintain the pace of electronics development, our challenge is to ruggedize and insert the latest technologies to support our pilots in accomplishing their mission.
[Figure 2 is a timeline graphic spanning the 1970s through the 2020s. It traces growing mission needs (increasing pilot workload, multi-sensors and subsystems, expanding missions and complexity, GIG connectivity, multi-aircraft theater awareness), the corresponding evolution of mission computing (from the birth of the mission computer to multiple processors and switched-network mission computing), and the enabling technology improvements (microprocessors, single board computers, switched networks, AOSA, ONE, VITA standards, explosion of microprocessors and memory, FPGAs, specialized and parallel processors, optical buses, high bandwidth multi-core). Its theme: the expanding role of the mission computer requires increasing processing.]

Figure 2. Evolution of Mission Computing

Future Mission Requirements

Future mission computer processing requirements are growing at an exponential rate. Figure 3 depicts a rough order of magnitude estimate of mission computer requirements [2]. Early F-16 mission computer requirements were approximately 6 Million Instructions Per Second (MIPS). The latest generation of the F-16 requires nearly 20 gigaflops of performance, the F-22 requires nearly 10 gigaflops, and the latest 5th generation fighter, the F-35, peaks at just under 200 gigaflops of required performance from the mission processor.

[Figure 3 plots processing performance (0.1 to 1,000,000 MIPS/MFLOPS, logarithmic scale) versus time from 1970 to 2010. Reference points include the TI-83 calculator, the 80386, 80486, Pentium and P4 x86 processors, the Cray-2, and mission computers from the F-16A/B and its upgrades through the F-22, F-35 IOC and F-35 upgrades. An inset of FLOPS required during a mission illustrates the progression from single processors to multiple processors to reconfigurable architectures required to meet maximum demand.]

Figure 3. Future Mission Computer (MC) Requirements
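To make the growth trend concrete, the short C++ sketch below fits a compound annual growth rate to two of the data points cited above and extrapolates it forward. The anchor years (roughly 1980 for the early 6-MIPS F-16 computer and 2006 for the roughly 200-gigaflop F-35-class requirement) and the MIPS-to-FLOPS equivalence are illustrative assumptions for this sketch, not values read from Figure 3.

#include <cmath>
#include <cstdio>

int main() {
    // Illustrative anchor points (assumed years; operations per second).
    const double early_ops  = 6.0e6;    // ~6 MIPS, early F-16 mission computer (~1980 assumed)
    const double recent_ops = 200.0e9;  // ~200 GFLOPS, F-35-class requirement (~2006 assumed)
    const double years      = 2006.0 - 1980.0;

    // Compound annual growth rate implied by the two points.
    const double cagr = std::pow(recent_ops / early_ops, 1.0 / years) - 1.0;
    std::printf("Implied growth: %.1f%% per year\n", cagr * 100.0);

    // Extrapolate the same trend forward to 2015.
    const double ops_2015 = recent_ops * std::pow(1.0 + cagr, 2015.0 - 2006.0);
    std::printf("Extrapolated 2015 requirement: %.2f TFLOPS\n", ops_2015 / 1.0e12);
    return 0;
}

Under these assumptions the extrapolation lands in the single-digit teraflop range, consistent with the expectation stated below that requirements will exceed one teraflop by 2015.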


We expect to see requirements exceeding one teraflop by 2015. The processing demand varies greatly during a typical mission, and this places a significant demand on the processors. No single processor can handle the load. Multiple processors are required, and reconfigurable architectures provide the potential to meet the peak processing demands during the mission while still meeting the available volume and power requirements.

Future mission computers will require reconfigurable and scalable architectures. This development is driven by multi-mission aircraft where many subsystems are required to maintain continuous communication to successfully complete the mission. A typical modern mission computer architecture is shown in Figure 4. Currently, high speed point-to-point networks are employed to support low latency decision-making within the mission system. As the complexity and requirements of the system increase, we expect to see communication needs in excess of the 1-2 GHz required today. Ideally, the system of the future will support a bus topology to facilitate reconfiguration and sharing of resources as well as straightforward access to available data. To support mission requirements, ultra high speed data buses in excess of 10 Gbps will be necessary. In addition, the network must be protected against attack and will require multi-level security, separation of classified data and robust EMI capabilities. In order to make the mission computer more affordable, easier to produce, and more reliable, modular standardized hardware and software are required. The basic statement of the problem is: highly flexible and versatile computing architectures are needed to meet the ever-changing mission needs with cost-effective processors.

Figure 4. Typical Mission Computer Network
Technologies Needed for Increasing MC Processing Capabilities

Much of the technology required to meet these emerging needs exists today in commercial and non-military applications. Focused efforts directed at packaging and ruggedizing these technologies in a systematic approach will yield a quantum leap in processing capability. These technologies are:

- Dense Wave Division Multiplexing (DWDM) Optical Bus Technology
- Field-Programmable Gate Array (FPGA) Computing Technology
- Multi-Core Processors
- Standardized Chassis with Integrated Optical and Electrical Back Plane Technology
- Standard Optical Module Form Factor
- Multi-Level Security Hardware and Software
- Modular Software
- Reconfigurable and Scalable Computing Architectures

DWDM Optical Bus Technology

Shown in Figure 5 is the basic principle of DWDM. Using laser diode technology, 32 channels can be sent over a single optical fiber. The receivers are configured to receive specific wavelengths of light, each of which represents a separate channel. Each transmitter is tunable to a specific transmit frequency that operates independently on the fiber bus. Therefore, a system can have multiple technologies on a single fiber, including analog, digital, 2 to 40 Gbps, Fibre Channel or Ethernet, each operating independently of the others. Fiber optics offers high speed data transmission with lighter weight and smaller cable size, and eliminates the need for the specially tuned bulky cables used in copper wire systems. In addition, fiber optics is naturally resistant to EMI and provides many features that directly support multiple levels of security and data separation.

Figure 5. Basic Principle of DWDM and Capability Compared to Existing Buses
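As a rough illustration of why DWDM is attractive as an avionics backbone, the sketch below performs a back-of-the-envelope calculation (not a model of any particular bus implementation): it computes the aggregate capacity of a 32-channel fiber at the per-channel rates mentioned above and compares it with a legacy 1 Mbps MIL-STD-1553 bus, a common point of comparison that is not itself named in the text.

#include <cstdio>

int main() {
    const int    channels           = 32;                  // wavelengths per fiber, per the DWDM description
    const double per_channel_gbps[] = {2.0, 10.0, 40.0};   // representative per-channel rates
    const double mil_std_1553_mbps  = 1.0;                 // legacy 1553 bus data rate

    for (double rate : per_channel_gbps) {
        const double aggregate_gbps = channels * rate;
        std::printf("%4.0f Gbps x %d channels = %7.0f Gbps aggregate (~%.0fx a 1 Mbps 1553 bus)\n",
                    rate, channels, aggregate_gbps,
                    aggregate_gbps * 1000.0 / mil_std_1553_mbps);
    }
    return 0;
}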


FPGA Computing Technology

FPGAs became heavily used in military avionics systems beginning in 1990. Although initially thought by some to one day replace the general purpose processor, FPGAs have typically been utilized as a supplementary capability for communications and algorithmic acceleration within the bounds of the mission system. Although primarily used as co-processors, FPGAs may hold the key to maintaining Moore's law in the avionics world. With general purpose processors currently ceilinged at 2.5 to 3 GHz, new functionality requires extensive use of parallel processing and multiple processing cores. FPGAs, on the other hand, are still running under 1 GHz, providing ample room for clock speed performance improvement as well as more logic units and additional capability gained through reductions in feature size.

Multi-Core Processors

Typical avionics mission systems have used multiple independent processors for several years.

Additionally, as multi-processor computers evolved, the signal processing domains within the avionics systems were quick to leverage these new capabilities. Signal processing applications typically involve the processing of large amounts of data, which, along with bounds on network speeds, limits how much of a signal processing algorithm can be parallelized. Multi-core processors and network speeds thus become part of the engineering tradeoff specified by Amdahl's law.

Current general purpose processors have somewhat stabilized around the 3 GHz mark. Avionics systems have already effectively focused on leveraging the multi-core market. Avionics systems are generally a good fit for multi-core processors because they have a generous contingent of requirements that can be satisfied with parallel processing or smaller serialized programs running on several processors. A new challenge arises when typical safety and security partitions must be applied to the multi-core processing paradigm. It is not clear that there currently exists a single cohesive strategy to manage Multiple Levels of Security (MLS), Certification Evidence Assurance Levels (CEAL) or safety certifications (DO-178B, DO-254, SEAL) using multi-core processors. In addition, not all multi-core processors are created equal. Some rely heavily on shared resources (shared cache, for instance), while others can be cleaved under hardware control to act as completely independent entities. Regardless of the issues, there is no doubt that multi-core processors will be a significant part of most, if not all, solutions for the foreseeable future [3]. Development of tools and methods that allow avionics suppliers to easily integrate multi-core processors into their systems will be extremely important as we forge ahead on this new path. Shown in Figure 6 is the multi-core roadmap with the estimated number of cores per module [2].
Figure 6. Multi-Core Processor Roadmap
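Amdahl's law, cited above as the governing tradeoff, bounds the speedup a multi-core module can deliver by the fraction of the workload that can actually be parallelized. The sketch below evaluates that bound for a few core counts; the 75% and 95% parallel fractions are illustrative assumptions, not measured values for any mission application.

#include <cstdio>

// Amdahl's law: overall speedup is limited by the serial fraction of the workload.
double amdahl_speedup(double parallel_fraction, int cores) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main() {
    const double fractions[]   = {0.75, 0.95};        // assumed parallelizable fractions
    const int    core_counts[] = {2, 4, 8, 16, 64};   // per-module core counts (cf. Figure 6 roadmap)

    for (double p : fractions) {
        std::printf("Parallel fraction %.0f%%:\n", p * 100.0);
        for (int n : core_counts) {
            std::printf("  %2d cores -> speedup %.2fx\n", n, amdahl_speedup(p, n));
        }
    }
    return 0;
}

The diminishing returns at higher core counts are why the text pairs multi-core processors with network speed as a joint tradeoff rather than treating core count alone as the answer.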

Standardized Chassis with Integrated Optical and Electrical Back Plane Technology

Standard chassis with standard rack sizes will continue to be required. Use of a standard chassis will allow for ease of aircraft installation and use of standard interfaces. High density packaging of processing modules will be required to maximize processing power in limited space, and this will require liquid cooling. Standardization will allow for swapping of cards and easy upgrading when new cards become available. Shown in Figure 7 is an approach for high density packaging. The modules are packaged side by side and come in standard sizes and thicknesses. Processor heat is drawn out of the modules along the edges through advanced thermal packaging and into liquid-filled chassis walls. The locking mechanism which holds the modules in place also serves as a heat conduit. The chassis walls contain liquid cooling lines to remove heat. This way the construction of the module is easier, since no liquid actually passes through it.

[Figure 7 panels: High Density Packaging; Chassis to Allow Reconfigurable Architectures]

Figure 7. High Density Packaging and Standard Size Chassis Concept
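To give a feel for the liquid-cooling budget described above, the sketch below applies the basic heat balance Q = m_dot * c_p * delta_T to estimate the coolant flow needed to carry a given chassis heat load. The 2 kW load, the PAO-like coolant properties and the 10 K allowable temperature rise are illustrative assumptions only, not figures from the packaging concept in Figure 7.

#include <cstdio>

int main() {
    // Illustrative assumptions (not taken from the chassis concept).
    const double heat_load_w     = 2000.0;  // total processor heat to remove, W
    const double coolant_cp      = 2200.0;  // specific heat, J/(kg*K) (PAO-like coolant, assumed)
    const double coolant_density = 800.0;   // kg/m^3 (assumed)
    const double delta_t_k       = 10.0;    // allowable coolant temperature rise, K

    // Heat balance: Q = m_dot * cp * dT  =>  m_dot = Q / (cp * dT)
    const double mass_flow_kg_s = heat_load_w / (coolant_cp * delta_t_k);
    const double vol_flow_lpm   = mass_flow_kg_s / coolant_density * 1000.0 * 60.0;

    std::printf("Mass flow:   %.3f kg/s\n", mass_flow_kg_s);
    std::printf("Volume flow: %.1f liters/min\n", vol_flow_lpm);
    return 0;
}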

The key to high density packaging is module standardization, and integrated electrical and optical backplanes are critically important to bring down the size of the chassis. Mission computer power supplies should have the same shape factor as processing modules. Integrated optical and electrical backplanes need to be routed in one high density assembly to keep the chassis size down. Shown in Figure 8 is an integrated electrical and optical back plane.

Figure 8. Integrated Electrical and Optical Back Plane

Standard Optical Module Form Factor

By going to a standard shape and construction technique, the modules will be lower cost and more producible. The module construction is shown on the left of Figure 9. At the center of the module is the cooling core, which carries heat from the module to the edge for removal. Processing cards are mounted on either side of the cooling core. Standard covers are used, and the connector interface at the bottom of the card is standard.

[Figure 9 shows the standard module concept: a mezzanine card and two processor cards populated with FPGAs, mounted on either side of a standard cooling core, with standard covers, a standard FPGA interface and standard construction techniques forming a standard module for processing.]

Figure 9. Concept for Standard Optical Module Hardware

The module itself contains a standard set of FPGAs that can be loaded with the latest processing cores. The cores are selected based on the processing needs. This makes the module reconfigurable and easy to update as new and better processing cores become available. With a module entirely FPGA-based, the hardware need not change every time there is a change to the processor. This ability to reprogram FPGAs leads to significant lifecycle cost savings.

Multi-Level Security Hardware and Software

Future mission system computers need a robust threat protection capability due to a large number of potential threats from both known and undetermined sources. In particular, threats can come from any source that has the potential to communicate with or link to a mission system computer. These include threats from other aircraft, satellites, ground stations or even maintenance activities.

A robust security protection system for the mission computer must, as a minimum, have the following safeguards in place:

- User authentication
- Handling of classified data
- Multi-level processing
- Separation of levels
- Data encryption/decryption
- High throughput, on-board encryption engine (> 40 Mbits/sec)
- Message labeling (source and destination)
- Message priority (high, medium, low)
- Tamper proofing

Key among these capabilities is a secure Network Interface Unit (NIU) that acts both as a communications translator and gateway between computing resources and the network. Each system on the network must communicate with other systems on the network only through its respective secure NIU. The NIUs must be able to stop unauthorized and potentially dangerous messages from being transmitted on the network, and from being received from the network.
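A minimal sketch of the kind of gate-keeping a secure NIU performs is shown below. The rule set (compare a message's classification against the destination's clearance, require an approved destination label, and drop anything that fails) is our own simplification for illustration; the field names such as classification, label and priority follow the safeguard list above rather than any actual NIU interface.

#include <cstdio>
#include <set>
#include <string>

enum class Level { Unclassified = 0, Confidential = 1, Secret = 2, TopSecret = 3 };
enum class Priority { Low, Medium, High };

struct Message {
    Level       classification;
    Priority    priority;
    std::string source_label;       // message labeling: source
    std::string destination_label;  // message labeling: destination
};

// Simplified NIU check: release a message only if the destination is cleared
// for its classification and the destination label is on the approved list.
bool niu_releases(const Message& msg, Level destination_clearance,
                  const std::set<std::string>& approved_destinations) {
    if (static_cast<int>(msg.classification) > static_cast<int>(destination_clearance))
        return false;  // would leak data to a lower level: block
    if (approved_destinations.count(msg.destination_label) == 0)
        return false;  // unknown or unauthorized destination: block
    return true;
}

int main() {
    const std::set<std::string> approved = {"RADAR", "EW", "DATA_STORE"};
    const Message m{Level::Secret, Priority::High, "MISSION_CPU", "EW"};

    const bool ok = niu_releases(m, Level::Secret, approved);
    std::printf("Message to %s %s\n", m.destination_label.c_str(),
                ok ? "released" : "blocked by NIU");
    return 0;
}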

Modular Software

Existing mission computing requirements have increased from less than one gigaflop in the 1970s to 200 gigaflops today, and that increase in demand is expected to continue (Figure 3). The software supporting this increasing demand in interconnectivity, multi-systems coordination, reconfigurability, and adaptive, situation-dependent responsiveness is likewise increasing in both quantity and complexity. Recreating customized, supporting software from scratch for each new hardware system is cost-prohibitive. Many strategies for creating software must be used to support software that is reusable, reconfigurable, adaptable, testable, and maintainable.

Key to these strategies is to build upon existing software solutions. Reusing existing code and creating reusable code is a first step. There is a challenge even in this simple requirement: high performance embedded systems tend to prohibit generalized software solutions, customers don't typically want to support requirements that allow for generalized, gold-plated solutions, and the rigorous testing required of avionics systems prohibits unused code. However, re-using code is only a small part of the solution. There is far more cost-effectiveness in leveraging existing software tools: compilers, debuggers, profilers, and a myriad of testing tools. Much of this construction on an existing software support foundation is accomplished through use of open software standards that use commercial solutions. Finally, there is an enormous need to manage the software and tools: systems for keeping track of capabilities, versions, compatibilities, and testing coverage, as well as technical and user documentation. Industry standards such as CMMI and SEI, as well as configuration and data management methods, continue to be refined to support this software development and these software tools.

Even after tackling the issues associated with developing and managing reusable code while leveraging commercial software, there are still cost-prohibitive limits associated with transporting the growing one to ten million lines of code in mission system computers. There is the further need to build systems that support automatic code conversion. This includes:

- Building effective middleware layers. These layers insulate software from hardware and hardware from software. In addition, this middleware will support reconfigurable software architectures to manage changes in processing needs when hardware fails and when mission system needs change. These middleware layers need to include low-level operating system support (OSS) and library support for interfaces between generic operating systems and generic hardware features (e.g., watchdog timers, standard busses, DMA and encryption interfaces, etc.).
- Developing software using tools that use hardware, firmware, software, and operations models to automate the hosting across multiple processors to balance mission processing loads.
- Including debugging tools with standard interfaces, such as fault logging, start-up and run-time testing, and cross-processor, run-time, cluster debugging mechanisms for detecting, identifying and locating failures (which may be induced by hardware failures, software design faults, interface incompatibilities, or unanticipated events).
- Capturing and logging run-time performance data, both for adaptive and off-line performance improvements and for hardware and software failure predictions.
- Providing boot-up systems that allow for complete, secure, field-reloadable software.
- Developing and maintaining a comprehensive set of regression tests to evaluate the accuracy of automated code conversion mechanisms against legacy mission system performance.

Reconfigurable and Scalable Architectures

The processing load of the mission computer nodes (i.e., Radar, Electronic Warfare (EW) or Sensors) varies greatly during the mission. As shown in a simple example on the left side of Figure 10, if the processing load is sized based on static allocation, the size and power of the mission computer quickly become prohibitively large. The total aggregate processing throughput required in this example using static resource allocation would be 240 gigaflops.

In contrast, a reconfigurable mission computer architecture would alter resource usage during the mission and accomplish the processing with far less capacity. This is shown on the right side of Figure 10, where a reconfigurable architecture would need only 138 gigaflops.

Figure 10. Comparison of Static vs. Dynamic Resource Allocation and Impact on Processor Size
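The sketch below reproduces the kind of sizing comparison Figure 10 makes: static allocation must provision each domain for its own worst-case phase, while dynamic allocation only needs to cover the worst single phase across all domains. The per-phase gigaflop figures are illustrative assumptions chosen so the totals match the 240-gigaflop and 138-gigaflop values quoted above; they are not the actual loads behind Figure 10.

#include <algorithm>
#include <cstdio>

int main() {
    // Assumed per-phase loads in gigaflops (rows: domains, columns: mission phases).
    const double load[3][4] = {
        {40.0, 100.0, 30.0, 20.0},   // Radar:  fly-out, ingress, egress, return
        {45.0,  20.0, 30.0, 70.0},   // EW
        {35.0,  18.0, 70.0, 40.0},   // Data
    };

    // Static allocation: each domain sized for its own peak phase.
    double static_total = 0.0;
    for (int d = 0; d < 3; ++d)
        static_total += *std::max_element(load[d], load[d] + 4);

    // Dynamic allocation: one shared pool sized for the busiest phase overall.
    double dynamic_total = 0.0;
    for (int p = 0; p < 4; ++p) {
        double phase_sum = 0.0;
        for (int d = 0; d < 3; ++d) phase_sum += load[d][p];
        dynamic_total = std::max(dynamic_total, phase_sum);
    }

    std::printf("Static allocation:  %.0f gigaflops\n", static_total);   // 240 with these assumptions
    std::printf("Dynamic allocation: %.0f gigaflops\n", dynamic_total);  // 138 with these assumptions
    return 0;
}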

A reconfigurable mission computer allows for the re-assignment of the node processing elements as the mission demands. This is shown pictorially in Figure 11, where the processing nodes of the mission computer are re-assigned to the various domains (i.e., Radar, EW and Data) at different stages of the mission. In the fly-out portion of the mission the processing load is fairly balanced. During mission ingress, the demand for radar processing increases significantly while the demand for EW and Data decreases. As the mission transitions to the egress stage, the radar processing load diminishes and unused radar processing nodes are reassigned back to Data. Finally, during the mission return stage, the EW load increases significantly while the Radar load is further reduced, allowing further reassignment of processing resources to EW.

[Figure 11 depicts four snapshots of the mission computer's processor nodes allocated among the Radar, EW and Data domains during fly-out, mission ingress, mission egress and mission return, alongside a mission computer utilization chart (0-100%) over the mission timeline.]
Figure 11. Dynamic Reconfigurable Architecture as a Function of the Mission
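A sketch of the reassignment idea behind Figure 11 is given below: at each mission stage the same fixed pool of processing nodes is redistributed among the Radar, EW and Data domains in proportion to their current demand. The eight-node pool and the per-stage demand weights are illustrative assumptions, not the allocations shown in the figure.

#include <cstdio>

// Distribute a fixed pool of processing nodes among three domains in
// proportion to their relative demand (leftover nodes go to the domains
// with the largest remaining deficit).
void assign_nodes(const char* stage, const double demand[3], int total_nodes) {
    const double sum = demand[0] + demand[1] + demand[2];
    int assigned[3], used = 0;
    for (int d = 0; d < 3; ++d) {
        assigned[d] = static_cast<int>(total_nodes * demand[d] / sum);  // floor of fair share
        used += assigned[d];
    }
    while (used < total_nodes) {
        int best = 0;
        double best_deficit = -1.0;
        for (int d = 0; d < 3; ++d) {
            const double deficit = total_nodes * demand[d] / sum - assigned[d];
            if (deficit > best_deficit) { best_deficit = deficit; best = d; }
        }
        ++assigned[best];
        ++used;
    }
    std::printf("%-16s Radar=%d  EW=%d  Data=%d\n", stage, assigned[0], assigned[1], assigned[2]);
}

int main() {
    const int pool = 8;  // assumed number of reassignable processor nodes
    const double fly_out[3] = {1.0, 1.0, 1.0};  // balanced load
    const double ingress[3] = {4.0, 1.0, 1.0};  // radar-heavy
    const double egress[3]  = {1.0, 1.0, 2.0};  // radar nodes returned to Data
    const double ret[3]     = {0.5, 3.0, 1.0};  // EW-heavy

    assign_nodes("Fly-out", fly_out, pool);
    assign_nodes("Mission ingress", ingress, pool);
    assign_nodes("Mission egress", egress, pool);
    assign_nodes("Mission return", ret, pool);
    return 0;
}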

Conclusion: The Future Mission Computer

Multi-mission aircraft are placing greatly increasing demands on mission computers. The mission computer's network and architecture continue to grow as new subsystems are added and new processing is required. The mission computer must have a highly flexible and versatile computing architecture to meet the ever-changing mission needs with cost-effective processing resources.

By using a methodical and systematic approach and applying existing technology, future mission computers will be able to make a quantum leap in processing power (>10 teraflops) and therefore support the growth in processing demands. The future mission computer is shown pictorially in Figure 12. Using optically coupled DWDM data buses, future mission computers will provide an open architecture with no electrical crossbar switches, supporting data bus rates in excess of 10 gigabits/second. Communication between network nodes will support multi-level security with separated levels of classified processing. Data encryption and decryption will be standard features, allowing for safe and guarded operation.

The computer modules will be of a standard shape and size based on FPGA and multi-core processor technology supporting high density packaging. Multi-core, FPGA and hybrid processors will be supported using a standard optical bus architecture. The architecture will allow dynamic resource allocation based on mission needs, allowing the mission computer to accomplish more processing with less hardware.

[Figure 12 shows a standard chassis of processor boards interconnected through optical couplers; a typical two-sided processor board carries FPGAs and multi-core processors on either side of a central cooling core.]

Figure 12. Concept for the Future Avionics Mission Computer

Acknowledgements

The authors would like to acknowledge the contributions of Jeff Levis, Dr. Keith Burgess, Mert Horne and Rich Berg to this paper.

References

[1] Levis, J., B. Sutterfield, R. Stevens, 2006, Fiber Optic Communication within the F-35 Mission System, Avionics Fiber-Optics and Photonics, 2006 IEEE Conference, pp. 12-13.

[2] Data collected via Google search of open literature.

[3] Faber, Rob, 2008, High Performance Computers' Future: What Will the Next 20 Years Bring?, Scientific Computing, July/August 2008, pp. 24-25.

Biographies

Brian Sutterfield is the Chief Engineer for the Tactical Avionics Group of Lockheed Martin in Eagan, Minnesota. Brian was one of the chief architects for the F-35 Integrated Core Processor Mission Computer. Brian is presently leading the development of next generation mission computer architectures for multiple programs.

John Hoschette is a staff systems engineer in the Tactical Avionics Group of Lockheed Martin in Eagan, Minnesota. John was one of the original developers of the optical network used in both the F-16 and F-35 aircraft.

Paul Anton was the Program Manager for the F-35 mission computer in the Tactical Avionics Group of Lockheed Martin in Eagan, Minnesota. Under his direction the first F-35 mission computers were successfully built and delivered. Paul is presently the Tactical Avionics Business Development Director.

Email Addresses

Brian Sutterfield: brian.sutterfield@lmco.com
John Hoschette: john.hoschette@lmco.com
Paul Anton: paul.j.anton@lmco.com

27th Digital Avionics Systems Conference
October 26-30, 2008

