
Direct Voice Input (DVI), also sometimes called Voice Input Control (VIC), is a style of Human-Machine Interaction (HMI) in which the user issues instructions to a machine by voice command. It has found some use in the cockpit design of several modern military aircraft, particularly the Eurofighter, the F-35 Lightning II, the Rafale and the JAS 39 Gripen, having been trialled on earlier fast jets such as the Harrier AV-8B and F-16 VISTA. A study has also been undertaken by the Royal Netherlands Air Force using voice control in an F-16 simulator. [1]
DVI systems may be "user-dependent" or "user-independent". User-dependent systems require a
personal voice template to be created by the pilot, which must then be loaded onto the aircraft
before flight. User-independent systems do not require any personal voice template and will
work with the voice of any user. [2]
Direct Voice Input

DVI allows the pilot to activate non-safety-critical moding and data entry functions as an
alternative to using manual methods.
Options include:

Manual data entry

Multi-Function Head Down Display (MHDD) selection and manipulation

Radio selection and navigation route manipulation

Target selection

Target allocation to formation members

DVI commands are confirmed by visual and/or aural feedback. This unique VTAS capability
drastically reduces the pilot's workload, freeing them to focus on the mission and systems
operation. In an air battle scenario, the system even allows the lead pilot to assign targets to
himself with two simple voice commands, or to any of his wingmen with only five commands.

Direct Voice Input

DVI is a very simple concept: the pilot uses his/her voice to provide an input to an aircraft
system in order to obtain an action or information from that system.
The growth of aircraft capabilities and functionality can dramatically increase pilot
workload. Keeping pilot workload within acceptable margins is the main driver for next
generation cockpit design. The Eurofighter Typhoon cockpit combines cutting-edge Hands On
Throttle And Stick (HOTAS) design with revolutionary DVI technology, providing the operator
with an additional, hands-free channel of control.
The technical process basically consists of a real-time comparison between the incoming audio
signal (pilot voice) and stored data (general/individual speech models). The key issues are:

Injected audio signal (acoustic nature: speaker speech style, reverberations and echoes when
talking into the oxygen mask, background cockpit noise; electrical nature: frequency response
from microphone, transmission channel)

Speech models (speaker-dependent/independent system)

Recognition algorithms

Computing capability of the processor/system (including syntax structure and total number of
commands)
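The matching step described above can be sketched in miniature. This is a toy illustration only, assuming a hypothetical three-command vocabulary and simple string similarity in place of real acoustic models and recognition algorithms; the confidence threshold stands in for the point at which the pilot would fall back to manual input.

```python
from difflib import SequenceMatcher

# Toy vocabulary; a real DVI system uses a constrained command syntax.
VOCABULARY = ["select radar", "select waypoint", "radio channel one"]
CONFIDENCE_THRESHOLD = 0.8  # below this, fall back to the manual backup

def recognise(utterance: str):
    """Return the best-matching command, or None when no command is
    close enough (triggering the press-button/soft-key backup)."""
    scores = {cmd: SequenceMatcher(None, utterance.lower(), cmd).ratio()
              for cmd in VOCABULARY}
    best = max(scores, key=scores.get)
    return best if scores[best] >= CONFIDENCE_THRESHOLD else None

print(recognise("select raydar"))  # "select radar" (close match)
print(recognise("open canopy"))    # None (no match; use manual input)
```

The threshold trades recognition rate against false activations, which is why the text stresses that only non-safety-critical functions are voice-controlled.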

DVI has been targeted for use in commands that can reduce pilot workload without
compromising flight safety (a press-button or soft-key backup is incorporated):

Display management: Using DVI for display optimisation is an appreciated tool when workload
is high

Communications and management: this tool is used for radio, IFF, routes, waypoints etc.

Track management: All displays present a fused picture to increase situational awareness and
reduce workload. DVI allows track nomination, deletion, interrogation and display of extra
information

MIDS management is another time-consuming task that may be optimised with DVI

The new features of this system could include:

Continuous speech recogniser

Speaker-independent voice recognition system based on common databases (e.g. British
English, American English, Spanish English, etc.; no pilot templates)

Large vocabulary


DVI upgrades:

Requesting information to be displayed for any target or waypoint

Manipulation of the Laser Designator Pod and its picture

Creation of a waypoint at a point of interest with only two commands

Increased vocabulary to almost 90 commands

Expanded functions relating to the allocation of different mission types to wingmen and the
reporting of the pilot's own mission details

Increased vocabulary to over 100 commands


The AHK3000 series of multi-functional keyboards are designed specifically for use in the modern trading room
environment. They provide the trader with a logically laid out single keyboard solution for all major services whilst
avoiding the confusion of multi-purpose function keys and allowing the IT manager to standardise on a single
keyboard format across the entire trading floor.
All major market-data service keys are provided as standard together with dedicated selection keys for up to eight
additional services. The keyboards have been designed to integrate fully with the Amulet Hotkey range of
switching and driving products but can also be used directly connected to PS2, SUN, Bloomberg and Reuters
system units.
The AHK3000 series of keyboards retains the familiar 104/105-key position layout but incorporates 36 extra
dedicated keys for dealing and switching functionality. Clear key-cap legends, colours and LEDs are used to
accurately replicate both the layout and functionality of whichever service is being used by the dealer. AHK3000
series keyboards are intuitive to use and robustly constructed, making them the logical choice for the IT manager.


The Head Up Display (HUD) is a glass window at the front of the cockpit which superimposes
important flight data and weapon information over the pilot's view. It is used to provide
important information to the pilot without requiring him to look down into the cockpit,
improving situational awareness. The information displayed includes the aircraft's attitude,
airspeed, and altitude, navigational information, and weapon targeting information.
The aircraft's cockpit contains a CRT embedded in the front of the cockpit underneath the HUD.
This display projects the information upwards onto the glass screen of the HUD, which reflects
the information into the pilot's field of view. The "screen" of the HUD contains no electronics, it
is simply a glass reflector.
Most of the weapon systems in Falcon 4 (and in real life) have their own HUD presentations,
which are discussed under the pages for those weapons. This page will only discuss common
features and the basic navigational display.

Basic Symbology
The HUD packs a lot of information into a very small space. It is important to be able to quickly
isolate and read the data that you are looking for, and this requires a good working knowledge of
the HUD's basic layout. The following image (from the F-16 MLU avionics manual) shows an
overview of the information displayed on the HUD. For simplicity, the discussion of the HUD
symbology is separated into left, right, and center portions.

Head-Up Display Schematic

Airspeed Scale

The airspeed tape is located on the left side of the HUD. It contains a readout of your aircraft's
current speed, as well as a moving "tape". The notch underneath the C points to your aircraft's
current airspeed (in this case, 597 knots). The "tape" has notches at 10-knot intervals, and is
labeled every 50 knots (except that the label closest to your current airspeed is not shown).
The C indicates that the airspeed tape is currently displaying calibrated airspeed. However, the
tape is also capable of displaying true airspeed (in which case a "T" is displayed) or your ground
speed (in which case a "G" is displayed). See below for details on how to select this. The number
immediately beneath the airspeed tape is your current Mach number. This page describes the
different types of airspeeds.
Steerpoint Caret

The "caret" or V shape on the right side of the airspeed tape (it's not specifically labeled in this
diagram) points to the airspeed you need to maintain in order to get to the selected steerpoint by
the time-on-station specified in your flight plan. This adjusts itself so that it is correct regardless
of the airspeed type you have selected. If you keep the caret aligned with the airspeed notch, you
will arrive at your selected steerpoint exactly on schedule.
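The commanded speed behind the caret is simple distance-over-time arithmetic. A minimal sketch, assuming still air and ground-referenced units (the real HUD adjusts the caret for whichever airspeed type is selected):

```python
# Commanded speed behind the steerpoint caret: the speed needed to
# cover the remaining distance by the planned time-on-station.
def caret_speed(distance_nm: float, time_remaining_hr: float) -> float:
    """Ground speed (knots) required to arrive exactly on schedule."""
    return distance_nm / time_remaining_hr

# 18 NM to the steerpoint with 2 minutes remaining in the flight plan:
print(round(caret_speed(18.0, 2 / 60)))  # 540 knots
```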
G Forces

Immediately above the airspeed tape is the current G force on the aircraft. This is 1.0 when you
are in straight and level flight, but will increase when you turn or perform other flight maneuvers. The
maximum G force the aircraft has experienced since it started is displayed underneath the Mach
number (in this case, it is 2.6). This is used for maintenance, and also so you can check the
maximum G force you pulled in a turn (in case you were distracted or missed it).
SOI Symbol

An asterisk is displayed above the airspeed scale when the HUD is the currently selected sensor
of interest. This only applies to certain types of weapons such as the AGM-65 Maverick which
can be visually aimed using the HUD.
Current Mode/Weapon Status Indication

The current armed state of your weapons is displayed directly beneath the airspeed scale. If the
master arm switch is set to "armed", ARM will be displayed here. If it is set to "sim", SIM will
be displayed, and if the weapons are set to "safe", nothing will be displayed.
The current HUD mode is displayed underneath the maximum G-force indicator (in this case it is
"NAV" for navigation mode). NAV is the default mode, which is used for general navigation
with no weapon system selected. There are a large number of HUD modes, most of which are
tied to particular aircraft weapon systems.

Altitude Scale

The current altitude of the aircraft is displayed on the right side of the HUD. This is similar to the
"tape" used by the airspeed indicator. The pointer points to your current altitude on the tape, and
also displays it as a number (rounded to the nearest 10 feet). The tape is labeled every 500 feet
(in hundreds of feet), and has unlabeled tick marks every 100 feet.
The radar altitude is displayed in a box beneath the altitude scale when baro altitude is selected.
Altitude Low Setting

The currently selected ALOW setting is displayed immediately beneath the radar altitude. The
ALOW setting is the radar altitude at which the VMS will start complaining because you're
flying too low. In this case, it is set to 200 feet above ground, so if you are lower than 200 feet
with your landing gear up you will hear a continuous "Altitude, Altitude" warning.
Steerpoint Information

Some basic navigational information is displayed in the lower right corner of the HUD. The slant
range to the currently selected steerpoint is displayed beneath the ALOW setting - B018.7 means
that the aircraft is exactly 18.7 nautical miles from the steerpoint. In the real aircraft, this takes
into account the aircraft's altitude - if you are directly over the steerpoint at an altitude of 2
nautical miles, it will display B002.0.
The time until reaching the steerpoint is displayed directly beneath the slant range. Beneath the
ETA is another indication which shows the "geographical" (i.e., non-slant range) distance to the
steerpoint, as well as the number of the currently selected steerpoint. In this case, steerpoint 3 is
selected, and the aircraft is 18 nautical miles away.
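The difference between slant range and the "geographical" range described above is just the Pythagorean relation, sketched here with all distances in nautical miles:

```python
import math

# Slant range combines ground distance and altitude: directly over a
# steerpoint at 2 NM altitude, the HUD would read B002.0.
def slant_range_nm(ground_nm: float, altitude_nm: float) -> float:
    return math.hypot(ground_nm, altitude_nm)

print(round(slant_range_nm(0.0, 2.0), 1))   # 2.0 (the overhead case above)
print(round(slant_range_nm(18.0, 1.0), 1))  # 18.0 (altitude matters little far out)
```

As the second case shows, slant and ground range only diverge noticeably when the aircraft is close to, or high above, the steerpoint.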
Aircraft Attitude and Heading

There are several items in the center of the HUD which show the aircraft's current attitude and
heading. The pitch ladder consists of the horizontal lines displayed at 5-degree increments up the center of
the HUD. The position of the gun cross relative to the pitch ladder shows the aircraft's pitch
attitude - for example, if you put the gun cross on the 10 degree line, the aircraft has a pitch
attitude of 10 degrees. Note that the ladder bars for negative pitch attitudes are dashed, and may
be differently shaped.
Depending on the current HUD mode, one of two roll indicators will be displayed - the BAI
(Bank Attitude Indicator) is a small scale which rotates around the flight path marker, showing
the current bank angle. The roll indicator is located at the bottom of the HUD, and looks like a
larger mirror image of the BAI. The roll indicator has tick marks at 10, 20, 30, and 45 degrees,
while the BAI has tick marks at 10, 20, 30, and 60 degrees.
The aircraft's heading is displayed as a tape, which may be located at the top or bottom of the
HUD, depending on the mode. The concept is similar to the airspeed and altitude tapes, except
that it is horizontal instead of vertical. There is a caret or notch located above the heading tape,
which denotes the heading to fly to the currently selected steerpoint. In the real aircraft, this
displays magnetic heading; however, Falcon 4 displays true heading.
Gun Cross

The gun cross is the "gun sight" of the aircraft - it is the point towards which the aircraft's M61
cannon is aimed. More importantly, it represents the point at which the aircraft's nose is pointed.
This contrasts with the flight path marker, which displays the path the aircraft is actually flying.
Flight Path Marker

The flight path marker is the circle with three tick marks arranged around it - notice how it sort
of looks like an aircraft from behind. The FPM is used to display the direction the aircraft is
actually traveling, as opposed to the direction the nose is pointed in.

HUD Control:
The HUD has controls in two places: the Integrated Control Panel (ICP) and the right console of the cockpit.
ICP Controls

The ICP has several controls which affect the HUD display. The SYM knob, at the top left corner
of the ICP, is used to control the brightness of the HUD. This is useful for night flight, when you
want to dim cockpit lighting to preserve your night vision. The drift cutout switch is
used to control the position of the pitch ladder. If it is set to normal, then the pitch ladder will
always be centered around the flight path marker. If it is set to DRIFT C/O, the pitch ladder will
always be displayed at the center of the HUD, regardless of the position of the FPM. See the ICP
page for more details.
HUD Panel on Right Console

The HUD control panel is located on the right console, between the sensor power panel and the
interior lighting panel. It consists of eight switches which are used to control the amount and
type of information displayed on the HUD.

The scales switch is used to control which data is displayed as a "tape" on the HUD. The default
is VAH, in which case your altitude, airspeed, and heading will be displayed as tapes, and you
will see the roll indicator instead of the BAI. VV/VAH mode is the same as VAH, except that a
vertical velocity scale is added immediately to the left of the altitude tape. When the switch is set
to OFF, the altitude, airspeed, and heading tapes are removed (although the numeric indication is
still displayed), and the BAI is displayed instead of the roll indicator.
The FPM switch is used to control the display of the pitch ladder and flight path marker. The
default is ATT/FPM, in which case both will be displayed. If set to FPM, only the flight path
marker will be displayed, and if set to OFF, neither the pitch ladder nor the FPM will be displayed.
The DED data switch is used to control the display of DED data on the HUD. If the switch is
flipped up, the DED display will be copied to the lower portion of the HUD. This is useful if
your DED is damaged or if you want to have access to DED data without looking down into the
cockpit.

The depressible reticle switch is not usually modeled in Falcon 4.

The velocity switch is used to control which type of airspeed is displayed. The available options
are CAS (calibrated/indicated airspeed), TAS (true airspeed), and ground speed.
The altitude switch controls which altitude unit is displayed. You can select radar altitude,
barometric altitude, or auto. If the switch is set to auto, radar altitude will be used until you are
more than 1500 feet above the ground, at which point it will automatically switch to baro
altitude. When you descend, it will switch from baro to radar altitude at 1200 feet.
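The 1500 ft up / 1200 ft down split in auto mode is a hysteresis band: it prevents the altitude source from flickering back and forth when flying near a single threshold. A minimal sketch of that logic:

```python
# Auto altitude-source selection with hysteresis, as described above:
# switch to baro above 1500 ft AGL when climbing, back to radar below
# 1200 ft when descending. The dead band avoids rapid toggling.
def next_source(current: str, agl_ft: float) -> str:
    if current == "RADAR" and agl_ft > 1500:
        return "BARO"
    if current == "BARO" and agl_ft < 1200:
        return "RADAR"
    return current

source = "RADAR"
for agl in (1000, 1600, 1300, 1100):
    source = next_source(source, agl)
    print(agl, source)
# 1000 RADAR / 1600 BARO / 1300 BARO (still inside the band) / 1100 RADAR
```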
The brightness control switch selects the brightness mode of the display. If set to "night", the
maximum brightness of the HUD will be approximately half of the maximum "day" brightness.
If the switch is set to auto, the HUD will automatically select its brightness mode using the
system time. Note that you can also change the brightness of the HUD using the SYM knob on
the ICP.

Plasma display

A typical modern plasma screen television

A plasma display panel (PDP) is a type of flat panel display common to large TV displays (80
cm/30 in or larger). They are called "plasma" displays because the pixels rely on plasma cells, or
what are in essence chambers more commonly known as fluorescent lamps. A panel typically has
millions of tiny cells in compartmentalized space between two panels of glass. These
compartments, or "bulbs" or "cells", hold a mixture of noble gases and a minuscule amount of

mercury. Just as in the fluorescent lamps over an office desk, when the mercury is vaporized and
a voltage is applied across the cell, the gas in the cells forms a plasma. (A plasma is a collection of
particles that respond strongly and collectively to electromagnetic fields or electrical charges,
taking the form of gas-like clouds or ion beams.) As electricity (electrons) flows, some of the
electrons strike mercury atoms as they move through the plasma, momentarily
increasing the energy level of each struck atom until the excess energy is shed. Mercury sheds the
energy as ultraviolet (UV) photons. The UV photons then strike phosphor that is painted on the
inside of the cell. When the UV photon strikes a phosphor molecule, it momentarily raises the
energy level of an outer orbit electron in the phosphor molecule, moving the electron from a
stable to an unstable state; the electron then sheds the excess energy as a photon at a lower
energy level than UV light; the lower energy photons are mostly in the infrared range but about
40% are in the visible light range. Thus the input energy is shed as mostly heat (infrared) but also
as visible light. Depending on the phosphors used, different colors of visible light can be
achieved. Each pixel in a plasma display is made up of three cells corresponding to the primary colors
of visible light. Varying the voltage of the signals to the cells thus allows different perceived
colors to be produced.
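The per-cell voltage control described above amounts to additive colour mixing across the three cells of a pixel. A toy sketch, using a 0-1 intensity per cell as a stand-in for the actual drive scheme of a real panel:

```python
# Each plasma pixel mixes three phosphor cells (red, green, blue).
# The 0-1 intensity here is an abstraction of the per-cell drive level.
def pixel_rgb(red: float, green: float, blue: float):
    """Return an 8-bit (R, G, B) triple from clamped per-cell intensities."""
    return tuple(round(255 * max(0.0, min(1.0, c))) for c in (red, green, blue))

print(pixel_rgb(1.0, 0.5, 0.0))  # (255, 128, 0) - perceived as orange
```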

Plasma displays should not be confused with liquid crystal displays (LCDs), another lightweight
flat-screen display using very different technology. LCDs may use one or two large fluorescent
lamps as a backlight source, but the different colors are controlled by LCD units, which in effect
behave as gates that allow or block the passage of light from the backlight to red, green, or blue
paint on the front of the LCD panel.

General characteristics

A 103" plasma display panel by Panasonic

Plasma displays are bright (1,000 lux or higher for the module), have a wide color gamut, and
can be produced in fairly large sizes, up to 150 inches (3.8 m) diagonally. They have a very
low-luminance "dark-room" black level compared to the lighter grey of the unilluminated parts
of an LCD screen (i.e. the blacks are blacker on plasmas and greyer on LCDs). [4] LED-backlit
LCD televisions have been developed to reduce this distinction. The display panel itself is about
6 cm (2.5 inches) thick, generally allowing the device's total thickness (including electronics) to

be less than 10 cm (4 inches). Power consumption varies greatly with picture content, with bright
scenes drawing significantly more power than darker ones - this is also true of CRTs. Typical
power consumption is 400 watts for a 50-inch (127 cm) screen, or 200 to 310 watts for the same
size display when set to cinema mode. Most screens are set to 'shop' mode by default, which
draws at least twice the power (around 500-700 watts) of a 'home' setting of less extreme
brightness.[5] Panasonic has greatly reduced power consumption ("1/3 of 2007 models"),[6][7]
and claims that its PDPs will consume only half the power of the previous series of plasma
sets to achieve the same overall brightness for a given display size. The lifetime of the latest
generation of plasma displays is estimated at 100,000 hours of actual display time.
Plasma display advantages and disadvantages

Advantages:

Slim profile

Can be wall mounted

Less bulky than rear-projection televisions

Produces deep blacks allowing for superior contrast ratio[3][9][10]

Wider viewing angles than those of LCD; images do not suffer from degradation at high angles
unlike LCDs[3][9]


Disadvantages:

Heavier screen-door effect when compared to LCD- or OLED-based TVs

Susceptible to screen burn-in and image retention, although most recent models have a pixel
orbiter that moves the entire picture faster than is noticeable to the human eye, which reduces
the effect of burn-in but does not prevent it.[13] However, turning off individual pixels does
counteract screen burn-in on modern plasma displays.[14]

Phosphors lose luminosity over time, resulting in gradual decline of absolute image brightness
(newer models are less susceptible to this, having lifespans exceeding 100,000 hours, far longer
than older CRT technology)[8][10]

Susceptible to "large area flicker"[15]

Generally do not come in smaller sizes than 37 inches[3][9]

Susceptible to reflection glare in bright rooms

Heavier than LCD due to the requirement of a glass screen to hold the gases

Use more electricity, on average, than an LCD TV

High-definition plasma television

Early high-definition (HD) plasma displays had a resolution of 1024x1024 and were alternate
lighting of surfaces (ALiS) panels made by Fujitsu/Hitachi.[21][22] These were interlaced displays,
with non-square pixels.[23]
Modern HDTV plasma televisions usually have a resolution of 1,024x768 (found on many
42-inch plasma screens), 1,280x768 or 1,366x768 (found on 50-, 60-, and 65-inch plasma screens),
or 1,920x1,080 (found in plasma screens from 42 to 103 inches). These displays are
usually progressive displays, with square pixels, and will up-scale their incoming standard-definition signals to match their native display resolution.[24]
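The up-scaling step is a resolution remap from the incoming signal onto the panel's native grid. A sketch of just the coordinate mapping, using nearest-neighbour sampling (real scalers use far better filtering, but the per-pixel index arithmetic is the same idea):

```python
# Map a source axis of `src` pixels onto a destination axis of `dst`
# pixels, e.g. scaling 720-pixel-wide SD video onto a 1366-wide panel.
def scale_indices(src: int, dst: int):
    """Source pixel index sampled for each destination pixel."""
    return [i * src // dst for i in range(dst)]

cols = scale_indices(720, 1366)
print(cols[0], cols[683], cols[1365])  # 0 360 719
```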

Composition of plasma display panel

The xenon, neon, and helium gas in a plasma television is contained in hundreds of thousands of
tiny cells positioned between two plates of glass. Long electrodes are also put together between
the glass plates, in front of and behind the cells. The address electrodes sit behind the cells, along
the rear glass plate. The transparent display electrodes, which are surrounded by an insulating
dielectric material and covered by a magnesium oxide protective layer, are mounted in front of
the cell, along the front glass plate. Control circuitry charges the electrodes that cross paths at a
cell, creating a voltage difference between front and back and causing the gas to ionize and form
a plasma. As the gas ions rush to the electrodes and collide, photons are emitted.[25][26]

In color panels, the back of each cell is coated with a phosphor. The ultraviolet photons emitted
by the plasma excite these phosphors to give off colored light. The operation of each cell is thus
comparable to that of a fluorescent lamp.
Plasma is often cited as having better (i.e. darker) black levels (and higher contrast ratios),
although both plasma and LCD each have their own technological challenges.
Each cell on a plasma display has to be precharged before it is due to be illuminated (otherwise
the cell would not respond quickly enough) and this precharging means the cells cannot achieve
a true black, whereas an LED backlit LCD panel can actually turn off parts of the screen. Some
manufacturers have worked hard to reduce the precharge and the associated background glow, to
the point where black levels on modern plasmas are starting to rival CRT. With LCD technology,
black pixels are generated by a light polarization method; many panels are unable to completely
block the underlying backlight. However, more recent LCD panels (particularly those using
white LED illumination) can compensate by automatically reducing the backlighting on darker
scenes, though this method (analogous to the strategy of noise reduction on analog audio tape)
obviously cannot be used in high-contrast scenes, leaving some light showing from the black
parts of an image that also has bright parts, such as (at the extreme) a solid black screen with one fine,
intense bright line. This is called a "halo" effect, which has been almost completely eliminated on
newer LED-backlit LCDs with local dimming. Edge-lit models cannot compete with this, as their
light is reflected via a light funnel to distribute the light behind the panel.[3][9][10]
Screen burn-in

An example of a plasma display that has suffered severe burn-in from stationary text

Image burn-in occurs on CRTs and plasma panels when the same picture is displayed for long
periods of time. This causes the phosphors to overheat, losing some of their luminosity and
producing a "shadow" image that is visible with the power off. Burn-in cannot be repaired
(except on monochrome CRTs),
Plasma displays also exhibit another image retention issue which is sometimes confused with
screen burn-in damage. In this mode, when a group of pixels are run at high brightness (when
displaying white, for example) for an extended period of time, a charge build-up in the pixel

structure occurs and a ghost image can be seen. However, unlike burn-in, this charge build-up is
transient and self-corrects after the image condition that caused the effect has been removed and
a long enough period of time has passed (with the display either off or on).

Plasma displays were first used in PLATO computer terminals. This PLATO V model illustrates the
display's monochromatic orange glow as seen in 1988. [42]

Organic light-emitting diode






A green emitting OLED device

An organic light emitting diode (OLED) is a light-emitting diode (LED) in which the emissive
electroluminescent layer is a film of organic compounds which emit light in response to an
electric current. This layer of organic semiconductor material is situated between two electrodes.
Generally, at least one of these electrodes is transparent.
OLEDs are used in television screens, computer monitors, small portable screens such as those of
mobile phones and PDAs, watches, advertising, and information and indication displays. OLEDs are also used

in light sources for general space illumination and in large-area light-emitting elements. Due to
their comparatively early stage of development, they typically emit less light per unit area than
inorganic solid-state based LED point-light sources.
An OLED display functions without a backlight. Thus, it can display deep black levels and can
also be thinner and lighter than established liquid crystal displays. Similarly, in low ambient light
conditions such as dark rooms, an OLED screen can achieve a higher contrast ratio than an LCD
screen using either cold cathode fluorescent lamps or the more recently developed LED backlight.
There are two main families of OLEDs: those based upon small molecules and those employing
polymers. Adding mobile ions to an OLED creates a Light-emitting Electrochemical Cell or
LEC, which has a slightly different mode of operation.
The first observations of electroluminescence in organic materials were in the early 1950s, when
researchers applied high-voltage alternating current (AC) fields in air to materials such as acridine
orange, either deposited on or dissolved in cellulose or cellophane thin films. The proposed
mechanism was either direct excitation of the dye molecules or excitation of electrons.
Working principle

Schematic of a bilayer OLED: 1. Cathode (−), 2. Emissive Layer, 3. Emission of radiation, 4. Conductive
Layer, 5. Anode (+)

A typical OLED is composed of a layer of organic materials situated between two electrodes, the
anode and cathode, all deposited on a substrate. The organic molecules are electrically
conductive as a result of delocalization of pi electrons caused by conjugation over all or part of
the molecule. These materials have conductivity levels ranging from insulators to conductors,
and therefore are considered organic semiconductors. The highest occupied and lowest
unoccupied molecular orbitals (HOMO and LUMO) of organic semiconductors are analogous to
the valence and conduction bands of inorganic semiconductors.
Originally, the most basic polymer OLEDs consisted of a single organic layer of poly(p-phenylene
vinylene). However, multilayer OLEDs can be fabricated with two or more layers in order to

improve device efficiency. As well as conductive properties, different materials may be chosen to
aid charge injection at electrodes by providing a more gradual electronic profile, [21] or block a
charge from reaching the opposite electrode and being wasted. [22] Many modern OLEDs
incorporate a simple bilayer structure, consisting of a conductive layer and an emissive layer.
During operation, a voltage is applied across the OLED such that the anode is positive with
respect to the cathode. A current of electrons flows through the device from cathode to anode, as
electrons are injected into the LUMO of the organic layer at the cathode and withdrawn from the
HOMO at the anode. This latter process may also be described as the injection of electron holes
into the HOMO.

The different manufacturing process of OLEDs lends itself to several advantages over flat-panel
displays made with LCD technology.

Light weight & flexible plastic substrates: OLED displays can be fabricated on flexible plastic
substrates, leading to the possibility of roll-up organic light-emitting diode displays being
fabricated, or other new applications such as roll-up displays embedded in fabrics or clothing. As
the substrate used can be flexible, such as PET, the displays may be produced inexpensively.

Wider viewing angles & improved brightness: OLEDs can enable a greater artificial contrast
ratio (both dynamic range and static, measured in purely dark conditions) and viewing angle
compared to LCDs because OLED pixels directly emit light. OLED pixel colours appear correct
and unshifted, even as the viewing angle approaches 90 degrees from normal.

Better power efficiency: LCDs filter the light emitted from a backlight, allowing a small fraction
of light through so they cannot show true black, while an inactive OLED element produces no
light and consumes no power.[54]

Response time: OLEDs can also have a faster response time than standard LCD screens.
Whereas LCD displays are capable of a 1 ms response time or less,[55] offering a frame rate of
1,000 Hz or higher, an OLED can theoretically have a response time of less than 0.01 ms,
enabling refresh rates of up to 100,000 Hz.
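The response-time figures above translate to refresh-rate ceilings by simple inversion, an idealisation that considers only the pixel itself and ignores the rest of the driving electronics:

```python
# Upper bound on refresh rate implied by pixel response time alone.
def max_refresh_hz(response_time_s: float) -> float:
    return 1.0 / response_time_s

print(round(max_refresh_hz(0.001)))    # 1000 (a 1 ms LCD panel)
print(round(max_refresh_hz(0.00001)))  # 100000 (a 0.01 ms OLED)
```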

Screen burn-in: Unlike displays with a common light source, the brightness of each
OLED pixel fades depending on the content displayed. The varied lifespan of the
organic dyes can cause a discrepancy between red, green, and blue intensity. This
leads to image persistence, also known as burn-in.[69]

Multi-function display

A multi-function display (MFD), part of multi-function structures, is a small screen (CRT or
LCD) in an aircraft, surrounded by multiple buttons, that can be used to display information to the
pilot in numerous configurable ways. Often an MFD will be used in concert with a Primary
Flight Display. MFDs are part of the digital era of modern airplanes and helicopters. The first
MFDs were introduced by air forces. The advantage of an MFD over analog displays is that an MFD
does not consume much space in the cockpit. For example, the cockpit of the RAH-66 "Comanche"
does not have analog dials or gauges at all; all information is displayed on the MFD pages. The
possible MFD pages differ for every aircraft, complementing its abilities (in combat).
Many MFDs allow the pilot to display their navigation route, moving map, weather radar,
NEXRAD, GPWS, TCAS and airport information all on the same screen.
MFD's are added to the Space Shuttle (as the glass cockpit) starting in 1998 replacing the analog
instruments and CRT's. The information being displayed is similar, and the glass cockpit was
first flown on the STS-101 mission.

A touchscreen is an electronic visual display that can detect the presence and location of a touch
within the display area. The term generally refers to touching the display of the device with a
finger or hand. Touchscreens can also sense other passive objects, such as a stylus.
The touchscreen has two main attributes. First, it enables one to interact directly with what is
displayed, rather than indirectly with a cursor controlled by a mouse or touchpad. Secondly, it

lets one do so without requiring any intermediate device that would need to be held in the hand.
Such displays can be attached to computers, or to networks as terminals. They also play a
prominent role in the design of digital appliances such as the personal digital assistant (PDA),
satellite navigation devices, mobile phones, and video games.

In 1971, the first "touch sensor" was developed by Doctor Sam Hurst (founder of Elographics)
while he was an instructor at the University of Kentucky. This sensor, called the "Elograph," was
patented by The University of Kentucky Research Foundation. The "Elograph" was not
transparent like modern touch screens; however, it was a significant milestone in touch screen
technology. In 1974, the first true touch screen incorporating a transparent surface was developed
by Sam Hurst and Elographics. In 1977, Elographics developed and patented five-wire resistive
technology, the most popular touch screen technology in use today. [5] Touchscreens first gained
some visibility with the invention of the computer-assisted learning terminal, which came out in
1975 as part of the PLATO project. Touchscreens have subsequently become familiar in
everyday life. Companies use touch screens for kiosk systems in retail and tourist settings, point
of sale systems, ATMs, and PDAs, where a stylus is sometimes used to manipulate the GUI and
to enter data. The popularity of smart phones, PDAs, portable game consoles and many types of
information appliances is driving the demand for, and acceptance of, touchscreens.
Until recently, most consumer touchscreens could only sense one point of contact at a time, and
few have had the capability to sense how hard one is touching. This is starting to change with the
commercialization of multi-touch technology.
Touchscreens are popular in hospitality, and in heavy industry, as well as kiosks such as museum
displays or room automation, where keyboard and mouse systems do not allow a suitably
intuitive, rapid, or accurate interaction by the user with the display's content.
Historically, the touchscreen sensor and its accompanying controller-based firmware have been
made available by a wide array of after-market system integrators, and not by display, chip, or
motherboard manufacturers. Display manufacturers and chip manufacturers worldwide have

acknowledged the trend toward acceptance of touchscreens as a highly desirable user interface
component and have begun to integrate touchscreen functionality into the fundamental design of
their products.
A resistive touchscreen panel is composed of several layers, the most important of which are two
thin, metallic, electrically conductive layers separated by a narrow gap. When an object, such as
a finger, presses down on a point on the panel's outer surface the two metallic layers become
connected at that point: the panel then behaves as a pair of voltage dividers with connected
outputs. This causes a change in the electrical current, which is registered as a touch event and
sent to the controller for processing.
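The voltage-divider behaviour described above can be sketched in a few lines. The panel size, supply voltage and ADC resolution below are illustrative assumptions, not the parameters of any particular panel: the controller reads one voltage per axis, and each reading is proportional to where the two layers make contact.

```python
# Sketch: converting resistive-panel voltage-divider readings into
# touch coordinates. Panel geometry and ADC resolution are assumptions.

ADC_MAX = 1023            # full-scale reading of a hypothetical 10-bit ADC
WIDTH, HEIGHT = 320, 240  # hypothetical panel resolution in pixels

def touch_position(adc_x, adc_y):
    """Each axis acts as a voltage divider: the measured voltage is
    proportional to where the two conductive layers touch."""
    x = adc_x / ADC_MAX * WIDTH
    y = adc_y / ADC_MAX * HEIGHT
    return x, y

# A press at the panel's far corner gives full-scale readings on both axes.
corner = touch_position(ADC_MAX, ADC_MAX)
```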
Surface acoustic wave (SAW) technology uses ultrasonic waves that pass over the touchscreen
panel. When the panel is touched, a portion of the wave is absorbed. This change in the
ultrasonic waves registers the position of the touch event and sends this information to the
controller for processing. Surface wave touch screen panels can be damaged by outside elements.
Contaminants on the surface can also interfere with the functionality of the touchscreen.[6]

Capacitive touchscreen of a mobile phone

A capacitive touchscreen panel consists of an insulator such as glass, coated with a transparent
conductor such as indium tin oxide (ITO).[7][8] As the human body is also a conductor, touching
the surface of the screen results in a distortion of the screen's electrostatic field, measurable as a
change in capacitance. Different technologies may be used to determine the location of the touch.
The location is then sent to the controller for processing.
The simplest of these, surface capacitance, produces a stronger signal than mutual capacitance,
but it is unable to resolve accurately more than one finger, which results in "ghosting", or
misplaced location sensing.

An infrared touchscreen uses an array of X-Y infrared LED and photodetector pairs around the
edges of the screen to detect a disruption in the pattern of LED beams. These LED beams cross
each other in vertical and horizontal patterns. This helps the sensors pick up the exact location of
the touch. A major benefit of such a system is that it can detect essentially any input including a

finger, gloved finger, stylus or pen. It is generally used in outdoor applications and point-of-sale
systems which can't rely on a conductor (such as a bare finger) to activate the touchscreen.
Unlike capacitive touchscreens, infrared touchscreens do not require any patterning on the glass
which increases durability and optical clarity of the overall system.
Optical imaging

This is a relatively modern development in touchscreen technology, in which two or more image
sensors are placed around the edges (mostly the corners) of the screen. Infrared back lights are
placed in the camera's field of view on the other side of the screen. A touch shows up as a
shadow and each pair of cameras can then be triangulated to locate the touch or even measure the
size of the touching object (see visual hull). This technology is growing in popularity, due to its
scalability, versatility, and affordability, especially for larger units.
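The triangulation step can be sketched with simple geometry. In this illustrative model (not any vendor's implementation), two cameras sit at the top corners of the screen, a known baseline apart, and each reports the angle between the top edge and its line of sight to the touch shadow; intersecting the two sight lines yields the touch position.

```python
import math

# Sketch of corner-camera triangulation for an optical touchscreen.
# Baseline length and angle convention are illustrative assumptions.

def triangulate(angle_a, angle_b, baseline=1.0):
    """angle_a / angle_b: angles (radians) from the top edge to the touch,
    as seen from the left and right cameras respectively."""
    ta, tb = math.tan(angle_a), math.tan(angle_b)
    x = baseline * tb / (ta + tb)  # where the two sight lines intersect
    y = x * ta
    return x, y

# A touch seen at 45 degrees by both cameras lies on the centreline:
x, y = triangulate(math.pi / 4, math.pi / 4)
```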

There are several principal ways to build a touchscreen. The key goals are to recognize one or
more fingers touching a display, to interpret the command that this represents, and to
communicate the command to the appropriate application.
In the most popular techniques, the capacitive or resistive approach, there are typically four layers:
1. Top polyester layer coated with a transparent metallic conductive coating on the bottom
2. Adhesive spacer
3. Glass layer coated with a transparent metallic conductive coating on the top
4. Adhesive layer on the backside of the glass for mounting.

When a user touches the surface, the system records the change in the electrical current that
flows through the display.
Dispersive-signal technology, which 3M created in 2002, measures the piezoelectric effect (the
voltage generated when mechanical force is applied to a material) that occurs chemically when
a strengthened glass substrate is touched.
There are two infrared-based approaches. In one, an array of sensors detects a finger touching or
almost touching the display, thereby interrupting light beams projected over the screen. In the
other, bottom-mounted infrared cameras record screen touches.
In each case, the system determines the intended command based on the controls showing on the
screen at the time and the location of the touch.
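That last step, mapping a reported touch location onto whichever on-screen control contains it, is a simple hit test. The control names and rectangles below are made up purely for illustration.

```python
# Sketch of touch hit-testing: given a touch coordinate, find which
# on-screen control (if any) it falls inside. Controls are hypothetical.

controls = {
    "volume_up":   (10, 10, 60, 40),   # (left, top, right, bottom)
    "volume_down": (10, 50, 60, 80),
}

def command_at(x, y):
    """Return the name of the control under (x, y), or None."""
    for name, (left, top, right, bottom) in controls.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None
```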

The development of multipoint touchscreens facilitated the tracking of more than one finger on
the screen, thus operations that require more than one finger are possible. These devices also
allow multiple users to interact with the touchscreen simultaneously.
With the growing acceptance of many kinds of products with an integral touchscreen interface,
the marginal cost of touchscreen technology is routinely absorbed into the products that
incorporate it and is effectively eliminated. As typically occurs with any technology, touchscreen
hardware and software has sufficiently matured and been perfected over more than three decades
to the point where its reliability is proven.
Mobile Devices With Touch Displays

Mobile devices with touchscreens include the Apple iPhone, iPod Touch, iPod Nano and iPad;
phones running Google's Android OS; Palm webOS devices; several LG phones with Verizon's
stock OS (among the most popular being the LG enV Touch, LG Dare and LG Voyager);
UMPCs; and many more.

Serial communication
In telecommunication and computer science, the concept of serial communication is the
process of sending data one bit at a time, sequentially, over a communication channel or
computer bus. This is in contrast to parallel communication, where several bits are sent as a
whole, on a link with several parallel channels. Serial communication is used for all long-haul
communication and most computer networks, where the cost of cable and synchronization
difficulties make parallel communication impractical. Serial computer buses are becoming more
common even at shorter distances, as improved signal integrity and transmission speeds in newer
serial technologies have begun to outweigh the parallel bus's advantage of simplicity (no need for
serializer and deserializer, or SerDes) and to outstrip its disadvantages (clock skew, interconnect
density). The migration from PCI to PCI Express is an example.

Serial buses

Integrated circuits are more expensive when they have more pins. To reduce the number of pins
in a package, many ICs use a serial bus to transfer data when speed is not important. Some
examples of such low-cost serial buses include SPI, I²C, UNI/O, and 1-Wire.
Serial versus parallel

The communication links across which computers (or parts of computers) talk to one another
may be either serial or parallel. A parallel link transmits several streams of data (perhaps
representing particular bits of a stream of bytes) along multiple channels (wires, printed circuit
tracks, optical fibres, etc.); a serial link transmits a single stream of data.

At first sight it would seem that a serial link must be inferior to a parallel one, because it can
transmit less data on each clock tick. However, it is often the case that serial links can be clocked
considerably faster than parallel links, and achieve a higher data rate. A number of factors allow
serial to be clocked at a greater rate:

Clock skew between different channels is not an issue (for unclocked asynchronous serial
communication links)

A serial connection requires fewer interconnecting cables (e.g. wires/fibres) and hence occupies
less space. The extra space allows for better isolation of the channel from its surroundings

Crosstalk is less of an issue, because there are fewer conductors in proximity.

In many cases, serial is a better option because it is cheaper to implement. Many ICs have serial
interfaces, as opposed to parallel ones, so that they have fewer pins and are therefore less
expensive.
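The serializer/deserializer (SerDes) role mentioned earlier can be sketched in a few lines: a byte that would occupy eight parallel wires is instead shifted out one bit per clock tick, then reassembled at the far end. The LSB-first ordering here is an illustrative choice, not a statement about any particular bus.

```python
# Sketch of serialization/deserialization: one byte becomes a sequence
# of single bits and back again. LSB-first order is an assumption.

def serialize(byte):
    """Shift a byte out as eight bits, least significant bit first."""
    return [(byte >> i) & 1 for i in range(8)]

def deserialize(bits):
    """Reassemble eight received bits into a byte."""
    value = 0
    for i, bit in enumerate(bits):
        value |= bit << i
    return value

# A round trip through the "link" recovers the original byte:
assert deserialize(serialize(0x41)) == 0x41
```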

Data transmission
Data transmission, digital transmission or digital communications is the physical transfer of
data (a digital bit stream) over a point-to-point or point-to-multipoint communication channel.
Examples of such channels are copper wires, optical fibres, wireless communication channels,
and storage media. The data is represented as an electromagnetic signal, such as an electrical
voltage, radiowave, microwave or infrared signal.
While analog communication is the transfer of a continuously varying information signal, digital
communications is the transfer of discrete messages. The messages are either represented by a
sequence of pulses by means of a line code (baseband transmission), or by a limited set of
continuously varying wave forms (passband transmission), using a digital modulation method.
The passband modulation and corresponding demodulation (also known as detection) is carried
out by modem equipment. According to the most common definition of digital signal, both
baseband and passband signals representing bit-streams are considered as digital transmission,
while an alternative definition only considers the baseband signal as digital, and passband
transmission of digital data as a form of digital-to-analog conversion.
Data transmitted may be digital messages originating from a data source, for example a computer
or a keyboard. It may also be an analog signal such as a phone call or a video signal, digitized
into a bit-stream for example using pulse-code modulation (PCM) or more advanced source
coding (analog-to-digital conversion and data compression) schemes. This source coding and
decoding is carried out by codec equipment.
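The pulse-code modulation step mentioned above (sampling an analog signal and quantizing each sample to an integer code) can be sketched as follows. The sine-wave source, 8 kHz sample rate and 8-bit depth are illustrative assumptions, chosen to echo telephone-style PCM.

```python
import math

# Sketch of pulse-code modulation (PCM): sample an analog waveform at
# fixed intervals and quantize each sample. Parameters are assumptions.

def pcm_encode(signal, sample_rate, duration, bits=8):
    """Return integer codes for `signal` (a function of time, valued in
    [-1.0, 1.0]) sampled at `sample_rate` Hz for `duration` seconds."""
    levels = 2 ** bits
    samples = []
    for n in range(int(sample_rate * duration)):
        t = n / sample_rate
        value = signal(t)
        code = int((value + 1) / 2 * (levels - 1))  # quantize to [0, levels-1]
        samples.append(code)
    return samples

# Digitize 1 ms of a 1 kHz tone at 8 kHz, 8 bits per sample:
codes = pcm_encode(lambda t: math.sin(2 * math.pi * 1000 * t), 8000, 0.001)
```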

In telecommunication and computer science, parallel communication is a method of sending
several data signals simultaneously over several parallel channels. It contrasts with serial
communication; this distinction is one way of characterizing a communications link.
The basic difference between a parallel and a serial communication channel is the number of
distinct wires or strands at the physical layer used for simultaneous transmission from a device.
Parallel communication implies more than one such wire/strand, in addition to a ground
connection. An 8-bit parallel channel transmits eight bits (or a byte) simultaneously. A serial
channel would transmit those bits one at a time. If both operated at the same clock speed, the
parallel channel would be eight times faster. A parallel channel will generally have additional
control signals such as a clock, to indicate that the data is valid, and possibly other signals for
handshaking and directional control of data transmission.
Distinction between related subjects

The fields of data transmission,[1] digital transmission[2][3] and digital communications[4]
have similar content.
Digital transmission or data transmission traditionally belongs to telecommunications and
electrical engineering. Basic principles of data transmission may also be covered within the
computer science/computer engineering topic of data communications, which also includes
computer networking or computer communication applications and networking protocols, for
example routing, switching and process-to-process communication. Although the Transmission
Control Protocol (TCP) involves the term "transmission", TCP and other transport-layer
protocols are typically discussed not in a textbook or course about data transmission but in the
context of computer networking.
Applications and history

Data (mainly but not exclusively informational) has been sent via non-electronic (e.g. optical,
acoustic, mechanical) means since the advent of communication. Analog signal data has been
sent electronically since the advent of the telephone. However, the first data electromagnetic
transmission applications in modern time were telegraphy (1809) and teletypewriters (1906),
which are both digital signals. The fundamental theoretical work in data transmission and
information theory by Harry Nyquist, Ralph Hartley, Claude Shannon and others during the early
20th century, was done with these applications in mind.
Data transmission is utilized in computers, in computer buses and for communication with
peripheral equipment via parallel ports and serial ports such as RS-232 (1969), FireWire (1995)
and USB (1996). The principles of data transmission have also been utilized in storage media
for error detection and correction since 1951.
Data transmission is utilized in computer networking equipment such as modems (1940), local
area networks (LAN) adapters (1964), repeaters, hubs, microwave links, wireless network access
points (1997), etc.
In telephone networks, digital communication is utilized for transferring many phone calls over
the same copper cable or fiber cable by means of Pulse code modulation (PCM), i.e. sampling
and digitization, in combination with Time division multiplexing (TDM) (1962). Telephone
exchanges have become digital and software controlled, facilitating many value added services.
For example, the first AXE telephone exchange was presented in 1976. Since the late 1980s,
digital communication to the end user has been possible using Integrated Services Digital
Network (ISDN) services. Since the late 1990s, broadband access techniques such as ADSL,
cable modems, fiber-to-the-building (FTTB) and fiber-to-the-home (FTTH) have become
widespread in small offices and homes. The current tendency is to replace traditional
telecommunication services with packet-mode communication such as IP telephony and IPTV.
Transmitting analog signals digitally allows for greater signal processing capability. The ability
to process a communications signal means that errors caused by random processes can be
detected and corrected. Digital signals can also be sampled instead of continuously monitored.
The multiplexing of multiple digital signals is much simpler than the multiplexing of analog
signals.
Because of all these advantages, and because recent advances in wideband communication
channels and solid-state electronics have allowed scientists to fully realize these advantages,
digital communications has grown quickly. Digital communications is quickly edging out analog
communication because of the vast demand to transmit computer data and the ability of digital
communications to do so.

The digital revolution has also resulted in many digital telecommunication applications where
the principles of data transmission are applied. Examples are second-generation (1991) and later
cellular telephony, video conferencing, digital TV (1998), digital radio (1999), telemetry, etc.
Serial and parallel transmission

In telecommunications, serial transmission is the sequential transmission of signal elements of a

group representing a character or other entity of data. Digital serial transmissions are bits sent
over a single wire, frequency or optical path sequentially. Because it requires less signal
processing and presents fewer chances for error than parallel transmission, the transfer rate of
each individual path may be faster. Serial transmission can also be used over longer distances,
since a check digit or parity bit can be sent along it easily.
In telecommunications, parallel transmission is the simultaneous transmission of the signal
elements of a character or other entity of data. In digital communications, parallel transmission is
the simultaneous transmission of related signal elements over two or more separate paths.
Multiple electrical wires are used which can transmit multiple bits simultaneously, which allows
for higher data transfer rates than can be achieved with serial transmission. This method is used
internally within the computer, for example the internal buses, and sometimes externally for such
things as printers. The major issue with this is "skewing": the wires in parallel data transmission
have slightly different properties (not intentionally), so some bits may arrive before others,
which may corrupt the message. A parity bit can help to reduce this, but electrical-wire parallel
data transmission is less reliable over long distances because corrupt transmissions are far more
likely.
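The parity-bit idea works as follows: the sender appends one extra bit so that the total number of 1s is even, and the receiver recomputes the count. A minimal sketch (even parity is an illustrative choice; odd parity works the same way):

```python
# Sketch of an even-parity check: any single flipped bit makes the
# total count of 1s odd, which the receiver can detect (but not locate).

def add_even_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """True if the received bits still have an even number of 1s."""
    return sum(bits_with_parity) % 2 == 0

frame = add_even_parity([0, 1, 0, 0, 0, 0, 0, 1])  # bits of ASCII "A"
assert parity_ok(frame)

corrupted = frame.copy()
corrupted[2] ^= 1          # flip one bit "in transit"
assert not parity_ok(corrupted)
```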
Asynchronous transmission uses start and stop bits to signify the beginning and end of a
transmission. This means that an 8-bit ASCII character is actually transmitted using 10 bits:
"A" ("0100 0001") would become "1 0100 0001 0". The extra bits at the start and end of the
transmission tell the receiver first that a character is coming and secondly that the character has
ended. This method of transmission is used when data is sent intermittently, as opposed to in a
solid stream. The start and stop bits must be of opposite polarity, which allows the receiver to
recognize when the second packet of information is being sent.
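The framing above can be sketched directly. To match the text's example, this sketch uses a start bit of 1 and a stop bit of 0; note that real UARTs conventionally use the opposite polarity (start 0, stop 1).

```python
# Sketch of asynchronous framing: wrap each 8-bit character in a start
# bit and a stop bit of opposite polarity. Polarity follows the text's
# example ("1 0100 0001 0"), not the usual UART convention.

START, STOP = "1", "0"

def frame(char):
    """Frame one character as 10 bits: start + 8 data bits + stop."""
    return START + format(ord(char), "08b") + STOP

def unframe(ten_bits):
    """Strip the start and stop bits and recover the character."""
    assert ten_bits[0] == START and ten_bits[-1] == STOP
    return chr(int(ten_bits[1:-1], 2))

assert frame("A") == "1010000010"   # i.e. 1 0100 0001 0, as in the text
assert unframe(frame("A")) == "A"
```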
Synchronous transmission uses no start and stop bits; instead, it synchronizes transmission
speeds at both the receiving and sending ends using clock signals built into each component. A
continual stream of data is then sent between the two nodes. Because there are no start and stop
bits, the data transfer rate is higher, but more errors will occur: the clocks eventually drift out of
sync, so the receiving device samples the line at the wrong times agreed in the protocol for
sending and receiving data, and some bytes can become corrupted by losing bits. Ways around
this problem include re-synchronization of the clocks and the use of check digits to ensure that
each byte is correctly interpreted and received.
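The check-digit idea can be sketched with a simple XOR checksum over a block of bytes; the receiver recomputes it before accepting the data. This is an illustrative scheme, not the check used by any specific synchronous protocol.

```python
# Sketch of a check digit for synchronous transfer: the sender appends
# an XOR checksum of the block; the receiver verifies it on arrival.

def xor_checksum(data):
    """XOR all bytes together into a single check byte."""
    check = 0
    for byte in data:
        check ^= byte
    return check

def send(data):
    """Append the checksum to the outgoing block."""
    return data + [xor_checksum(data)]

def receive(block):
    """Verify the checksum; reject the block if it does not match."""
    data, check = block[:-1], block[-1]
    if xor_checksum(data) != check:
        raise ValueError("checksum mismatch: re-synchronize and resend")
    return data

assert receive(send([0x48, 0x69])) == [0x48, 0x69]
```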