
A motherboard is the central printed circuit board (PCB) in some complex electronic systems, such as modern personal

computers. The motherboard is sometimes alternatively known as the mainboard, system board, or, on Apple
computers, the logic board.[1] It is also sometimes casually shortened to mobo.


Prior to the advent of the microprocessor, a computer was usually built in a card-cage case or mainframe with
components connected by a backplane consisting of a set of slots themselves connected with wires; in very old designs
the wires were discrete connections between card connector pins, but printed-circuit boards soon became the standard
practice. The central processing unit, memory and peripherals were housed on individual printed circuit boards which
plugged into the backplane.

During the late 1980s and 1990s, it became economical to move an increasing number of peripheral functions onto the
motherboard (see below). In the late 1980s, motherboards began to include single ICs (called Super I/O chips) capable of
supporting a set of low-speed peripherals: keyboard, mouse, floppy disk drive, serial ports, and parallel ports. As of the
late 1990s, many personal computer motherboards supported a full range of audio, video, storage, and networking
functions without the need for any expansion cards at all; higher-end systems for 3D gaming and computer graphics
typically retained only the graphics card as a separate component.

The early pioneers of motherboard manufacturing were Micronics, Mylex, AMI, DTK, Hauppauge, Orchid Technology,
Elitegroup, DFI, and a number of Taiwan-based manufacturers.

Popular personal computers such as the Apple II and IBM PC had published schematic diagrams and other
documentation which permitted rapid reverse-engineering and third-party replacement motherboards. Usually intended
for building new computers compatible with the exemplars, many motherboards offered additional performance or
other features and were used to upgrade the manufacturer's original equipment.

The term mainboard is archaically applied to devices with a single board and no additional expansions or capability; in modern terms this would include embedded systems and the controller boards in televisions, washing machines, etc. A motherboard specifically refers to a printed circuit board whose capabilities can be extended by adding "daughterboards".


An Acer E360 motherboard made by Foxconn, from 2005, with a large number of integrated peripherals. This board's
nForce3 chipset lacks a traditional northbridge.

Most computer motherboards produced today are designed for IBM-compatible computers, which currently account for
around 90% of global PC sales[citation needed]. A motherboard, like a backplane, provides the electrical connections by
which the other components of the system communicate, but unlike a backplane, it also hosts the central processing
unit, and other subsystems and devices.

Motherboards are also used in many other electronic devices, such as mobile phones, stopwatches, and clocks.

A typical desktop computer has its microprocessor, main memory, and other essential components on the motherboard.
Other components such as external storage, controllers for video display and sound, and peripheral devices may be
attached to the motherboard as plug-in cards or via cables, although in modern computers it is increasingly common to
integrate some of these peripherals into the motherboard itself.
An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting
interfaces between the CPU and the various buses and external components. This chipset determines, to an extent, the
features and capabilities of the motherboard.

Modern motherboards include, at a minimum:

sockets (or slots) in which one or more microprocessors are installed[3]

slots into which the system's main memory is installed (typically in the form of DIMM modules containing DRAM chips)

a chipset which forms an interface between the CPU's front-side bus, main memory, and peripheral buses

non-volatile memory chips (usually Flash ROM in modern motherboards) containing the system's firmware or BIOS

a clock generator which produces the system clock signal to synchronize the various components

slots for expansion cards (these interface to the system via the buses supported by the chipset)

power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards.[4]

The Octek Jaguar V motherboard from 1993.[5] This board has 6 ISA slots but few onboard peripherals, as evidenced by
the lack of external connectors.

Additionally, nearly all motherboards include logic and connectors to support commonly-used input devices, such as
PS/2 connectors for a mouse and keyboard. Early personal computers such as the Apple II or IBM PC included only this
minimal peripheral support on the motherboard. Occasionally, video interface hardware was also integrated into the motherboard; for example, on the Apple II and, rarely, on IBM-compatible computers such as the IBM PCjr. Additional peripherals such as disk controllers and serial ports were provided as expansion cards.

Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly
always include heat sinks and mounting points for fans to dissipate excess heat.


Main article: CPU socket

A CPU socket or CPU slot is an electrical component that attaches to a printed circuit board (PCB) and is designed to
house a CPU (also called a microprocessor). It is a special type of integrated circuit socket designed for very high pin
counts. A CPU socket provides many functions, including providing a physical structure to support the CPU, providing
support for a heat sink, facilitating replacement (as well as reducing cost) and most importantly forming an electrical
interface both with the CPU and the PCB. CPU sockets are found on the motherboard in most desktop and server computers, particularly those based on the Intel x86 architecture; laptops typically use surface-mount CPUs.


Block diagram of a modern motherboard, which supports many on-board peripheral functions as well as several
expansion slots.

With the steadily declining costs and size of integrated circuits, it is now possible to include support for many peripherals
on the motherboard. By combining many functions on one PCB, the physical size and total cost of the system may be
reduced; highly-integrated motherboards are thus especially popular in small form factor and budget computers.
For example, the ECS RS485M-M,[6] a typical modern budget motherboard for computers based on AMD processors,
has on-board support for a very large range of peripherals:

disk controllers for a floppy disk drive, up to 2 PATA drives, and up to 6 SATA drives (including RAID 0/1 support)

integrated ATI Radeon graphics controller supporting 2D and 3D graphics, with VGA and TV output

integrated sound card supporting 8-channel (7.1) audio and S/PDIF output

Fast Ethernet network controller for 10/100 Mbit networking

USB 2.0 controller supporting up to 12 USB ports

IrDA controller for infrared data communication (e.g. with an IrDA-enabled cellular phone or printer)

temperature, voltage, and fan-speed sensors that allow software to monitor the health of computer components

Expansion cards to support all of these functions would have cost hundreds of dollars even a decade ago; as of April 2007, however, such highly-integrated motherboards are available for as little as $30 in the USA.


A typical motherboard of 2009 will have a different number of connections depending on its standard. A standard ATX motherboard will typically have one PCI-E x16 slot for a graphics card, two PCI slots for various expansion cards, and one PCI-E x1 slot; PCI-E will eventually supersede PCI.

A standard Super ATX motherboard will have one PCI-E x16 slot for a graphics card. It will also have a varying number of PCI and PCI-E x1 slots, and can sometimes also have a PCI-E x4 slot. This varies between brands and models.

Some motherboards have two PCI-E x16 slots, to allow more than two monitors without special hardware or to allow use of special graphics technologies called SLI (for Nvidia) and CrossFire (for ATI). These link two graphics cards together for better performance in graphically intensive tasks such as gaming and video editing.

As of 2007, virtually all motherboards come with at least four USB ports on the rear, with at least two headers on the board for wiring additional front-panel ports built into the computer's case. Ethernet, the standard port for connecting the computer to a network or a modem, is also now included. A sound chip is always included on the motherboard, allowing sound output without any extra components; this makes computers far more multimedia-capable than before. Cheaper machines now often have their graphics chip built into the motherboard rather than on a separate card.


Motherboards are generally air cooled with heat sinks often mounted on larger chips, such as the northbridge, in
modern motherboards. If the motherboard is not cooled properly, it can cause the computer to crash. Passive cooling,
or a single fan mounted on the power supply, was sufficient for many desktop computer CPUs until the late 1990s; since
then, most have required CPU fans mounted on their heat sinks, due to rising clock speeds and power consumption.
Most motherboards have connectors for additional case fans as well. Newer motherboards have integrated temperature
sensors to detect motherboard and CPU temperatures, and controllable fan connectors which the BIOS or operating
system can use to regulate fan speed. Some higher-powered computers (which typically have high-performance
processors and large amounts of RAM, as well as high-performance video cards) use a water-cooling system instead of
many fans.

Some small form factor computers and home theater PCs designed for quiet and energy-efficient operation boast fanless designs. This typically requires the use of a low-power CPU, as well as careful layout of the motherboard and other components to allow for heat sink placement.

A 2003 study[7] found that some spurious computer crashes and general reliability issues, ranging from screen image
distortions to I/O read/write errors, can be attributed not to software or peripheral hardware but to aging capacitors on
PC motherboards. Ultimately this was shown to be the result of a faulty electrolyte formulation.[8]

For more information on premature capacitor failure on PC motherboards, see capacitor plague.

Motherboards use electrolytic capacitors to filter the DC power distributed around the board. These capacitors age at a
temperature-dependent rate, as their water based electrolytes slowly evaporate. This can lead to loss of capacitance
and subsequent motherboard malfunctions due to voltage instabilities. While most capacitors are rated for 2000 hours
of operation at 105 °C,[9] their expected design life roughly doubles for every 10 °C below this. At 45 °C a lifetime of 15
years can be expected. This appears reasonable for a computer motherboard; however, many manufacturers have delivered substandard capacitors,[citation needed] which significantly reduce life expectancy. Inadequate case cooling
and elevated temperatures easily exacerbate this problem. It is possible, but tedious and time-consuming, to find and
replace failed capacitors on PC motherboards; it is less expensive to buy a new motherboard than to pay for such a
repair.[citation needed]
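The doubling rule above lends itself to a quick calculation. The sketch below, assuming the 2000-hour/105 °C rating and the rule of thumb from the text, reproduces the roughly 15-year figure; the function name is purely illustrative.

```python
# Electrolytic capacitor life estimate: rated life roughly doubles for
# every 10 degrees C below the rated temperature (rule of thumb).
def expected_life_hours(rated_hours, rated_temp_c, operating_temp_c):
    """Estimate capacitor life at a given operating temperature."""
    return rated_hours * 2 ** ((rated_temp_c - operating_temp_c) / 10)

# A 2000-hour, 105 degree C capacitor running at 45 degrees C:
hours = expected_life_hours(2000, 105, 45)
years = hours / (24 * 365)
print(round(hours), round(years, 1))  # 128000 14.6
```

At 45 °C the rating doubles six times (2000 × 2⁶ = 128,000 hours), which is just under 15 years of continuous operation, consistent with the estimate in the text.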


Main article: Comparison of computer form factors


Motherboards are produced in a variety of sizes and shapes ("form factors"), some of which are specific to individual
computer manufacturers. However, the motherboards used in IBM-compatible commodity computers have been
standardized to fit various case sizes. As of 2007, most desktop computer motherboards use one of these standard form factors, even those found in Macintosh and Sun computers, which have not traditionally been built from commodity components.

Laptop computers generally use highly integrated, miniaturized, and customized motherboards. This is one of the
reasons that laptop computers are difficult to upgrade and expensive to repair. Often the failure of one laptop
component requires the replacement of the entire motherboard, which is usually more expensive than a desktop
motherboard due to the large number of integrated components.


Nvidia SLI and ATI Crossfire technology allows two or more of the same series graphics cards to be linked together to
allow faster graphics-processing capabilities. Almost all medium- to high-end Nvidia cards and most high-end ATI cards
support the technology.

Both require compatible motherboards: two PCI-E x16 slots are needed so that two cards can be inserted into the computer. The same function can be achieved on Nvidia's 650i motherboards with a pair of x8 slots. Originally, triple CrossFire was achieved with two x16 slots and one x8 slot, albeit at a slower speed. ATI opened the technology up to Intel in 2006, and all new Intel chipsets now support CrossFire.
SLI is a little more proprietary in its requirements: it needs a motherboard with Nvidia's own nForce chipset series to run (with the exception of select Intel X58 chipset based motherboards).

It is important to note that SLI and Crossfire will not usually scale to 2x the performance of a single card when using a
dual setup. They also do not double the effective amount of VRAM or memory bandwidth.



Motherboards contain some non-volatile memory to initialize the system and load an operating system from some
external peripheral device. Microcomputers such as the Apple II and IBM PC used ROM chips, mounted in sockets on the
motherboard. At power-up, the central processor would load its program counter with the address of the boot ROM,
and start executing ROM instructions, displaying system information on the screen and running memory checks, which would in turn attempt to load an operating system from an external or peripheral device (such as a disk drive). If none is available, the computer performs tasks from other memory stores or displays an error message, depending on the model and design of the computer and the version of the BIOS.

Most modern motherboard designs use a BIOS, stored in an EEPROM chip soldered to the motherboard, to bootstrap
the motherboard. (Socketed BIOS chips are widely used, also.) By booting the motherboard, the memory, circuitry, and
peripherals are tested and configured. This process is known as a computer Power-On Self Test (POST) and may include
testing some of the following devices:

floppy drive

network controller

CD-ROM drive

DVD-ROM drive

SCSI hard drive

IDE, EIDE, or SATA hard drive

External USB memory storage device

Any of the above devices can store machine code instructions to load an operating system or a program.
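The boot flow described above, in which the firmware tries each configured device in turn, can be sketched as follows. The device names and their order are hypothetical; real firmware exposes a user-configurable boot order.

```python
# Sketch of a BIOS-style boot sequence: try each configured device in
# order and boot from the first one that holds loadable code.
# Device names and order are illustrative, not from any real firmware.
BOOT_ORDER = ["floppy", "cdrom", "sata_hdd", "usb_storage", "network"]

def find_boot_device(devices_with_code):
    """Return the first device in BOOT_ORDER that contains boot code."""
    for device in BOOT_ORDER:
        if device in devices_with_code:
            return device
    return None  # no bootable device: firmware would show an error

# Example: only the SATA drive and a USB stick hold boot code.
print(find_boot_device({"sata_hdd", "usb_storage"}))  # sata_hdd
```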

ATX (Advanced Technology Extended) is a computer form factor designed by Intel in 1995. It was the first big change in
computer case, motherboard, and power supply design in many years. ATX overtook AT completely as the default form
factor for new systems. ATX addressed many of the AT form factor's annoyances that had frustrated system builders.
Other standards for smaller boards (including microATX, FlexATX and mini-ITX) usually keep the basic rear layout but
reduce the size of the board and the number of expansion slot positions. In 2003, Intel announced the BTX standard,
intended as a replacement for ATX. As of 2009, the ATX form factor remains a standard for do-it-yourselfers; BTX has, however, made inroads into pre-built systems.

The official specifications were released by Intel in 1995, and have been revised numerous times since, the most recent
being version 2.2,[1] released in 2004.
A full size ATX board is 12 in × 9.6 in (305 mm × 244 mm). This allows many ATX form factor chassis to accept microATX
boards as well.



ATX I/O plates

On the back of the system, some major changes were made. The AT standard had only a keyboard connector and
expansion slots for add-on card backplates. Any other onboard interfaces (such as serial and parallel ports) had to be
connected via flying leads to connectors which were mounted either on spaces provided by the case or brackets placed
in unused expansion slot positions. ATX allowed each motherboard manufacturer to put these ports in a rectangular
area on the back of the system, with an arrangement they could define themselves (though a number of general
patterns depending on what ports the motherboard offers have been followed by most manufacturers). Generally the
case comes with a snap out panel, also known as an I/O plate, reflecting one of the common arrangements. If necessary,
I/O plates can be replaced to suit the arrangement on the motherboard that is being fitted and the I/O plates are usually
included when purchasing a motherboard. Panels were also made that allowed fitting an AT motherboard in an ATX case.

ATX also made the PS/2-style mini-DIN keyboard and mouse connectors ubiquitous. AT systems used a 5-pin DIN
connector for the keyboard, and were generally used with serial port mice (although PS/2 mouse ports were also found
on some systems). Many modern motherboards are phasing out the PS/2-style keyboard and mouse connectors in favor
of the more modern Universal Serial Bus. Other legacy connectors that are slowly being phased out of modern ATX
motherboards include 25-pin parallel ports and 9-pin RS-232 serial ports. In their place are on-board peripheral ports such as Ethernet, FireWire, eSATA, audio ports (both analog and S/PDIF), video (analog D-sub, DVI, or HDMI), and extra USB ports.


The ATX specification requires the power supply to produce three main outputs: +3.3 V, +5 V and +12 V. Low-power −12 V and +5 VSB (standby) supplies are also required. A −5 V output was originally required, but it is now completely obsolete.

Originally the motherboard was powered by one 20-pin connector. An ATX power supply provides a number of peripheral power connectors and (in modern systems) two connectors for the motherboard: a 4-pin auxiliary connector providing additional power to the CPU, and a main 24-pin power supply connector, an extension of the original 20-pin version.


PS_ON# or "Power On" is a signal from the motherboard to the power supply. When the line is connected to GND (by
the motherboard), the power supply turns on. It is internally pulled up to +5 V inside the power supply.[4] [5]

PWR_OK or "Power Good" is an output from the power supply that indicates that its output has stabilized and is ready
for use. It remains low for a brief time (100–500 ms) after the PS_ON# signal is pulled low.[6]

+5 VSB or "+5 V standby" supplies power even when the rest of the supply lines are off. This can be used to power the
circuitry that controls the Power On signal.

+3.3 V sense should be connected to the +3.3 V on the motherboard or its power connector. This connection allows for
remote sensing of the voltage drop in the power supply wiring.
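The PS_ON#/PWR_OK handshake described above can be modeled as a small sketch. The 500 ms delay follows the worst case of the 100–500 ms window mentioned in the text; the class and its interface are illustrative, not taken from any specification.

```python
# Toy model of the ATX soft-power handshake: the motherboard pulls
# PS_ON# low (active-low), the supply starts, and PWR_OK goes high
# only after the outputs have stabilized (100-500 ms per the text;
# the worst case of 500 ms is assumed here).
class AtxSupply:
    def __init__(self):
        self.ps_on_n = 1   # pulled up to +5 V inside the PSU: supply off
        self.elapsed_ms = 0

    def assert_power_on(self):
        self.ps_on_n = 0   # motherboard grounds PS_ON#
        self.elapsed_ms = 0

    def tick(self, ms):
        self.elapsed_ms += ms

    @property
    def pwr_ok(self):
        # High once rails are declared stable.
        return self.ps_on_n == 0 and self.elapsed_ms >= 500

psu = AtxSupply()
psu.assert_power_on()
psu.tick(100)
print(psu.pwr_ok)  # False: rails not yet declared stable
psu.tick(400)
print(psu.pwr_ok)  # True
```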

Generally, supply voltages must be within ±5% of their nominal values at all times. The little-used negative supply voltages, however, have a ±10% tolerance. There is a specification for ripple in a 10–20 MHz bandwidth:[4]

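A minimal sketch of the tolerance rule above: ±5% on the positive rails, ±10% on the little-used negative rails. The function name and the sample readings are illustrative.

```python
# Check an ATX rail voltage against the tolerances given above:
# +/-5% on the main rails, +/-10% on the negative rails.
def rail_in_spec(nominal, measured):
    tolerance = 0.10 if nominal < 0 else 0.05
    return abs(measured - nominal) <= abs(nominal) * tolerance

print(rail_in_spec(12.0, 12.4))    # True  (within +/-0.6 V)
print(rail_in_spec(3.3, 3.6))      # False (outside +/-0.165 V)
print(rail_in_spec(-12.0, -13.0))  # True  (within +/-1.2 V)
```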

AT-style computer cases had a power button that was directly connected to the system computer power supply (PSU).
The general configuration was a double-pole latching mains voltage switch with the four pins connected to wires from a
four-core cable. The wires were either soldered to the power button (making it difficult to replace the power supply if it
failed) or blade receptacles were used.

An ATX power supply does not directly connect to the system power button, allowing the computer to be turned off via
software. However, many ATX power supplies have a manual switch on the back to ensure the computer is truly off and
no power is being sent to the components. With this rear switch on, standby energy still flows to the components even when the computer appears to be "off." This state is known as soft-off or standby; it can be used for remote wake-up through Wake-on-Ring or Wake-on-LAN, but it is generally used to power on the computer through a front switch.



The power supply's connection to the motherboard was changed. Older AT power supplies had two similar connectors
that could be accidentally switched, usually causing short-circuits and irreversible damage to the motherboard. ATX used
one large, keyed connector instead, making a reversed connection very difficult. The new connector also provided a 3.3
volt source, removing the need for motherboards to derive this voltage from one of the other power rails. Some
motherboards, particularly late model AT form factor offerings, supported both AT and ATX PSUs.

When not working with an ATX motherboard, one can fully turn on the power supply (it is always partly on) only by shorting pin 16 (the green wire) on the ATX connector to a black wire (ground), since the ATX PSU relies on the motherboard's power switch. In order to use an old PC power supply for tasks other than powering a PC, one must also be careful to
observe the minimum load requirements of the PSU; if some load is not provided, the supply may shut down, output
incorrect voltages, or otherwise malfunction.
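The jumper procedure and the minimum-load caveat above can be summarized as a checklist sketch. The step wording and the function name are illustrative; the pin-16/green-wire detail is taken from the text.

```python
# Sketch of the bench "jumper test" described above. Pin 16 (green,
# PS_ON#) is from the text; treating any black wire as ground follows
# the standard ATX color code. This is a checklist, not real I/O.
def jumper_test_steps(min_load_attached):
    steps = [
        "unplug the supply from the mains",
        "bridge pin 16 (green, PS_ON#) to any black ground pin",
    ]
    if not min_load_attached:
        steps.append("attach a minimum load first, or the supply may "
                     "shut down or output incorrect voltages")
    steps.append("reconnect mains power; the fan should spin up")
    return steps

for step in jumper_test_steps(min_load_attached=False):
    print("-", step)
```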


ATX was originally designed with the power supply drawing air into the case and exhausting it down onto the
motherboard. The plan was to deliver cool air directly to the CPU's and power regulation circuitry's location, which was
usually at the top of the motherboard in ATX designs. This was not particularly useful for a variety of reasons. Early ATX
systems simply didn't have processors or components with thermal output that required special cooling considerations.
Later ATX systems with significantly greater heat output would not be aided in cooling by a power supply, because it
would be delivering its often significantly heated exhaust into the case. As a result, the ATX specification was changed to
make PSU airflow direction optional,[1] and modern ATX power supplies universally exhaust air from the case.



ATX, introduced in late 1995, defined three types of power connectors:

4-pin Molex connector (actually an AMP MATE-N-LOK), transferred directly from the AT standard: +5 V and +12 V for P-ATA hard disks, CD-ROMs, 5.25 in floppy drives and other peripherals.[7]

4-pin Berg floppy connector, transferred directly from the AT standard: +5 V and +12 V for 3.5 in floppy drives and other peripherals.[8]

20-pin Molex Mini-Fit Jr. main motherboard connector, new to the ATX standard.

The power distribution specification defined that most of the PSU's power should be provided on the 5 V and 3.3 V rails, because most electronic components (CPU, RAM, chipset, PCI, AGP and ISA cards) used 5 V or 3.3 V for their power supply. The 12 V rail was only used by fans and the motors of peripheral devices (HDD, FDD, CD-ROM, etc.).

The original ATX power supply specification remained mostly unrevised until 2000.

While designing the Pentium 4 platform in 1999/2000, the standard 20-pin ATX power connector was deemed
inadequate to supply increasing electrical load requirements. So, ATX was significantly revised into ATX12V 1.0 standard
(that is why ATX12V 1.x is sometimes inaccurately called ATX-P4). ATX12V 1.x was also adopted by Athlon XP and Athlon
64 systems.


The main changes and additions in ATX12V 1.0 (released in February 2000) were:

An extra 4-pin, 12-volt connector to power the CPU. Formally called the +12 V Power Connector, this is commonly
referred to as the P4 connector because this was first needed to support the Pentium 4 processor. (Older processors
were powered from the 5 V rail.)

A supplemental 6-pin AUX connector providing additional 3.3 V and 5 V supplies to the motherboard, if it needed it.
Although it was provided by every ATX12V 1.x PSU (as required per standard), it was rarely required by motherboards.

Increased the power on the 12 V rail (power on 5 V and 3.3 V rails remained mostly the same).

Both additional connectors were also of the Molex Mini-Fit, Jr. type.


This is a minor revision from August 2000. The power on the 3.3 V rail was slightly increased, among other lesser changes.

A relatively minor revision from January 2002. The only significant change was that the −5 V rail was no longer required (it became optional). This voltage was very rarely used, only on some old systems with certain ISA add-on cards.


Introduced in April 2003 (a month after 2.0). Lots of relatively minor changes. Some of them are:

Slightly increased the power on the 12 V rail.

Defined minimal required PSU efficiencies for light and normal load.

Defined acoustic levels.

Introduction of Serial ATA power connector (but defined as optional).


ATX12V 2.x brought a very significant design change regarding power distribution. When analyzing the then-current PC
architectures' power demands, it was determined that it would be much easier (from both economic and engineering
perspectives) to power most PC components from 12 V rails, instead of from 3.3 V and 5 V rails.


The above conclusion was incorporated in ATX12V 2.0 (introduced in February 2003), which defined quite different
power distribution from ATX12V 1.x:
The main ATX power connector was extended to 24 pins (it is backwards compatible). The extra four pins provide one
additional 3.3 V, 5 V and 12 V circuit.

The 6-pin AUX connector from ATX12V 1.x was removed because the extra 3.3 V and 5 V circuits which it provided are
now incorporated in the 24-pin main connector.

Most power is now provided on 12 V rails. The standard specifies that two independent 12 V rails (12 V2 for the 4 pin
connector and 12 V1 for everything else) with independent overcurrent protection are needed to meet the power
requirements safely (some very high power PSUs have more than two rails, recommendations for such large PSUs are
not given by the standard).

The power on 3.3 V and 5 V rails was significantly reduced.

Serial ATA power cable is required.

Many other specification changes and additions.
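The two-rail arrangement can be illustrated with a small budget check. The 18 A overcurrent limits below are hypothetical example values, not figures from the standard.

```python
# Illustration of the ATX12V 2.x split: 12V2 feeds the 4-pin CPU
# connector, 12V1 feeds everything else, each with its own overcurrent
# limit. The 18 A limits below are hypothetical example values.
RAIL_LIMIT_AMPS = {"12V1": 18.0, "12V2": 18.0}

def rails_ok(loads_amps):
    """True if every rail stays under its overcurrent limit."""
    return all(loads_amps[r] <= RAIL_LIMIT_AMPS[r] for r in loads_amps)

# CPU drawing 9 A on 12V2; drives, fans and cards drawing 14 A on 12V1.
print(rails_ok({"12V1": 14.0, "12V2": 9.0}))   # True
print(rails_ok({"12V1": 20.0, "12V2": 9.0}))   # False: 12V1 over limit
```

The point of the independent limits is that a heavy CPU load cannot starve or endanger the rest of the system: each rail trips on its own budget.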


This is a minor revision from June 2004. The −5 V rail was completely removed from the specification.


This is a minor revision from March 2005. The power was slightly increased on all rails. Efficiency requirements changed.


Another minor revision; the main change was a call for higher-quality connectors on the motherboard power connector.



This is an ATX12V power supply derivative made by AMD to power its Athlon MP (dual processor) platform. It was used
only on high-end Athlon MP motherboards. It has a special 8-pin supplemental connector for motherboard, so an AMD
GES PSU is required for such motherboards (those motherboards will not work with ATX(12 V) PSUs).


EPS12V is defined in SSI, and used primarily by SMP/multi-core systems such as Core 2, Core i7, Opteron and Xeon. It has
a 24-pin main connector (same as ATX12V v2.x), an 8-pin secondary connector, and an optional 4-pin tertiary connector.
To ensure backwards compatibility with ATX12V, many power supply makers implement the 8-pin connector as two
combinable 4-pin connectors.



Because video card power demands have dramatically increased over the 2000s, some high-end graphics cards have
power demands that exceed AGP or PCIe slot capabilities. For these cards, supplementary power was delivered through
a standard 4-pin peripheral or floppy power connector. Midrange and high-end PCI Express-based video cards
manufactured after 2004 typically use a standard 6 or 8-pin PCIe power connector directly from the PSU.


Although the ATX power supply specifications are nominally compatible in both directions (both electrically and physically), it is unwise to mix old motherboards/systems with new PSUs, and vice versa.

There are two main reasons for this:

The power distribution biases across 3.3V, 5V and 12V rails are very different between older and newer ATX PSU
designs, as well as between older and newer PC system designs.

Older PSUs may not have connectors which are required for newer PC systems to properly operate.

Practical guidance on what to mix and what not to mix:

Older systems (until Pentium 4 and Athlon XP platforms) were designed to draw most power from 5 V and 3.3 V rails.

Pentium 4 and Athlon XP systems draw much more power from the 12 V rail than from the 5 V and 3.3 V rails.

Newer systems (Athlon 64, Core Duo etc.) draw most power from 12 V rails.

Original ATX PSUs have power distribution designed for pre-P4/XP PCs. They lack the supplemental 4-pin 12-volt
connector for the CPU, so they simply cannot be used with P4/XP era or newer motherboards (adapters do exist but
power drain on the 12 V rail must be checked very carefully if using them).

ATX12V 1.x PSUs have power distribution designed for P4/XP PCs, but they are also well suited to older PCs, since they supply plenty of power (relative to old PCs' needs) both on 12 V and on 5 V/3.3 V. Some of them might not have the −5 V rail needed by certain special ISA add-in cards. It is not recommended to use ATX12V 1.x PSUs with ATX12V 2.x motherboards, because those systems require much more power on 12 V, and much less on 3.3 V/5 V, than ATX12V 1.x PSUs are designed to deliver.

ATX12V 2.x PSUs have power distribution designed for late P4/XP PCs and for Athlon 64 and Core Duo PCs. They can be
used with earlier P4/XP PCs, but the power distribution will be significantly suboptimal, so a more powerful ATX12V 2.0
PSU should be used to compensate for that discrepancy. ATX12V 2.x PSUs can also be used with pre-P4/XP systems, but
the power distribution will be greatly suboptimal (12 V rails will be mostly unused, while the 3.3 V/5 V rails will be
overloaded), so this is not recommended.
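The guidance above can be condensed into a lookup table. The era labels and verdict strings are a paraphrase of the text, not official terminology.

```python
# Rough encoding of the mixing guidance above. Keys are (PSU era,
# system era); values summarize suitability. Labels are illustrative.
COMPAT = {
    ("ATX",        "pre-P4/XP"):  "ok",
    ("ATX",        "P4/XP"):      "no: lacks 4-pin 12 V CPU connector",
    ("ATX12V 1.x", "pre-P4/XP"):  "ok: ample power for older systems",
    ("ATX12V 1.x", "P4/XP"):      "ok: designed for this era",
    ("ATX12V 1.x", "ATX12V 2.x"): "not recommended: too little 12 V power",
    ("ATX12V 2.x", "pre-P4/XP"):  "not recommended: 3.3 V/5 V overloaded",
    ("ATX12V 2.x", "P4/XP"):      "usable, but distribution suboptimal",
    ("ATX12V 2.x", "ATX12V 2.x"): "ok: designed for this era",
}

print(COMPAT[("ATX", "P4/XP")])  # no: lacks 4-pin 12 V CPU connector
```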

Special note: Proprietary brand-name or high-end workstation/server designs do not fit into these guidelines. They
usually require an exactly matching power supply unit.


Older Dell computers, particularly those from the Pentium II and III times, are notable for using proprietary power wiring
on their power supplies and motherboards. While the motherboard connectors appear to be standard ATX, and will
actually fit a standard power supply, they are not compatible. Not only have wires been switched from one location to
another, but the number of wires for a given voltage has been changed. Thus, the pins cannot simply be rearranged.[2]

The change affects not only 20-pin ATX connectors, but also auxiliary 6-pin connectors. Modern Dell systems may use
standard ATX connectors.[3] Dell PC owners should be careful when attempting to mix non-Dell motherboards and
power supplies, as it can cause damage to the power supply or other components. If the power supply color coding on
the wiring does not match ATX standards, then it is probably proprietary. Wiring diagrams for Dell systems are usually
available on Dell's support page.

Austria's ISO, 2-letter country code
.at, the Internet country code top-level domain for Austria
Anguilla, a World Meteorological Organization country code
Ashmore and Cartier Islands (FIPS 10-4 territory code and obsolete NATO country code)
Atchison County, Kansas (county code)
Asti, a province of Italy in the ISO 3166-2:IT code

Abel Tasman, a seventeenth century Dutch explorer and merchant
Aphex Twin, electronic musician
Ashley Tisdale, actress and singer

Astatine, the chemical symbol for the element
Ampere-turn (symbol), an International System of Units (MKS) unit of magnetomotive force
a subdivision equal to 1/100 of a Kip, the unit of currency in Laos
attotesla, an SI unit of magnetic flux density
technical atmosphere (symbol), physical unit of pressure
Aarne-Thompson classification system
Acceptance test
Análisis Transaccional (Spanish "Transactional analysis"), a psychological method
Ataxia telangiectasia, an immunodeficiency disorder
Appropriate Technology
Assistive technology
Automatic transmission

IBM Personal Computer/AT
AT form factor for motherboards and computer cases
The common name for the 5-pin DIN keyboard connector
Hayes command set for computer modems (all commands begin with the characters "AT")
"@", the punctuation symbol now typically used in e-mail addresses
at (Unix), to schedule tasks to be performed at a later date
at (Windows), to schedule commands to be run at a certain time

AT Field, a force field in the anime Neon Genesis Evangelion
Air Trecks, fictional inline skates in the manga and anime Air Gear
See Walker (Star Wars) for AT-AT, AT-ST, AT-TE, etc., fictional attack vehicles from Star Wars
Alimentary tract
Atmel Corporation
Anaerobic threshold
Tamper resistance aka Anti-tamper, resistance to tampering of a product, package, or system
Anti-tank weapons
Counter-terrorism, also known as "anti-terrorism" (AT)
Appalachian Trail
Audit trail
Aviation Electronics Technician, a rating in the United States Navy
Royal Air Maroc (IATA airline designator)
microATX, also known as µATX (sometimes written as mATX[1] or uATX[2][3] on internet forums), is a small form
factor standard for motherboards that was introduced in December 1997.[4] The maximum size of a microATX
motherboard is 9.6 × 9.6 inches (244 × 244 mm), but some microATX boards can be as small as 6.75 × 6.75 inches
(171.45 × 171.45 mm).[5] The standard ATX size is 25% longer, at 305 × 244 mm (12" × 9.6").
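The size relationship above can be checked with a short sketch (dimensions taken from the text):

```python
# Board dimensions from the text, in millimetres.
ATX = (305, 244)                   # 12" x 9.6"
MICRO_ATX = (244, 244)             # 9.6" x 9.6" maximum microATX size
MIN_MICRO_ATX = (171.45, 171.45)   # smallest microATX board cited

# ATX is longer than microATX along one edge only.
length_ratio = ATX[0] / MICRO_ATX[0]
print(f"ATX is {(length_ratio - 1) * 100:.0f}% longer")  # -> 25% longer

# The shared 244 mm edge means the area saving comes entirely from that edge.
area_saving = 1 - (MICRO_ATX[0] * MICRO_ATX[1]) / (ATX[0] * ATX[1])
print(f"a maximum-size microATX board is {area_saving * 100:.0f}% smaller in area")
```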

Currently available microATX motherboards support CPUs from VIA, Intel or AMD.


microATX was explicitly designed to be backward-compatible with ATX. The mounting points of microATX motherboards
are a subset of those used on full-size ATX boards, and the I/O panel is identical. Thus, microATX motherboards can be
used in full-size ATX cases. Furthermore, most microATX motherboards use the same power connectors as ATX
motherboards,[6] permitting the use of full-size ATX power supplies with microATX boards.

microATX boards often use the same chipsets (northbridges and southbridges) as full-size ATX boards, allowing them to
use many of the same components. However, since microATX cases are typically much smaller than ATX cases, they
usually have fewer expansion slots.

Most modern ATX motherboards have five or more PCI or PCI-Express expansion slots, while microATX boards
typically have only four (four being the maximum permitted by the specification). To conserve expansion slots and
case space, many manufacturers produce microATX motherboards with a full range of integrated peripherals
(especially integrated graphics),
which may serve as the basis for small form factor and media center PCs. For
example, the Asus A8N-VM CSM motherboard features
onboard GeForce 6 graphics, AC97 audio, and gigabit Ethernet (among
others), thus freeing up the expansion slots that would have been used for a
graphics card, sound card, and Ethernet card. In recent years it has become common even for ATX boards to
integrate all of these components, as much of this functionality is contained in the typical northbridge/southbridge
pair. With the "must-have" functions already present on the motherboard, the need for many expansion slots has
faded, and adoption of microATX has increased, with microATX boards even being used in ATX cases.

A more modern limitation of a microATX case is its reduced number of drive bays. Current southbridges support up
to six SATA devices, in addition to up to four legacy IDE devices. The full range of connectors is commonly found on
microATX boards, and can be fully exploited if the board is mounted in an ATX case.

In addition, some microATX cases require the use of low-profile PCI cards and use power supplies with non-standard form factors.


The IDE (Integrated Drive Electronics) or PATA (Parallel Advanced Technology Attachment) port controls mass storage
devices such as hard disks, as well as ATAPI (Advanced Technology Attachment Packet Interface) devices such as
CD-ROM drives.

In the IDE system, the device controller is integrated into the drive's own electronics. The various versions of the ATA
standard are:
Parallel ATA (some use the abbreviation PATA)
ATA-1, with a speed of 8.3 MB/s.
ATA-2, which supports fast block transfers and multiword DMA. Speed of 13.3 MB/s.
ATA-3, a revised and improved ATA-2. Supports speeds of 16.6 MB/s.
ATA-4, known as Ultra-DMA or ATA-33, supporting transfers at 33 MB/s.
ATA-5 or Ultra ATA/66, originally proposed by Quantum for transfers at 66 MB/s.
ATA-6 or Ultra ATA/100, with support for speeds of 100 MB/s.
ATA-7 or Ultra ATA/133, with support for speeds of 133 MB/s.
Serial ATA, a redesign of ATA with new connectors (power and data), cables and supply voltage, commonly known
as SATA.
ATA over Ethernet, an implementation of ATA commands over Ethernet used to build a SAN, presented as an
alternative to iSCSI.
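The version-to-speed progression in the list above can be captured in a small lookup, handy for sanity checks (figures as given in the text, in MB/s):

```python
# Maximum transfer rate per parallel ATA revision, in MB/s (from the list above).
ATA_SPEEDS = {
    "ATA-1": 8.3,
    "ATA-2": 13.3,
    "ATA-3": 16.6,
    "ATA-4 (ATA-33)": 33,
    "ATA-5 (Ultra ATA/66)": 66,
    "ATA-6 (Ultra ATA/100)": 100,
    "ATA-7 (Ultra ATA/133)": 133,
}

# Print the progression; note each Ultra-DMA generation roughly doubles
# the rate of the one before it.
for name, speed in ATA_SPEEDS.items():
    print(f"{name:24} {speed:>6} MB/s")
```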

Initially, IDE controllers came as expansion cards, mostly ISA, and were only integrated on the motherboards of
brand-name machines such as IBM, Dell or Commodore. The most widespread version was the multi-I/O card, which
combined the IDE and floppy controllers with the RS-232 serial ports and the parallel port; only high-end models
included SIMM sockets and connectors for caching the disk. Increasing integration eventually allowed a single chip to
do all of this work.

With the arrival of the PCI bus, IDE controllers were almost always included on the motherboard, initially as a separate
chip and later as part of the chipset. The interface is usually presented as two connectors, each supporting two
devices. Of two hard disks on one cable, one must be set as slave and the other as master so that the controller knows
which device to send data to or receive data from. This configuration is done with jumpers. A hard disk can usually be
configured in one of three ways:
As master ('Master'): if it is the only device on the cable it must use this setting, although it sometimes also works
when set as slave. If there is another device, that device must be set as slave.
As slave ('Slave'): there must be another device configured as master.
Cable select: the device becomes master or slave depending on its position on the cable. If there is another device, it
must also be configured as cable select; if the device is alone on the cable, it must be placed in the master position.
Different colors are used to distinguish the connector for the first IDE bus (IDE 1).
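The jumper rules above can be sketched as a toy decision function (the function and its argument names are illustrative, not from any real tool):

```python
def drive_role(jumper: str, at_master_position: bool = True) -> str:
    """Return the role a PATA drive assumes, following the jumper rules above.

    jumper: "master", "slave", or "cable_select" (illustrative names).
    at_master_position: for cable select, whether the drive sits at the
    connector wired as master (a lone drive must be placed there).
    """
    if jumper == "master":
        return "master"
    if jumper == "slave":
        return "slave"  # valid only if another drive is jumpered as master
    if jumper == "cable_select":
        # Role is decided by position on the cable, not by the jumper.
        return "master" if at_master_position else "slave"
    raise ValueError(f"unknown jumper setting: {jumper}")

print(drive_role("cable_select", at_master_position=False))  # -> slave
```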

This design (two devices on one bus) has the drawback that while one device is being accessed, the other device on
the same IDE connector cannot be used. On some chipsets (the Intel FX Triton, for example), not even the other IDE
channel could be used at the same time.

This drawback is resolved in S-ATA and in SCSI, which can use two devices per channel.

IDE disks are far more widespread than SCSI disks because of their much lower price. IDE performance is lower than
SCSI, but the differences are shrinking. UDMA performs the role that bus mastering plays in SCSI, reducing CPU load
and increasing speed, and Serial ATA allows each hard disk to work without interfering with the others.

In any case, although SCSI is superior, S-ATA is starting to be considered as an alternative for high-end computer
systems, since its performance is not much lower and its price advantage is considerably greater.
A computer fan is any fan inside a computer case used for cooling purposes, and may refer to fans that draw cooler air
into the case from the outside, expel warm air from inside, or move air across a heatsink to cool a particular component.
The use of fans to cool a computer is an example of active cooling.

Manufacturers of fans include, among others, AVC, Akasa, Antec, Arctic Cooling, Cooler Master, Delta, ebm-papst, Nexus,
Noctua, NorthQ, Scythe, Thermaltake and Zalman.

As processors, graphics cards, RAM and other components in computers have increased in clock speed and power
consumption, the amount of heat produced by these components as a side-effect of normal operation has also
increased. These components need to be kept within a specified temperature range to prevent overheating, instability,
malfunction and damage leading to a shortened component lifespan.

While in earlier personal computers it was possible to cool most components using natural convection (passive cooling),
many modern components require more effective active cooling. To cool these components, fans are used to move
heated air away from the components and draw cooler air over them. Fans attached to components are usually used in
combination with a heatsink to increase the area of heated surface in contact with the air, thereby improving the
efficiency of cooling.

In the IBM compatible PC market, the computer's power supply unit (PSU) almost always uses an exhaust fan to expel
warm air from the PSU. Active cooling on CPUs started to appear on the Intel 80486, and by 1997 was standard on all
desktop processors[1]. Chassis or case fans, usually one exhaust fan to expel heated air from the rear and optionally an
intake fan to draw cooler air in through the front, became common with the arrival of the Pentium 4 in late 2000[1]. A
third vent fan in the side of the PC, often located over the CPU, is also common. The graphics processing unit (GPU) on
many modern graphics cards also requires a heatsink and fan. In some cases, the northbridge chip on the motherboard
has another fan and heatsink. Other components such as the hard drives and RAM may also be actively cooled, though
as of 2007 this remains relatively unusual. It is not uncommon to find five or more fans in a modern PC.





Used to aerate the case of the computer. The components inside the case cannot dissipate heat efficiently if the
surrounding air is too hot. Case fans move air through the case, usually drawing cooler outside air in through the front
(where it may also be drawn over the internal hard drive racks) and expelling it through the rear. There may be a third
fan in the side or top of the case to draw outside air into the vicinity of the CPU, which is usually the largest single heat
source. Standard case fans are 80 mm, 92 mm or 120 mm along each side. As case fans are often the most readily visible
form of cooling on a PC, decorative fans are widely available and may be lit with LEDs, made of UV-reactive plastic, and
covered with decorative grilles. Decorative fans and accessories are popular with case modders. Air filters are often used
over intake fans, to prevent dust from entering the case.

A power supply (PSU) fan often plays a double role, not only keeping the PSU itself from overheating, but also removing
warm air from inside the case. PSUs with two fans are also available, which typically have a fan on the inside to supply
case air into the PSU and a second fan on the back to expel the heated air.

Used to cool the CPU (central processing unit) heatsink.

See computer spot cooling.


Used to cool the graphics processing unit or the memory on graphics cards. These fans were not necessary on older
cards because of their low power dissipation, but most modern graphics cards, especially those designed for 3D graphics
and gaming, need their own dedicated cooling fans. Some of the higher powered cards can produce more heat than the
CPU (approximately 289 watts[2]), so effective cooling is especially important. Passive coolers for new video cards
are, however, not unheard of, such as the Thermalright HR-03.

Used to cool the northbridge of a motherboard's chipset, which may be necessary for system bus overclocking.

Other less commonly encountered fans may include:

PCI slot fan: A fan mounted in one of the PCI slots, usually to supply additional cooling to the PCI and/or graphics cards.
Hard disk fan: A fan mounted next to or on a hard disk drive. This may be desirable on faster-spinning (e.g. 10,000 RPM)
hard disks with greater heat production.
CD burner fan: Some internal CD and/or DVD burners included cooling fans.

The width and height of these usually square fans are measured in millimeters; common sizes include 60 mm, 80 mm, 92
mm and 120 mm. Fans with a round frame are also available; these are usually designed so that one may use a larger fan
than the mounting holes would otherwise allow (i.e., a 120 mm fan with 90 mm holes). The amount of airflow which
fans generate is typically measured in cubic feet per minute (CFM), and the speed of rotation is measured in revolutions
per minute (RPM). Often, computer enthusiasts choose fans which have a higher CFM rating, but produce less noise
(measured in decibels, or dB), and some fans come with an adjustable RPM rating to produce less noise when the
computer does not require additional airflow. Fan speeds may be controlled manually (with a simple potentiometer,
for example), thermally, or by computer hardware or software. It is also possible to run many 12 V fans from the 5 V
supply, at the expense of airflow, but with reduced noise levels.
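As a rough illustration of the 12 V-fan-on-5 V trick, the fan affinity laws (an assumed simplification here; real DC fans deviate, and many will not start reliably at very low voltages) say speed and airflow scale roughly linearly with voltage:

```python
def undervolted_fan(rated_cfm: float, rated_rpm: float,
                    rated_v: float = 12.0, run_v: float = 5.0):
    """Rough estimate of a fan's behaviour at reduced supply voltage.

    Assumes speed scales linearly with voltage and airflow linearly with
    speed (fan affinity laws). This is a sketch: real fans are non-linear
    and may stall well above 0 RPM at low voltage.
    """
    rpm = rated_rpm * run_v / rated_v
    cfm = rated_cfm * run_v / rated_v
    return rpm, cfm

# Hypothetical 120 mm case fan rated 70 CFM at 2000 RPM on 12 V:
rpm, cfm = undervolted_fan(rated_cfm=70.0, rated_rpm=2000.0)
print(f"~{rpm:.0f} RPM, ~{cfm:.0f} CFM when run from the 5 V rail")
```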

The type of bearing used in a fan can affect its performance and noise output. Most computer fans use one of the
following bearing types:
Sleeve bearing fans use two surfaces lubricated with oil or grease as a friction contact. Sleeve bearings are less durable
as the contact surfaces can become rough and/or the lubricant dry up, eventually leading to failure. Sleeve bearings may
be more likely to fail at higher temperatures, and may perform poorly when mounted in any orientation other than
vertical. The lifespan of a sleeve bearing fan may be around 40,000 hours at 50 °C. Fans that use sleeve bearings are
generally cheaper than fans that use ball bearings, and are quieter at lower speeds early in their life, but can grow
considerably noisier as they age.[3][4]
Rifle bearing fans are similar to sleeve bearing fans, but are quieter and last almost as long as ball bearing fans. The
bearing has a spiral groove in it that pumps fluid from a reservoir. This allows them to be safely mounted horizontally
(unlike sleeve bearings), since the fluid being pumped lubricates the top of the shaft.[5] The pumping also ensures
sufficient lubricant on the shaft, reducing noise, and increasing lifespan.
Ball bearing fans use ball bearings. Though generally more expensive, ball bearing fans do not suffer the same
orientation limitations as sleeve bearing fans, are more durable especially at higher temperatures, and quieter than
sleeve bearing fans at higher rotation speeds. The lifespan of a ball bearing fan may be around 63,000 hours at 50 °C.
Fluid bearing fans have the advantages of near-silent operation and high life expectancy (comparable to ball bearing
fans). However, these fans tend to be the most expensive. The Enter Bearing fan is a variation of the fluid bearing fan,
developed by Everflow.[6]
Magnetic bearing or maglev fans, in which the fan is repelled from the bearing by magnetism.


The standard connectors for computer fans are

3-pin Molex connector
This connector is used when connecting a fan to the motherboard or other circuit board. It is a small thick rectangular in-
line female connector with two tabs on the outer-most edge of one long side. The size and spacing of the pin sockets is
identical to a standard 3-pin female IC connector.
4-pin Molex connector
This connector is used when connecting the fan directly to the power supply. It consists of two wires (red/12V and
black/ground) leading to and splicing into a large in-line 4-pin male-to-female Molex connector.
Dell, Inc. proprietary
This connector is an expansion of a simple 3-pin female IC connector by adding two tabs to the middle of the connector
on one side and a lock-tab on the other side. The size and spacing of the pin sockets is identical to a standard 3-pin
female IC connector and 3-pin Molex connector. Some models place the white (speed sensor) wire in the
middle, whereas the standard 3-pin Molex puts the white wire at pin #3, so compatibility issues may exist.

Industry Standard Architecture

Industry Standard Architecture (in practice almost always shortened to ISA) was a computer bus standard for IBM
compatible computers.


The ISA bus was developed by a team led by Mark Dean at IBM as part of the IBM PC project in 1981. It originated as an
8-bit system and was extended in 1983 for the XT system architecture. The newer 16-bit standard, the IBM AT bus, was
introduced in 1984. In 1988, the Gang of Nine IBM PC compatible manufacturers put forth the 32-bit EISA standard and
in the process retroactively renamed the AT bus to "ISA" to avoid infringing IBM's trademark on its PC/AT computer. IBM
designed the 8-bit version as a buffered interface to the external bus of the Intel 8088 (16/8 bit) CPU used in the original
IBM PC and PC/XT, and the 16-bit version as an upgrade for the external bus of the Intel 80286 CPU used in the IBM AT.
Therefore, the ISA bus was synchronous with the CPU clock, until sophisticated buffering methods were developed and
implemented by chipsets to interface ISA to much faster CPUs.

Designed to connect peripheral cards to the motherboard, ISA allows for bus mastering although only the first 16 MB of
main memory are available for direct access. The 8-bit bus ran at 4.77 MHz (the clock speed of the IBM PC and IBM
PC/XT's 8088 CPU), while the 16-bit bus operated at 6 or 8 MHz (because the 80286 CPUs in IBM PC/AT computers ran at
6 MHz in early models and 8 MHz in later models.) IBM RT/PC also used the 16-bit bus. It was also available on some
non-IBM compatible machines such as the short-lived AT&T Hobbit and later PowerPC based BeBox.

In 1987, IBM moved to replace the AT bus with their proprietary Micro Channel Architecture (MCA) in an effort to regain
control of the PC architecture and the PC market. (Note the relationship between the IBM term "I/O Channel" for the
AT-bus and the name "Micro Channel" for IBM's intended replacement.) MCA had many features that would later
appear in PCI, the successor of ISA, but MCA was a closed standard, unlike ISA (PC-bus and AT-bus) for which IBM had
released full specifications and even circuit schematics. The system was far more advanced than the AT bus, and
computer manufacturers responded with the Extended Industry Standard Architecture (EISA) and later, the VESA Local
Bus (VLB). In fact, VLB used some electronic parts originally intended for MCA because component manufacturers
already were equipped to manufacture them. Both EISA and VLB were backwards-compatible expansions of the AT (ISA) bus.

Users of ISA-based machines had to know special information about the hardware they were adding to the system.
While a handful of devices were essentially "plug-n-play," this was rare. Users frequently had to configure several
parameters when adding a new device, such as the IRQ line, I/O address, or DMA channel. MCA had done away with this
complication, and PCI actually incorporated many of the ideas first explored with MCA (though it was more directly
descended from EISA).

This trouble with configuration eventually led to the creation of ISA PnP, a plug-n-play system that used a combination
of modifications to hardware, the system BIOS, and operating system software to automatically manage the nitty-gritty
details. In reality, ISA PnP can be a major headache, and didn't become well-supported until the architecture was in its
final days. This was a major contributor to the use of the phrase "plug-n-pray."

PCI slots were the first physically-incompatible expansion ports to directly squeeze ISA off the motherboard. At first,
motherboards were largely ISA, including a few PCI slots. By the mid-1990s, the two slot types were roughly balanced,
and ISA slots soon were in the minority of consumer systems. Microsoft's PC 97 specification recommended that ISA
slots be removed entirely, though the system architecture still required ISA to be present in some vestigial way internally
to handle the floppy drive, serial ports, etc. ISA slots remained for a few more years, and towards the turn of the century
it was common to see systems with an Accelerated Graphics Port (AGP) sitting near the central processing unit, an array
of PCI slots, and one or two ISA slots near the end. Now (in late 2008), even floppy disk drives and serial ports are
disappearing, and the extinction of vestigial ISA from chipsets may be on the horizon.

It is also notable that PCI slots are "rotated" compared to their ISA counterparts: PCI cards were essentially inserted
"upside-down," allowing ISA and PCI connectors to squeeze together on the motherboard. Only one of the two
connectors can be used in each slot at a time, but this allowed for greater flexibility.

The AT Attachment (ATA) hard disk interface is directly descended from ISA (the AT bus). ATA has its origins in hardcards
that integrated a hard disk controller (HDC), usually with an ST-506/ST-412 interface, and a hard disk drive on the
same ISA adapter. This was at best awkward from a mechanical standpoint, as ISA slots were not designed to
support such heavy devices as hard disks (and the 3.5" form-factor hard disks of the time were about twice as tall and
heavy as modern drives), so the next generation of Integrated Drive Electronics drives moved both the drive and
controller to a drive bay and used a ribbon cable and a very simple interface board to connect it to an ISA slot. ATA, at its
essence, is basically a standardization of this arrangement, combined with a uniform command structure for software to
interface with the controller on a drive. ATA has since been separated from the ISA bus, and connected directly to the
local bus (usually by integration into the chipset), to be clocked much faster than ISA could support and with much
higher throughput. (Notably when ISA was introduced as the AT bus, there was no distinction between a local and
extension bus, and there were no chipsets.) Still, ATA retains details which reveal its relationship to ISA. The 16-bit
transfer size is the most obvious example; the signal timing, particularly in the PIO modes, is also highly correlated, and
the interrupt and DMA mechanisms are clearly from ISA. (The article about ATA has more detail about this history.)

The PC/XT-bus is an eight-bit ISA bus used by Intel 8086 and Intel 8088 systems in the IBM PC and IBM PC XT in the
1980s. Among its 62 pins were demultiplexed and electrically buffered versions of the eight data and 20 address lines of
the 8088 processor, along with power lines, clocks, read/write strobes, interrupt lines, etc. Power lines included -5V and
+/-12 V in order to directly support pMOS and enhancement mode nMOS circuits such as dynamic RAMs among other
things. The XT bus architecture uses a single Intel 8259 PIC, giving eight vectorized and prioritized interrupt lines. It has
four DMA channels, three of which are brought out to the XT bus expansion slots; of these, two are normally already
allocated to machine functions (diskette drive and hard disk

The PC/AT-bus is a 16-bit (or 80286-) version of the PC/XT bus

introduced with the IBM PC/AT, officially termed I/O Channel by
IBM. It extends the XT-bus by adding a second shorter edge
connector in-line with the eight-bit XT-bus connector, which is
unchanged, retaining compatibility with most 8-bit cards. The
second connector adds four additional address lines for a total of
24, and eight additional data lines for a total of 16. It also adds
new interrupt lines connected to a second 8259 PIC (connected to one of the lines of the first) and four 16-bit DMA
channels, as well as control lines to select 8 or 16 bit transfers.

The 16-bit AT bus slot originally used two standard edge connector sockets in early IBM PC/AT machines. However, with
the popularity of the AT-architecture and the 16-bit ISA bus, manufacturers introduced specialized 98-pin connectors
that integrated the two sockets into one unit. These can be found in almost every AT-class PC manufactured after the
mid-1980s. The ISA slot connector is typically black (distinguishing it from the brown EISA connectors and white PCI connectors).

Originally, the bus clock was synchronous with the CPU clock, resulting in varying bus clock frequencies among the many
different IBM "clones" on the market (sometimes as high as 16 or 20 MHz), leading to software or electrical timing
problems for certain ISA cards at bus speeds they were not designed for. Later motherboards and/or integrated chipsets
used a separate clock generator or a clock divider which either fixed the ISA bus frequency at 4, 6 or 8 MHz or allowed
the user to adjust the frequency via the BIOS setup. When used at a higher bus frequency, some ISA cards (certain
Hercules-compatible video cards, for instance), could show significant performance improvements.

Memory address decoding for the selection of 8 or 16-bit transfer mode was limited to 128 KB sections - A0000..BFFFF,
C0000..DFFFF, E0000..FFFFF leading to problems when mixing 8 and 16-bit cards, as they could not co-exist in the same
128 KB area.
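The 128 KB decoding windows described above follow directly from masking off the low 17 bits of the address; a short sketch (region addresses taken from the text):

```python
WINDOW = 128 * 1024  # 128 KB granularity of 8/16-bit transfer-mode decoding

def decode_window(addr: int) -> tuple[int, int]:
    """Return the inclusive (start, end) of the 128 KB region holding addr."""
    start = addr & ~(WINDOW - 1)   # clear the low 17 bits
    return start, start + WINDOW - 1

# The three upper-memory windows named in the text:
for a in (0xA0000, 0xC0000, 0xE0000):
    lo, hi = decode_window(a)
    print(f"{lo:05X}..{hi:05X}")

# An 8-bit and a 16-bit card mapped anywhere in the same window conflict,
# e.g. cards at C8000 and D0000 share the C0000..DFFFF region:
assert decode_window(0xC8000) == decode_window(0xD0000)
```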


Apart from specialized industrial use, ISA is all but gone today. Even where present, system manufacturers often shield
customers from the term "ISA bus", referring to it instead as the "legacy bus" (see legacy system). The PC/104 bus, used
in industrial and embedded applications, is a derivative of the ISA bus, utilizing the same signal lines with different
connectors. The LPC bus has replaced the ISA bus as the connection to the legacy I/O devices on recent motherboards;
while physically quite different, LPC looks just like ISA to software, so that the peculiarities of ISA such as the 16 MiB
DMA limit (which corresponds to the full address space of the Intel 80286 CPU used in the original IBM AT) are likely to
stick around for a while.

Starting with Windows Vista, Microsoft is phasing out support for ISA cards in Windows. Vista still supports ISA-PnP for
the time being, although it is not enabled by default. However, consumer-market PCs had dropped ISA slots from their
motherboards before Windows XP was released.

As explained in the History section, ISA was the basis for development of the ATA interface, used for ATA (a.k.a. IDE) and
more recently Serial ATA (SATA) hard disks. Physically, ATA is essentially a simple subset of ISA, with 16 data bits,
support for exactly one IRQ and one DMA channel, and 3 address bits plus two IDE address select ("chip select") lines,
plus a few unique signal lines specific to ATA/IDE hard disks (such as the Cable Select/Spindle Sync. line.) ATA goes
beyond and far outside the scope of ISA by also specifying a set of physical device registers to be implemented on every
ATA (IDE) drive and accessed using the address bits and address select signals in the ATA physical interface channel; ATA
also specifies a full set of protocols and device commands for controlling fixed disk drives using these registers, through
which all operations of ATA hard disks are performed. A further deviation between ISA and ATA is that while the ISA bus
remained locked into a single standard clock rate (for backward compatibility), the ATA interface offered many different
speed modes, could select among them to match the maximum speed supported by the attached drives, and kept
adding faster speeds with later versions of the ATA standard (up to 133 MB/s for ATA-6, the latest.) In most forms, ATA
ran much faster than ISA.

Before the 16-bit ATA/IDE interface, there was an 8-bit XT-IDE (also known as XTA) interface for hard disks, though it was
not nearly as popular as ATA has become, and XT-IDE hardware is now fairly hard to find (for those vintage computer
enthusiasts who may look for it.) Some XT-IDE adapters were available as 8-bit ISA cards, and XTA sockets were also
present on the motherboards of Amstrad's later XT clones. The XTA pinout was very similar to ATA, but only eight data
lines and two address lines were used, and the physical device registers had completely different meanings. A few hard
drives (such as the Seagate ST351A/X) could support either type of interface, selected with a jumper.

A derivation of ATA was the PCMCIA specification, merely a wire-adapter away from ATA. This then meant that Compact
Flash, based on PCMCIA, were (and are) ATA compliant and can, with a very simple adapter, be used on ATA ports.


Although most computers do not have physical ISA buses, all IBM compatible computers, x86 and x86-64 (most non-
mainframe, non-embedded), have ISA buses allocated in virtual address space. Embedded controller chips
(southbridge) and CPUs themselves provide services such as temperature monitoring and voltage readings through
these buses as ISA devices.


IEEE started a standardization of the ISA bus in 1985, called the P996 specification. However, despite there even having
been books published on the P996 specification, it never officially progressed past draft status.

The Intel 8085 is an 8-bit microprocessor introduced by Intel in 1977. It was binary-compatible with the more-famous
Intel 8080 but required less supporting hardware, thus allowing simpler and less expensive microcomputer systems to
be built.

The "5" in the model number came from the fact that the 8085 required only a +5-volt (V) power supply rather than the
+5V, -5V and +12V supplies the 8080 needed. Both processors were sometimes used in computers running the CP/M
operating system, and the 8085 later saw use as a microcontroller (much by virtue of its component count reducing
feature). Both designs were eclipsed for desktop computers by the compatible but more capable Zilog Z80, which took
over most of the CP/M computer market as well as taking a large share of the booming home computer market in the 1980s.

The 8085 had a very long life as a controller. Once designed into such products as the DECtape controller and the VT100
video terminal in the late 1970s, it continued to serve for new production throughout the life span of those products
(generally many times longer than the new manufacture lifespan of
desktop computers).


i8085 microarchitecture.

The 8085 is a conventional von Neumann design based on the Intel 8080.
Unlike the 8080 it had no state signals multiplexed onto the data bus, but
the 8-bit data bus was instead multiplexed with the lower part of the 16-bit
address bus (in order to limit the number of pins to 40). The processor was
designed using nMOS circuitry and the later "H" versions were
implemented in Intel's enhanced nMOS process called HMOS, originally
developed for fast static RAM products. The 8085 used approximately
6,500 transistors[1].

The 8085 incorporated the functionality of the 8224 (clock generator) and
the 8228 (system controller), increasing the level of integration. A
downside compared to similar contemporary designs (such as the Z80) was
that the buses required demultiplexing; however, address latches in the Intel 8155, 8355, and 8755 memory
chips allowed a direct interface, so an 8085 along with these chips formed almost a complete system.
The 8085 had extensions to support new interrupts: It had three maskable interrupts (RST 7.5, RST 6.5 and RST 5.5), one
Non-Maskable interrupt (TRAP), and one externally serviced interrupt (INTR). The RST n.5 interrupts refer to actual pins
on the processor, a feature which permitted simple systems to avoid the cost of a separate interrupt controller.

Like the 8080, the 8085 could accommodate slower memories through externally generated wait states (pin 35, READY),
and had provisions for Direct Memory Access (DMA) using HOLD and HLDA signals (pins 39 and 38). An improvement
over the 8080 was that the 8085 can itself drive a piezoelectric crystal directly connected to it, and a built in clock
generator generates the internal high amplitude two-phase clock signals at half the crystal frequency (a 6.14 MHz crystal
would yield a 3.07 MHz clock for instance).


With slightly higher integration and a single 5 V supply (using depletion-mode load nMOS), the 8085 was a binary-compatible follow-up to the 8080, itself the successor to the original Intel 8008. The 8080 and 8085 used the same basic instruction set as the 8008 (developed by Computer Terminal Corporation) and were source code compatible with
instruction set as the 8008 (developed by Computer Terminal Corporation) and they were source code compatible with
their predecessor. However, the 8080 added several useful and handy 16-bit operations above the 8008 instruction set,
while the 8085 added only a few relatively minor instructions above the 8080 set.

The processor had seven 8-bit registers, (A, B, C, D, E, H, and L) where A was the 8-bit accumulator and the other six
could be used as either byte-registers or as three 16-bit register pairs (BC, DE, HL) depending on the particular
instruction. Some instructions also enabled HL to be used as (a limited) 16-bit accumulator. It also had a 16-bit stack
pointer to memory (replacing the 8008's internal stack), and a 16-bit program counter.


As in many other 8-bit processors, all instructions were encoded in a single byte (including register numbers, but
excluding immediate data), for simplicity. Some of them were followed by one or two bytes of data, which could be an
immediate operand, a memory address, or a port number. Like larger processors, it had automatic CALL and RET
instructions for multi-level procedure calls and returns (which could even be conditionally executed, like jumps) and
instructions to save and restore any 16-bit register-pair on the machine stack. There were also eight one-byte call
instructions (RST) for subroutines located at the fixed addresses 00h, 08h, 10h,...,38h. These were intended to be
supplied by external hardware in order to invoke a corresponding interrupt-service routine, but were also often
employed as fast system calls. The most sophisticated command was XTHL, which was used for exchanging the register
pair HL with the value stored at the address indicated by the stack pointer.
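The effect of XTHL can be sketched in a few lines of Python. This is an illustrative model only, not a real emulator; the register and memory representation is invented for the example, but the semantics (the 8085 stores 16-bit words low byte first) follow the description above.

```python
# Minimal sketch of the 8085 XTHL instruction: exchange register pair HL
# with the 16-bit word at the address held in the stack pointer.
# (Hypothetical dict-based machine state, for illustration only.)

def xthl(mem, regs):
    sp = regs["SP"]
    lo, hi = mem[sp], mem[sp + 1]           # word currently on top of stack
    regs["L"], mem[sp] = lo, regs["L"]      # swap low bytes
    regs["H"], mem[sp + 1] = hi, regs["H"]  # swap high bytes

mem = {0x2000: 0x34, 0x2001: 0x12}          # stack top holds 0x1234
regs = {"SP": 0x2000, "H": 0xAB, "L": 0xCD} # HL holds 0xABCD
xthl(mem, regs)
print(hex(regs["H"]), hex(regs["L"]))       # 0x12 0x34
print(hex(mem[0x2001]), hex(mem[0x2000]))   # 0xab 0xcd
```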


Most 8-bit operations could only be performed on the 8-bit accumulator (the A register). For dyadic 8-bit operations, the
other operand could be either an immediate value, another 8-bit register, or a memory cell addressed by the 16-bit
register pair HL. Direct copying was supported between any two 8-bit registers and between any 8-bit register and a HL-
addressed memory cell. Due to the regular encoding of the MOV-instruction (using a quarter of available opcode space)
there were redundant codes to copy a register into itself (MOV B,B, for instance), which was of little use, except for
delays. However, what would have been a copy from the HL-addressed cell into itself (i.e., MOV M,M) was instead used
to encode the HLT instruction (halting execution until an external reset or interrupt).


Although the 8085 was generally an 8-bit processor, it also had limited abilities to perform 16-bit operations: Any of the
three 16-bit register pairs (BC, DE, HL) or SP could be loaded with an immediate 16-bit value (using LXI), incremented or
decremented (using INX and DCX), or added to HL (using DAD). The XCHG operation exchanged the values of HL and DE.
By adding HL to itself, it was possible to achieve the same result as a 16-bit arithmetical left shift with one instruction.
The only 16-bit instruction that affected any flag was DAD, which set the CY (carry) flag in order to allow for programmed 24-bit or 32-bit arithmetic (or larger), needed to implement floating point arithmetic, for instance.
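How DAD's carry flag enables wider arithmetic can be sketched as follows. This is a Python model of the idea, not 8085 code: a 32-bit addition chained from two 16-bit additions through the carry, exactly as an 8085 program would chain DAD-style operations.

```python
# Sketch: multi-precision addition built from 16-bit adds plus a carry flag,
# as an 8085 program would chain them through CY after DAD.

def add16(a, b, carry_in=0):
    total = a + b + carry_in
    return total & 0xFFFF, total >> 16   # (16-bit result, carry out)

def add32(x, y):
    lo, cy = add16(x & 0xFFFF, y & 0xFFFF)  # low halves, producing a carry
    hi, _ = add16(x >> 16, y >> 16, cy)     # high halves consume the carry
    return (hi << 16) | lo

print(hex(add32(0x0001FFFF, 0x00000001)))  # 0x20000
```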

The 8085 supported up to 256 input/output (I/O) ports, accessed via
dedicated I/O instructions, taking port addresses as operands. This I/O
mapping scheme was regarded as an advantage, as it freed up the
processor's limited address space. Many CPU architectures instead use a
common address space without the need for dedicated I/O instructions,
although a drawback in such designs may be that special hardware must be
used to insert wait states as peripherals are often slower than memory.
However, in some simple 8080 computers, I/O ports were indeed addressed as if they were memory cells, "memory mapped", leaving the I/O commands unused. I/O addressing could also sometimes employ the fact that the
processor would output the same 8-bit port address to both the lower and
the higher address byte (i.e. IN 05h would put the address 0505h on the 16-
bit address bus). Similar I/O-port schemes were used in the 8080-compatible
Zilog Z80 as well as the closely related x86 families of microprocessors.


Intel produced a series of development systems for the 8080 and 8085,
known as the Personal Development System. The original PDS was a large box
(in the Intel corporate blue colour) which included a CPU and monitor, and used 8 inch floppy disks. It ran the ISIS
operating system and could also operate an emulator pod and EPROM programmer. The later iPDS was a much more
portable unit featuring a small green screen and a 5¼ inch floppy disk drive, and ran the ISIS-II operating system. It could
also accept a second 8085 processor, allowing a limited form of multi-processor operation where both CPUs shared the
screen, keyboard and floppy disk drive. In addition to an 8080/8085 assembler, Intel produced a number of compilers
including PL/M-80 and Pascal, and a set of tools for linking and statically locating programs so that they could be burnt into EPROMs and used in embedded systems.


To support its use in a wide range of applications, the 8085 is provided with an instruction set consisting of instructions such as MOV, ADD, SUB, and JMP. Programs built from these instructions perform operations such as branching, addition, subtraction, and bitwise logical and bit shift operations. More complex operations and other arithmetic operations must be implemented in software; for example, multiplication is implemented using a multiplication algorithm.
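A classic such multiplication algorithm is shift-and-add. The sketch below shows the idea in Python rather than 8085 assembly, but the loop structure (test the low bit of the multiplier, conditionally add, shift both operands) is the same one an 8085 routine would use, since the chip has no multiply instruction.

```python
# Shift-and-add multiplication of two 8-bit values, the kind of routine
# an 8085 program implements in software.
def mul8(a, b):
    product = 0
    for _ in range(8):       # one pass per multiplier bit
        if b & 1:            # low bit set: add the shifted multiplicand
            product += a
        a <<= 1              # shift multiplicand left
        b >>= 1              # shift multiplier right
    return product

print(mul8(12, 11))  # 132
```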

The 8085 processor has found marginal use in small scale computers up to the 21st century. The TRS-80 Model 100 line
uses an 80C85. The 80C85, a CMOS version of the NMOS/HMOS 8085, has had several manufacturers, and some versions (e.g. Tundra Semiconductor Corporation's CA80C85B) have additional functionality, e.g. extra machine code
instructions. One niche application for the rad-hard version of the 8085 has been in on-board instrument data
processors for several NASA and ESA space physics missions in the 1990s and early 2000s, including CRRES, Polar, FAST,
Cluster, HESSI, Sojourner (rover)[2], and THEMIS. The Swiss company SAIA used the 8085 and the 8085-2 as the CPUs of
their PCA1 line of programmable logic controllers during the 1980s.
See also: Comparison of embedded computer systems on board the Mars rovers

The 8086[1] is a 16-bit microprocessor chip designed by Intel and introduced on the market in 1978, which gave rise to
the x86 architecture. The Intel 8088, released in 1979, was essentially the same chip, but with an external 8-bit data bus (allowing the use of cheaper and fewer supporting logic chips[2]), and is notable as the processor used in the original IBM PC.

In 1972, Intel launched the 8008, the first 8-bit microprocessor[3]. It implemented an instruction set designed by Datapoint Corporation with programmable CRT terminals in mind, which also proved to be fairly general purpose. The device needed several additional ICs to produce a functional computer, in part due to its small 18-pin "memory package", which ruled out the use of a separate address bus (Intel was primarily a DRAM manufacturer at the time).

Two years later, in 1974, Intel launched the 8080[4], employing the new 40-pin DIL packages originally developed for
calculator ICs to enable a separate address bus. It had an extended instruction set that was source- (not binary-)
compatible with the 8008 and also included some 16-bit instructions to make programming easier. The 8080 device,
often described as the first truly useful microprocessor, was nonetheless soon replaced by the 8085 which could cope
with a single 5V power supply instead of the three different operating voltages of earlier chips.[5] Other well known 8-
bit microprocessors that emerged during these years were Motorola 6800 (1974), Microchip PIC16X (1975), MOS
Technology 6502 (1975), Zilog Z80 (1976), and Motorola 6809 (1977), as well as others.


The 8086 was originally intended as a temporary substitute for the ambitious iAPX 432 project in an attempt to draw
attention from the less-delayed 16 and 32-bit processors of other manufacturers (such as Motorola, Zilog, and National
Semiconductor) and at the same time to top the successful Z80 (designed by former Intel employees). Both the
architecture and the physical chip were therefore developed quickly (in a little more than two years[6]), using the same
basic microarchitecture elements and physical implementation techniques as the older 8085, of which it also served as a continuation. Marketed as source compatible, it was designed so that assembly language
for the 8085, 8080, or 8008 could be automatically converted into equivalent (sub-optimal) 8086 source code, with little
or no hand-editing. This was possible because the programming model and instruction set was (loosely) based on the
8080. However, the 8086 design was expanded to support full 16-bit processing, instead of the fairly basic 16-bit
capabilities of the 8080/8085. New kinds of instructions were added as well; self-repeating operations and instructions
to better support nested ALGOL-family languages such as Pascal, among others.

The 8086 was sequenced[7] using a mix of random logic and microcode and was implemented using depletion load
nMOS circuitry with approximately 20,000 active transistors (29,000 counting all ROM and PLA sites). It was soon moved
to a new refined nMOS manufacturing process called HMOS (for High performance MOS) that Intel originally developed
for manufacturing of fast static RAM products[8]. This was followed by HMOS-II, HMOS-III versions, and, eventually, a
fully static version designed in CMOS and manufactured in CHMOS.[9] The original chip measured 33 mm² and minimum
feature size was 3.2 µm.

The architecture was defined by Stephen P. Morse and Bruce Ravenel. Jim McKevitt and John Bayliss were the lead
engineers of the development team and William Pohlman the manager. While less known than the 8088 chip, the legacy
of the 8086 is enduring; references to it can still be found on most modern computers in the form of the Vendor ID entry
for all Intel devices, which is 8086H (hexadecimal). It also lent its last two digits to Intel's later extended versions of the
design, such as the 286 and the 386, all of which eventually became known as the x86 family.


All internal registers as well as internal and external data buses were 16 bits wide, firmly establishing the "16-bit microprocessor" identity of the 8086. A 20-bit external address bus gave a 1 MB (segmented) physical address space (2^20 = 1,048,576). The data bus was multiplexed with the address bus in order to fit a standard 40-pin dual in-line package. 16-bit I/O addresses meant 64 KB of separate I/O space (2^16 = 65,536). The maximum linear address space was limited to 64 KB, simply because internal registers were only 16 bits wide. Programming over 64 KB boundaries involved adjusting segment registers (see below) and was therefore fairly awkward (and remained so until the 80386).

Some of the control pins, which carry essential signals for all external operations, had more than one function depending
upon whether the device was operated in "min" or "max" mode. The former were intended for small single processor
systems whilst the latter were for medium or large systems, using more than one processor.


The 8086 had eight (more or less general) 16-bit registers including the stack pointer, but excluding the instruction
pointer, flag register and segment registers. Four of them (AX,BX,CX,DX) could also be accessed as (twice as many) 8-bit
registers (AH,AL,BH,BL, etc), the other four (BP,SI,DI,SP) were 16-bit only.

Due to a compact encoding inspired by the 8085 and other 8-bit processors, most instructions were one-address or two-address operations, which means that the result was stored in one of the operands. At most one of the operands could
be in memory, but this memory operand could also be the destination, while the other operand, the source, could be
either register or immediate. A single memory location could also often be used as both source and destination which,
among other factors, further contributed to a code density comparable to (often better than) most eight bit machines.

Although the degree of generality of most registers was much greater than in the 8080 or 8085, it was still fairly low
compared to the typical contemporary minicomputer, and registers were also sometimes used implicitly by instructions.
While perfectly sensible for the assembly programmer, this complicated register allocation for compilers compared to
more regular 16- and 32-bit processors (such as the PDP-11, VAX, 68000, etc); on the other hand, compared to
contemporary 8-bit microprocessors (such as the 8085 or 6502), it was significantly easier to generate code for the 8086.

As mentioned above, the 8086 also featured 64 KB of 8-bit (or alternatively 32 K-word of 16-bit) I/O space. A 64 KB (one segment) stack growing towards lower addresses is supported by hardware; 2-byte words are pushed to the stack and the stack top (bottom) is pointed out by SS:SP. There are 256 interrupts, which can be invoked by both hardware and
software. The interrupts can cascade, using the stack to store the return address.

The processor had some new instructions (not present in the 8085) to better support stack based high level
programming languages such as Pascal and PL/M; some of the more useful ones were push mem-op, and ret size,
supporting the "pascal calling convention". (Several others, such as push immed and enter, would be added in the
subsequent 80186, 80286, and 80386 designs.)

The 8086 has a 16-bit flag register. Out of these bits, nine are active and indicate the current state of the processor: the Carry flag, Parity flag, Auxiliary flag, Zero flag, Sign flag, Trap flag, Interrupt enable flag, Direction flag and Overflow flag.


There were also four sixteen-bit segment registers (CS, DS, SS, ES, standing for "code segment", "data segment", "stack
segment" and "extra segment") that allowed the CPU to access one megabyte of memory in an unusual way. Rather
than concatenating the segment register with the address register, as in most processors whose address space exceeded
their register size, the 8086 shifted the segment register left 4 bits and added it to the offset address (physical address =
16·segment + offset), producing a 20-bit effective address from the 32-bit segment:offset pair. As a result, each physical address could be referred to by 2^12 = 4,096 different segment:offset pairs. This scheme had the advantage that a small
program (less than 64 kilobytes) could be loaded starting at a fixed offset (such as 0) in its own segment, avoiding the
need for relocation, with at most 15 bytes of alignment waste. The 16-byte separation between segment bases was
known as a "paragraph".
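The address calculation just described is easy to check numerically. The sketch below computes physical = 16·segment + offset, masked to the 20-bit address bus, and demonstrates that several segment:offset pairs alias the same physical address.

```python
# 8086 real-mode address calculation: physical = 16*segment + offset,
# truncated to the 20-bit address bus.
def physical(seg, off):
    return ((seg << 4) + off) & 0xFFFFF

# Many segment:offset pairs alias the same physical address:
print(hex(physical(0x1234, 0x0005)))  # 0x12345
print(hex(physical(0x1230, 0x0045)))  # 0x12345
print(hex(physical(0x1000, 0x2345)))  # 0x12345
```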

Compilers for the 8086 commonly supported two types of pointer, "near" and "far". Near pointers were 16-bit addresses
implicitly associated with the program's code or data segment (and so made sense only in programs small enough to fit
in one segment). Far pointers were 32-bit segment:offset pairs. C compilers also supported "huge" pointers, which were
like far pointers except that pointer arithmetic on a huge pointer treated it as a flat 20-bit pointer, while pointer
arithmetic on a far pointer wrapped around within its initial 64-kilobyte segment.
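The difference between far and huge pointer arithmetic can be modelled as follows. This is an illustrative sketch, not how any particular compiler stored pointers: far arithmetic wraps the 16-bit offset within its segment, while huge arithmetic treats the pair as a flat 20-bit address and renormalizes it (here to an offset below 16).

```python
# Far vs. huge pointer arithmetic on the 8086 (illustrative model).

def far_add(seg, off, n):
    # Far pointer: arithmetic wraps around within the 64 KB segment.
    return seg, (off + n) & 0xFFFF

def huge_add(seg, off, n):
    # Huge pointer: treated as a flat 20-bit address, then renormalized.
    linear = ((seg << 4) + off + n) & 0xFFFFF
    return linear >> 4, linear & 0xF

print(far_add(0x2000, 0xFFFF, 1))   # (8192, 0)  -- wrapped back to offset 0
print(huge_add(0x2000, 0xFFFF, 1))  # (12288, 0) -- advanced into the next segment
```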

To avoid the need to specify "near" and "far" on every pointer and every function which took or returned a pointer,
compilers also supported "memory models" which specified default pointer sizes. The "small", "compact", "medium",
and "large" models covered every combination of near and far pointers for code and data. The "tiny" model was like
"small" except that code and data shared one segment. The "huge" model was like "large" except that all pointers were
huge instead of far by default. Precompiled libraries often came in several versions compiled for different memory models.

In principle the address space of the x86 series could have been extended in later processors by increasing the shift
value, as long as applications obtained their segments from the operating system and did not make assumptions about
the equivalence of different segment:offset pairs. In practice the use of "huge" pointers and similar mechanisms was
widespread, and though some 80186 clones did change the shift value, these were never commonly used in desktop computers.

According to Morse et al., the designers of the 8086 considered using a shift of eight bits instead of four, which would have given the processor a 16-megabyte address space.[10]


The 80286's protected mode extended the processor's address space to 2^24 bytes (16 megabytes), but not by increasing
the shift value. Instead, the 16-bit segment registers supply an index into a table of 24-bit base addresses, to which the
offset is added. To support old software the 80286 also had a "real mode" in which address calculation mimicked the
8086. There was, however, one small difference: on the 8086 the address was truncated to 20 bits, while on the 80286 it
was not. Thus real-mode pointers could refer to addresses between 100000 and 10FFEF (hexadecimal). This roughly 64-
kilobyte region of memory was known as the High Memory Area, and later versions of MS-DOS could use it to increase
available low memory.
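The High Memory Area falls directly out of this one difference in address calculation, as the sketch below shows: the 8086 truncates the sum to 20 bits, while the 80286 in real mode keeps the 21st bit, exposing the addresses from 100000h up to 10FFEFh.

```python
# The High Memory Area: real-mode address calculation on the 8086
# (truncated to 20 bits) versus the 80286 (not truncated).

def addr_8086(seg, off):
    return ((seg << 4) + off) & 0xFFFFF   # wraps past 1 MB

def addr_80286(seg, off):
    return (seg << 4) + off               # 21st bit survives

print(hex(addr_8086(0xFFFF, 0x0010)))   # 0x0      -- wraps around to zero
print(hex(addr_80286(0xFFFF, 0x0010)))  # 0x100000 -- start of the HMA
print(hex(addr_80286(0xFFFF, 0xFFFF)))  # 0x10ffef -- top of the HMA
```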

The 80386 increased both the base address and the offset to 32 bits and introduced two more general-purpose segment
registers, FS and GS. The 80386 also introduced paging. The segment system can be used to enforce separation of unprivileged processes in a 32-bit operating system, but most operating systems use paging for this purpose instead, setting all segment registers to point to a segment with a base of 0 and a length of 2^32, giving the application full access to its virtual address space through any segment register.

The x86-64 architecture drops most support for segmentation. The segment registers still exist, but the base addresses
for CS, SS, DS, and ES are forced to 0, and the limit to 2^64.

In x86 versions of Microsoft Windows, the FS segment does not cover the entire address space. Instead it points to a
small data structure, different for each thread, which contains information about exception handling, thread-local
variables, and other per-thread state. The x86-64 architecture supports this technique by allowing a nonzero base
address for FS & GS.

Small programs could ignore the segmentation and just use plain 16-bit addressing. This allowed 8-bit software to be
quite easily ported to the 8086. The authors of MS-DOS took advantage of this by providing an Application Programming
Interface very similar to CP/M as well as including the simple .com executable file format, identical to CP/M. This was
important when the 8086 and MS-DOS were new, because it allowed many existing CP/M (and other) applications to be quickly made available, greatly easing the acceptance of the new platform.


Although partly shadowed by other design choices in this particular chip, the multiplexed bus limited performance
slightly; transfers of 16-bit or 8-bit quantities were done in a four-clock memory access cycle.[11] As instructions varied
from 1 to 6 bytes, fetch and execution were made concurrent (as it remains in today's x86 processors): The bus interface
unit fed the instruction stream to the execution unit through a 6 byte prefetch queue (a form of loosely coupled
pipelining), speeding up operations on registers and immediates, while memory operations unfortunately became
slower (4 years later, this performance problem was fixed with the 80186 and 80286). However, the full (instead of
partial) 16-bit architecture with a full width ALU meant that 16-bit arithmetic instructions could now be performed with
a single ALU cycle (instead of two, via carry), speeding up such instructions considerably. Combined with
orthogonalizations of operations versus operand-types and addressing modes, as well as other enhancements, this
made the performance gain over the 8080 or 8085 fairly significant, despite cases where the older chips may be faster
(see below).

Execution times for typical instructions (in clock cycles):

As can be seen from these tables, operations on registers and immediates were fast (between 2 and 4 cycles), while
memory-operand instructions and jumps were quite slow; jumps took more cycles than on the simple 8080 and 8085,
and the 8088 (used in the IBM PC) was additionally hampered by its narrower bus. The reasons why most memory
related instructions were slow were threefold:
Loosely coupled fetch and execution units are efficient for instruction prefetch, but not for jumps and random data
access (without special measures).
No dedicated address calculation adder was afforded; the microcode routines had to use the main ALU for this (although
there was a dedicated segment + offset adder).
The address and data buses were multiplexed, forcing a slightly longer (33~50%) bus cycle than in typical contemporary
8-bit processors.

Memory access performance was, however, drastically enhanced with Intel's next generation of
chips. The 80186 and 80286 both had dedicated address calculation hardware, saving many cycles, and 80286 also had
separate (non-multiplexed) address and data buses.


The 8086/8088 could be connected to a mathematical coprocessor to add floating point capability. The Intel 8087 was
the standard math coprocessor, operating on 80-bit numbers, but manufacturers like Weitek soon offered higher
performance alternatives.


The clock frequency was originally limited to 5 MHz (IBM PC used 4.77 MHz, 3/4 the standard NTSC color burst
frequency), but the last versions in HMOS were specified for 10 MHz. HMOS-III and CMOS versions were manufactured
for a long time (at least a while into the 1990s) for embedded systems, although its successor, the 80186/80188, has
been more popular for embedded use.


Soviet clone KP1810BM86.

OKI M80C86A QFP-56

Compatible and, in many cases, enhanced versions were manufactured by Fujitsu, Harris/Intersil, OKI, Siemens AG,
Texas Instruments, NEC, and AMD. For example, the NEC V20 and NEC V30 pair were hardware compatible with the
8088 and 8086, respectively, but incorporated the instruction set of the 80186 along with some (but not all) of the
80186 speed enhancements, providing a drop-in capability to upgrade both instruction set and processing speed
without manufacturers having to modify their designs. Such relatively simple and low-power 8086-compatible
processors in CMOS are still used in embedded systems.

The electronics industry of the Soviet Union was able to replicate the 8086 through both industrial espionage and reverse engineering. The resulting chip, the K1810BM86, was pin-compatible with the original Intel 8086 (К1810ВМ86 was a copy of the Intel 8086, not the Intel 8088) and had the same instruction set. However, this IC used metric dimensions and was not mechanically compatible with the Intel products. The Intel microprocessors i8086 and i8088 were the core of the Soviet-bloc-made PC-compatible ES1840 and ES1841 desktops. However, these computers had significant hardware
differences from their authentic prototypes (respectively PC/XT and PC): ES1840 was Intel 8088 based, ES1841 was Intel
8086 based. Also, the data/address bus circuitry was designed independently of original Intel products. ES1841 was the
first PC-compatible computer with dynamic bus sizing (US Pat. No 4,831,514). Later some of the ES1841 principles were adopted in the PS/2 (US Pat. No 5,548,786) and some other machines (UK Patent Application, Publication No. GB-A-2211325,
Published Jun. 28, 1989).

Microcomputers using the 8086

One of the most influential microcomputers of all, the IBM PC, used the Intel 8088, a version of the 8086 with an eight-
bit data bus (as mentioned above).
The first commercial microcomputer built on the basis of the 8086 was the Mycron 2000.
The IBM Displaywriter[citation needed] word processing machine and the Wang Professional Computer, manufactured
by Wang Laboratories, also used the 8086. Also, this chip could be found in the AT&T 6300 PC (built by Olivetti).
The first Compaq Deskpro used an 8086 running at 7.14 MHz, but was capable of running add-in cards designed for the
4.77 MHz IBM PC XT.
The FLT86 is a well established training system for the 8086 CPU still being manufactured by Flite Electronics
International Limited in Southampton, England.
The IBM PS/2 models 25 and 30 were built with an 8 MHz 8086.
The Tandy 1000 SL-series machines used 8086 CPUs.
The Amstrad PC1512, PC1640, and PC2086 all used 8086 CPUs at 8 MHz.

A video card, video adapter or a graphics accelerator card, display adapter, or
graphics card, is an expansion card whose function is to generate and output
images to a display. Many video cards offer added functions, such as accelerated
rendering of 3D scenes, video capture, TV tuner adapter, MPEG-2 and MPEG-4
decoding, FireWire, light pen, TV output, or the ability to connect multiple monitors.

Video hardware can be integrated into the motherboard, as often happened with early computers; in this configuration it was sometimes referred to as a video controller or graphics controller.

The first IBM PC video card, which was released with the first IBM PC, was
developed by IBM in 1981. The MDA (Monochrome Display Adapter) could only work in text mode, displaying 80 columns by 25 lines (80x25) on the screen. It had 4 KB of video memory and just one color.[1]

Starting with the MDA in 1981, several video cards were released, which are summarized in the attached table. VGA was widely accepted, which led corporations such as ATI, Cirrus Logic and S3 to build on that video standard, improving its resolution and the number of colours it could display. This developed into the SVGA (Super VGA) standard, which reached 2 MB of video memory and a resolution of 1024x768 in 256-color mode.

In 1995 the first consumer 2D/3D cards were released, developed by Matrox, Creative, S3, ATI and others.[citation
needed] These video cards followed the SVGA standard, but incorporated 3D functions. In 1997, 3dfx released the Voodoo graphics chip, which was more powerful than other consumer graphics cards, introducing 3D effects such as mip mapping, Z-buffering and anti-aliasing into the consumer market. After this card, a series of 3D video cards
were released, such as Voodoo2 from 3dfx, TNT and TNT2 from NVIDIA. The bandwidth required by these cards was
approaching the limits of the PCI bus capacity. Intel developed the AGP (Accelerated Graphics Port) which solved the
bottleneck between the microprocessor and the video card. From 1999 until 2002, NVIDIA controlled the video card
market (taking over 3dfx) with the GeForce family.[6] The improvements carried out at this time were focused on 3D algorithms and graphics processor clock rates. Video memory was also increased to improve the data rate; DDR
technology was incorporated, improving the capacity of video memory from 32 MB with GeForce to 128 MB with
GeForce 4.

From 2002 onwards, the video card market came to be dominated almost entirely by the competition between ATI and Nvidia, with their Radeon and GeForce lines respectively, taking around 90% of the independent graphics card market between them, while other manufacturers were forced into much smaller niche markets.[7]