Storage, more properly called computer data storage, is the technology comprising the computer components and recording media used to retain digital data and information. It is one of the core functions and fundamental components of a computer. Data sent to storage is controlled and manipulated by the computer's central processing unit, commonly abbreviated CPU. Most computers today follow the principle of a storage hierarchy: fast but costly small storage units are placed close to the CPU, while larger, slower, and cheaper units are placed farther away.
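The hierarchy principle can be sketched as a small table. The latency figures below are rough order-of-magnitude assumptions for illustration, not values from the text:

```python
# Illustrative sketch of a storage hierarchy: tiers closer to the CPU are
# faster but smaller and costlier per byte. Latencies are assumed,
# order-of-magnitude figures.
hierarchy = [
    ("CPU registers", 1e-9, "bytes"),
    ("Cache (SRAM)",  1e-8, "kilobytes to megabytes"),
    ("Main memory",   1e-7, "gigabytes"),
    ("Hard disk",     1e-2, "terabytes"),
]

for name, latency_s, size in hierarchy:
    print(f"{name:<14} ~{latency_s:.0e} s access, typically {size}")

# Each step away from the CPU trades speed for capacity and cost.
latencies = [tier[1] for tier in hierarchy]
assert latencies == sorted(latencies)  # slower the farther from the CPU
```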
Memory, or computer memory, is a temporary storage area. It holds the data and instructions that the central processing unit needs. Before a program can run, it is loaded from a storage medium into memory, giving the CPU direct access to it. The term "memory", meaning primary memory, is often associated with addressable semiconductor memory, i.e. integrated circuits consisting of silicon-based transistors, used for example as primary memory but also for other purposes in computers and other digital electronic devices. Nearly everything a computer programmer does requires thinking about how to manage memory; even storing a number in memory requires the programmer to specify how the memory should store it.
Magnetic Tape
Magnetic tape was one of the breakthrough technologies in the development of computer data storage. It was developed in 1928 by the German-Austrian engineer Fritz Pfleumer and is one of the earliest media made for magnetic recording. It consisted of long, narrow strips of plastic film carrying a thin magnetizable coating.
It was based on magnetic wire recording, invented by Valdemar Poulsen in 1898. An audio tape recorder (also called a tape deck, reel-to-reel deck, cassette deck, or tape machine) is an audio storage device that records and plays back sound, including speech, usually on magnetic tape wound on a reel or held in a cassette. In its present-day form, it records a fluctuating signal by moving the tape across a tape head that polarizes the magnetic domains in the tape in proportion to the audio signal. The use of magnetic tape for sound recording originated around 1930. Poulsen's method moved the medium at constant velocity past a recording head driven by an electrical signal corresponding to the sound to be recorded, leaving a pattern of magnetic fields matching that signal. Once early refinements improved the fidelity of the reproduced sound, magnetic tape became the highest-quality analogue sound recording medium available. Before its development, magnetic wire recorders had successfully demonstrated the concept of magnetic recording, but they never offered audio quality comparable to the other recording and broadcast standards of the time. Pfleumer's idea of coating long strips of paper with ferric oxide became the basis for the designs used in modern recording machines.
Magnetic Drum
The magnetic drum was invented by Gustav Tauschek in Austria in 1932. Now obsolete, it was a magnetic data storage device consisting of a large metal cylinder coated on its outer surface with a ferromagnetic recording material. It can be considered a precursor to the hard disk platter, in the form of a drum rather than a flat disk. In many designs a row of fixed read-write heads runs along the long axis of the drum, one head for each track.
In many machines the drum formed the main working memory. Data and programs could be loaded onto or off the drum using other media, such as the punched-card system pioneered by Joseph Marie Jacquard. Computers that used a drum as their main memory were commonly referred to as drum machines.
Tauschek's original magnetic drum could store approximately five hundred thousand (500,000) bits, or 62.5 kilobytes. Drum storage was used as late as 1980 in PDP-11/45 machines, which used it for swapping.
Williams Tube
Professor Frederic C. Williams and colleagues at Manchester University in the United Kingdom developed the first random-access computer storage, using electrostatic cathode-ray tubes as digital stores. By 1948, storage of 1024 bits had been successfully implemented, and Williams's colleague Tom Kilburn made improvements that increased the capacity to 2048 bits. The Williams-Kilburn tubes (commonly known as Williams tubes) were used on several of the early stored-program computers, including the Manchester 'Baby' (1948), the Manchester Mark I, which became operational in 1949, and the Institute for Advanced Study (IAS) machine spearheaded by von Neumann at Princeton, finally completed in 1951.
The Williams tube depends on an effect called secondary emission. When a dot is drawn on a cathode-ray tube, the area of the dot becomes slightly positively charged and the area immediately around it becomes slightly negatively charged, creating a charge well. The charge well remains on the surface of the tube for a fraction of a second, allowing the device to act as a computer memory; the lifetime of the charge well depends on the electrical resistance of the inside of the tube. The dot can be erased by drawing a second dot immediately next to the first one, thus filling the charge well. Most systems did this by drawing a short dash starting at the dot position, so that the extension of the dash erased the charge initially stored at the starting point. Information is read from the tube by means of a metal pickup plate that covers its face. Each time a dot is created or erased, the change in electrical charge induces a voltage pulse in the pickup plate. Since this operation is synchronised with whichever location on the screen is being targeted at that moment, it effectively reads the data stored there. Because the electron beam is essentially inertia-free and can be steered from location to location very quickly, there is no practical restriction on the order in which positions are accessed, hence the "random-access" nature of the lookup.
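The write-erase-regenerate cycle described above can be sketched as a toy simulation. The grid size and the regeneration convention here are illustrative assumptions, not details from the source:

```python
# Toy model of a Williams tube: a grid of charge wells. Writing a dot
# stores a bit; sensing a cell disturbs its charge, so the value must be
# rewritten, just as the real tube continuously regenerated its dots.
class WilliamsTube:
    def __init__(self, rows=32, cols=32):
        self.wells = [[0] * cols for _ in range(rows)]

    def write(self, r, c, bit):
        # Drawing a dot charges the well (1); a dash erases it (0).
        self.wells[r][c] = bit

    def read(self, r, c):
        # The pickup plate senses a pulse as the beam targets (r, c).
        bit = self.wells[r][c]
        self.wells[r][c] = 0    # the read disturbs the stored charge...
        self.write(r, c, bit)   # ...so the value is regenerated at once.
        return bit

tube = WilliamsTube()
tube.write(3, 7, 1)
print(tube.read(3, 7))  # prints 1; any cell can be targeted directly
print(tube.read(3, 7))  # prints 1 again: regeneration preserved the bit
```

Because the beam can jump to any (row, column) without mechanical motion, every cell costs the same to reach, which is exactly the "random access" property the text describes.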
Williams tubes differed in the materials from which they were made: some had phosphor coatings and others did not. This difference did not affect the tubes' performance in operating machines.
Selectron Tube
The Selectron tube was an early form of computer storage developed by the Radio Corporation of America (RCA). Like the Williams-Kilburn tube, the Selectron was a random-access storage device. Development started in 1946 with a planned production of 200 units by the end of that year, but production problems meant that they were still not available by the middle of 1948. By that time their primary customer, John von Neumann's IAS machine, had been forced to switch to the Williams-Kilburn tube for storage, and RCA eventually had to scale the Selectron down from 4096 bits to 256. This smaller version saw use in a number of IAS-related machines, but RCA finally gave up on the concept.
The original 4096-bit Selectron was a large (5-inch by 3-inch) vacuum tube with a cathode running up the middle, surrounded by two separate sets of wires forming a cylindrical grid, a dielectric material outside the grid, and finally a cylinder of metal conductor outside the dielectric, called the signal plate.
The two sets of orthogonal grid wires were normally "biased" slightly positive, so that electrons from the cathode could flow through the grid and reach the dielectric. The continuous flow of electrons allowed the stored charge to be continuously regenerated by the secondary emission of electrons. To select a bit to be read from or written to, all but two adjacent wires on each of the two grids were biased negative, allowing current to flow to the dielectric at one location only.

A cross-section of a Selectron tube
Delay Line Memory
The basic concept of a delay-line memory is to insert an information pattern into a path that contains a delay. If the end of the delay path is connected back to the beginning through amplifying and timing circuits, a closed loop is formed, allowing the information pattern to recirculate. A delay-line memory resembles the human habit of repeating a telephone number to oneself from the time it is found in the directory until it has been dialed. The delay medium must slow the propagation of the information enough that storage equipment holding a large number of pulses remains of reasonable size.
The first such systems consisted of a column of mercury with piezoelectric crystal transducers (a combination of speaker and microphone) at either end. Data from the computer was sent to the piezo at one end of the tube, which would pulse and generate a small wave in the mercury. The wave travelled quickly to the far end of the tube, where it was read back by the other piezo and sent back to the computer.
To form a memory, additional circuitry was added at the receiving end to send the signal back to the input. In this way the pattern of waves sent into the system by the computer could be kept circulating as long as power was applied. The computer counted the pulses against a master clock to find the particular bit it was looking for.
Mercury was used because its acoustic impedance is almost exactly the same as that of the piezoelectric quartz crystals; this minimized the energy loss and the echoes when the signal was transmitted from crystal to medium and back again. The high speed of sound in mercury (1450 m/s) meant that the time needed to wait for a pulse to arrive at the receiving end was shorter than in a slower medium such as air, but also that the total number of pulses that could be stored in any reasonably sized column of mercury was limited. Other technical drawbacks of mercury included its weight, its cost, and its toxicity. Moreover, to match the acoustic impedances as closely as possible, the mercury had to be kept at 40 degrees Celsius, which made servicing the tubes hot and uncomfortable work. Use of a delay line for a computer memory was invented by J. Presper Eckert in the mid-1940s for computers such as the EDVAC and the UNIVAC I. Eckert and John Mauchly applied for a patent for a delay-line memory system on October 31, 1947; the patent was issued in 1953. The patent focused on mercury delay lines, but it also discussed delay lines made of strings of inductors and capacitors, magnetostrictive delay lines, and delay lines built using rotating disks to transfer data from a read head at one point on the circumference to a write head elsewhere around the circumference.

A diagram showing how delay line memory storage works
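The recirculating loop described above can be sketched as a short simulation. The line length and bit pattern are arbitrary assumptions chosen for illustration:

```python
from collections import deque

# A delay line holds a fixed number of pulses "in flight". Each clock
# tick, the pulse leaving the far end is amplified, re-timed, and fed
# back into the input, so the whole pattern recirculates indefinitely.
class DelayLine:
    def __init__(self, bits):
        self.line = deque(bits)

    def tick(self):
        bit = self.line.popleft()  # pulse reaches the receiving transducer
        self.line.append(bit)      # regeneration circuit reinserts it
        return bit

    def read_word(self):
        # The computer counts ticks against a master clock until the
        # wanted bits come around; here we read one full recirculation.
        return [self.tick() for _ in range(len(self.line))]

mem = DelayLine([1, 0, 1, 1, 0, 0, 1, 0])
print(mem.read_word())  # prints [1, 0, 1, 1, 0, 0, 1, 0]
print(mem.read_word())  # unchanged: the pattern keeps circulating
```

Note the contrast with the Williams tube: access here is sequential, since a bit is only available when it arrives at the end of the line.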
Magnetic Core Memory
Magnetic core memory was the most widely used form of digital computer memory from its birth in the early 1950s until the era of integrated-circuit memory began in the early 1970s. Besides being extremely reliable, magnetic core memory is an appealing technology because it is based on a very simple idea.
It was the predominant form of random-access computer memory for 20 years (from 1955 to 1975). It uses tiny magnetic rings, the cores, through which wires are threaded to write and read information; each core represents one bit. A core can be magnetized in one of two directions (clockwise or counter-clockwise), and the bit stored in it is zero or one depending on that direction. The wires are arranged so that an individual core can be set to either a "one" or a "zero", and its magnetization changed, by sending appropriate electric current pulses through selected wires. Reading a core resets it to "zero", thus erasing it.
A magnetic core is a ring of ferrite material. It can be permanently magnetised either clockwise or anti-clockwise about its axis, just as a vertical bar magnet can be magnetised up or down. A magnetic core therefore becomes one bit of digital memory by letting these two magnetisation states correspond to 0 and 1.
The core needs no power to retain its data. In other words, core memory is a form of non-volatile storage, like modern hard disk drives, although in its day it fulfilled the high-speed role of modern RAM. The performance of early core memories can be characterized in today's terms as very roughly comparable to a clock rate of 1 MHz (equivalent to early 1980s home computers like the Apple II and Commodore 64). Early core memory systems had cycle times of about 6 µs, which had fallen to 1.2 µs by the early 1970s and to 600 ns (0.6 µs) by the mid-1970s. Some designs had substantially higher performance: the CDC 6600 had a memory cycle time of 1.0 µs in 1964, using cores that required a half-select current of 200 mA. Everything possible was done to decrease access times and increase data rates (bandwidth), including the simultaneous use of multiple grids of core, each storing one bit of a data word. For instance, a machine might use 32 grids of core, with a single bit of a 32-bit word in each, so the controller could access the entire word in a single read/write cycle.
The cores were, of course, much larger physically than the cells of later semiconductor read-write memory. The technology was exceptionally reliable.

The first magnetic core memory

The IBM version of the magnetic core memory
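The coincident-current selection and destructive read described above can be sketched as a toy model. The plane size and the half-select convention here are illustrative assumptions:

```python
# Toy model of one core plane. A core flips only when BOTH its X and Y
# lines carry a half-select current, so energizing one X line and one Y
# line addresses exactly one core. Reading forces the addressed core to
# zero, so the controller must write the sensed value back afterwards.
class CorePlane:
    def __init__(self, size=8):
        self.cores = [[0] * size for _ in range(size)]

    def write(self, x, y, bit):
        # Full switching current = two coincident half-currents at (x, y).
        self.cores[x][y] = bit

    def read(self, x, y):
        bit = self.cores[x][y]
        self.cores[x][y] = 0     # destructive read: core driven to zero
        self.write(x, y, bit)    # immediate rewrite restores the bit
        return bit

# A 32-bit word machine would stack 32 such planes and pulse the same
# (x, y) address in all of them at once, one bit of the word per plane.
planes = [CorePlane() for _ in range(32)]
word = [1, 0] * 16
for plane, bit in zip(planes, word):
    plane.write(2, 5, bit)
print([p.read(2, 5) for p in planes] == word)  # prints True
```

Stacking one plane per bit is what lets the controller read or write an entire 32-bit word in a single cycle, as the text notes.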
Hard Disk
The hard disk drive, abbreviated HDD, is a data storage device used for storing and retrieving digital information using rapidly rotating disks (platters) coated with magnetic material. An HDD retains its data even when powered off. Data is read in a random-access manner, meaning individual blocks of data can be stored or retrieved in any order rather than sequentially. An HDD consists of one or more rigid, rapidly rotating disks, referred to as platters, with magnetic heads arranged on a moving actuator arm to read and write data on their surfaces.
The hard disk uses rigid rotating platters, storing and retrieving digital data from a planar magnetic surface. Information is written to the disk by passing an electromagnetic flux through a write head held very close to the magnetic material, whose polarization changes in response to the flux. The information is read back in the reverse manner: the magnetic fields induce an electrical change in the coil of the read head that passes over them.
A typical hard disk drive design consists of a central axis or spindle upon which the platters spin at a constant speed. Moving along and between the platters on a common armature are the read-write heads, one head for each platter face. The armature moves the heads radially across the platters as they spin, allowing each head access to the entire platter surface.
The first computer with a hard disk drive as standard was the IBM 305, introduced in 1956 with the IBM 350 Disk File. Development of this drive had begun in 1952 under IBM engineer Reynold Johnson. It held fifty 24-inch (two-foot-diameter) platters rotating on a spindle at 1200 rpm, with read/write heads, and had a total capacity of five million characters, about 5 megabytes of data.
The primary characteristics of an HDD are its capacity and its performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB, where 1 gigabyte = 1 billion bytes). Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the operating system, and possibly by built-in redundancy for error correction and recovery. Performance is specified by the time needed to move the heads to a track (average access time), plus the time it takes for the desired data to rotate under the head (average latency, a function of the physical rotational speed in revolutions per minute), plus the speed at which the data is transmitted (data rate).
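Average latency follows directly from the rotational speed: on average the desired sector is half a revolution away when the head arrives. A quick calculation (the drive speeds are common examples, not figures from the text):

```python
# Average rotational latency = time for half a revolution.
def avg_latency_ms(rpm):
    seconds_per_rev = 60.0 / rpm
    return seconds_per_rev / 2 * 1000  # half a turn, in milliseconds

for rpm in (5400, 7200, 15000):
    print(f"{rpm:>5} rpm -> {avg_latency_ms(rpm):.2f} ms average latency")
# 7200 rpm: 60/7200/2 = 4.17 ms. Total access time adds seek time and
# transfer time on top of this rotational wait.
```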
According to Wikipedia, mainframe and minicomputer hard disks came in widely varying dimensions, typically in free-standing cabinets the size of washing machines or designed to fit a 19-inch rack. In 1962, IBM introduced its model 1311 disk, which used 14-inch (nominal size) platters. This became a standard size for mainframe and minicomputer drives for many years; such large platters were never used with microprocessor-based systems.
With increasing sales of microcomputers having built-in floppy-disk drives (FDDs), HDDs that would fit the FDD mountings became desirable, so HDD form factors initially followed those of the 8-inch, 5.25-inch, and 3.5-inch floppy disk drives. Because no smaller floppy disk drives existed, smaller HDD form factors developed from product offerings or industry standards.
These are as follows:
8 inch
9.5 in × 4.624 in × 14.25 in (241.3 mm × 117.5 mm × 362 mm). In 1979, Shugart Associates' SA1000 was the first form-factor-compatible HDD, having the same dimensions as, and a compatible interface to, the 8-inch FDD.
5.25 inch
5.75 in × 3.25 in × 8 in (146.1 mm × 82.55 mm × 203 mm). This smaller form factor, first used in an HDD by Seagate in 1980, was the same size as a full-height 5.25-inch-diameter (130 mm) FDD, 3.25 inches high. This is twice as high as "half height", i.e., 1.63 in (41.4 mm). Most desktop drives for optical 120 mm discs (DVD, CD) use the half-height 5.25-inch dimension, but it fell out of fashion for HDDs. The Quantum Bigfoot HDD was the last to use it in the late 1990s, with "low-profile" (25 mm) and "ultra-low-profile" (20 mm) high versions.
3.5 inch
4 in × 1 in × 5.75 in (101.6 mm × 25.4 mm × 146 mm) = 376.77 cm³. This smaller form factor is similar to that used in an HDD by Rodime in 1983, which was the same size as the "half height" 3.5-inch FDD, i.e., 1.63 inches high. Today, the 1-inch-high ("slimline" or "low-profile") version of this form factor is the most common form used in desktops.
2.5 inch
2.75 in × 0.275–0.59 in × 3.945 in (69.85 mm × 7–15 mm × 100 mm) = 48.895–104.775 cm³. This smaller form factor was introduced by PrairieTek in 1988; there is no corresponding FDD. It came to be widely used for HDDs in mobile devices (laptops, music players, etc.) and for solid-state drives (SSDs), by 2008 replacing some 3.5-inch enterprise-class drives. It is also used in the PlayStation 3 and Xbox 360 video game consoles. Drives 9.5 mm high became an unofficial standard for all but the largest-capacity laptop drives (usually having two platters inside); 12.5 mm-high drives, typically with three platters, are used for maximum capacity but will not fit most laptop computers. Enterprise-class drives can have a height up to 15 mm. Seagate released a 7 mm drive aimed at entry-level laptops and high-end netbooks in December 2009. Western Digital released, on April 23, 2013, a hard drive 5 mm in height aimed specifically at Ultrabooks.
1.8 inch
54 mm × 8 mm × 71 mm = 30.672 cm³. This form factor, originally introduced by Integral Peripherals in 1993, evolved into the ATA-7 LIF with the dimensions stated. For a time it was increasingly used in digital audio players and subnotebooks, but its popularity decreased to the point where this form factor became increasingly rare, holding only a small percentage of the overall market.
1 inch
42.8 mm × 5 mm × 36.4 mm. This form factor was introduced in 1999 as IBM's Microdrive to fit inside a CF Type II slot. Samsung calls the same form factor a "1.3 inch" drive in its product literature.
0.85 inch
24 mm × 5 mm × 32 mm. Toshiba announced this form factor in January 2004 for use in mobile phones and similar applications, including SD/MMC-slot-compatible HDDs optimized for video storage on 4G handsets. Toshiba manufactured a 4 GB (MK4001MTD) and an 8 GB (MK8003MTD) version.
Compact Audio or Cassette Tapes
The compact audio cassette storage medium was introduced by Philips in 1963. The compact cassette had originally been intended for use in dictation machines, but soon became, and remains, a popular medium for distributing pre-recorded music. Starting in 1979, Sony's Walkman helped the format become widely used and popular.
The cassette was a great step forward in convenience from reel-to-reel audio tape recording, though because of the limitations of its size and speed it compared poorly in quality. Unlike the open-reel format, the two stereo tracks lie adjacent to each other rather than in a 1/3 and 2/4 arrangement. This permitted monaural cassette players to play stereo recordings "summed" as mono tracks, and stereo players to play mono recordings through both speakers. The tape is 1/8 inch (3.175 mm) wide, with each stereo track 1/32 inch (0.79 mm) wide, and moves at 1 7/8 inches per second (47.625 mm/s). For comparison, the typical open-reel format was 1/4 inch (6.35 mm) wide, with each stereo track 1/16 inch (1.5875 mm) wide, running at either 3 3/4 or 7 1/2 inches per second (95.25 or 190.5 mm/s). Some open-reel machines did run at 1 7/8 inches per second (47.625 mm/s), but the quality was poor.
The original magnetic material was based on ferric oxide (Fe2O3), but chromium dioxide (CrO2) and more exotic materials were later used to improve sound quality, in an effort to match or exceed that of vinyl records. Cobalt-doped ferric oxide was introduced by TDK and proved very successful. Sony tried a dual-layer tape with both ferric oxide and chromium dioxide, and finally pure metal particles, as opposed to oxide formulations, were used. Each of these had different bias and equalization requirements, demanding specialized settings: ferric tapes use 120 µs equalization (known as Type I), while chrome and cobalt-doped tape types require 70 µs equalization (Type II). In practice the cassette shell was given indents that automatically select the proper bias and equalization on compatible cassette decks.
Many home computers of the 1980s, notably the TRS-80, Commodore 64, ZX Spectrum, Amstrad CPC and BBC Micro, used cassettes as a cheap alternative to floppy disks as a storage medium for programs and data. Data rates were typically 500 to 2000 bit/s, although some games used special faster-loading routines of up to around 4000 bit/s. A rate of 2000 bit/s equates to a capacity of around 660 kilobytes per side of a 90-minute tape.

In 1935, decades before the introduction of the Compact Cassette, AEG released the first reel-to-reel tape recorder (in German: Tonbandgerät), with the commercial name "Magnetophon", based on Fritz Pfleumer's 1928 invention of magnetic tape; it used similar technology but with open reels (for which the tape was manufactured by BASF). These machines were still very expensive and relatively difficult to use, and were therefore used mostly by professionals in radio stations and recording studios. For private use the (reel-to-reel) tape recorder was not very common and only slowly took off from about the 1950s; at prices between 700 and 1,500 DM (equivalent to about €1,600 to €3,400 today), such machines were still far too expensive for the mass market, and their vacuum-tube construction made them very bulky. In the early 1960s, however, weights and prices dropped as vacuum tubes were replaced by transistors. Reel-to-reel tape recorders then became more common in household use, though never in more than a small fraction of the number of homes using long-playing record players.
In 1958, following four years of development, RCA Victor introduced the stereo, quarter-inch, reversible, reel-to-reel RCA tape cartridge. It was a cassette, but a big one (5 × 7 inches), and few pre-recorded tapes were offered; despite multiple versions, it failed.
In 1962, Philips invented the Compact Cassette medium for audio storage, introducing it in Europe on 30 August 1963 (at the Berlin Radio Show) and in the United States (under the Norelco brand) in November 1964, with the trademark name Compact Cassette. The team at Philips was led by Lou Ottens.
Although there were other magnetic tape cartridge systems, Philips' Compact Cassette became dominant as a result of Philips' decision, in the face of pressure from Sony, to license the format free of charge. Philips also released the Norelco Carry-Corder 150 recorder/player in the U.S. in November 1964. By 1966 over 250,000 recorders had been sold in the US alone, and Japan soon became the major source of recorders. By 1968, 85 manufacturers had sold over 2.4 million players.
In the early years sound quality was mediocre, but it improved dramatically by the early 1970s, when it caught up with the quality of 8-track tape and kept improving. The Compact Cassette went on to become a popular (and re-recordable) alternative to the 12-inch vinyl LP during the late 1970s.
The Hewlett-Packard HP 9830 was one of the first desktop computers of the early 1970s to use automatically controlled cassette tapes for storage. It could save and find files by number, using a clear leader to detect the end of the tape. Such tapes would later be replaced by specialized cartridges, such as the 3M DC series. Many of the earliest microcomputers implemented the Kansas City standard for digital data storage. Most home computers of the late 1970s and early 1980s could use cassettes for data storage as a cheaper alternative to floppy disks, though users often had to manually stop and start the cassette recorder. Even the first version of the IBM PC of 1981 had a cassette port and a command in its ROM BASIC programming language to use it. However, IBM cassette tape was seldom used, as by 1981 floppy drives had become commonplace in high-end machines.
The typical encoding method for computer data was simple FSK, giving data rates of typically 500 to 2000 bit/s, although some games used special faster-loading routines of up to around 4000 bit/s. A rate of 2000 bit/s equates to a capacity of around 660 kilobytes per side of a 90-minute tape.
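The 660-kilobyte figure can be checked directly: one side of a 90-minute tape runs for 45 minutes.

```python
# Raw capacity of one side of a cassette at a given data rate.
def side_capacity_kb(rate_bps, tape_minutes=90):
    seconds_per_side = (tape_minutes / 2) * 60   # 45 min = 2700 s
    bits = rate_bps * seconds_per_side
    return bits / 8 / 1000                       # kilobytes (1 kB = 1000 B)

print(side_capacity_kb(2000))   # prints 675.0
print(side_capacity_kb(500))    # prints 168.75
```

The raw result of 675 kB sits slightly above the quoted "around 660 kilobytes", the difference presumably being leader, gaps, and framing overhead.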
Among the home computers that primarily used data cassettes for storage in the late 1970s were the Commodore PET (early models of which had a built-in cassette drive), the TRS-80 and the Apple II, until the introduction of floppy disk drives and hard drives in the early 1980s made cassettes virtually obsolete for day-to-day use in the US. However, they remained in use on some portable systems such as the TRS-80 Model 100 line, often in microcassette form, until the early 1990s.

Floppy disk storage had become the standard data storage medium in the United States by the mid-1980s; for example, by 1983 the majority of software sold by Atari Program Exchange was on floppy. Cassettes remained more popular for 8-bit computers such as the Commodore 64, ZX Spectrum, MSX, and Amstrad CPC 464 in many countries such as the United Kingdom (where 8-bit software was mostly sold on cassette until that market disappeared altogether in the early 1990s). The reliability of cassettes for data storage was inconsistent, with gamers recalling repeated attempts to load video games. In some countries, including the United Kingdom, Poland, Hungary, and the Netherlands, cassette data storage was so popular that some radio stations would broadcast computer programs that listeners could record onto cassette and then load into their computers.
The use of better modulation techniques, such as those used in modern modems, combined with the improved bandwidth and signal-to-noise ratio of newer cassette tapes, allowed much greater capacities (up to 60 MB) and data transfer speeds of 10 to 17 kB/s per cassette. These found use during the 1980s in data loggers for scientific and industrial equipment. The cassette was also adapted into the streamer cassette, a version dedicated solely to data storage and used chiefly for hard disk backups and other types of data. Streamer cassettes look almost exactly like a standard cassette, except for a notch about 1/4 inch wide and deep situated slightly off-center on the top edge. Streamer cassettes also have a re-usable write-protect tab on only one side of the top edge, with the other side having either an open rectangular hole or no hole at all. This is because a streamer cassette drive uses the whole 1/8-inch width of the tape for writing and reading data, so only one side of the cassette is used. Streamer cassettes can hold anywhere from 50 to 160 megabytes of data.

One of the first German cassettes sold for personal computer use
Dynamic Random Access Memory (DRAM)
In 1966 Robert H. Dennard invented Dynamic Random Access Memory (DRAM) cells: one-transistor memory cells that store each single bit of information as an electrical charge in an electronic circuit. This technology permitted major increases in memory density.
DRAM is a type of random-access memory that stores each bit of data in a separate capacitor. The number of electrons stored in the capacitor determines whether the bit reads as 1 or 0. Because the capacitor leaks electrons, the information is eventually lost unless the charge is refreshed periodically.
To store data in this kind of memory, a row is opened and a given column's sense amplifier is temporarily forced to the desired high or low voltage state, causing the bit-line to charge or discharge the cell's storage capacitor to the desired value. Due to the sense amplifier's positive-feedback configuration, it will hold the bit-line at a stable voltage even after the forcing voltage is removed. During a write to a particular cell, all the columns in a row are sensed simultaneously, just as during reading, so although only a single column's storage-cell capacitor charge is changed, the entire row is refreshed (written back in).
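The row-at-a-time behaviour can be sketched as a toy model. The cell count, leak rate, and charge representation are illustrative assumptions, not DRAM circuit parameters:

```python
import random

# Toy DRAM row: each cell is a leaky capacitor holding a charge level.
# Opening a row senses every column at once and restores full charge
# levels; writing changes one column, but the sense amplifiers rewrite
# (refresh) the whole row as a side effect.
THRESHOLD = 0.5          # charge above this reads as 1

class DramRow:
    def __init__(self, cols=8):
        self.charge = [0.0] * cols

    def leak(self):
        # Capacitors lose some charge between accesses.
        self.charge = [max(0.0, c - random.uniform(0.0, 0.2))
                       for c in self.charge]

    def open_row(self):
        # Sense amplifiers read every column and restore full levels.
        bits = [1 if c > THRESHOLD else 0 for c in self.charge]
        self.charge = [float(b) for b in bits]   # row refreshed
        return bits

    def write(self, col, bit):
        bits = self.open_row()                   # whole row sensed...
        bits[col] = bit                          # ...one column changed...
        self.charge = [float(b) for b in bits]   # ...row written back

row = DramRow()
row.write(3, 1)
row.leak()               # some charge drains away
print(row.open_row())    # prints [0, 0, 0, 1, 0, 0, 0, 0]
```

As long as the row is reopened before a "1" leaks below the threshold, the stored pattern survives; that timing constraint is exactly what the refresh requirement below formalizes.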
Subsequently, DRAM needs to be refreshed once in a while. Typically, manufacture
rs specify that each row must have its storage cell capacitors refreshed every 6
4 ms or less, as defined by the JEDEC (Foundation for developing Semiconductor S
tandards) standard. Refresh logic is provided in a DRAM controller which automat
es the periodic refresh, that is no software or other hardware has to perform it
. This makes the controller's logic circuit more complicated, but this drawback
is outweighed by the fact that DRAM is much cheaper per storage cell and because
each storage cell is very simple, DRAM has much greater capacity per unit of su
rface than SRAM.
Some systems refresh every row in a burst of activity involving all rows every 64 ms. Other systems refresh one row at a time, staggered throughout the 64 ms interval. For example, a system with 2^13 = 8192 rows would require a staggered refresh rate of one row every 7.8 μs, which is 64 ms divided by 8192 rows. A few real-time systems refresh a portion of memory at a time determined by an external timer function that governs the operation of the rest of the system, such as the vertical blanking interval that occurs every 10–20 ms in video equipment. All methods require some sort of counter to keep track of which row is the next to be refreshed. Most DRAM chips include that counter; older types require external refresh logic to maintain it.
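The staggered-refresh arithmetic above can be sketched in a few lines of Python, along with the wrapping row counter that the refresh logic maintains. The window and row count are the figures from the text; everything else is illustrative rather than tied to any specific DRAM part.

```python
# Staggered DRAM refresh: every row must be refreshed within the 64 ms
# window, so a controller refreshing one row at a time spaces the
# refreshes evenly across that window.

REFRESH_WINDOW_MS = 64.0
ROWS = 2 ** 13          # 8192 rows, as in the example above

# Per-row refresh interval in microseconds: 64 ms / 8192 rows.
interval_us = REFRESH_WINDOW_MS * 1000.0 / ROWS

def next_row(counter, rows=ROWS):
    """Advance the refresh row counter, wrapping like the on-chip counter."""
    return (counter + 1) % rows

print(f"refresh one row every {interval_us} microseconds")  # 7.8125
```

The wrap-around in `next_row` models the counter most DRAM chips include: after the last row is refreshed, the next refresh starts over at row 0.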
Under some conditions, most of the data in DRAM can be recovered even if the DRA
M has not been refreshed for several minutes.
Twistor Memory
        Twistor memory is, similar to core memory, formed by wrapping magnetic tape around a current-carrying wire. It was developed at Bell Labs, but it was used for only a brief time in the marketplace, between 1968 and the mid-1970s, before being replaced by RAM chips.
Twistor memory used the same concept as core memory, but instead of small circul
ar magnets, it used magnetic tape to store the patterns. The tape was wrapped ar
ound one set of the wires, say X, in such a way that it formed a 45 degree helix
. The Y wires were replaced by solenoids wrapping a number of twistor wires. Sel
ection of a particular bit was the same as in core, with one X and Y line being
powered, generating a field at 45 degrees. The magnetic tape was specifically selected to only allow magnetization along the length of the tape, so only a single point of the twistor would have the right direction of field to become magnetized.
In core memory, small ring-shaped magnets - the cores - are threaded by two cros
sed wires, X and Y, to make a matrix known as a plane. When one X and one Y wire
are powered, a magnetic field is generated at a 45-degree angle to the wires. T
he core magnets sit on the wires at a 45-degree angle, so the single core wrappe
d around the crossing point of the powered X and Y wires will pick up the induce
d field.
The materials used for the core magnets were specially chosen to have a very "square" magnetic hysteresis pattern. This meant that fields just below a certain threshold would do nothing, but those just above it would cause the core to pick up that magnetic field. The square pattern and sharp flipping states ensure that a single core can be addressed within a grid; nearby cores see a slightly different field and are not selected.
The basic operation in a core memory is writing. This is accomplished by powering a selected X and Y wire, each with roughly half the current needed to create the critical magnetic field on its own. Only at the crossing point do the two fields add up to exceed the core's switching threshold, so only that core picks up the external field. Ones and zeros are represented by the direction of the field, which can be set simply by reversing the direction of the current flow in the wires.
In core memory, a third wire - the sense/inhibit line - is needed to write or re
ad a bit. Reading uses the process of writing; the X and Y lines are powered in
the same fashion that they would be to write a "0" to the selected core. If that
core held a "1" at that time, a short pulse of electricity is induced into the
sense/inhibit line. If no pulse is seen, the core held a "0". This process is destructive; if the core did hold a "1", that pattern is destroyed during the read and has to be re-written in a subsequent operation.
The sense/inhibit line is shared by all of the cores in a particular plane, mean
ing that only one bit can be read (or written) at once. Core planes were typical
ly stacked in order to store one bit of a word per plane, and a word could be re
ad or written in a single operation by working all of the planes at once.
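The selection, destructive-read, and re-write behaviour described above can be modelled with a toy core plane. This is a conceptual sketch only: the grid size is arbitrary, and the coincident-current physics is reduced to "only the core at the powered crossing changes".

```python
# Toy model of a core memory plane: writing flips only the core at the
# crossing of the powered X and Y wires, and reading is destructive
# (it writes a 0, and a sense-line pulse means the core held a 1).

class CorePlane:
    def __init__(self, size):
        self.cores = [[0] * size for _ in range(size)]

    def write(self, x, y, bit):
        # Only the core at (x, y) sees the combined field of both wires.
        self.cores[y][x] = bit

    def read(self, x, y):
        # Reading writes a "0"; a pulse on the sense line means a "1" was held.
        pulse = self.cores[y][x] == 1
        self.cores[y][x] = 0              # the destructive read
        bit = 1 if pulse else 0
        if bit:
            self.write(x, y, 1)           # controller re-writes the lost 1
        return bit

plane = CorePlane(4)
plane.write(2, 1, 1)
print(plane.read(2, 1))   # 1 - and the value survives thanks to the re-write
print(plane.read(2, 1))   # still 1
```

The re-write step in `read` mirrors the subsequent operation the text mentions: without it, every read would erase the stored "1".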

A figure showing how a twistor memory looks up close
Introduced in 1957, twistor's first commercial use was in Bell's 1ESS switch, which went into operation in 1965. Twistor was used only briefly in the late 1960s and early 1970s, when semiconductor memory devices replaced almost all earlier memory systems. The basic ideas behind twistor also led to the development of bubble memory, although this had a similarly short commercial lifespan.
Bubble Memory
        Bubble memory uses a thin film of a magnetic material to hold small magnetized areas, known as bubbles, which each store one bit of data. Andrew Bobeck invented bubble memory in 1970. His earlier work on magnetic core memory and twistor memory put him in a good position to develop it.
In 1967, Bobeck joined a team at Bell Labs and started work on improving
twistor. He thought that, if he could find a material that allowed the movement
of the fields easily in only one direction, a strip of such material could have
a number of read/write heads positioned along its edge instead of only one. Pat
terns would be introduced at one edge of the material and pushed along just as i
n twistor, but since they could be moved in one direction only, they would natur
ally form "tracks" across the surface, increasing the areal density. This would
produce a sort of "2D twistor". Paul Charles Michaelis working with permalloy ma
gnetic thin films discovered that it was possible to propagate magnetic domains
in orthogonal directions within the film. This seminal work led to a patent application.
The memory device and method of propagation were described in a paper pr
esented at the 13th Annual Conference on Magnetism and Magnetic Materials, Bosto
n, Massachusetts, September 15, 1967. The device used anisotropic thin magnetic
films that required different magnetic pulse combinations for orthogonal propaga
tion directions. The propagation velocity was also dependent on the hard and eas
y magnetic axes. This difference suggested that an isotropic magnetic medium wou
ld be desirable.
Starting work extending this concept using orthoferrite, Bobeck noticed an addit
ional interesting effect. With the magnetic tape materials used in twistor the d
ata had to be stored on relatively large patches known as "domains". Attempts to
magnetize smaller areas would fail. With orthoferrite, if the patch was written
and then a magnetic field was applied to the entire material, the patch would s
hrink down into a tiny circle, which he called a bubble. These bubbles were much
smaller than the "domains" of normal media like tape, which suggested that very high areal densities were possible.
Bubble memory is conceptually a stationary disk with spinning bits. The unit, only a couple
of square inches in size, contains a thin film magnetic recording layer. Globul
ar-shaped bubbles (bits) are electromagnetically generated in circular strings i
nside this layer. In order to read or write the bubbles, they are rotated past t
he equivalent of a read/write head.
According to researchers at Bell Labs, five (5) discoveries took place, namely:
1. The controlled two-dimensional motion of single wall domains in permallo
y films
2. The application of orthoferrites
3. The discovery of the stable cylindrical domain
4. The invention of the field access mode of operation
5. The discovery of growth-induced uniaxial anisotropy in the garnet system
and the realization that garnets would be a practical material
The bubble system cannot be described by any single invention, but in terms of t
he above discoveries. Andy Bobeck was the sole discoverer of (4) and (5) and co-
discoverer of (2) and (3); (1) was performed in P. Bonyhard's group. At one poin
t, over 60 scientists were working on the project at Bell Labs, many of whom hav
e earned recognition in this field. For instance, in September 1974, H.E.D. Scov
il, P.C. Michaelis and Bobeck were awarded the IEEE Morris N. Liebmann Memorial
Award by the IEEE with the following citation: "For the concept and development of single-walled magnetic domains (magnetic bubbles), and for recognition of their importance to memory technology."
It took some time to find the perfect material, but garnet turned out to have the right properties. Bubbles would easily form in the material and could be pushed along it fairly easily. The next problem was to make them move to the proper location where they could be read back out: twistor was a wire and there was only one place to go, but in a 2D sheet things would not be so easy. Unlike the original experiments, the garnet did not constrain the bubbles to move only in one direction, but its bubble properties were too advantageous to ignore.
The solution was to imprint a pattern of tiny magnetic bars onto the surface of
the garnet. When a small magnetic field was applied, they would become magnetize
d, and the bubbles would "stick" to one end. By then reversing the field they wo
uld be attracted to the far end, moving down the surface. Another reversal would
pop them off the end of the bar to the next bar in the line.
A memory device is formed by lining up tiny electromagnets at one end with detectors at the other end. Bubbles written in would be slowly pushed to the other end, forming a sheet of twistors lined up beside each other. Attaching the output from the detector back to the electromagnets turns the sheet into a series of loops, which can hold the information as long as needed.
Bubble memory is non-volatile: even when power is removed, the bubbles remain, just as the patterns do on the surface of a disk drive. Better yet, bubble memory devices need no moving parts: the field that pushes the bubbles along the surface is generated electrically, whereas media like tape and disk drives require mechanical movement. Finally, because of the small size of the bubbles, the density was in theory much higher than that of existing magnetic storage devices. The only downside was performance; the bubbles had to cycle to the far end of the sheet before they could be read.
One of the limitations of bubble memory was its slow access. A large bubble memory would require large loops, so accessing a bit required cycling through a huge number of other bits first. This is the main reason the technology never fully caught on.
A conceptual drawing of how a bubble memory works
8 Floppy Disk
A floppy disk is a data storage device that is composed of a circular pi
ece of thin, flexible (i.e. "floppy") magnetic storage medium encased in a squar
e or rectangular plastic wallet. Floppy disks are read and written by a floppy d
isk drive.
In 1967 IBM started developing a simple and inexpensive system for loading microcode into their System/370 mainframes. It needed to be faster and more purpose-built than tape drives, and cheap enough that updates could be sent out to customers for $5. The result of this work was a read-only, 8-inch (20 cm) floppy they called the "memory disk", holding 80 kilobytes in 1971.
The first disks were thus designed for loading microcode into the controller of the Merlin (IBM 3330) disk pack file (a 100 MB storage device); in effect, the first floppies were used to fill another type of data storage device. Almost overnight, additional uses for the floppy were discovered, making it the "new" program and file storage medium.
The first floppy disk was 8 inches in diameter, and was protected by a flexible
plastic jacket. IBM used this size as a way of loading microcode into mainframe
processors, and the original 8 inch disk was not field-writeable. Rewriteable disks and drives soon became available. Early microcomputers used for engineering, business, or word processing often used one or more 8 inch disk drives for removable storage; the CP/M operating system was developed for microcomputers with 8 inch drives.
An 8-inch disk could store about a megabyte; many microcomputer applications did
n't need that much capacity on one disk, so a smaller size disk with lower-cost
media and drives was feasible. The 5.25 inch drive succeeded the 8 inch size in many applications, and developed to about the same storage capacity as the original 8 inch size, using higher-density media and recording techniques.
5.25 Floppy Disk
Another floppy disk size variant was developed.
In 1976 Alan Shugart developed a new floppy disk. The main reason for this development was that the normal 8 inch floppy disk was too large for use in desktop computers. So the new 5.25 inch floppy disk was born, with a storage capacity of 110 kilobytes. These new floppy disk drives were cheaper than the ones for 8 inch floppy disks and replaced them very quickly.
At this time only one side of the floppy disk was used. In 1978 a double-sided drive for reading 5.25 inch floppies was introduced, increasing the storage capacity to 360 kilobytes.
The head gap of an 80-track high-density (1.2 MB in the MFM format) 5 1/4-inch drive is smaller than that of a 40-track double-density (360 KB) drive, but it can format, read and write 40-track disks well provided the controller supports double stepping or has a switch for it. A blank 40-track disk formatted and written on an 80-track drive can be taken to its native drive without problems, and a disk formatted on a 40-track drive can be used on an 80-track drive. Disks written on a 40-track drive and then updated on an 80-track drive become unreadable on any 40-track drives due to track width incompatibility.
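Double stepping, as described above, is simply a controller-level mapping: on an 80-track drive, each logical track of a 40-track disk lies two head steps apart. A minimal sketch of that mapping (the function name and interface are hypothetical, for illustration only):

```python
# Double stepping: map a logical track number on the disk to a physical
# head position on the drive. An 80-track drive reading a 40-track disk
# must move the head two steps per logical track.

def physical_track(logical_track, drive_tracks, disk_tracks):
    if drive_tracks == disk_tracks:
        return logical_track            # native disk: one step per track
    if drive_tracks == 80 and disk_tracks == 40:
        return logical_track * 2        # the "double step"
    raise ValueError("a 40-track drive cannot position over 80-track media")

print(physical_track(10, drive_tracks=80, disk_tracks=40))  # head steps to 20
```

The asymmetry in the last case mirrors the text: the narrow-gap 80-track head can land on wide 40-track positions, but not the reverse.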
Single sided disks were coated on both sides, despite the availability of more e
xpensive double sided disks. The reason usually given for the higher cost was th
at double sided disks were certified error-free on both sides of the media. Arch
itectural differences among computer platforms negated this claim, however, with
Radio Shack TRS-80 Model I computers using one side and the Apple II machines the other. Double-sided disks could be used in drives for single-sided disks, one side at a time, by turning them over ("flippy disks"); more expensive dual-head drives which could read both sides without turning the disk over were later produced, and eventually came into universal use.
Compact Discs
A compact disc (or CD) is an optical disc used to store digital data, or
iginally developed for storing digital audio.
A standard compact disc, often known as an audio CD to differentiate it from later variants, stores audio data in a format compliant with the Red Book standard. An audio CD consists of several stereo tracks stored using 16-bit PCM coding at a sampling rate of 44.1 kHz. Standard compact discs have a diameter of 120 mm, though 80 mm versions exist in circular and "business-card" forms. The 120 mm discs can hold 74 minutes of audio, and versions holding 80 or even 90 minutes have been introduced. The 80 mm discs are used as "CD-singles" or novelty "business-card CDs". They hold about 20 minutes of audio.
Red Book is the standard for audio CDs (Compact Disc Digital Audio system, or CD
DA). It is named after one of a set of colour-bound books that contain the techn
ical specifications for all CD and CD-ROM formats. The Red Book was released by
Sony and Philips in 1980, but the idea of the CD is older. In the 1960s James T. Russell had the idea of using light to record and replay music, and in 1970 he invented an optical digital television recording and playback machine, but the world took little notice. In 1975 representatives of Philips visited Russell at his lab. They discounted his invention, but they put millions of dollars into development of the CD and presented it together with Sony in 1980. So James T. Russell was the original inventor of the idea behind the CD.
The first CD player was the Sony CDP-101. It was presented on 1 October 1982 and was able to play audio CDs. The price was 625 US dollars.
A CD is made from 1.2 millimetres (0.047 in) thick polycarbonate plastic and weighs 15–20 grams. From the center outward, the components are: the center spindle hole (15 mm), the first-transition area (clamping ring), the clamping area (stacking ring), the second-transition area (mirror band), the program (data) area, and the rim. The inner program area occupies a radius from 25 to 58 mm.
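The program-area figure quoted later in the text follows directly from these radii: the data area is an annulus running from 25 mm to 58 mm. A quick check of that arithmetic:

```python
# Program area of a CD: an annulus between the 25 mm inner and 58 mm
# outer radius, computed as pi * (r_outer^2 - r_inner^2).

import math

r_inner_mm, r_outer_mm = 25, 58
area_mm2 = math.pi * (r_outer_mm ** 2 - r_inner_mm ** 2)

print(f"{area_mm2 / 100:.2f} cm^2")   # ~86.05 cm^2
```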
A thin layer of aluminium or, more rarely, gold is applied to the surface making
it reflective. The metal is protected by a film of lacquer normally spin coated
directly on the reflective layer. The label is printed on the lacquer layer, us
ually by screen printing or offset printing.
CD data is represented as tiny indentations known as "pits", encoded in a spiral track moulded into the top of the polycarbonate layer. The areas between pits are known as "lands". Each pit is approximately 100 nm deep by 500 nm wide, and varies from 850 nm to 3.5 μm in length. The distance between the tracks, the pitch, is 1.6 μm.
Scanning velocity is 1.2–1.4 m/s (constant linear velocity), equivalent to approximately 500 rpm at the inside of the disc and approximately 200 rpm at the outside edge. (A disc played from beginning to end slows down during playback.)
The program area is 86.05 cm² and the length of the recordable spiral is (86.05 cm² / 1.6 μm) = 5.38 km. With a scanning speed of 1.2 m/s, the playing time is 74 minutes, or 650 MB of data on a CD-ROM. A disc with data packed slightly more densely is tolerated by most players (though some old ones fail). Using a linear velocity of 1.2 m/s and a narrower track pitch of 1.5 μm increases the playing time to 80 minutes, and data capacity to 700 MB.
The pits in a CD are 500 nm wide, between 830 nm and 3,000 nm long, and 150 nm deep.
A CD is read by focusing a 780 nm wavelength (near infrared) semiconductor laser
through the bottom of the polycarbonate layer. The change in height between pit
s and lands results in a difference in the way the light is reflected. By measur
ing the intensity change with a photodiode, the data can be read from the disc.
The pits and lands themselves do not directly represent the zeros and ones of bi
nary data. Instead, non-return-to-zero, inverted encoding is used: a change from
pit to land or land to pit indicates a one, while no change indicates a series
of zeros. There must be at least two and no more than ten zeros between each one
, which is defined by the length of the pit. This in turn is decoded by reversin
g the eight-to-fourteen modulation used in mastering the disc, and then reversin
g the Cross-Interleaved Reed-Solomon Coding, finally revealing the raw data stor
ed on the disc. These encoding techniques (defined in the Red Book) were origina
lly designed for the CD Digital Audio, but they later became a standard for almo
st all CD formats (such as CD-ROM).
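The transition coding described above can be sketched directly: each pit/land edge reads as a channel bit of 1, and the length of each pit or land supplies the zeros that follow. A run of length n therefore decodes to "1" followed by n-1 zeros, and the two-to-ten-zeros rule means legal run lengths are 3 to 11 channel bits. This is an illustrative helper, not a full EFM decoder:

```python
# Convert pit/land run lengths into channel bits: every transition is a 1,
# and a run of length n contributes "1" followed by n-1 zeros. Red Book
# run lengths are 3..11 (i.e. 2 to 10 zeros between ones).

def runs_to_channel_bits(run_lengths):
    out = []
    for n in run_lengths:
        if not 3 <= n <= 11:
            raise ValueError(f"illegal run length {n}")
        out.append("1" + "0" * (n - 1))
    return "".join(out)

print(runs_to_channel_bits([3, 11, 4]))   # 100100000000001000
```

A real player would then reverse the eight-to-fourteen modulation and the CIRC error-correction layers on these channel bits to recover the user data.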
CDs are susceptible to damage during handling and from environmental exposure. P
its are much closer to the label side of a disc, enabling defects and contaminan
ts on the clear side to be out of focus during playback. Consequently, CDs are m
ore likely to suffer damage on the label side of the disc. Scratches on the clea
r side can be repaired by refilling them with similar refractive plastic, or by
careful polishing. The edges of CDs are sometimes incompletely sealed, allowing
gases and liquids to corrode the metal reflective layer and to interfere with th
e focus of the laser on the pits.
The digital data on a CD begins at the center of the disc and proceeds toward th
e edge, which allows adaptation to the different size formats available. Standar
d CDs are available in two sizes. By far, the most common is 120 millimetres (4.
7 in) in diameter, with a 74- or 80-minute audio capacity and a 650 or 700 MB (7
37,280,000 bytes) data capacity. This capacity was reportedly specified by Sony
executive Norio Ohga so as to be able to contain the entirety of London Philharm
onic Orchestra's recording of Beethoven's Ninth Symphony on one disc. This diame
ter has been adopted by subsequent formats, including Super Audio CD, DVD, HD DV
D, and Blu-ray Disc. 80 mm discs ("Mini CDs") were originally designed for CD si
ngles and can hold up to 24 minutes of music or 210 MB of data but never became
popular. Today, nearly every single is released on a 120 mm CD, called a Maxi single.
Novelty CDs are also available in numerous shapes and sizes, and are used chiefl
y for marketing. A common variant is the "business card" CD, a single with porti
ons removed at the top and bottom making the disk resemble a business card.
Physical size          Audio capacity   CD-ROM data capacity   Definition
120 mm                 74–80 min        650–700 MB             Standard size
80 mm                  21–24 min        185–210 MB             Mini-CD size
80×54 mm to 80×64 mm   ~6 min           10–65 MB               "Business card" size
3.5 Floppy Disk
In the early 1980s, a number of manufacturers introduced smaller floppy
drives and media in various formats. A consortium of 21 companies eventually set
tled on a 3 1/2-inch floppy disk (actually 90 mm wide), similar to a Sony design
, but improved to support both single-sided and double-sided media, with formatt
ed capacities of 360 KB and 720 KB respectively. Single-sided drives shipped in
1983, and double sided in 1984. What became the most common format, the double-s
ided, high-density (HD) 1.44 MB disk drive, shipped in 1986.
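The formatted capacities quoted above follow from the standard PC disk geometry of sides × tracks × sectors-per-track × 512-byte sectors; a quick check for the common 3.5-inch formats:

```python
# Formatted floppy capacity from disk geometry, using 512-byte sectors.

def formatted_kb(sides, tracks, sectors_per_track, sector_bytes=512):
    return sides * tracks * sectors_per_track * sector_bytes // 1024

print(formatted_kb(1, 80, 9))    # 360  (single-sided double density)
print(formatted_kb(2, 80, 9))    # 720  (double-sided double density)
print(formatted_kb(2, 80, 18))   # 1440 (high density, marketed as "1.44 MB")
```

Note the marketing quirk visible in the last line: 1440 KB was advertised as "1.44 MB", mixing a binary kilobyte with a decimal megabyte.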
The first Macintosh computers used single-sided 3.5 inch floppy disks, but with
400 KB formatted capacity. These were followed in 1986 by double-sided 800 KB fl
oppies. The higher capacity was achieved at the same recording density by varyin
g the disk rotation speed with arm position so that the linear speed of the head
was closer to constant. Later Macs could also read and write 1.44 MB HD disks i
n PC format with fixed rotation speed.
All 3 1/2-inch disks have a rectangular hole in one corner which, if obstructed,
write-enabled the disk. The HD 1.44 MB disks have a second, unobstructed hole i
n the opposite corner which identifies them as being of that capacity.
In IBM-compatible PCs, the three densities of 3 1/2-inch floppy disks are backwa
rds-compatible: higher density drives can read, write and format lower density m
edia. It is physically possible to format a disk at the wrong density, although
the resulting disk will not work properly. Fresh disks manufactured as high density can theoretically be formatted at double density only if no information has been written on the disk in high density, or the disk has been thoroughly demagnetized with a bulk eraser, as a high-density recording is magnetically stronger and would otherwise remain on the disk, overriding the lower-density signal and causing problems.
The 3.5" disks had, by way of their rigid case's slide-in-place metal cover, th
e significant advantage of being much better protected against unintended physic
al contact with the disk surface when the disk was handled outside the disk driv
e. When the disk was inserted, a part inside the drive moved the metal cover asi
de, giving the drive's read/write heads the necessary access to the magnetic rec
ording surfaces. Adding the slide mechanism resulted in a slight departure from
the previous square outline. The rectangular shape had the additional merit that
it made it impossible to insert the disk sideways by mistake, as had indeed bee
n possible with earlier formats.
Like the 5.25" disk, the 3.5" disk underwent an evolution of its own. They were originally offered in a 360 KB single-sided and 720 KB double-sided double-density format. A newer "high-density" format, displayed as "HD" on the disks themselves and storing 1440 KB of data, was introduced in the mid-80s. IBM used it on their PS/2 series introduced in 1987.
CD-ROM
        The CD-ROM, an abbreviation for Compact Disc Read-Only Memory, is an optic
al data storage medium using the same physical format as the audio compact discs
. Digital information is encoded at near-microscopic size, allowing a large amou
nt of information to be stored. CDs record binary data as tiny pits (or non-pits
) pressed into the lower surface of the plastic disc; a semiconductor laser beam
in the player reads these. Most CDs cannot be written with a laser, but CD-R di
scs have coloured dyes that can be burned (written to) once, and CD-RW (rewritable
) discs contain phase-change material that can be written and overwritten severa
l times. The standard CD-ROM can hold approximately 650–700 megabytes of data, although data compression technology allows larger capacities. The Yellow Book standard for the CD-ROM was first established in 1985 by Sony and Philips.
It may also be known as a pre-pressed compact disc which contains data.
Computers can read CD-ROMs, but cannot write on them. CD-ROMs are popularly used
to distribute computer software, including video games and multimedia applicati
ons, though any data can be stored (up to the capacity limit of a disc). Some CD
s, called enhanced CDs, hold both computer data and audio with the latter capabl
e of being played on a CD player, while data (such as software or digital video)
is only usable on a computer.
CD-ROMs are identical in appearance to audio CDs, and data are stored an
d retrieved in a very similar manner (only differing from audio CDs in the stand
ards used to store the data). Discs are made from a 1.2 mm thick disc of polycar
bonate plastic, with a thin layer of aluminium to make a reflective surface. The
most common size of CD-ROM is 120 mm in diameter, though the smaller Mini CD st
andard with an 80 mm diameter, as well as numerous non-standard sizes and shapes
(e.g., business card-sized media) are also available.
Data is stored on the disc as a series of microscopic indentations ("pits", with the gaps between them referred to as "lands"). A laser is shone onto the reflective surface of the disc to read this pattern. Because the dept
h of the pits is approximately one-quarter to one-sixth of the wavelength of the
laser light used to read the disc, the reflected beam's phase is shifted in rel
ation to the incoming beam, causing destructive interference and reducing the re
flected beam's intensity. This pattern of changing intensity of the reflected be
am is converted into binary data.
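The quarter-wavelength reasoning above can be checked numerically. One detail is assumed here rather than stated in the text: inside the disc, the 780 nm laser wavelength is shortened by polycarbonate's refractive index, taken as roughly 1.55 (a typical literature value).

```python
# Destructive-interference depth: a pit about a quarter of the in-medium
# wavelength deep makes the pit reflection half a wavelength out of phase
# with the land reflection, dimming the return beam at the photodiode.

LASER_NM = 780
N_POLYCARBONATE = 1.55   # assumed refractive index of polycarbonate

in_medium_nm = LASER_NM / N_POLYCARBONATE
quarter_wave_nm = in_medium_nm / 4

print(f"quarter-wave pit depth: {quarter_wave_nm:.0f} nm")  # ~126 nm
```

The result sits comfortably inside the one-quarter to one-sixth wavelength range the text gives, and is close to the pit depths of roughly 100–150 nm quoted earlier.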
Data stored on CD-ROMs follows the standard CD data encoding techniques describe
d in the Red Book specification (originally defined for audio CD only). This inc
ludes cross-interleaved Reed–Solomon coding (CIRC), eight-to-fourteen modulation (
EFM), and the use of pits and lands for coding the bits into the physical surfac
e of the CD.
The data structures used to group data on a CD-ROM are also derived from the Red
Book. Like audio CDs (CD-DA), a CD-ROM sector contains 2,352 bytes of user data
, divided into 98 24-byte frames. Unlike audio CDs, the data stored in these sec
tors corresponds to any type of digital data, not audio samples encoded accordin
g to the audio CD specification. In order to structure, address and protect this
data, the CD-ROM standard further defines two sector modes, Mode 1 and Mode 2,
which describe two different layouts for the data inside a sector. A track (a gr
oup of sectors) inside a CD-ROM only contains sectors in the same mode, but if m
ultiple tracks are present in a CD-ROM, each track can have its sectors in a dif
ferent mode from the rest of the tracks. Data tracks can also coexist with audio CD tracks, which is the case with mixed mode CDs.
Both Mode 1 and 2 sectors use the first 16 bytes for header information, but dif
fer in the remaining 2,336 bytes due to the use of error correction bytes. Unlik
e an audio CD, a CD-ROM cannot rely on error concealment by interpolation, and t
herefore requires a higher reliability of the retrieved data. In order to achiev
e improved error correction and detection, a CD-ROM adds a 32-bit cyclic redundancy check (CRC) code for error detection, and a third layer of Reed–Solomon error correction using a Reed–Solomon Product-like Code (RSPC). Mode 1, used mostly
for digital data, contains 288 bytes per sector for error detection and correcti
on, leaving 2,048 bytes per sector available for data. Mode 2, which is more app
ropriate for image or video data, contains no error detection or correction byte
s, having therefore 2,336 available data bytes per sector. Note that both modes,
like audio CDs, still benefit from the lower layers of error correction at the
frame level.
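The sector byte budgets described above can be checked arithmetically, and they connect directly to the disc capacities quoted earlier. One extra figure is used below that the text states only implicitly: a CD delivers 75 sectors per second at 1× speed.

```python
# CD-ROM sector budgets: Mode 1 spends 16 header + 288 EDC/ECC bytes of
# each 2,352-byte sector on overhead; Mode 2 keeps only the header.

SECTOR_BYTES = 2352
HEADER_BYTES = 16
MODE1_ECC_BYTES = 288

mode1_data = SECTOR_BYTES - HEADER_BYTES - MODE1_ECC_BYTES   # 2048
mode2_data = SECTOR_BYTES - HEADER_BYTES                     # 2336

def mode1_capacity_bytes(minutes, sectors_per_second=75):
    """User-data capacity of a Mode 1 disc of the given playing time."""
    return minutes * 60 * sectors_per_second * mode1_data

print(mode1_data, mode2_data)       # 2048 2336
print(mode1_capacity_bytes(80))     # 737280000
```

The 80-minute result, 737,280,000 bytes, is exactly the byte count given earlier in the text for a "700 MB" disc, and the 74-minute figure works out to about 650 MB.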
Before being stored on a disc with the techniques described above, each CD-ROM s
ector is scrambled to prevent some problematic patterns from showing up. These s
crambled sectors then follow the same encoding process described in the Red Book
in order to be finally stored on a CD.
CD-ROM transfer rates are specified as multiples of the base (1×) audio speed: [10]

Speed   KiB/s         Mbit/s          MiB/s          RPM
1×      150           1.2288          0.146          200–500
2×      300           2.4576          0.293          400–1,000
4×      600           4.9152          0.586          800–2,000
8×      1,200         9.8304          1.17           1,600–4,000
10×     1,500         12.288          1.46           2,000–5,000
12×     1,800         14.7456        1.76           2,400–6,000
20×     1,200–3,000   up to 24.576    up to 2.93     4,000 (CAV)
32×     1,920–4,800   up to 39.3216   up to 4.69     4,800 (CAV)
36×     2,160–5,400   up to 44.2368   up to 5.27     7,200 (CAV)
40×     2,400–6,000   up to 49.152    up to 5.86     8,000 (CAV)
48×     2,880–7,200   up to 58.9824   up to 7.03     9,600 (CAV)
52×     3,120–7,800   up to 63.8976   up to 7.62     10,400 (CAV)
56×     3,360–8,400   up to 68.8128   up to 8.20     11,200 (CAV)
72×     6,750–10,800  up to 88.4736   up to 10.5     2,000 (multi-beam)
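Every value in the table derives from the 1× base rate of 150 KiB/s; the unit conversions can be verified for the constant-velocity rows:

```python
# CD transfer rates as multiples of the 1x base rate of 150 KiB/s,
# converted to decimal megabits and binary mebibytes per second.

BASE_KIB_S = 150

def rates(speed):
    kib_s = speed * BASE_KIB_S
    mbit_s = kib_s * 1024 * 8 / 1e6    # decimal megabits per second
    mib_s = kib_s / 1024               # mebibytes per second
    return kib_s, mbit_s, mib_s

for n in (1, 2, 8):
    kib, mbit, mib = rates(n)
    print(f"{n}x: {kib} KiB/s = {mbit:.4f} Mbit/s = {mib:.3f} MiB/s")
```

Running this reproduces the first rows of the table (1× gives 1.2288 Mbit/s and about 0.146 MiB/s); the CAV rows instead list a range, because the linear data rate varies with head position at constant angular velocity.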
Digital Audio Tape
Digital Audio Tape (DAT or R-DAT) is a signal recording and playback med
ium introduced by Sony in 1987. In appearance it is similar to a compact audio c
assette, using 4 mm magnetic tape enclosed in a protective shell, but is roughly
half the size at 73 mm × 54 mm × 10.5 mm. As the name suggests, the recording is digital rather than analog; DAT converts and records at sampling rates higher than, equal to, or lower than that of a CD (48, 44.1 or 32 kHz) at 16-bit quantization, without data compression. This means that the entire input signal is retained. If a digital source is copied, the DAT will produce an exact clone, unlike other digital media such as Digital Compact Cassette or MiniDisc, both of which use lossy data compression.
The technology of DAT is closely based on that of video recorders, using
a rotating head and helical scan to record data. This prevents DATs from being
physically edited in the cut-and-splice manner of analog tapes, or open-reel dig
ital tapes like ProDigi or DASH.
The DAT standard allows for four sampling modes: 32 kHz at 12 bits, and 32 kHz,
44.1 kHz or 48 kHz at 16 bits. Certain recorders operate outside the specificati
on, allowing recording at 96 kHz and 24 bits (HHS). Some early machines aimed at
the consumer market did not operate at 44.1 kHz when recording so they could no
t be used to 'clone' a compact disc. Since each recording standard uses the same
tape, the quality of the sampling has a direct relation to the duration of the
recording: 32 kHz at 12 bits will allow six hours of recording onto a three-hour
tape while HHS will only give 90 minutes from the same tape. Included in the sig
nal data are subcodes to indicate the start and end of tracks or to skip a secti
on entirely; this allows for indexing and fast seeking. Two-channel stereo recor
ding is supported under all sampling rates and bit depths, but the R-DAT standar
d does support 4-channel recording at 32 kHz.
DAT tapes are between 15 and 180 minutes in length, a 120-minute tape being 60 m
eters in length. DAT tapes longer than 60 meters tend to be problematic in DAT r
ecorders due to the thinner media. DAT machines running at 48 kHz and 44.1 kHz s
ample rates transport the tape at 8.15 mm/s. DAT machines running at 32 kHz samp
le rate transport the tape at 4.075 mm/s.
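The rate-versus-duration trade-off described above follows from the raw PCM bit rates of the sampling modes: all modes share the same tape, so halving the bit rate doubles the recording time. A quick check of the six-hour figure:

```python
# Uncompressed PCM bit rate = sample rate x bits per sample x channels.
# Recording time on a given tape scales inversely with this rate.

def pcm_bit_rate(sample_rate_hz, bits, channels=2):
    return sample_rate_hz * bits * channels

standard = pcm_bit_rate(48_000, 16)    # 1,536,000 bit/s, the standard mode
long_play = pcm_bit_rate(32_000, 12)   #   768,000 bit/s, the LP mode

# A "three-hour" tape is rated at the standard rate; LP mode stretches it:
hours = 3 * standard / long_play
print(hours)   # 6.0, matching the six hours quoted above
```

This also matches the tape-speed figures in the text only loosely; the HHS high-resolution mode changes more than the bit rate, so its 90-minute figure does not follow from this simple ratio alone.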
There are many uses of digital audio tapes.
First, DAT was used professionally in the 1990s by the professional audio record
ing industry as part of an emerging all-digital production chain also including
digital multi-track recorders and digital mixing consoles that was used to creat
e a fully digital recording. In this configuration, it is possible for the audio
to remain digital from the first A/D converter after the mic preamp until it is
in a CD player.
Second, DAT was envisaged by proponents as the successor format to analogue audi
o cassettes in the way that the compact disc was the successor to vinyl-based re
cordings. It sold well in Japan, where high-end consumer audio stores stocked DA
T recorders and tapes into the 2010s and second-hand stores generally continued
to offer a wide selection of mint condition machines. However, there and in othe
r nations, the technology was never as commercially popular as CD or cassette. D
AT recorders proved to be comparatively expensive and few commercial recordings
were available. Globally, DAT remained popular, for a time, for making and tradi
ng recordings of live music, since available DAT recorders predated affordable C
D recorders.
Last but not least, although the format was designed for audio use, it was adopted for general data storage through the ISO Digital Data Storage standard, storing from 1.3 to 80 GB on a 60- to 180-meter tape depending on the standard and compression. It is a sequential-access medium and is commonly used for backups. Due to
the higher requirements for capacity and integrity in data backups, a computer-g
rade DAT was introduced, called DDS (Digital Data Storage). Although functionall
y similar to audio DATs, only a few DDS and DAT drives (in particular, those man
ufactured by Archive for SGI workstations) are capable of reading the audio data
from a DAT cassette. SGI DDS4 drives no longer have audio support; SGI removed
the feature due to "lack of demand".
Digital Data Storage
Digital Data Storage (DDS) is a format for storing and backing up comput
er data on magnetic tape that evolved from Digital Audio Tape (DAT) technology,
which was originally created for CD-quality audio recording. In 1989, Sony and H
ewlett Packard defined the DDS format for data storage using DAT tape cartridges
. Tapes conforming to the initial DDS format can be "played" by either DAT or DD
S tape machines. However, most DDS tape drives cannot retrieve the audio stored
on a DAT cartridge.
DDS uses tape with a width of 3.8mm, with the exception of the latest fo
rmats, DAT 160 and DAT 320, which are 8mm wide. Initially, the tape was 60 meter
s (197 feet) or 90 meters (295 ft.) long. Advancements in materials technology h
ave allowed the length to be increased significantly in successive versions. A DDS tape drive uses helical scanning for recording, the same process used by a video cassette recorder (VCR). The data is verified after writing; if errors are present, the write heads rewrite the data.
There are several generations of the Digital Data Storage format:
DDS-1 - Stores up to 1.3 GB uncompressed (2.6 GB compressed) on a 60 m c
artridge, 2 GB uncompressed (4 GB compressed) on a 90 m cartridge. The DDS-1 Car
tridge often does not have the -1 designation. It can often be recognized by hav
ing 4 vertical bars separated from "DDS" by the words "Digital Data Storage".
DDS-2 - Stores up to 4 GB uncompressed (8 GB compressed) on a 120 m cartridge.
DDS-3 - Stores up to 12 GB uncompressed (24 GB compressed) on a 125 m ca
rtridge. DDS-3 uses PRML (Partial Response Maximum Likelihood). PRML minimizes e
lectronic noise for a cleaner data recording.
DDS-4 - DDS-4 stores up to 20 GB uncompressed (40 GB compressed) on a 15
0 m cartridge. This format is also called DAT 40.
DAT 72 - DAT 72 stores up to 36 GB uncompressed (72 GB compressed) on a
170 m cartridge. The DAT 72 standard was developed by HP and Certance. It has th
e same form-factor as DDS-3 and -4 and is sometimes referred to as DDS-5.
DAT-160 - DAT 160, launched in June 2007 by HP, stores up to 80 GB uncompressed (160 GB compressed). A major change from the previous generations is the width of the tape. DAT 160 uses 8 mm wide tape in a slightly thicker cartridge, while all prior versions use 3.81 mm wide tape. Despite the difference in tape widths, DAT 160 drives are backwards compatible with DAT 72 and DAT 40 (DDS-4) tapes. Native capacity is 80 GB and the native transfer rate was raised to 6.9 MB/s, mostly due to prolonging head/tape contact to 180 degrees (compared to 90 degrees previously).
DAT-320 - In November 2009, HP launched the new DAT 320, which stores up to 160 GB
uncompressed (marketed as 320 GB assuming 2:1 compression).
FUTURE GEN 8 - It is expected to store approximately 300 GB uncompressed.
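The generation list above can be summarised in a short table. This is an illustrative sketch only; it assumes the typical 2:1 compression ratio behind the marketed "compressed" figures, and uses the 90 m variant for DDS-1:

```python
# Summary of the DDS generations described above.
# Marketed "compressed" capacities assume a 2:1 compression ratio.

dds_generations = {
    # name: (native capacity in GB, tape length in m)
    "DDS-1":   (2,   90),    # 1.3 GB on the 60 m cartridge
    "DDS-2":   (4,  120),
    "DDS-3":   (12, 125),
    "DDS-4":   (20, 150),
    "DAT 72":  (36, 170),
    "DAT 160": (80, None),   # 8 mm tape; length not given above
}

for name, (native_gb, length_m) in dds_generations.items():
    compressed = native_gb * 2  # 2:1 compression assumption
    tape = f"{length_m} m" if length_m else "n/a"
    print(f"{name}: {native_gb} GB native / {compressed} GB compressed ({tape})")
```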
Digital Versatile Disc
DVD is the new generation of optical disc storage technology. DVD is ess
entially a bigger, faster CD that can hold cinema-like video, better-than-CD aud
io, still photos, and computer data. DVD aims to encompass home entertainment, c
omputers, and business information with a single digital format. It has replaced
laserdisc, is well on the way to replacing videotape and video game cartridges,
and could eventually replace audio CD and CD-ROM. DVD has widespread support fr
om all major electronics companies, all major computer hardware companies, and a
ll major movie and music studios. With this unprecedented support, DVD became the most successful consumer electronics product of all time within three years of its introduction.
According to Wikipedia, the basic types of DVD (12 cm diameter, single-s
ided or homogeneous double-sided) are referred to by a rough approximation of th
eir capacity in gigabytes. In draft versions of the specification, DVD-5 indeed
held five gigabytes, but some parameters were changed later on as explained abov
e, so the capacity decreased. Other formats, those with 8 cm diameter and hybrid
variants, acquired similar numeric names with even larger deviation.
The 12 cm type is a standard DVD, and the 8 cm variety is known as a MiniDVD. Th
ese are the same sizes as a standard CD and a mini-CD, respectively. The capacit
y by surface (MiB/cm2) varies from 6.92 MiB/cm2 in the DVD-1 to 18.0 MiB/cm2 in
the DVD-18.
As with hard disk drives, in the DVD realm, gigabyte and the symbol GB are usually used in the SI sense (i.e., 10^9, or 1,000,000,000 bytes). For distinction, gibibyte (with symbol GiB) is used (i.e., 1024^3 (2^30), or 1,073,741,824 bytes).
Each DVD sector contains 2,418 bytes of data, 2,048 bytes of which are user data
. There is a small difference in storage space between + and - (hyphen) formats:
Designation  Type      Sides  Layers (total)  Diameter (cm)  Capacity (GB)  Capacity (GiB)
DVD-1        SS SL     1      1               8              1.46           1.36
DVD-2        SS DL     1      2               8              2.66           2.47
DVD-3        DS SL     2      2               8              2.92           2.72
DVD-4        DS DL     2      4               8              5.32           4.95
DVD-5        SS SL     1      1               12             4.70           4.37
DVD-9        SS DL     1      2               12             8.54           7.95
DVD-10       DS SL     2      2               12             9.40           8.75
DVD-14       DS SL+DL  2      3               12             13.24          12.33
DVD-18       DS DL     2      4               12             17.08          15.90
In these tables, SS means that the disc is single-sided; DS means double-sided; SL means single-layered; and DL means dual-layered.
The following table covers the recordable and re-writable discs:
Designation    Type   Sides  Layers (total)  Diameter (cm)  Capacity (GB)  Capacity (GiB)
DVD-R (1.0)    SS SL  1      1               12             3.95           3.68
DVD-R (2.0)    SS SL  1      1               12             4.70           4.37
DVD-RW         SS SL  1      1               12             4.70           4.37
DVD+R          SS SL  1      1               12             4.70           4.37
DVD+RW         SS SL  1      1               12             4.70           4.37
DVD-R          DS SL  2      2               12             8.54           7.96
DVD-RW         DS SL  2      2               12             8.54           7.96
DVD+R          DS SL  2      2               12             8.55           7.96
DVD+RW         DS SL  2      2               12             8.55           7.96
DVD-RAM        SS SL  1      1               8              1.46           1.36*
DVD-RAM        DS SL  2      2               8              2.65           2.47*
DVD-RAM (1.0)  SS SL  1      1               12             2.58           2.40
DVD-RAM (2.0)  SS SL  1      1               12             4.70           4.37
DVD-RAM (1.0)  DS SL  2      2               12             5.16           4.80
DVD-RAM (2.0)  DS SL  2      2               12             9.40           8.75
The following table is for the capacity differences of the writable DVDs:
Type      Sectors    Bytes          kB             MB         GB     KiB        MiB        GiB
DVD-R SL  2,298,496  4,707,319,808  4,707,319.808  4,707.320  4.707  4,596,992  4,489.250  4.384
DVD+R SL  2,295,104  4,700,372,992  4,700,372.992  4,700.373  4.700  4,590,208  4,482.625  4.378
DVD-R DL  4,171,712  8,543,666,176  8,543,666.176  8,543.666  8.544  8,343,424  8,147.875  7.957
DVD+R DL  4,173,824  8,547,991,552  8,547,991.552  8,547.992  8.548  8,347,648  8,152.000  7.961
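The figures in the capacity table follow directly from the sector counts: multiply by the 2,048 user-data bytes per sector, then express the total in SI gigabytes and binary gibibytes. A minimal sketch:

```python
# Where the "4.7 GB vs 4.37 GiB" style figures come from: each DVD sector
# carries 2,048 bytes of user data, and GB (SI) differs from GiB (binary).

SECTOR_USER_BYTES = 2048

def capacity(sectors):
    total = sectors * SECTOR_USER_BYTES
    gb = total / 1_000_000_000   # SI gigabytes
    gib = total / (1024 ** 3)    # binary gibibytes
    return total, gb, gib

# Sector counts from the table above
for name, sectors in [("DVD-R SL", 2_298_496), ("DVD+R SL", 2_295_104),
                      ("DVD-R DL", 4_171_712), ("DVD+R DL", 4_173_824)]:
    total, gb, gib = capacity(sectors)
    print(f"{name}: {total:,} bytes = {gb:.3f} GB = {gib:.3f} GiB")
```

Running this reproduces the table's byte, GB and GiB columns.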
The following table shows the transfer rates of the different DVDs:
Drive speed  Data rate (Mbit/s)  Data rate (MB/s)  ~Write time, single-layer (min)  ~Write time, dual-layer (min)
1 11.08 1.39 57 103
2 22.16 2.77 28 51
2.4 26.59 3.32 24 43
2.6 28.81 3.60 22 40
4 44.32 5.54 14 26
6 66.48 8.31 9 17
8 88.64 11.08 7 13
10 110.80 13.85 6 10
12 132.96 16.62 5 9
16 177.28 22.16 4 6
18 199.44 24.93 3 6
20 221.60 27.70 3 5
22 243.76 30.47 3 5
24 265.92 33.24 2 4
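The write times in the table above can be checked with simple arithmetic: 1x DVD speed is 1.385 MB/s (11.08 Mbit/s), and write time is roughly capacity divided by data rate. A sketch, using the DVD+R capacities from the earlier table:

```python
# Checking the write-time column: time = capacity / (speed x 1x data rate).

BASE_RATE_BYTES = 1_385_000   # 1x DVD speed in bytes per second (11.08 Mbit/s)
SL_BYTES = 4_700_372_992      # single-layer (DVD+R SL) capacity
DL_BYTES = 8_547_991_552      # dual-layer (DVD+R DL) capacity

def write_minutes(speed, capacity_bytes):
    """Approximate write time in minutes at a given 'Nx' drive speed."""
    return capacity_bytes / (speed * BASE_RATE_BYTES) / 60

for speed in (1, 4, 16):
    print(f"{speed}x: SL ~{write_minutes(speed, SL_BYTES):.0f} min, "
          f"DL ~{write_minutes(speed, DL_BYTES):.0f} min")
```

At 1x this gives about 57 minutes single-layer and 103 minutes dual-layer, matching the first row of the table.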
Multimedia Card
The Multimedia Card (MMC) is a flash memory card standard. Unveiled in 1
997 by Siemens and SanDisk, it is based on Toshiba's NAND-based flash memory, an
d is therefore much smaller than earlier systems based on Intel NOR-based memory
such as Compact Flash. MMC is about the size of a postage stamp: 24mm x 32mm x
1.5mm. MMC originally used a 1-bit serial interface, but newer versions of the s
pecification allow transfers of 4 or sometimes even 8 bits at a time. They have
been more or less superseded by SD cards, but still see significant use because
MMC cards can be used in any device which supports SD cards.
Typically, an MMC card is used as storage media for a portable device, in a form
that can easily be removed for access by a PC. MMC cards are currently availabl
e in sizes up to and including 2 GB, and are used in almost every context in whi
ch memory cards are used, like cell phones, mp3 players, digital cameras, and PD
As. Since the introduction of Secure Digital, few companies build MMC slots into
their devices, but the slightly-thinner, pin-compatible MMC cards can be used in
any device that supports SD cards.
In 2004, the Reduced-Size MultiMediaCard (RS-MMC) was introduced as a smaller form factor of the MMC, about half the size: 24 mm × 18 mm × 1.4 mm. The RS-MMC uses a
simple mechanical adapter to elongate the card so it can be used in any MMC (or
SD) slot. RS-MMCs are currently available in sizes up to and including 2 GB.
The modern continuation of an RS-MMC is commonly known as MiniDrive (MD-MMC). A
MiniDrive is generally a microSD card adapter in the RS-MMC form factor. This al
lows a user to take advantage of the wider range of modern MMCs available to exc
eed the historic 2 GB limitations of older chip technology.
The only significant hardware adopters of RS-MMC were Nokia and Siemens,
who used to use RS-MMC in their Series 60 Symbian smartphones, the Nokia 770 In
ternet Tablet, and generations 65 and 75 (Siemens). However, since 2006 all of N
okia's new devices with card slots have used miniSD or microSD cards, with the c
ompany appearing to abandon the MMC standard in its products. Siemens exited the
mobile phone business completely in 2006. Siemens continue to use MMC for some
PLC storage.
One of the first substantial changes in MMC was the introduction of the Dual-Voltage MultimediaCard (DV-MMC), which supports operation at 1.8 V in addition to 3.3 V. Running at lower voltages reduces the card's energy consumption, which is important in mobile devices. However, simple dual-voltage parts quickly went out of production in favour of MMCplus and MMCmobile, which offer capabilities in addition to dual-voltage support.
Version 4.x of the MMC standard, introduced in 2005, brought two very sig
nificant changes to compete against SD cards: support for running at higher spee
ds (26 MHz and 52 MHz) than the original MMC (20 MHz) or SD (25 MHz, 50 MHz) and
a four- or eight-bit-wide data bus. Version 4.x full-size cards and reduced-siz
e cards can be marketed as MMCplus and MMCmobile respectively. Version 4.x cards
are fully backward compatible with existing readers but require updated hardwar
e/software to use their new capabilities; even though the four-bit-wide bus and
high-speed modes of operation are deliberately electrically compatible with SD,
the initialization protocol is different, so firmware/software updates are requi
red to support these features in an SD reader.
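The clock rates and bus widths above imply rough upper bounds on throughput: clock frequency times bus width, ignoring protocol overhead. A minimal sketch of that back-of-the-envelope comparison:

```python
# Rough peak throughput implied by clock rate and bus width
# (protocol overhead ignored, so these are upper bounds only).

def peak_mb_per_s(clock_mhz, bus_bits):
    """MHz x bits per clock, expressed in MB/s."""
    return clock_mhz * bus_bits / 8

print(f"Original MMC, 20 MHz x 1 bit:  {peak_mb_per_s(20, 1):.1f} MB/s")
print(f"MMC 4.x,      52 MHz x 8 bits: {peak_mb_per_s(52, 8):.1f} MB/s")
print(f"SD,           50 MHz x 4 bits: {peak_mb_per_s(50, 4):.1f} MB/s")
```

This shows why the wider, faster-clocked 4.x bus was a meaningful competitive answer to SD.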
MMCmicro is a micro-size version of MMC. With dimensions of 14 mm × 12 mm × 1.1 mm,
it is even smaller and thinner than RS-MMC. Like MMCmobile, MMCmicro supports du
al voltage, is backward compatible with MMC, and can be used in full-size MMC an
d SD slots with a mechanical adapter. MMCmicro cards have the high-speed and fou
r-bit-bus features of the 4.x spec but not the eight-bit bus, due to the absence
of the extra pins. It was formerly known as S-card when introduced by Samsung o
n December 13, 2004. It was later adapted and introduced in 2005 by the MultiMed
iaCard Association (MMCA) as the third form factor memory card in the MultiMedia
Card family. MMCmicro appears very similar to microSD but the two formats are no
t physically compatible and have incompatible pinouts.
The MiCard is a backward-compatible extension of the MMC standard with a theoret
ical maximum size of 2048 GB (2 TB) announced on June 2, 2007. The card is compo
sed of two detachable parts, much like a microSD card with an SD adapter. The sm
all memory card fits directly in a USB port while it also has MMC-compatible ele
ctrical contacts, which with an included electromechanical adapter fits in tradi
tional MMC and SD card readers. To date, only one manufacturer has produced card
s in this format. Developed by Industrial Technology Research Institute of Taiwa
n, as of the announcement 12 Taiwanese companies (including A-DATA Technology, A
sustek, BenQ, Carry Computer Eng. Co., C-One Technology, DBTel, Power Digital Ca
rd Co., and RiCHIP) had signed on to manufacture the new memory card. However, a
s of June 2011 none of the listed companies has released any such cards, and nor
have any further announcements been made about plans for the format. The card w
as announced to be available starting in the third quarter of 2007. It was expec
ted to save the 12 Taiwanese companies that planned to manufacture the product and r
elated hardware up to USD 40 million in licensing fees that presumably would oth
erwise be paid to owners of competing flash memory formats. The initial card was
to have a capacity of 8 GB, while the standard would support sizes up to 2048 G
B. It was stated to have data transfer speeds of 480 Mbit/s (60 Mbyte/s), with p
lans to increase data throughput over time.
Memory Stick
A Memory Stick is a removable memory card format, launched by Sony in Oc
tober 1998.
The original Memory Sticks came in sizes from 4 MB up to 128 MB; a sub-version, the Memory Stick Select, allowed two banks of 128 MB selectable by a slider switch, essentially two cards squeezed into one, and a 256 MB Select was still being manufactured by Lexar. The largest capacity Memory Stick currently available is 64 GB. According to Sony, the Memory Stick PRO has a maximum theoretical size of 2 TB.
Typically, Memory Sticks are used as storage media for a portable device, in a f
orm that can easily be removed for access by a personal computer. For example, S
ony digital compact cameras use Memory Stick for storing image files. With a Mem
ory Stick-capable Memory card reader a user can copy the pictures taken with the
Sony digital camera to a computer. Sony typically includes Memory Stick reader
hardware in its first party consumer electronics, such as digital cameras, digit
al music players, PDAs, cellular phones, the VAIO line of laptop computers, and
the PlayStation Portable.
Memory Sticks include a wide range of actual formats.
The original Memory Stick is approximately the size and thickness of a stick of
chewing gum. It was available in sizes from 4 MB to 128 MB. The original Memory
Stick is no longer manufactured.
In response to the storage limitations of the original Memory Stick, Sony introd
uced the Memory Stick Select. The Memory Stick Select was two separate 128 MB pa
rtitions which the user could switch between using a (physical) switch on the ca
rd. This solution was fairly unpopular, but it did give users of older Memory St
ick devices more capacity. Its size was still the same as the original Memory Stick.
The Memory Stick PRO, introduced in 2003 as a joint effort between Sony and SanD
isk, would be the longer-lasting solution to the space problem. Most devices tha
t use the original Memory Sticks support both the original and PRO sticks since
both formats have identical form factors. Some readers that were not compatible
could be upgraded to Memory Stick PRO support via a firmware update. Memory Stic
k PROs have a marginally higher transfer speed and a maximum theoretical capacit
y of 32 GB, although it appears capacities higher than 4 GB are only available i
n the PRO Duo form factor. High Speed Memory Stick PROs are available, and newer
devices support this high speed mode, allowing for faster file transfers. All M
emory Stick PROs larger than 1 GB support this High Speed mode, and High Speed M
emory Stick Pros are backwards-compatible with devices that don't support the Hi
gh Speed mode. High capacity memory sticks such as the 4 GB versions are expensi
ve compared to other types of flash memory such as SD cards and CompactFlash.
The Memory Stick Duo was developed in response to Sony's need for a smaller flas
h memory card for pocket-sized digital cameras, cell phones and the PlayStation
Portable. It is slightly smaller than the competing Secure Digital (SD) format a
nd roughly two thirds the length of the standard Memory Stick form factor, but c
osts more. Memory Stick Duos are available with the same features as the larger
standard Memory Stick, available with and without high speed mode, and with and
without MagicGate support. The Memory Stick PRO Duo has replaced the Memory Stick Duo due to the Duo's 128 MB size limitation, but has kept the same form factor.
The Memory Stick PRO Duo (MSPD) quickly replaced the Memory Stick Duo due to the
Duo's size limitation of 128 MB and slow transfer speed. Memory Stick PRO Duos
are available in all the same variants as the larger Memory Stick PRO, with and
without High Speed mode, and with and without MagicGate support.
Sony has released two different versions of the Memory Stick PRO Duo: a 16 GB version in March 2008 and a 32 GB version on August 21, 2009. In 2009, Sony and SanDisk also announced the joint development of an expanded Memory Stick PRO format, tentatively named "Memory Stick PRO Format for Extended High Capacity", that would extend capacity to a theoretical maximum of 2 terabytes. Sony has since finalized the format and released its specification under the new name, Memory Stick XC.
On December 11, 2006, Sony, together with SanDisk, announced the Memory Stick PR
O-HG Duo. While only serial and 4-bit parallel interfaces are supported in the M
emory Stick PRO format, an 8-bit parallel interface was added to the Memory Stic
k PRO-HG format. Also, the maximum interface clock frequency was increased from
40 MHz to 60 MHz. With these enhancements, a theoretical transfer rate of 480 Mb
it/s (60 Mbyte/s) is achieved, which is three times faster than the Memory Stick
PRO format.
In a joint venture with SanDisk, Sony released a new Memory Stick format on Febr
uary 6, 2006. The Memory Stick Micro (M2) measures 15 × 12.5 × 1.2 mm (roughly one-q
uarter the size of the Duo) with 64 MB, 128 MB, 256 MB, 512 MB, 1 GB, 2 GB, 4 GB
, 8 GB, and 16 GB capacities available. The format has a theoretical limit of 32
GB and maximum transfer speed of 160 Mbit/s. However, as with the PRO Duo forma
t, it has been expanded through the XC series as Memory Stick XC Micro and Memor
y Stick XC-HG Micro, both with the theoretical maximum capacity of 2 TB.
On January 7, 2009, SanDisk and Sony announced the Memory Stick XC format (tenta
tively named "Memory Stick Format Series for Extended High Capacity" at the time
). The Memory Stick XC has a maximum 2 TB capacity, 64 times larger than that of
the Memory Stick PRO which is limited to 32 GB. XC series has the same form fac
tors as the PRO series, and supports MagicGate content protection technology as well as the Access Control function, as the PRO series does. In line with the rest of the ind
ustry, the XC series uses the newer exFAT file system due to size and formatting
limitations of FAT/FAT16/FAT32 filesystems used in the PRO series. A maximum tr
ansfer speed of 480 Mbit/s (60 Mbyte/s) is achieved through 8-bit parallel data transfer.
Sony announced the release of the Memory Stick PRO-HG Duo HX on May 17, 2011, which was considered the fastest card ever made by the manufacturer. It measures 20 × 31 × 1.6 mm, with 8 GB, 16 GB or 32 GB versions available. Also, the format
offers a maximum transfer speed of 50 MB per second.
Microdrive
A Microdrive is originally a miniaturized hard disk in the format of a CompactFlash card, developed by IBM. The first generation of Microdrives had a capacity of 340 MB; this version was already used by NASA. The next generation was available with capacities of 512 MB and 1 GB. Microdrives have a magnetic memory with a high capacity and a disc diameter of 1 inch. These small hard disks can be easily destroyed by vibration and too low air pressure. Microdrives are usually used in PDAs and digital cameras.
The Microdrive was developed and launched in 1999 by IBM with a capacity of 170 MB. Capacity expanded to 8 GB by 2006. They weigh about 16 g (~1/2 oz), with dimensions of 42.8 × 36.4 × 5 mm. They were the smallest hard drives in the world at the time. From 1999 to 2003, they were known as IBM Microdrives, and from 2003 as Hitachi Microdrives, after Hitachi bought IBM's hard drive division. Microdrive was a registered trademark of IBM and Hitachi for each respective period.
IBM initially released 170 MB and 340 MB models. The next year, 512 MB and 1 GB
models became available. In December 2002 Hitachi bought IBM's disk drive busine
ss, including the Microdrive technology and brand. By 2003, 2GB models were intr
oduced. Over the years, larger sizes have become available.
In 2004, Seagate launched 2.5 and 5 GB models, and tended to refer to them as either 1-inch hard drives or CompactFlash hard drives due to the trademark issue.
These drives are also commonly known as the Seagate ST1. In 2005 Seagate launche
d an 8 GB model. Seagate also sold a standalone consumer product based on these
drives with a product known as the Pocket Hard Drive. These devices came in the
shape of a hockey puck with an integrated USB2.0 cable.
Universal Serial Bus
A USB Flash Drive is essentially NAND-type flash memory integrated with
a USB interface used as a small, lightweight, removable data storage device. Thi
s hot-swappable, non-volatile, solid-state device is universally compatible with
post-Windows 98 platforms, Macintosh platforms, and most Unix-like platforms.
USB Flash Drives are also known as "pen drives", "thumb drives", "flash drives",
"USB keys", "USB memory keys", "USB sticks", "jump drives", "keydrives","vault d
rives" and many more names. They are also sometimes miscalled memory sticks (a S
ony trademark describing a different type of portable memory).
A flash drive consists of a small printed circuit board encased in a robust pl
astic casing, making the drive sturdy enough to be carried around in a pocket, a
s a keyfob, or on a lanyard. Only the USB connector protrudes from this plastic
protection, and is often covered by a removable plastic cap. Most flash drives f
eature the larger type-A USB connection, although some feature the smaller "mini
USB" connection.
Flash drives are active only when powered by a USB computer connection, and requ
ire no other external power source or battery power source; key drives are run o
ff the limited supply afforded by the USB connection (5V). To access the data st
ored in a flash drive, the flash drive must be connected to a computer, either b
y direct connection to the computer's USB port or via a USB hub.
USB flash drives are often used for the same purposes for which floppy disks or
CD-ROMs were used, i.e., for storage, back-up and transfer of computer files. Th
ey are smaller, faster, have thousands of times more capacity, and are more dura
ble and reliable because they have no moving parts. Until about 2005, most deskt
op and laptop computers were supplied with floppy disk drives in addition to USB
ports, but floppy disk drives have been abandoned due to their lower capacity c
ompared to USB flash drives.
USB flash drives use the USB mass storage standard, supported natively by modern
operating systems such as Linux, OS X, Windows, and other Unix-like systems, as
well as many BIOS boot ROMs. USB drives with USB 2.0 support can store more dat
a and transfer faster than much larger optical disc drives like CD-RW or DVD-RW
drives and can be read by many other systems such as the Xbox 360, PlayStation 3
, DVD players, and a number of handheld devices such as smartphones and tablet computers.
A flash drive consists of a small printed circuit board carrying the circuit ele
ments and a USB connector, insulated electrically and protected inside a plastic
, metal, or rubberized case which can be carried in a pocket or on a key chain,
for example. The USB connector may be protected by a removable cap or by retract
ing into the body of the drive, although it is not likely to be damaged if unpro
tected. Most flash drives use a standard type-A USB connection allowing connecti
on with a port on a personal computer, but drives for other interfaces also exist.
Trek Technology and IBM began selling the first USB flash drives commercially in
2000. Trek Technology sold a model under the brand name "ThumbDrive", and IBM m
arketed the first such drives in North America with its product named the "DiskO
nKey", which was developed and manufactured by M-Systems. IBM's USB flash drive
became available on December 15, 2000, and had a storage capacity of 8 MB, more
than five times the capacity of the then-common floppy disks.
As of 2013, most second-generation USB flash drives have USB 2.0 connectivity. The USB 2.0 Hi-Speed specification has a 480 megabit per second (Mbit/s) upper bound on the transfer rate, but after protocol overhead, that translates to only about 35 megabytes per second (MB/s) of effective throughput. The fastest USB 2.0 flash drives approach that speed. That is considerably slower than what a hard disk drive or solid-state drive can transfer through a SATA interface.
File transfer speeds vary considerably. Speeds may be given in megabytes per second (MB/s), megabits per second (Mbit/s) or optical drive multipliers such as "180X" (180 times 150 kibibytes (KiB) per second). Typical fast drives claim to read at up to 30 megabytes/s (MB/s) and write at about half that speed. This is about 20 times faster than USB 1.1 "full speed" devices, which are limited to a maximum speed of 12 Mbit/s (1 MB/s with overhead). The speed of the device is significantly affected by the access pattern; for example, small writes to random locations are much slower (and cause more wear) than long sequential reads.
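The optical-style "NX" ratings mentioned above convert to conventional units with one multiplication, since 1X is defined as 150 KiB/s. A minimal sketch:

```python
# Converting optical-drive-style "NX" speed ratings to MB/s:
# 1X = 150 KiB/s, so "180X" means 180 x 150 KiB/s.

KIB = 1024

def x_rating_to_mb_per_s(multiplier):
    """Convert an 'NX' rating (1X = 150 KiB/s) to SI megabytes per second."""
    return multiplier * 150 * KIB / 1_000_000

print(f"180X = {x_rating_to_mb_per_s(180):.1f} MB/s")
```

A "180X" drive thus works out to about 27.6 MB/s, consistent with the ~30 MB/s claimed by typical fast drives.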
Like USB 2.0 before it, USB 3.0 offers dramatically improved data transfer rates
compared to its predecessor. It was announced in late 2008, but consumer device
s were not available until the beginning of 2010. The USB 3.0 interface specifie
s transfer rates up to 5 Gbit/s (625 MB/s), compared to USB 2.0's 480 Mbit/s (60
MB/s). All USB 3.0 devices are backward compatible with USB 2.0 ports. Computer
s with such ports are becoming very popular and common. Many newer laptops and d
esktops have at least one such port. USB 3.0 port expansion cards are available
to upgrade older systems, and many newer motherboards feature two or more USB 3.
0 jacks. Even though the interface allows extremely high data transfer speeds, a
s of 2011 most USB 3.0 flash drives do not utilize the full speed of the interfa
ce due to limitations of their memory controllers (though some four channel memo
ry controllers are now coming to market).
Flash memory combines a number of older technologies, with lower cost, lower pow
er consumption and small size made possible by advances in microprocessor techno
logy. The memory storage was based on earlier EPROM and EEPROM technologies. The
se had limited capacity, were slow for both reading and writing, required comple
x high-voltage drive circuitry, and could only be re-written after erasing the e
ntire contents of the chip.
Hardware designers later developed EEPROMs with the erasure region broken up int
o smaller "fields" that could be erased individually without affecting the other
s. Altering the contents of a particular memory location involved copying the en
tire field into an off-chip buffer memory, erasing the field, modifying the data
as required in the buffer, and re-writing it into the same field. This required
considerable computer support, and PC-based EEPROM flash memory systems often c
arried their own dedicated microprocessor system. Flash drives are more or less
a miniaturized version of this.
The development of high-speed serial data interfaces such as USB made semiconduc
tor memory systems with serially accessed storage viable, and the simultaneous d
evelopment of small, high-speed, low-power microprocessor systems allowed this t
o be incorporated into extremely compact systems. Serial access requires far few
er electrical connections for the memory chips than does parallel access, which
has simplified the manufacture of multi-gigabyte drives.
Computers access modern flash memory systems very much like hard disk drives, wh
ere the controller system has full control over where information is actually stored.
There are typically five parts to a flash drive:
Standard-A USB plug - provides a physical interface to the host computer.
USB mass storage controller - a small microcontroller with a small amount of on-chip ROM and RAM.
NAND flash memory chip(s) - stores data (NAND flash is typically also used in digital cameras).
Crystal oscillator - produces the device's main 12 MHz clock signal and controls the device's data output through a phase-locked loop.
Cover - typically made of plastic or metal, to protect the electronics against mechanical stress and even possible short circuits.
Additionally, we have:
Jumpers and test pins - for testing during the flash drive's manufacturing or for loading code into the microprocessor.
LEDs - indicate data transfers or data reads and writes.
Write-protect switches - enable or disable writing of data into memory.
Unpopulated space - provides space to include a second memory chip. Having this second space allows the manufacturer to use a single printed circuit board for more than one storage size device.
USB connector cover or cap - reduces the risk of damage, prevents the entry of dirt or other contaminants, and improves overall device appearance. Some flash drives use retractable USB connectors instead. Others have a swivel arrangement so that the connector can be protected without removing anything.
Transport aid - the cap or the body often contains a hole suitable for connection to a key chain or lanyard. Connecting the cap, rather than the body, can allow the drive itself to be lost.
Some drives offer expandable storage via an internal memory card slot, much like a memory card reader.
Most flash drives ship preformatted with the FAT32 or exFAT file systems. The u
biquity of this file system allows the drive to be accessed on virtually any hos
t device with USB support. Also, standard FAT maintenance utilities (e.g., ScanD
isk) can be used to repair or retrieve corrupted data. However, because a flash
drive appears as a USB-connected hard drive to the host system, the drive can be
reformatted to any file system supported by the host operating system.
Defragmenting: Flash drives can be defragmented. There is a widespread opinion that defragmenting brings little advantage (as there is no mechanical head that moves from fragment to fragment), and that defragmenting shortens the life of the drive by making many unnecessary writes. However, some sources claim that defragmenting a flash drive can improve performance (mostly due to improved caching of the clustered data), and that the additional wear on flash drives may not be significant.
Even Distribution: Some file systems are designed to distribute usage over an en
tire memory device without concentrating usage on any part (e.g., for a director
y) to prolong the life of simple flash memory devices. Some USB flash drives hav
e this 'wear leveling' feature built into the software controller to prolong dev
ice life, while others do not, so it is not necessarily helpful to install one o
f these file systems.
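As a toy illustration of the wear-leveling idea (not a real flash translation layer, and the class below is purely hypothetical), an allocator can simply direct each write to the least-worn block, so erase counts stay even across the device instead of concentrating on one hot spot:

```python
class WearLeveler:
    """Toy wear-leveling allocator: always write to the least-erased block."""

    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks

    def pick_block(self) -> int:
        # Choose the block with the fewest erases so far.
        return min(range(len(self.erase_counts)),
                   key=self.erase_counts.__getitem__)

    def write(self) -> int:
        block = self.pick_block()
        self.erase_counts[block] += 1
        return block
```

After many writes, the erase counts across all blocks differ by at most one, which is the whole point: no single block wears out long before the others.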
Hard Drive: Sectors are 512 bytes long, for compatibility with hard drives, and
the first sector can contain a master boot record and a partition table. Therefo
re, USB flash units can be partitioned just like hard drives.
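Because the first sector follows the classic PC master boot record layout, its partition table can be decoded in a few lines. The sketch below is illustrative only and assumes the standard MBR offsets (four 16-byte partition entries starting at byte 446, with the 0x55AA boot signature at the end of the sector):

```python
import struct

def parse_mbr(sector: bytes):
    """Decode the four primary partition entries from a 512-byte MBR sector."""
    assert len(sector) == 512
    if sector[510:512] != b"\x55\xaa":
        raise ValueError("missing MBR boot signature")
    partitions = []
    for i in range(4):
        entry = sector[446 + i * 16 : 446 + (i + 1) * 16]
        # status(1), CHS start(3, skipped), type(1), CHS end(3, skipped),
        # starting LBA(4), sector count(4)
        status, ptype, lba_start, num_sectors = struct.unpack_from(
            "<B3xB3xII", entry)
        if ptype != 0:  # a type byte of 0 marks an unused entry
            partitions.append({
                "bootable": status == 0x80,
                "type": ptype,
                "start_lba": lba_start,
                "sectors": num_sectors,
            })
    return partitions
```

Reading the drive's first sector and passing it to this function yields the same partition list a hard-drive tool would show.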
USB flash drives have their advantages. These include:
Flash drives use little power, have no fragile moving parts, and for most capaci
ties are small and light. Data stored on flash drives is impervious to mechanica
l shock, magnetic fields, scratches and dust. These properties make them suitabl
e for transporting data from place to place and keeping the data readily at hand.
Flash drives also store data densely compared to many removable media. In mid-20
09, 256 GB drives became available, with the ability to hold many times more dat
a than a DVD or even a Blu-ray disc.
Flash drives implement the USB mass storage device class so that most modern ope
rating systems can read and write to them without installing device drivers. The
flash drives present a simple block-structured logical unit to the host operati
ng system, hiding the individual complex implementation details of the various u
nderlying flash memory devices. The operating system can use any file system or
block addressing scheme. Some computers can boot up from flash drives.
Specially manufactured flash drives are available that have a tough rubber or me
tal casing designed to be waterproof and virtually "unbreakable". These flash dr
ives retain their memory after being submerged in water, and even through a mach
ine wash. Leaving such a flash drive out to dry completely before allowing curre
nt to run through it has been known to result in a working drive with no future
problems. Channel Five's Gadget Show cooked one of these flash drives with propa
ne, froze it with dry ice, submerged it in various acidic liquids, ran over it w
ith a jeep and fired it against a wall with a mortar. A company specializing in
recovering lost data from computer drives managed to recover all the data on the
drive. All data on the other removable storage devices tested, using optical or
magnetic technologies, were destroyed.
USB flash drives also have their disadvantages:
Like all flash memory devices, flash drives can sustain only a limited number of write and erase cycles before the drive fails. This should be a consideration when using a flash drive to run application software or an operating system. To address this, as well as space limitations, some developers have produced special versions of operating systems (such as Linux in Live USB) or commonplace applications (such as Mozilla Firefox) designed to run from flash drives. These are typically optimized for size and configured to place temporary or intermediate files in the computer's main RAM rather than store them temporarily on the flash drive.
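The effect of a finite write/erase budget can be put into rough numbers. The sketch below uses purely illustrative figures (the cycle count and daily write volume are assumptions, and write amplification is ignored):

```python
def drive_lifetime_years(capacity_gb: float, pe_cycles: int,
                         writes_gb_per_day: float) -> float:
    """Rough endurance estimate under ideal wear leveling.

    If every cell can be erased pe_cycles times, the drive can absorb
    roughly capacity_gb * pe_cycles gigabytes of writes in total.
    Real drives also suffer write amplification, ignored here.
    """
    total_writable_gb = capacity_gb * pe_cycles
    return total_writable_gb / writes_gb_per_day / 365
```

Under these assumed figures, a 16 GB drive rated for 10,000 cycles and written with 5 GB per day would last on the order of decades; constant swap-file or database traffic shortens that estimate dramatically, which is why running an operating system from a flash drive deserves care.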
When used in the same manner as external rotating drives (hard drives, optical drives, or floppy drives), i.e., in ignorance of their technology, USB drives' failure is more likely to be sudden: while rotating drives can fail instantaneously, they more frequently give some indication (noises, slowness) that they are about to fail, often with enough advance warning that data can be removed before total failure. USB drives give little or no advance warning of failure.
A few USB flash drives include a write-protect mechanism consisting of a switch,
on the housing of the drive itself, that prevents the host computer from writin
g or modifying data on the drive. This feature is becoming less common. Write-pr
otection makes a device suitable for repairing virus-contaminated host computers
without risk of infecting the USB flash drive itself. A write-locked SD card in
a USB flash card reader adapter is an effective way to avoid any writes on the
flash medium. The SD card as a Write Once Read Many device has an essentially un
limited life.
A drawback to the small size of flash drives is that they are easily misplaced,
left behind, or otherwise lost. This is a particular problem if the data they co
ntain are sensitive. As a consequence, some manufacturers have added encryption
hardware to their drives, although software encryption systems which can be used in
conjunction with any mass storage medium achieve the same thing. Most drives ca
n be attached to keychains, necklaces and lanyards. The USB plug is usually fitt
ed with a removable and easily lost protective cap, or is retractable.
USB flash drives are more expensive per unit of storage than large hard drives,
but are less expensive in capacities of a few tens of gigabytes as of 2011. Maxi
mum available capacity is increasing with time, but is less than larger hard dri
ves. This balance is changing, but the rate of change is slowing.
Most USB-based flash technology integrates a printed circuit board with a metal
tip, which is simply soldered on. As a result, the stress point is where the two
pieces join. The quality control of some manufacturers does not ensure a proper
solder temperature, further weakening the stress point. Since many flash drives
stick out from computers, they are likely to be bumped repeatedly and may break
at the stress point. Most of the time, a break at the stress point tears the jo
int from the printed circuit board and results in permanent damage. However, som
e manufacturers produce discreet flash drives that do not stick out, and others
use a solid metal uni-body that has no easily discernible stress point.
SD Card
Secure Digital (also known as SD) is a flash memory card format.
SD cards are based on Toshiba's older Multimedia Card (MMC) format, but add litt
le-used DRM encryption features and allow for faster file transfers, as well as
being physically slightly thicker. Devices with SD slots can use the thinner MMC
cards, but SD cards won't fit into the thinner MMC slots. Standard SD cards mea
sure 32 mm by 24 mm by 2.1 mm.
Typically, an SD card is used as storage media for a portable device, in a form
that can easily be removed for access by a PC. SD cards are currently available
in sizes up to and including 2 GB, and are used in almost every context in which
memory cards are used, and in nearly every application, they are the most popul
ar format. SD support is standard in PDAs, with Dell, palmOne, HP, Toshiba, Shar
p, and others including SD slots in all of their PDAs. Digital cameras (includin
g Kodak's cameras) tend to support SD cards, as well, although Olympus and Fuji
(with xD cards) and Sony (with Memory Sticks) favor their own proprietary format
s, and many professional cameras support CompactFlash in addition to or instead
of SD, to allow pro photographers to use multi-gigabyte microdrives. (Sony has s
tarted to include secondary SD slots on some newer cameras.) SD cards can also b
e found in flash-memory-based mp3 players, GPS receivers, the occasional cell ph
one, and occasional portable game systems.
The Secure Digital standard was introduced in 1999 as an evolutionary improvemen
t over MultiMediaCards (MMC). The Secure Digital standard is maintained by the S
D Card Association (SDA). SD technologies have been implemented in more than 400
brands across dozens of product categories and more than 8,000 models.[1]
The Secure Digital format includes four card families available in three differe
nt form factors. The four families are the original Standard-Capacity (SDSC), th
e High-Capacity (SDHC), the eXtended-Capacity (SDXC), and the SDIO, which combin
es input/output functions with data storage.
The SDA has extended the SD specification in various ways:
-It defined electrically identical cards in smaller sizes: miniSD and microSD (originally named TransFlash or TF). Smaller cards are usable in larger slots through use of a passive adapter. By comparison, Reduced Size MultiMediaCards (RS-MMCs) are simply shorter MMCs and can be used in MMC slots by use of a physical extender.
-It defined higher-capacity cards, some with faster speeds and added capabilities: SDHC (Secure Digital High Capacity) and SDXC (Secure Digital eXtended Capacity). These cards redefine the interface so that they cannot be used in older host devices.
-It defined an SDIO card family that provides input-output functions and may als
o provide memory functions. These cards are only fully functional in host device
s designed to support their input-output functions.
SD cards have their own features. Cards can protect their contents from erasure
or modification, prevent access by non-authorised users, and protect copyrighted
content using digital rights management (DRM).
The host device can command the SD card to become read-only (to reject subsequen
t commands to write information to it). There are both reversible and irreversib
le host commands that achieve this.
The user can designate most full-size SD cards as read-only by use of a sliding
tab that covers a notch in the card. (The miniSD and microSD formats do not supp
ort a write protection notch.) When looking at the SD card from the top, the rig
ht side (the side with the beveled corner) must be notched. On the left side the
re may be a write-protection notch. If the notch is omitted, the card can be rea
d and written. If the card is notched, it is read-only. If the card has a notch
and a sliding tab which covers the notch, the user can slide the tab upward (tow
ard the contacts) to declare the card read/write, or downward to declare it read
-only. The presence of a notch, and the presence and position of a tab, have no
effect on the SD card's operation. A host device that supports write protection
should refuse to write to an SD card that is designated read-only in this way. S
ome host devices do not support write protection, which is an optional feature o
f the SD specification. Drivers and devices that do obey a read-only indication
may give the user a way to override it. Cards sold with content which must not b
e altered are permanently marked read-only by having a notch and no sliding tab.
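The notch-and-tab rules above can be summarized as a small decision function. This is a sketch of the convention as described in the text, not of any real host driver:

```python
def sd_is_read_only(has_notch: bool, has_tab: bool,
                    tab_covers_notch: bool) -> bool:
    """Apply the SD write-protect convention described above.

    - no notch                  -> read/write
    - notch, no sliding tab     -> permanently read-only (preloaded content)
    - notch with a sliding tab  -> read/write when the tab covers the notch,
                                   read-only when the notch is exposed
    """
    if not has_notch:
        return False
    if not has_tab:
        return True
    return not tab_covers_notch
```

Remember that this state is purely advisory: a host that ignores write protection can still write to the card, since enforcement is optional in the SD specification.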
A host device can lock an SD card using a password of up to 16 bytes, typically
supplied by the user. A locked card interacts normally with the host device exce
pt that it rejects commands to read and write data. A locked card can be unlocke
d only by providing the same password. The host device can, after supplying the
old password, specify a new password or disable locking. Without the password (t
ypically, in the case that the user forgets the password), the host device can c
ommand the card to erase all the data on the card for future re-use (except card
data under DRM), but there is no way to gain access to the existing data.
All cards incorporate DRM copy-protection. Roughly 10% of the storage capacity of an SD card is a "Protected Area" that is not available to the user, but is used by the on-card processor to verify the identity of an application program that it then allows to read protected content. The card prohibits other accesses, such as users trying to make copies of protected files.
An SD card's speed is measured by how quickly information can be read from, or w
ritten to, the card. In applications that require sustained write throughput, su
ch as video recording, the device might not perform satisfactorily if the SD car
d's class rating falls below a particular speed. For example, a high-definition
camcorder may require a card of not less than Class 6, suffering dropouts or cor
rupted video if a slower card is used. Digital cameras with slow cards may take
a noticeable time after taking a photograph before being ready for the next, whi
le the camera writes the first picture.
A card's speed depends on many factors, including:
-The frequency of soft errors that the card's controller must re-try
-The fact that, on most cards, writing data requires the controller to read and erase a larger region, then rewrite that entire region with the desired part changed.
-File fragmentation: where there is not sufficient space for a file to be record
ed in a contiguous region, it is split into non-contiguous fragments. This does
not cause rotational or head-movement delays as with electromechanical hard driv
es, but may decrease speed; for instance, by requiring additional reads and comp
utation to determine where on the card the file's next fragment is stored.
-With early SD cards the speed was specified as a "times" ("×") rating, which comp
ared the average speed of reading data to that of the original CD-ROM drive. Thi
s was superseded by the Speed Class Rating, which guarantees a minimum rate at w
hich data can be written to the card.
-The newer families of SD card improve card speed by increasing the bus rate (th
e frequency of the clock signal that strobes information into and out of the car
d). Whatever the bus rate, the card can signal to the host that it is "busy" unt
il a read or a write operation is complete. Compliance with a higher speed ratin
g is a guarantee that the card limits its use of the "busy" indication.
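The Speed Class Rating maps directly to a guaranteed minimum sequential write rate: Class N promises roughly N MB/s. A small helper (illustrative only) can pick the smallest classic class able to sustain a given video bitrate:

```python
def min_class_for_bitrate(bitrate_mbps: float) -> int:
    """Smallest classic Speed Class (2, 4, 6, 10) whose guaranteed
    minimum write rate (Class N ~ N MB/s) covers a video bitrate
    given in megabits per second."""
    required_mb_per_s = bitrate_mbps / 8  # convert bits to bytes
    for speed_class in (2, 4, 6, 10):
        if speed_class >= required_mb_per_s:
            return speed_class
    raise ValueError("bitrate exceeds the classic Speed Class range")
```

For a 24 Mbit/s high-definition stream this picks Class 4 (3 MB/s required), so a Class 6 recommendation like the one above leaves headroom for soft errors and fragmentation.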
Many companies are now trying to enhance the current SD cards.
Random Access Memory
Random Access Memory is a form of computer data storage. A random-access device
allows stored data to be accessed directly in any random order. In contrast, oth
er data storage media such as hard disks, CDs, DVDs and magnetic tape, as well a
s early primary memory types such as drum memory, read and write data only in a
predetermined order, consecutively, because of mechanical design limitations. Th
erefore, the time to access a given data location varies significantly depending
on its physical location.
Today, random-access memory takes the form of integrated circuits. Strictly spea
king, modern types of DRAM are not random access, as data is read in bursts, alt
hough the name DRAM / RAM has stuck. However, many types of SRAM, ROM, OTP, and
NOR flash are still random access even in a strict sense. RAM is normally associ
ated with volatile types of memory (such as DRAM memory modules), where its stor
ed information is lost if the power is removed. Many other types of non-volatile
memory are RAM as well, including most types of ROM.
Computer memory acts as the brains of your computer. The memory within your hard disk drive allows your computer to store applications, data and files. The amount of memory your computer has is typically measured in gigabytes. Computer programs are accessed through the help of RAM (random access memory) and the CPU (central processing unit). All computers contain hard disk drive memory, as well as RAM and a CPU chip. If your computer uses both hard disk drive space and RAM at the same time to run a program, it is using virtual memory. Virtual memory is not as powerful as RAM and is only used if RAM is running low. Programs on the computer --- such as Microsoft Word or Office Suite --- are stored within the hard disk drive. Think of the hard drive like a large filing cabinet that holds all of your most important documents. When you click on a specific program, that data is transferred to the computer's RAM through a process known as DMA (direct memory access). Next, the CPU loads the program data which has been stored in the RAM. Once transferred to the CPU, the program is processed and ready to be utilized. The main benefit of this process is speed. Processing files directly from the hard disk drive takes much longer than processing them from RAM.
Although RAM acts as memory for a computer, it is different than the memory stor
ed on the computer's hard disk drive. RAM is a form of short-term memory, allowi
ng your computer to run applications as long as the computer is on. Hard drive m
emory is stored forever, or until you delete it. Adding more RAM will speed up y
our computer's performance as you're essentially giving your computer more room
to store files.
Different RAM Types and Their Uses
The type of RAM doesn't matter nearly as much as how much of it you've got, but using plain old SDRAM memory today will slow you down. There are three main types of RAM: SDRAM, DDR and Rambus DRAM.
SDRAM (Synchronous DRAM)
Almost all systems used to ship with 3.3 volt, 168-pin SDRAM DIMMs. SDRAM is not
an extension of older EDO DRAM but a new type of DRAM altogether. SDRAM started
out running at 66 MHz, while older fast page mode DRAM and EDO max out at 50 MH
z. SDRAM is able to scale to 133 MHz (PC133) officially, and unofficially up to
180MHz or higher. As processors get faster, new generations of memory such as DD
R and RDRAM are required to get proper performance.
DDR (Double Data Rate SDRAM)
DDR basically doubles the rate of data transfer of standard SDRAM by transferrin
g data on the up and down tick of a clock cycle. DDR memory operating at 333MHz
actually operates at 166MHz * 2 (aka PC333 / PC2700) or 133MHz*2 (PC266 / PC2100
). DDR is a 2.5 volt technology that uses 184 pins in its DIMMs. It is incompati
ble with SDRAM physically, but uses a similar parallel bus, making it easier to
implement than RDRAM, which is a different technology.
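The naming in this paragraph follows from simple arithmetic: data moves on both clock edges, and the PC-rating is the resulting peak bandwidth in MB/s over a 64-bit DIMM. A quick sketch, using the nominal clock figures from the text (and ignoring the fractional 166.7 MHz of real parts):

```python
def ddr_figures(clock_mhz: int, bus_width_bits: int = 64):
    """Peak transfer rate and bandwidth of a DDR DIMM.

    Double data rate means two transfers per clock cycle; peak
    bandwidth is transfers per second times the bus width in bytes.
    """
    transfers_per_s = clock_mhz * 2            # both clock edges
    bandwidth_mb_s = transfers_per_s * bus_width_bits / 8
    return transfers_per_s, bandwidth_mb_s
```

For a 166 MHz clock this gives 332 mega-transfers per second and 2656 MB/s, which marketing rounds to the PC333 / PC2700 labels mentioned above.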
RDRAM (Rambus DRAM)
Despite its higher price, Intel has given RDRAM its blessing for the consumer market, and it will be the sole choice of memory for Intel's Pentium 4. RDRAM is
a serial memory technology that arrived in three flavors, PC600, PC700, and PC8
00. PC800 RDRAM has double the maximum throughput of old PC100 SDRAM, but a high
er latency. RDRAM designs with multiple channels, such as those in Pentium 4 mot
herboards, are currently at the top of the heap in memory throughput, especially
when paired with PC1066 RDRAM memory.
DIMMs and RIMMs
DRAM comes in two major form factors: DIMMs and RIMMs.
DIMMs are 64-bit components, but if used in a motherboard with a dual-channel configuration (like with an Nvidia nForce chipset) you must pair them to get maximum performance. So far there aren't many DDR chipsets that use dual channels. Typically, if you want to add 512 MB of DIMM memory to your machine, you just pop in a 512 MB DIMM if you've got an available slot. DIMMs for SDRAM and DDR are different, and not physically compatible. SDRAM DIMMs have 168 pins and run at 3.3 volts, while DDR DIMMs have 184 pins and run at 2.5 volts.
RIMMs use only a 16-bit interface but run at higher speeds than DDR. To get maximum performance, Intel RDRAM chipsets require the use of RIMMs in pairs over a dual-channel 32-bit interface. You have to plan more when upgrading and purchasing.
Memory (RAM) and its influence on computer performance
Why does the RAM memory influence the computer performance?
First of all, technically speaking, the RAM memory does not have any kind of influence on the performance of the computer's processor: the RAM memory cannot make the processor work faster, that is, it does not increase the processing performance of the processor.
So, what is the relationship between the RAM memory and the performance? The sto
ry is not so simple as it seems and we will need to explain a little more how th
e computer works for you to understand the relationship between the RAM memory a
nd the performance of the computer.
The computer processor searches for instructions stored in the RAM memory of the computer to be executed. If those instructions are not stored in the RAM memory, they will have to be transferred from the hard disk (or from any other storage system, such as floppy disks, CD-ROMs and Zip disks) to the RAM memory - the well-known process of "loading" a program.
So, a greater amount of RAM memory means that more instructions fit into that me
mory and, therefore, bigger programs can be loaded at once. All the present oper
ating systems work with the multitask concept, where we can run more than one pr
ogram at once. You can, for example, have a word processor and a spreadsheet ope
n ("loaded") at the same time in the RAM memory. However, depending on the amoun
t of RAM memory that your computer has, it is possible that those programs have
too many instructions and, consequently, do not "fit" at the same time (or even
alone, depending on the program) in the RAM memory.
In principle, if you want the computer to load a program and it does not "fit" in the RAM memory because there is little of it installed in the computer or because it is already too full, the operating system would have to show a message like "Insufficient Memory".
But that does not happen, because of a feature that all processors since the 386 have, called virtual memory. With this feature, the computer's processor creates a file in the hard disk called the swap file, which is used to store RAM memory data. So
, if you attempt to load a program that does not fit in the RAM, the operating s
ystem sends to the swap file parts of programs that are presently stored in the
RAM memory but are not being accessed, freeing space in the RAM memory and allow
ing the program to be loaded. When you need to access a part of the program that
the system has stored in the hard disk, the opposite process happens: the syste
m stores in the disk parts of memory that are not in use at the time and transfe
rs the original memory content back.
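The swapping described above can be modeled with a toy page-replacement simulation. This is a deliberately simplified sketch (real operating systems use more elaborate policies than plain least-recently-used eviction):

```python
from collections import OrderedDict

class VirtualMemory:
    """Toy model of swapping: RAM holds a fixed number of pages, and
    touching a page not in RAM is a page fault that evicts the least
    recently used page to the swap file on disk."""

    def __init__(self, ram_frames: int):
        self.ram = OrderedDict()      # page -> None, kept in LRU order
        self.capacity = ram_frames
        self.faults = 0

    def touch(self, page: int) -> None:
        if page in self.ram:
            self.ram.move_to_end(page)    # mark as recently used
            return
        self.faults += 1                  # must load from disk (slow)
        if len(self.ram) >= self.capacity:
            self.ram.popitem(last=False)  # evict LRU page to swap
        self.ram[page] = None
```

Running the same access pattern with more RAM frames produces fewer page faults, which is exactly why adding RAM reduces swap activity and makes the computer feel faster.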
The problem is that the hard disk is a mechanical system, and not an electronic
one. This means that the data transfer between the hard disk and the RAM memory
is much slower than the data transfer between the processor and the RAM memory.
For you to have an idea of magnitude, the processor communicates with the RAM me
mory typically at a transfer rate of 800 MB/s (100 MHz bus), while the hard disk
s transfer data at rates such as 33 MB/s, 66 MB/s and 100 MB/s, depending on the
ir technology (DMA/33, DMA/66 and DMA/100, respectively).
So, every time the computer performs a change of data from the memory to the swap file of the hard disk, you notice a slowness, since this change is not immediate.
When we install more RAM memory in the computer, the probability of running out of RAM memory and needing to swap with the hard disk is smaller and, therefore, you notice that the computer is faster than before.
Volatile Memory
Volatile memory is computer memory that requires power to maintain the s
tored information. Most modern semiconductor volatile memory is either Static RA
M (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as the power i
s connected and is easy to interface to but uses six transistors per bit. Dynami
c RAM is more complicated to interface to and control and needs regular refresh
cycles to prevent its contents being lost. However, DRAM uses only one transisto
r and a capacitor per bit, allowing it to reach much higher densities and, with
more bits on a memory chip, be much cheaper per bit. SRAM is not worthwhile for
desktop system memory, where DRAM dominates, but is used for their cache memorie
s. SRAM is commonplace in small embedded systems, which might only need tens of
kilobytes or less.
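The six-transistors-versus-one trade-off is easy to quantify; the sketch below only multiplies out the cell counts stated above:

```python
def transistors_needed(megabytes: float, cell_type: str) -> float:
    """Transistor count for a given capacity, using the per-bit costs
    above: ~6 transistors per SRAM bit, 1 transistor (plus a
    capacitor, not counted here) per DRAM bit."""
    bits = megabytes * 1024 * 1024 * 8
    per_bit = {"sram": 6, "dram": 1}[cell_type]
    return bits * per_bit
```

One megabyte of SRAM costs about 50 million transistors, six times the roughly 8.4 million (plus capacitors) that DRAM needs for the same capacity, which is why caches stay small while main memory is DRAM.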
Non-volatile Memory
Non-volatile memory is computer memory that can retain the stored inform
ation even when not powered. Examples of non-volatile memory include read-only m
emory (ROM), flash memory, most types of magnetic computer storage devices (e.g.
hard disks, floppy discs and magnetic tape), optical discs, and early computer
storage methods such as paper tape and punched cards.
Read-Only Memory
Read-only memory (ROM) is a class of storage medium used in computers an
d other electronic devices. Data stored in ROM cannot be modified, or can be mod
ified only slowly or with difficulty, so it is mainly used to distribute firmwar
e (software that is very closely tied to specific hardware, and unlikely to need
frequent updates).
Strictly, read-only memory refers to memory that is hard-wired, such as diode ma
trix and the later mask ROM. Although discrete circuits can be altered (in princ
iple), ICs cannot and are useless if the data is bad. Despite the simplicity, sp
eed and economies of scale of mask ROM, field-programmability often make reprogr
ammable memories more flexible and inexpensive. As of 2007, actual ROM circuitry
is therefore mainly used for applications such as microcode, and similar struct
ures, on various kinds of processors.
Other types of non-volatile memory such as erasable programmable read only memor
y (EPROM) and electrically erasable programmable read-only memory (EEPROM or Fla
sh ROM) are sometimes referred to, in an abbreviated way, as "read-only memory"
(ROM); although these types of memory can be erased and re-programmed multiple t
imes, writing to this memory takes longer and may require different procedures t
han reading the memory. When used in this less precise way, "ROM" indicates a no
n-volatile memory which serves functions typically provided by mask ROM, such as
storage of program code and nonvolatile data.
Since ROM (at least in hard-wired mask form) cannot be modified, it is really on
ly suitable for storing data which is not expected to need modification for the
life of the device. To that end, ROM has been used in many computers to store lo
ok-up tables for the evaluation of mathematical and logical functions (for examp
le, a floating-point unit might tabulate the sine function in order to facilitat
e faster computation). This was especially effective when CPUs were slow and ROM
was cheap compared to RAM.
Notably, the display adapters of early personal computers stored tables of bitma
pped font characters in ROM. This usually meant that the text display font could
not be changed interactively. This was the case for both the CGA and MDA adapte
rs available with the IBM PC XT.
The use of ROM to store such small amounts of data has disappeared almost comple
tely in modern general-purpose computers. However, Flash ROM has taken over a ne
w role as a medium for mass storage or secondary storage of files.
Different Types of ROM
Programmable read-only memory (PROM), or one-time programmable ROM (OTP), can be
written to or programmed via a special device called a PROM programmer. Typical
ly, this device uses high voltages to permanently destroy or create internal lin
ks (fuses or antifuses) within the chip. Consequently, a PROM can only be progra
mmed once.
Erasable programmable read-only memory (EPROM) can be erased by exposure to stro
ng ultraviolet light (typically for 10 minutes or longer), then rewritten with a
process that again needs higher than usual voltage applied. Repeated exposure t
o UV light will eventually wear out an EPROM, but the endurance of most EPROM ch
ips exceeds 1000 cycles of erasing and reprogramming. EPROM chip packages can of
ten be identified by the prominent quartz "window" which allows UV light to ente
r. After programming, the window is typically covered with a label to prevent ac
cidental erasure. Some EPROM chips are factory-erased before they are packaged,
and include no window; these are effectively PROM.
Electrically erasable programmable read-only memory (EEPROM) is based on a simil
ar semiconductor structure to EPROM, but allows its entire contents (or selected
banks) to be electrically erased, then rewritten electrically, so that they nee
d not be removed from the computer (or camera, MP3 player, etc.). Writing or fla
shing an EEPROM is much slower (milliseconds per bit) than reading from a ROM or
writing to a RAM (nanoseconds in both cases).
Electrically alterable read-only memory (EAROM) is a type of EEPROM that can be
modified one bit at a time. Writing is a very slow process and again needs highe
r voltage (usually around 12 V) than is used for read access. EAROMs are intende
d for applications that require infrequent and only partial rewriting. EAROM may
be used as non-volatile storage for critical system setup information; in many
applications, EAROM has been supplanted by CMOS RAM supplied by mains power and
backed-up with a lithium battery.
Flash memory (or simply flash) is a modern type of EEPROM invented in 1984. Flas
h memory can be erased and rewritten faster than ordinary EEPROM, and newer desi
gns feature very high endurance (exceeding 1,000,000 cycles). Modern NAND flash
makes efficient use of silicon chip area, resulting in individual ICs with a cap
acity as high as 32 GB as of 2007; this feature, along with its endurance and ph
ysical durability, has allowed NAND flash to replace magnetic storage in some applicatio
ns (such as USB flash drives). Flash memory is sometimes called flash ROM or fla
sh EEPROM when used as a replacement for older ROM types, but not in application
s that take advantage of its ability to be modified quickly and frequently.
Difference between Storage and Memory
People often confuse the terms memory and storage, especially when descr
ibing the amount they have of each. The term memory refers to the amount of RAM
installed in the computer, whereas the term storage refers to the capacity of th
e computer's hard disk. To clarify this common mix-up, it helps to compare your co
mputer to an office that contains a desk and a file cabinet.
The file cabinet represents the computer's hard disk, which provides sto
rage for all the files and information you need in your office. When you come in
to work, you take out the files you need from storage and put them on your desk
for easy access while you work on them. The desk is like memory in the computer
: it holds the information and data you need to have handy while you're working.
Consider the desk-and-file-cabinet metaphor for a moment. Imagine what it would
be like if every time you wanted to look at a document or folder you had to retr
ieve it from the file drawer. It would slow you down tremendously, not to mentio
n drive you crazy. With adequate desk space (our metaphor for memory), you can lay
out the documents in use and retrieve information from them immediately, often w
ith just a glance.
Here's another important difference between memory and storage: the information stored on a hard disk remains intact even when the computer is turned off. However, any data held in memory is lost when the computer is turned off. In our desk space metaphor, it's as though any files left on the desk at closing time will be thrown away.
The computer's storage and memory make use of the binary system, or the base-2 numeral system, which represents numerical values using the digits 0 and 1. Counting in binary is similar to counting in any other number system: beginning with a single digit, counting proceeds through each symbol in increasing order. Before examining binary counting, it is useful to briefly review the more familiar decimal counting system as a frame of reference. Decimal counting uses the ten symbols 0 through 9. Counting primarily involves incremental manipulation of the "low-order" digit, or the rightmost digit, often called the "first digit". When the available symbols for the low-order digit are exhausted, the next-higher-order digit (located one position to the left) is incremented and counting in the low-order digit starts over at 0. In binary, counting follows the same procedure, except that only the two symbols 0 and 1 are used. Thus, after a digit reaches 1 in binary, an increment resets it to 0 but also causes an increment of the next digit to the left. Since binary is a base-2 system, each digit represents an increasing power of 2, with the rightmost digit representing 2^0, the next representing 2^1, then 2^2, and so on. To determine the decimal representation of a binary number, simply take the sum of the products of the binary digits and the powers of 2 which they represent. For example, the binary number 100101 converts to decimal as 1x2^5 + 0x2^4 + 0x2^3 + 1x2^2 + 0x2^1 + 1x2^0 = 32 + 4 + 1 = 37.
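The conversion procedure described above can be sketched in a few lines of Python; a hand-rolled positional loop is shown alongside the language's built-in base-2 parser for comparison:

```python
def binary_to_decimal(bits: str) -> int:
    """Sum digit * 2**position over the string of binary digits,
    with the rightmost digit at position 0."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("100101"))  # 32 + 4 + 1 = 37
print(int("100101", 2))             # built-in parser agrees: 37
```

The loop mirrors the textbook rule exactly; in practice `int(s, 2)` does the same conversion in one call.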
A bit is the basic unit of information in computing and digital communications. A bit can have only one of two values, and may therefore be physically implemented with a two-state device. The most common representation of these values is 0 and 1. The term bit is a contraction of binary digit. The two values can also be interpreted as logical values (true/false, yes/no), algebraic signs (+/-), activation states (on/off), or any other two-valued attribute. The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. The length of a binary number may be referred to as its bit-length. In information theory, one bit is typically defined as the uncertainty of a binary random variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known.
The encoding of data by discrete symbols was used in Bacon's cipher (1626). The encoding of data by discrete bits was used in the punched cards invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semen Korsakov, Charles Babbage, Herman Hollerith, and early computer manufacturers like IBM. Another variant of that idea was the perforated paper tape. In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus carrying one bit of information. The encoding of text by bits was also used in Morse code (1844) and early digital communications machines such as teletypes and stock ticker machines (1870). Ralph Hartley suggested the use of a logarithmic measure of information in 1928. Claude E. Shannon first used the word bit in his seminal 1948 paper "A Mathematical Theory of Communication". He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary digit" to simply "bit". Interestingly, Vannevar Bush had written in 1936 of "bits of information" that could be stored on the punched cards used in the mechanical computers of that time. The first programmable computer, built by Konrad Zuse, used binary notation for numbers.
In the earliest non-electronic information processing devices, such as Jacquard's loom or Babbage's Analytical Engine, a bit was often stored as the position of a mechanical lever or gear, or the presence or absence of a hole at a specific point of a paper card or tape. The first electrical devices for discrete logic (such as elevator and traffic light control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays, which could be either "open" or "closed". When relays were replaced by vacuum tubes, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques.
In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic-core memory, magnetic tapes, drums, and disks, where a bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film, or by a change in polarity from one direction to the other. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro tickets and some credit cards. In modern semiconductor memory, such as dynamic random-access memory or flash memory, the two values of a bit may be represented by two levels of electric charge stored in a capacitor. In programmable logic arrays and certain types of read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes, bits are encoded as the thickness of alternating black and white lines.
The byte is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer, and for this reason it is the smallest addressable unit of memory in many computer architectures. The size of the byte has historically been hardware-dependent, and no definitive standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits, and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size. The unit octet was defined to explicitly denote a sequence of 8 bits because of the ambiguity associated at the time with the byte.
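The 0-through-255 range of an 8-bit byte can be checked directly in Python, whose built-in bytes type happens to enforce exactly that range per element (an illustrative sketch only):

```python
# An 8-bit byte distinguishes 2**8 = 256 values, 0 through 255.
print(2 ** 8)                 # 256
print(list(bytes([0, 127, 255])))  # all three are valid byte values

try:
    bytes([256])              # one past the maximum
except ValueError as err:
    print("rejected:", err)   # the 0-255 range is enforced per byte
```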
Early computers used a variety of 4-bit binary-coded decimal (BCD) representations and the 6-bit codes for printable graphic patterns common in the U.S. Army (Fieldata) and Navy. These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to 7 bits of coding, called the American Standard Code for Information Interchange (ASCII), as the Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media. During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its System/360 product line the 8-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of the 6-bit binary-coded decimal (BCDIC) representation used in its earlier card punches. The prominence of the System/360 led to the ubiquitous adoption of the 8-bit storage size, even though the EBCDIC and ASCII encoding schemes differ in detail. In the early 1960s, AT&T introduced digital telephony first on long-distance trunk lines. These used the 8-bit µ-law encoding. This large investment promised to reduce transmission costs for 8-bit data. The use of 8-bit codes for digital telephony also caused 8-bit data octets to be adopted as the basic data unit of the early Internet.
The development of 8-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8008, the direct predecessor of the 8080 and the 8086, used in early personal computers, could also perform a small number of operations on four bits, such as the DAA (decimal adjust accumulator) instruction and the auxiliary carry (AC/NA) flag, which were used to implement decimal arithmetic routines. These four-bit quantities are sometimes called nibbles, and correspond to hexadecimal digits. The term octet is used to unambiguously specify a size of eight bits, and is used extensively in protocol definitions.
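The correspondence between nibbles and hexadecimal digits can be seen by splitting a byte with a shift and a mask. This generic masking idiom, sketched in Python, is not tied to any particular processor:

```python
value = 0xB7  # one byte: 1011 0111 in binary

high_nibble = (value >> 4) & 0xF  # upper 4 bits -> 0xB (decimal 11)
low_nibble = value & 0xF          # lower 4 bits -> 0x7 (decimal 7)

# Each nibble maps to exactly one hexadecimal digit of the byte.
print(f"{high_nibble:X}{low_nibble:X}")  # "B7"
print(f"{value:08b}")                    # "10110111"
```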
The byte is also defined as a data type in certain programming languages. The C and C++ programming languages, for example, define a byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the char integral data type be capable of holding at least 256 different values and be represented by at least 8 bits. Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. The actual number of bits in a particular implementation is documented as CHAR_BIT, defined in the limits.h header. Java's primitive byte data type is always defined as consisting of 8 bits and being a signed data type, holding values from -128 to 127. The C# programming language, along with other .NET languages, has both an unsigned byte (named byte) and a signed byte (named sbyte), holding values from 0 to 255 and from -128 to 127, respectively. In addition, the C and C++ standards require that there be no "gaps" between two bytes; this means every bit in memory is part of a byte. In data transmission systems, a byte is defined as a contiguous sequence of binary bits in a serial data stream, such as in modem or satellite communications, which is the smallest meaningful unit of data. These bytes might include start bits, stop bits, or parity bits, and thus could vary from 7 to 12 bits to contain a single 7-bit ASCII code.
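The signed and unsigned byte ranges described above (Java's byte, C#'s sbyte and byte) can be checked with Python's struct module, which packs values into single 8-bit bytes using the format characters 'b' (signed) and 'B' (unsigned). This is an illustrative sketch, not code from any of the languages named:

```python
import struct

# 'b' = signed 8-bit byte (-128..127); 'B' = unsigned (0..255).
print(struct.pack("b", -128))  # lowest signed value fits
print(struct.pack("B", 255))   # highest unsigned value fits

try:
    struct.pack("b", 128)      # one past the signed maximum
except struct.error as err:
    print("out of range:", err)

# The same 8 bits reinterpreted: 0xFF is 255 unsigned but -1 signed.
(raw,) = struct.unpack("b", bytes([0xFF]))
print(raw)  # -1
```

The last two lines show why the signedness convention matters: the bit pattern is identical, only its interpretation changes.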