
Memory technologies

Introduction
This lesson will introduce you to the technologies associated with computer memory, in terms of storing, managing and transferring data.

Learning outcomes

After completing this lesson you should be able to describe the basic concepts and technologies used in computer memory. Specifically, you should be able to:
• Explain the operations of memory
• Compare and Contrast Static RAM and Dynamic RAM
• Explain the concept of hierarchical memory

Memory Chips and Cells


The basic element of primary memory is the memory cell, which represents one bit of data. Memory cells are fabricated using flip-flops. A memory cell can be in one of two discrete states, which are used to represent binary 1 and 0. The system can write into a memory cell (at least once) to set its state, and can read the memory cell to sense that state.
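As an illustration only (real cells are hardware circuits, not software objects), the write and read operations on a one-bit cell can be sketched in Python:

```python
class MemoryCell:
    """Toy model of a one-bit memory cell, for illustration only."""

    def __init__(self):
        self.state = 0  # a cell holds one of two discrete states: 0 or 1

    def write(self, bit):
        assert bit in (0, 1)
        self.state = bit  # the system writes into the cell to set its state

    def read(self):
        return self.state  # the system reads the cell to sense its state


cell = MemoryCell()
cell.write(1)
print(cell.read())  # -> 1
```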

Figure 1: Writing to a Memory Cell
Figure 2: Reading from a Memory Cell

As we discussed in the previous lesson memory cells can be found in two types.

1. Volatile Cells – Those which lose the bit value when power is not supplied
2. Non-Volatile Cells – Those which do not lose their values when no power is supplied

Primary memory is organized as a large collection of cells, where each cell stores a binary digit. The computer's memory system handles a group of bits at a time during data retrieval and storage operations. Such a group of bits is called a word. The memory system is therefore an organization that stores a number of words. The size of a word may vary from one byte to many, where a byte is a collection of 8 bits. The capacity of memory storage is therefore denoted by the number of bytes it can store.
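For example, using hypothetical figures, the capacity of a memory organized as words can be computed as follows:

```python
# Hypothetical memory organization: 1024 words of 4 bytes (32 bits) each.
words = 1024
bytes_per_word = 4

capacity_bytes = words * bytes_per_word
capacity_bits = capacity_bytes * 8  # a byte is a collection of 8 bits

print(capacity_bytes)  # 4096 bytes (4 KB)
print(capacity_bits)   # 32768 bits
```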
Read Write Memory (RWM)
Because it can be accessed randomly, the main memory is often referred to as Random Access Memory (RAM). It also allows the system to access it sequentially if it chooses to do so. By definition, RWM/RAM is the main memory workspace used by the processor during the execution of programs; thus, RWM is also called main memory.

The major characteristic on which the above classification of internal memory is based is that data can be written to RAM over and over again. Both writing and reading of data are done through electrical signals.

As discussed in the previous lesson, the non-permanence of values is a very important characteristic of RAM. Data in RAM is retained only as long as electrical power is supplied, and is lost when the power supply is disconnected. Therefore RWM is called volatile. When a program stored in the computer is opened and executed, the program file is first loaded into RWM, where it resides until the program is closed or the power supply fails. The CPU executes the instructions in the RAM/RWM, and the results are initially stored in the RWM before being sent to the particular mode of output.

Main memory is physically available as memory chips, in fact as memory modules, which are plugged into the motherboard. A memory module is a unit that contains several memory chips.

As shown in the previous lesson depending on the mechanism of keeping data in memory
cells, the RWM (RAM in common terms) can mainly be classified into two categories.

1. Dynamic RAM (DRAM)
2. Static RAM (SRAM)

Dynamic RAM (DRAM)


Dynamic RAM chips store data dynamically. That is, data can be written to a DRAM chip over and over again, but the memory cells in the chip also need to be refreshed at very short intervals; for example, a DRAM chip may need to be rewritten every 15 microseconds. As main memory is usually constructed using DRAM chips, the characteristics of DRAM explain why main memory (RWM) is considered volatile.

Each DRAM chip contains memory locations, or cells, arranged in a matrix of rows and columns. A row of cells is called a page. Depending on the capacity of the DRAM chip, the number of cells may vary. The memory cells in DRAM chips are small capacitors: capacitors that are charged are read as 1s, and those that are not charged are read as 0s. The use of capacitors as memory cells is what causes the volatility of main memory. The electric charge of a memory capacitor drains gradually, and may therefore cause data loss if the cell is not recharged. This necessitates the constant recharging of memory cells to retain the data. While DRAM is being refreshed, it cannot be accessed: if the processor makes a data request while the DRAM is being refreshed, the data will not be available until the refresh is complete. An interruption to the power supply, even for a fraction of a second, can prevent the recharging of memory cells, and the data will be lost.
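A minimal sketch of this behaviour in Python; the leak rate and threshold are invented numbers, not parameters of any real DRAM chip:

```python
# Toy DRAM cell: a charged capacitor leaks over time and must be refreshed.
LEAK_PER_TICK = 0.1   # hypothetical fraction of charge lost per time step
READ_THRESHOLD = 0.5  # charge above this reads as 1, below as 0

def tick(charge):
    """One time step: the capacitor loses a little of its charge."""
    return max(0.0, charge - LEAK_PER_TICK)

def read_bit(charge):
    return 1 if charge > READ_THRESHOLD else 0

def refresh(charge):
    """Read the bit, then rewrite it, restoring a full (or empty) charge."""
    return 1.0 if read_bit(charge) == 1 else 0.0

charge = 1.0              # a freshly written 1
for _ in range(4):
    charge = tick(charge)
print(read_bit(charge))   # still reads as 1

charge = refresh(charge)  # recharging retains the data

unrefreshed = 1.0
for _ in range(6):
    unrefreshed = tick(unrefreshed)
print(read_bit(unrefreshed))  # without refreshing, the 1 has decayed to 0
```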

Figure 3: Cell organization in a Memory Chip (DRAM Chip)

DRAM uses one capacitor and one transistor per bit. The role of the transistor is to read the charge state of the adjacent capacitor/cell. As it uses only one capacitor to represent a bit, more data can be stored in a smaller area, making DRAM chips very dense. For the same reason, DRAM is considered inexpensive, and it is therefore used as the main memory in modern PCs.

The highlighted disadvantage of DRAM is that it is much slower than the processor, and therefore cannot keep the processor running at its best performance. In the early days of PCs, when processor speeds were 16 MHz or less, the DRAM was built into the motherboard, and the distance between the memory and the processor was considerable. Even so, the DRAM could work at a speed that matched the processor. But with the availability of high-speed processors, synchronization between the DRAM and the processor became impossible, and the maximum performance of the processor could not be attained. As a resolution, Static RAM (SRAM) was introduced.

Static RAM (SRAM)


In contrast to DRAM, SRAM does not need to be refreshed constantly. This property comes from its design: each bit is stored in a cluster of six transistors, from which no charge drains away. Therefore, SRAM cells retain their contents as long as power is supplied. The major advantage of SRAM is that it is much faster than DRAM. Yet SRAM is not used as the main memory of the system, because it is very expensive and has a low density: it accommodates only a small amount of data in a given area, and so must be physically large. In general, SRAM is up to 30 times larger and more expensive than DRAM.

With these limitations, SRAM could not be used to form the entire main memory; instead, a small amount of SRAM was used to improve system performance. This SRAM was placed much closer to the CPU and made the system perform many times better. This will be discussed further with the memory hierarchy.
Read Only Memory (ROM)
Read Only Memory is non-volatile memory, which is constructed of non-volatile memory cells. As you would recall, non-volatile memory cells retain data even after the power is switched off. Thus, ROM is a type of memory which can hold data permanently.

For this reason, the PC's startup instructions are also stored in ROM as boot-up programs, ensuring the PC has a program to execute when the power is switched on, rather than starting in an empty state.

Random Access Memory (RAM) and Read Only Memory (ROM) are not opposites of each other; in actual terms, ROM is also accessible randomly. Even though ROM is known as read-only, there are special types of ROM to which data can be written. One such memory type is Electrically Erasable Programmable ROM (EEPROM).

EEPROM is non-volatile, but also re-writable. This enables users to update ROMs or firmware on motherboards.

Sections 11.3 – 11.9 of the textbook, the ESSENTIALS OF COMPUTER ARCHITECTURE, Second Edition by Doug Comer, discuss the different types of memory and their essential features. Please read those sections, given as an appendix to this lesson. You will require this knowledge for the second take-home assignment.

Summary
In this lesson we discussed how memory operations are performed. Further, we discussed how different memory types adopt different technologies to operate.

In our next lesson, Characteristics of Memory and Memory Hierarchy, we will focus on how memory is arranged in order to give better performance.

Appendix – Extracts from ESSENTIALS OF COMPUTER ARCHITECTURE, Second Edition by Doug Comer
11.3 Static and Dynamic RAM Technologies

The technologies used to implement Random Access Memory can be divided into two
broad categories. Static RAM (SRAM†) is the easiest type for programmers to understand
because it is a straightforward extension of digital logic. Conceptually, SRAM stores each
data bit in a latch, a miniature digital circuit composed of multiple transistors similar
to the latch discussed in Chapter 2. Although the internal implementation is beyond the
scope of this text, Figure 11.1 illustrates the three external connections used for a
single bit of RAM.
Figure 11.1 Illustration of a miniature static RAM circuit that stores one data
bit. The circuit contains multiple transistors.

In the figure, the circuit has two inputs and one output. When the write enable input is
on (i.e., logical 1), the circuit sets the output value equal to the input (0 or 1); when
the write enable input is off (i.e., logical 0), the circuit ignores the input and keeps the
output at the last setting. Thus, to store a value, the hardware places the value on the
input, turns on the write enable line, and then turns the enable line off again.
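This behaviour can be mimicked in Python as a functional sketch (not a gate-level model of the latch):

```python
class Latch:
    """One-bit SRAM latch: the output follows the input only while
    the write enable line is on; otherwise the last setting is kept."""

    def __init__(self):
        self.output = 0

    def step(self, data_in, write_enable):
        if write_enable:        # write enable on: output tracks the input
            self.output = data_in
        return self.output      # write enable off: input is ignored


latch = Latch()
latch.step(1, write_enable=True)          # store a value: place it on the
                                          # input with write enable on
print(latch.step(0, write_enable=False))  # enable off; output stays at 1
```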

Although it performs at high speed, SRAM has a significant disadvantage: high power
consumption (which generates heat). The miniature SRAM circuit contains multiple
transistors that operate continuously. Each transistor consumes a small amount of
power, and therefore, generates a small amount of heat.

The alternative to static RAM, which is known as Dynamic RAM (DRAM‡), consumes less
power. The internal working of dynamic RAM is surprising and can be confusing. At the
lowest level, to store information, DRAM uses a circuit that acts like a capacitor, a
device that stores electrical charge. When a value is written to DRAM, the hardware
charges or discharges the capacitor to store a 1 or 0. Later, when a value is read from
DRAM, the hardware examines the charge on the capacitor and generates the appropriate
digital value.

The conceptual difficulty surrounding DRAM arises from the way a capacitor works:
because physical systems are imperfect, a capacitor gradually loses its charge. In
essence, a DRAM chip is an imperfect memory device — as time passes, the charge
dissipates and a one becomes zero. More important, DRAM loses its charge in a short
time (e.g., in some cases, under a second).

(†SRAM is pronounced “ess-ram.” ‡DRAM is pronounced “dee-ram.”)

How can DRAM be used as a computer memory if values can quickly become zero? The
answer lies in a simple technique: devise a way to read a bit from memory before the
charge has time to dissipate, and then write the same value back again. Writing a value
causes the capacitor to start again with the appropriate charge. So, reading and then
writing a bit will reset the capacitor without changing the value of the bit.

In practice, computers that use DRAM contain an extra hardware circuit, known as a
refresh circuit that performs the task of reading and then writing a bit. Figure 11.2
illustrates the concept.
Figure 11.2 Illustration of a bit in dynamic RAM. An external refresh circuit must periodically read the data value and write
it back again, or the charge will dissipate and the value will be lost.
The refresh circuit is more complex than the figure implies. To keep the refresh circuit
small, architects do not build one refresh circuit for each bit. Instead, a single, small
refresh mechanism is designed that can cycle through the entire memory. As it reaches
a bit, the refresh circuit reads the bit, writes the value back, and then moves on.
Complexity also arises because a refresh circuit must coordinate with normal memory
operations. First, the refresh circuit must not interfere or delay normal memory
operations. Second, the refresh circuit must ensure that a normal write operation does
not change the bit between the time the refresh circuit reads the bit and the time the
refresh circuit writes the same value back. Despite the need for a refresh circuit, the
cost and power consumption advantages of DRAM are so beneficial that most computer
memory is composed of DRAM rather than SRAM.
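The cycling behaviour of a single refresh mechanism can be sketched as follows; the charge values are invented for illustration:

```python
# One small refresh mechanism walks the entire memory: as it reaches each
# bit, it reads the bit, writes the value back, and moves on.
THRESHOLD = 0.5  # hypothetical charge level separating a 1 from a 0

def refresh_pass(charges):
    """Visit every cell: read its bit, then write it back at full strength."""
    for i, charge in enumerate(charges):
        bit = 1 if charge > THRESHOLD else 0
        charges[i] = 1.0 if bit else 0.0  # rewriting restores the charge

memory = [1.0, 0.0, 0.8, 0.6]  # some cells have partially decayed charges
refresh_pass(memory)
print(memory)  # [1.0, 0.0, 1.0, 1.0] -- every surviving 1 is back at full charge
```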

11.4 The Two Primary Measures of Memory Technology

Architects use several quantitative measures to assess memory technology; two stand
out:

• Density
• Latency and cycle times

11.5 Density

In a strict sense, the term density refers to the number of memory cells per square area
of silicon. In practice, however, density often refers to the number of bits that can be
represented on a standard size chip or plug-in module. For example, a Dual In-line
Memory Module (DIMM) might contain a set of chips that offer 128 million locations of
64 bits per location, which equals 8.192 billion bits or one Gigabyte. Informally, it is
known as a 1 gig module. Higher density is usually desirable because it means more
memory can be held in the same physical space. However, higher density has the
disadvantages of increased power utilization and increased heat generation.
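The arithmetic in the DIMM example can be checked directly (the text uses decimal millions and billions):

```python
# DIMM example from the text: 128 million locations of 64 bits each.
locations = 128_000_000
bits_per_location = 64

total_bits = locations * bits_per_location
total_bytes = total_bits // 8

print(total_bits)   # 8192000000 bits, i.e. 8.192 billion
print(total_bytes)  # 1024000000 bytes, informally "one gig"
```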

The density of memory chips is related to the size of transistors in the underlying silicon
technology, which has followed Moore’s Law. Thus, memory density tends to double
approximately every eighteen months.

11.6 Separation of Read and Write Performance

A second measure of a memory technology focuses on speed: how fast can the memory
respond to requests? It may seem that speed should be easy to measure, but it is not.
For example, as the previous chapter discusses, some memory technologies take much
longer to write values than to read them. To choose an appropriate memory technology,
an architect needs to understand both the cost of access and the cost of update. Thus,
a principle arises:

In many memory technologies, the time required to fetch information from memory differs from the
time required to store information in memory, and the difference can be dramatic. Therefore, any
measure of memory performance must give two values: the performance of read operations and
the performance of write operations.

11.7 Latency and Memory Controllers

In addition to separating read and write operations, we must decide exactly what to
measure. It may seem that the most important measure is latency (i.e., the time that
elapses between the start of an operation and the completion of the operation).
However, latency is a simplistic measure that does not provide complete information.

To see why latency does not suffice as a measure of memory performance, we need to
understand how the hardware works. In addition to the memory chips themselves,
additional hardware known as a memory controller provides an interface between the
processor and memory. Figure 11.3 illustrates the organization.

Figure 11.3 Illustration of the hardware used for memory access. A controller
sits between the processor and physical memory.

To access memory, a device (typically a processor) presents a read or write request to
the controller. The controller translates the request into signals appropriate for the
underlying memory, and passes the signals to the memory chips. To minimize latency,
the controller returns an answer as quickly as possible (i.e., as soon as the memory
responds). However, after it responds to a device, a controller may need additional
clock cycle(s) to reset hardware circuits and prepare for the next operation. A second
principle of memory performance arises:

Because a memory system may need extra time between operations, latency is an insufficient
measure of performance; a performance measure needs to measure the time required for successive
operations.

That is, to assess the performance of a memory system, we need to measure how fast
the system can perform a sequence of operations. Engineers use the term memory cycle
time to capture the idea. Specifically, two separate measures are used: the read cycle
time (abbreviated tRC) and the write cycle time (abbreviated tWC). We can summarize:

The read cycle time and write cycle time are used as measures of memory system performance
because they assess how quickly the memory system can handle successive requests.
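A sketch with invented timings shows why successive operations are bounded by cycle time rather than latency; none of these numbers describe a real device:

```python
# Hypothetical timings: the memory answers in 10 ns, but the controller
# needs 5 ns to reset circuits before the next operation can start, so
# back-to-back reads are limited by the 15 ns read cycle time (tRC).
latency_ns = 10   # time from start of a read until the answer is returned
recovery_ns = 5   # extra time before the next operation can begin
t_rc = latency_ns + recovery_ns

reads = 1_000_000
time_by_latency = reads * latency_ns  # naive estimate from latency alone
time_by_cycle = reads * t_rc          # what actually bounds the sequence

print(time_by_latency)  # 10000000 ns
print(time_by_cycle)    # 15000000 ns -- 50% longer than latency suggests
```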
11.8 Synchronous and Multiple Data Rate Technologies

Like most other digital circuits in a computer, a memory system uses a clock that
controls exactly when a read or write operation begins. As Figure 11.3 indicates, a
memory system must also coordinate with a processor. The controller may also
coordinate with I/O devices. What happens if the processor’s clock differs from the
clock used for memory? The system still works because the controller can hold a request
from the processor or a response from the memory until the other side is ready.

Unfortunately, the difference in clock rates can impact performance — although the
delay is small, if delay occurs on every memory reference, the accumulated effect can
be large. To eliminate the delay, some memory systems use a synchronous clock system.
That is, the clock pulses used with the memory system are aligned with the clock pulses
used to run the processor. As a result, a processor does not need to wait for memory
references to complete. Synchronization can be used with DRAM or SRAM; the two
technologies are named:

SDRAM – Synchronous Dynamic Random Access Memory
SSRAM – Synchronous Static Random Access Memory

In practice, synchronization has been effective; most computers now use synchronous
DRAM as the primary memory technology.

In many computer systems, memory is the bottleneck — increasing memory performance
improves overall performance. As a result, engineers have concentrated on finding
memory technologies with lower cycle times. One approach uses a technique that runs
the memory system at a multiple of the normal clock rate (e.g., double or quadruple).
Because the clock runs faster, the memory can deliver data faster. The technologies are
sometimes called fast data rate memories, typically double data rate or quadruple data
rate. Fast data rate memories have been successful, and are now standard on most
computer systems, including consumer systems such as laptops.

Although we have covered the highlights, our discussion of RAM memory technology does
not begin to illustrate the range of choices available to an architect or the detailed
differences among them. For example, Figure 11.4 lists a few commercially available
RAM technologies:
Figure 11.4 Examples of commercially available RAM technologies. Many other technologies exist.

11.9 Memory Organization

Recall that there are two key aspects of memory: the underlying technology and the
memory organization. As we have seen, an architect can choose from a variety of
memory technologies; we will now consider the second aspect. Memory organization
refers to both the internal structure of the hardware and the external addressing
structure that the memory presents to a processor. We will see that the two are related.
