
MEMORY MANAGEMENT
INTRODUCTION TO MEMORY
MANAGEMENT

In multiprogramming, the CPU switches from one process to another. Therefore, processes must be inside the main memory at the same time. The memory manager must ensure that memory is shared efficiently and is free from errors. It must take care of the following issues in memory management:
1. It must ensure that the memory spaces of
processes are protected so that unauthorized
access is prevented.
2. It must ensure that each process has enough
memory space to be able to execute. There are
cases wherein a process requires a larger
memory space than what is available.
3. It must keep track of the memory locations used
by each process. It should also know which part
of the memory is free to use.
Review of Relative Addresses
and Dynamic Run-Time Loading
Each memory location is assigned an
absolute address or physical address. This is
unique and it is the actual address of the
memory location.
However, most processes use the relative
address instead of specifying the actual
physical address. The relative address is
based on a reference point or base address.
For example, suppose a relative address is 500
memory locations from the start of the
program. If the program is loaded at memory
location 1500 (which is the base address),
the corresponding absolute address is 1500 + 500 = 2000.
Remember that in dynamic run-time loading, the
absolute address is not generated when loaded, but
only when it is needed by the CPU. During run-time,
the CPU generates the needed relative address (also
referred to as logical address) of an instruction and
converts it to a physical address. Because of this,
relocating a program becomes possible even though
it is executing.
Since the absolute address is computed during
run-time, the problem is execution time will
definitely slow down. This can be avoided using
special hardware to speed-up the conversion
process. This device is called the memory
management unit (MMU). The figure below shows
how the MMU converts a logical address into a
physical address.
[Figure: the CPU computes the logical address (250); the MMU adds the base or reference register value (2000) to produce the physical address (2250) in memory.]

Logical to Physical Address Translation as Performed by the MMU
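The MMU's relocation step is simple arithmetic: it adds the base (reference) register value to the CPU-generated logical address. A minimal sketch using the figure's values; the function name and the optional limit check are illustrative assumptions, not part of the original material:

```python
def mmu_translate(logical_address, base_register, limit=None):
    """Relocate a logical address by adding the base register value.

    `limit` (optional) models a limit register used for protection:
    a logical address outside [0, limit) is an illegal access.
    """
    if limit is not None and not (0 <= logical_address < limit):
        raise MemoryError("illegal address: outside process space")
    return base_register + logical_address

# Values from the figure: CPU issues logical address 250,
# the base/reference register holds 2000.
print(mmu_translate(250, 2000))  # 2250
```

Because the addition happens on every reference, relocating a running program only requires updating the base register.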


Basic Main Memory Allocation
Strategies
In this part of the topic, two basic main memory
allocation strategies used by the operating system
will be discussed: fixed-partition strategies and
variable-partition strategies.

Fixed-Partition Memory Allocation Strategies


In fixed partitioning, the main memory is
divided into a fixed number of regions or
partitions. The figure below shows an example.
As shown in the figure, the 32-MB main memory is divided into 8 fixed, equal-sized partitions of 4 MB each. If the operating system assigns a process to a partition, any unused portion of the said partition cannot be allocated to other processes.

Main Memory is Divided into Fixed, Equal-Sized Partitions


Here are some of the problems that may arise
with this kind of allocation strategy:
A. If the size of the partition is larger than the size of the
process, this leads to wasted memory, or what is called
internal fragmentation. As previously mentioned, any
unused portion of a partition allocated to a process
cannot be allocated to other processes. For example, if a
1-MB process is loaded in a 4-MB partition, the internal
fragmentation is 4 MB - 1 MB = 3 MB; 3 MB of main
memory will be wasted.
B. If the size of the partition is smaller than the size of the
process, the process cannot be loaded. A solution is to
redesign the program or use other programming techniques
such as overlays. Overlaying is a technique wherein only
those instructions or data that are currently needed are
loaded into main memory.
An alternative is to divide the memory into fixed,
unequal-sized partitions. An example is shown
below.

As shown in the figure, the 32-MB main memory is still divided into 8 fixed, but unequal-sized partitions. This somewhat minimizes the problems encountered with equal-sized partitions. For example, if a 1-MB process is loaded in the 2-MB partition, the internal fragmentation is 2 MB - 1 MB = 1 MB. Only 1 MB of main memory will be wasted.

Main Memory is Divided into Fixed, Unequal-Sized Partitions
However, there must be a strategy for
selecting which partition will be allocated to
a process. Obviously, it should be able to
select the smallest partition that will fit an
incoming process to minimize internal
fragmentation. This strategy is called the
best-fit available strategy. For example,
the available partition sizes are 3 MB, 5 MB,
and 6 MB. If an incoming process is 4 MB
in size, it should be allocated to the 5-MB partition
since it is the smallest partition that can
accommodate the process.
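The best-fit available rule can be sketched as a simple scan over the partition table; this is a sketch using the example's partition sizes (3 MB, 5 MB, 6 MB), not an actual OS implementation:

```python
def best_fit_partition(partitions, process_size):
    """Return the index of the smallest partition that fits the
    process, or None if no partition is large enough."""
    best = None
    for i, size in enumerate(partitions):
        if size >= process_size and (best is None or size < partitions[best]):
            best = i
    return best

partitions = [3, 5, 6]                     # sizes in MB, from the example
print(best_fit_partition(partitions, 4))   # 1 -> the 5-MB partition
```

A 7-MB process would return None: no partition can hold it, which is exactly the "process larger than any partition" problem described earlier.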
External fragmentation is another problem. Consider the scenario that is illustrated in the figure below.
As shown in the figure, following the best-fit available strategy, P1 is allocated to the 3-MB partition and P2 is allocated to the 5-MB partition. However, P3 cannot be loaded since it cannot fit in the last remaining partition. External fragmentation happens when the available partitions are not big enough to accommodate any waiting process.
The choice of partition sizes is another problem. If most processes are small relative to the partitions, internal fragmentation will be high. If most processes are too large for the partitions, external fragmentation will be high. The size of the partitions therefore greatly affects the amount of internal and external fragmentation of the main memory.

The degree of multiprogramming can also be affected in fixed-partition memory allocation. For example, suppose a memory has 8 partitions and the first one is allocated to the operating system. There are 7 partitions left for other programs. This means that only a maximum of 7 processes can be multiprogrammed.
Variable-Partition Memory Allocation
Strategies
In variable-partitioning, also called dynamic
partitioning, only the exact memory space
needed by a process is allocated. There is no fixed
number of partitions and therefore, no fixed limit
on the degree of multiprogramming. These would
all depend on the sizes of the processes and the
size of the main memory.
To further understand variable partitioning,
consider the following example. A computer
system has a 32-MB main memory with the
operating system occupying the first 4 MB.
The following processes are inside the job queue:
The initial state of the main memory is
shown in figure (a). The operating
system occupies the first 4 MB locations
and there are 28 MB free memory spaces
left. A free memory space will be referred
to as a hole.

P1 will be loaded and it can occupy the first 12 MB of the 28-MB hole. This would create a new 16-MB hole. This is shown in figure (b).
P2 will be loaded and it can occupy the first 7
MB of the 16-MB hole. This would create a new
9-MB hole. This is shown in figure (c).

P3 will be loaded and it can occupy the first 8 MB of the 9-MB hole. This would create a new 1-MB hole. However, P4 cannot be allocated since its size is larger than the available hole. This is shown in figure (d).
P2 would have finished its execution after 8 time units. The memory space will be de-allocated by the operating system. This would create two new holes. This is shown in figure (e).

P4 will now be loaded and it can occupy the first 5 MB of the 7-MB hole. This would create two new holes: 2 MB and 1 MB. This is shown in figure (f).
There is no internal fragmentation in variable
partitioning since the operating system allocates the
exact memory space needed by a process. However,
as seen from the example, there is still external
fragmentation.

The operating system maintains a list of holes in main memory to keep track of all the free memory spaces.

In variable partitioning, the strategies used by the operating system in deciding in which hole a process will be placed are:

1. First-Fit Strategy

The operating system searches from the beginning of the main memory. The first hole encountered that is large enough for the incoming process will be selected. This is considered the fastest strategy since searching ends as soon as a big enough hole is found.
2. Best-Fit Strategy
The operating system searches the entire list of holes for the smallest hole that can accommodate the incoming process. A possible drawback of this approach is that it tends to produce many small holes that are not large enough to accommodate incoming processes, leading to external fragmentation.
3. Worst-Fit Strategy
The operating system searches the entire list of holes for
the largest hole and this will be allocated to the incoming
process. The remaining hole may be large enough to
accommodate other incoming processes. The figure below
will be used to illustrate the placement strategies.
Example for First-Fit, Best-Fit, and Worst-Fit Strategies

If the first-fit strategy is used, P4 will be placed in the 4-MB hole. If the best-fit strategy is used, P4 will be placed in the 3-MB hole. If the worst-fit strategy is used, P4 will be placed in the 6-MB hole.
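The three placement strategies differ only in which hole they pick from the free list. A sketch, assuming (as the figure implies) holes of 4 MB, 3 MB, and 6 MB in memory order and a P4 small enough to fit in all three; the 2-MB size used for P4 here is an assumption for illustration:

```python
def first_fit(holes, size):
    """First hole, in memory order, that is large enough."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Smallest hole that is large enough."""
    fits = [i for i, h in enumerate(holes) if h >= size]
    return min(fits, key=lambda i: holes[i]) if fits else None

def worst_fit(holes, size):
    """Largest hole."""
    fits = [i for i, h in enumerate(holes) if h >= size]
    return max(fits, key=lambda i: holes[i]) if fits else None

holes = [4, 3, 6]   # hole sizes in MB, in memory order (from the figure)
p4 = 2              # P4's exact size is assumed; any size <= 3 MB behaves the same
print(first_fit(holes, p4), best_fit(holes, p4), worst_fit(holes, p4))
# 0 1 2 -> the 4-MB, 3-MB, and 6-MB holes respectively
```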
The main problem with external fragmentation is that
free space is not contiguous. As a solution,
compaction can be performed by the operating
system on a regular basis. In compaction, processes
are moved towards the beginning of the main
memory. The holes are therefore grouped together,
forming one large block of free memory. The figure
below illustrates this.
Compaction
From the figure, a 12-MB process cannot be assigned
before compaction. However, it can now be placed after
compaction.
One disadvantage of compaction is that, since it involves moving processes during run-time, it is only possible if dynamic run-time loading is used. It also uses a significant amount of CPU time, especially if it involves moving hundreds of processes. For these reasons, compaction is rarely applied.
Another solution to external fragmentation is to allocate non-contiguous memory space to a process. This memory management scheme is called paging.
PAGING
In paging, a process is allowed to occupy non-contiguous memory space. Processes are divided into equal-sized blocks called pages. The main memory is divided into equal-sized blocks called frames. A page is equal in size to a frame; therefore, a page fits exactly into a frame.
As shown in the figure, notice that the pages of P1 and P2 are loaded into any available frames of memory. The operating system maintains a free frame list in main memory to keep track of all the free frames available for allocation.

The operating system also maintains a page table for each process in memory to keep track of where it has placed the pages. The figure below shows the page tables of P1 and P2 from the illustrated example.
A page table is indexed by page number.
From the given page table, page 0 of P1 can
be found in frame 6 of the memory. Also,
page 2 of P2 can be found in frame 5 of the
memory.
Logical Address to
Physical Address
Translation
The CPU generates a logical address which is
composed of two parts or fields. As shown in the
figure below, the most significant part is the page
number p and the least significant part is the offset
d.
Assuming a logical address is given as shown in the figure, the word being accessed is word 253 of page 10.
To further illustrate, the next example will use
binary numbers.
Assume there is a process whose size is 1 KB (1,024 bytes) and the page size is 64 bytes.
The minimum number of bits of its logical address is

Number of Bits = log 1 KB / log 2 = 10 bits

Now, determine exactly how the logical address is divided.
The number of pages of the process is

Number of Pages = Process Size / Page Size


= 1 KB / 64 bytes
= 16 pages

The number of bits for the page number field is

log 16 / log 2 = 4 bits

The number of bits for the offset field is

log 64 / log 2 = 6 bits


Therefore,

Assume that the CPU generates 1101010111 as the logical address. This means that 1101 (13) represents the page number and 010111 (23) represents the offset. The word being accessed is word 23 of page 13.
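Splitting the logical address is a shift and a mask. A sketch using the example's 10-bit address (4-bit page number, 6-bit offset); the function name is an assumption for illustration:

```python
OFFSET_BITS = 6   # page size of 64 bytes -> 6-bit offset

def split_logical(address):
    """Split a logical address into (page number, offset)."""
    page = address >> OFFSET_BITS              # high-order bits
    offset = address & ((1 << OFFSET_BITS) - 1)  # low-order 6 bits
    return page, offset

addr = 0b1101010111            # the CPU-generated address from the text
print(split_logical(addr))     # (13, 23)
```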
A physical address is likewise composed of two parts or fields. As shown in the figure below, the most significant part is the frame number f and the least significant part is the offset d.
The frame number identifies the frame where the
page is located. The offset identifies the location
of the word within the frame. Since the page size
is the same as the frame size, the offset field is
the same as that of the logical address.
Continuing the previous example, assume that the
size of the main memory is 64 KB.

The number of bits in the physical address is

Number of Bits = log 64 KB / log 2 = 16 bits

Now, determine exactly how the physical address is divided.
As previously given, the page size is 64 bytes.
Therefore, the frame size is also 64 bytes. The number
of frames in the main memory is
Number of Frames = Main Memory Size / Frame Size
= 64 KB / 64 bytes
= 1,024 frames
The number of bits needed to identify the frames is
log 1,024 / log 2 = 10 bits
The offset field is the same as that of the logical
address which is 6 bits. Therefore,

From the example, the 10-bit logical address is given as 1101010111. Assuming the page table below, determine the corresponding 16-bit physical address.
The page number 1101 (13) is used as the index in accessing the page table. From the given page table, the frame number is determined to be 0011110101 (245). The physical address is therefore 0011110101 010111.
To summarize, a logical address is converted to a
physical address as follows:

1. Determine the page number of the logical address. The number of bits of the page number depends on the number of pages of the process.
2. Use the page number to index into the page
table of the process. The page table maintains the
frame numbers of all the pages of the process.
3. Append the offset field of the logical address to
the frame number to form the physical address.
The figure below illustrates this
procedure.
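The three steps can be sketched end to end. The page-table contents come from the worked example (page 13 maps to frame 245); the dictionary-based page table is an illustrative simplification of the real structure:

```python
OFFSET_BITS = 6                      # 64-byte pages and frames

def translate(logical, page_table):
    """Translate a logical address to a physical address:
    1. extract the page number, 2. look up the frame number,
    3. append the offset to the frame number."""
    page = logical >> OFFSET_BITS
    offset = logical & ((1 << OFFSET_BITS) - 1)
    frame = page_table[page]         # step 2: page-table lookup
    return (frame << OFFSET_BITS) | offset

page_table = {13: 0b0011110101}      # page 13 -> frame 245, from the example
print(translate(0b1101010111, page_table))   # 15703, i.e. 0011110101010111
```

Note that step 3 is a bit concatenation (shift then OR), not an arithmetic addition of the raw frame number.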
In paging, the problem of external fragmentation
is eliminated. There is still internal fragmentation but
it only occurs at the last page of each process.
Suppose there is a process whose size is 64,050 bytes and the page size is 64 bytes. The process will have a total of 64,050 / 64 ≈ 1,000.8, rounded up to 1,001 pages. The last page will only contain 50 bytes.
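The last-page computation can be checked directly; a short sketch of the arithmetic:

```python
import math

process_size = 64_050   # bytes
page_size = 64          # bytes (= frame size)

pages = math.ceil(process_size / page_size)          # round up to whole pages
last_page_bytes = process_size % page_size or page_size
internal_fragmentation = page_size - last_page_bytes

print(pages)                    # 1001
print(last_page_bytes)          # 50
print(internal_fragmentation)   # 14 bytes wasted in the last frame
```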
The size of the page (or frame) is another issue in paging. If the size of the page is small, the amount of internal fragmentation will decrease. However, this will result in too many pages, thus increasing the size of the page table.
Page Table Implementation
The translation from logical address to physical address should be fast; otherwise, system throughput will decrease. Page tables must be stored in a place where they can be accessed quickly. The following options can be used:

1. Dedicated Registers

Page tables can be stored in high-speed dedicated registers. This results in very fast address translation. However, it can be expensive because, in practice, page tables have a very large number of entries.
2. Main Memory

Page tables can also be stored in main memory. The MMU maintains a page-table base register (PTBR) which points to the memory location of the page table.

3. Cache Memory

The more popular option is storing page table entries in a translation look-aside buffer (TLB). This small but fast cache memory is used to store the most recently used page table entries. The page tables are still stored in main memory; however, the most recently used entries are copied into the TLB for quick translation in the future.
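The lookup order described here (TLB first, then the in-memory page table on a miss) can be pictured as a small cache in front of the full table. A toy sketch only: a real TLB is associative hardware, and the dictionary, capacity, and oldest-entry eviction used below are illustrative assumptions:

```python
class TLB:
    """Toy translation look-aside buffer: caches recent page->frame
    entries and falls back to the full page table on a miss."""
    def __init__(self, page_table, capacity=4):
        self.page_table = page_table   # full table, "in main memory"
        self.capacity = capacity
        self.cache = {}                # page -> frame, insertion-ordered

    def lookup(self, page):
        if page in self.cache:         # TLB hit: fast path
            return self.cache[page]
        frame = self.page_table[page]  # TLB miss: walk the page table
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # evict oldest entry
        self.cache[page] = frame       # cache for future references
        return frame

tlb = TLB({0: 6, 1: 2, 2: 5})
print(tlb.lookup(0))   # 6 (miss: fetched from the page table)
print(tlb.lookup(0))   # 6 (hit: served from the TLB)
```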
Virtual Memory
It is an extension of paging which allows a
process to execute even though not all of its
pages are inside the main memory. The entire
process does not have to be loaded into the
main memory in order to execute.
Virtual Memory and the
Locality
of Reference Principle
Locality of reference describes the usual
behavior of programs. When the CPU executes a
program, only some portions of the program are
used at any one time. This means that it is
possible to execute a process without loading all of
its pages in main memory.

The locality of reference principle also states that when an instruction is used, there is a high probability that the next reference will be in the vicinity of the previous instruction.
The Virtual Memory Page Table

Virtual memory also requires a page table per process. It uses a valid-invalid bit for each entry in order to identify if a page is located in main memory or in the hard disk. If the valid-invalid bit is 1, the page is in main memory. If the valid-invalid bit is 0, the page is in the hard disk. The figure below illustrates an example. In this example, P1 has only 4 pages (pages 0, 3, 4, and 6) inside the main memory. Pages 1, 2, 5, and 7 are in the hard disk.
Whenever a logical address
is generated, the MMU hardware
first verifies if the page is in main
memory. If its valid-invalid bit is 1,
the page is in main memory. The
physical address can be generated
right away. If the valid-invalid bit is
0, the page is not in main memory.
This is called a page fault. If this
happens, the MMU sends an
interrupt signal and the operating
system will take the following
actions:
1. The operating system will look for a free frame
in main memory for the new page. If there is none, it
will remove a page in main memory to give way for
the new page.
2. If there is a free frame, the operating system
locates the page in the hard disk and schedules it for
transfer to the main memory. This may take time
since hard disks are generally slow devices. The
operating system may schedule another process for
execution to keep the CPU busy.
3. The page table is updated after the page is
copied to the main memory. The interrupted
process continues its execution.

The time needed to take care of a page fault is called the page fault time. Since a page fault involves a substantial amount of delay, the operating system must make sure that page faults are minimized.
Frame Allocation

This part of the discussion deals with how many frames should be allocated to each process. The equal allocation technique simply divides the frames equally among processes. For example, if there are 2,048 main memory frames and 8 processes, each process will be allocated 2,048 / 8 = 256 frames.
Another approach is the proportional allocation technique, wherein frames are allocated to processes according to their sizes. For example, there are 500 available frames and the following processes will be allocated:

P1 = 1,000 pages
P2 = 3,000 pages
P3 = 6,000 pages
The allocation of each process depends on the
proportion of a process relative to the total number
of pages.

Proportion of P1 = 1,000 / 10,000 = 0.1 (10%)
Proportion of P2 = 3,000 / 10,000 = 0.3 (30%)
Proportion of P3 = 6,000 / 10,000 = 0.6 (60%)
Therefore, the number of frames to be allocated per process is

No. of Frames Allocated to P1 = 0.1 x 500 = 50 frames
No. of Frames Allocated to P2 = 0.3 x 500 = 150 frames
No. of Frames Allocated to P3 = 0.6 x 500 = 300 frames

If there is a new incoming process, the existing processes may lose some of their allocated frames to accommodate the new process.
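The proportional allocation arithmetic can be sketched directly from the example; integer arithmetic is used so the proportions come out exact. Note this simple version truncates fractional shares, so when the sizes do not divide evenly a real allocator must also hand out the leftover frames:

```python
def proportional_allocation(process_sizes, total_frames):
    """Allocate frames to each process in proportion to its size
    (in pages). Fractional shares are truncated for simplicity."""
    total = sum(process_sizes.values())
    return {name: size * total_frames // total
            for name, size in process_sizes.items()}

sizes = {"P1": 1_000, "P2": 3_000, "P3": 6_000}   # process sizes in pages
print(proportional_allocation(sizes, 500))
# {'P1': 50, 'P2': 150, 'P3': 300}
```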
Page Replacement

If a page fault occurs, the requested page must be copied to the main memory. If there are no free frames available, one of the existing pages in main memory must be removed to give way to the new page. If the page to be removed was not modified, the operating system can simply overwrite it since a copy of the same page is in the hard disk. But if the page to be removed was modified, the copy in the hard disk must first be updated before the page can be overwritten in main memory.
In virtual memory, a dirty bit is used for each entry in the page table to indicate whether a page has been modified. Bit 1 represents modified pages (dirty pages), while bit 0 represents unmodified pages (clean pages). As previously mentioned, a modified page in main memory must first be written back to the hard disk before it is removed.
The figure below illustrates a sample page table. As seen from the figure, the dirty bit of page 3 is 1. If page 3 is selected for replacement, it has to be written back to the hard disk. The dirty bit of pages 0, 4, and 6 is 0. If any of them is selected for replacement, it can simply be overwritten.

If the page to be replaced in memory belongs to the faulting process, this is called local replacement. If the page to be replaced belongs to another process, this is called global replacement.
1. Optimal Algorithm
This solution selects the page that will not be
needed or referenced for the longest period of time
in the future. To better understand this algorithm,
consider the following example.

Assume that there are 4 memory frames and the following page reference pattern:

1, 2, 3, 4, 3, 1, 4, 2, 5, 2, 6, 2, 3, 1
The simulation trace is:

Time 0: Initially, the four main memory frames are empty.
Time 1: Page 1 is referenced by the process. Since the main memory is empty, a page fault occurs. Page 1 is copied from the hard disk to frame 0 of the main memory. This page fault is called an initialization page fault, and it occurs while the memory frames are being filled up.
Time 2: Page 2 is referenced. Since there are free memory frames, an initialization page fault occurs. Page 2 is copied from the hard disk to frame 1 of the main memory.
Time 3: Page 3 is referenced. Since there are free memory frames, an initialization page fault occurs. Page 3 is copied from the hard disk to frame 2 of the main memory.
Time 4: Page 4 is referenced. Since there is one memory frame left, an initialization page fault occurs. Page 4 is copied from the hard disk to frame 3 of the main memory.
Times 5, 6, 7, 8: Pages 3, 1, 4, and 2 are referenced, respectively. There are no page faults and execution continues normally since all the referenced pages are in main memory.
Time 9: Page 5 is referenced. Since the page is not in main memory, a page fault is generated, which means that a page in main memory has to be replaced. The choices for page replacement are pages 1, 2, 3, and 4. Looking at the next sequence of pages in the given page reference pattern, page 4 is the one that will not be used for the longest period of time. Page 5 replaces page 4 in frame 3 of the main memory.
Time 10: Page 2 is referenced. There is no page fault since page 2 is in main memory.
Time 11: Page 6 is referenced. Since the page is not in main memory, a page fault is generated, which means that a page in main memory has to be replaced. The choices for page replacement are pages 1, 2, 3, and 5. Looking at the next sequence of pages in the given page reference pattern, page 5 is the one that will not be used for the longest period of time. Page 6 replaces page 5 in frame 3 of the main memory.
Times 12, 13, 14: Pages 2, 3, and 1 are referenced, respectively. There are no page faults and execution continues normally since all the referenced pages are in main memory.

The total number of page faults for this algorithm is 2, excluding the initialization page faults.
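The trace above can be reproduced by a short simulation. A sketch, not an OS implementation: the victim is the resident page whose next use lies farthest in the future, with pages never referenced again preferred. The count here includes the 4 initialization faults, so subtracting them gives the text's total of 2:

```python
def optimal_faults(refs, n_frames):
    """Simulate optimal (Belady) replacement; return total page faults,
    including the initialization faults that fill the empty frames."""
    frames, faults = [], 0
    for t, page in enumerate(refs):
        if page in frames:
            continue                    # hit: nothing to do
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)         # initialization fault
        else:
            def next_use(p):
                # Distance to p's next reference; infinity if never used again.
                future = refs[t + 1:]
                return future.index(p) if p in future else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 3, 1, 4, 2, 5, 2, 6, 2, 3, 1]
print(optimal_faults(refs, 4) - 4)   # 2 faults, excluding the 4 initialization faults
```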
Although the optimal algorithm produces the lowest page fault rate possible, it cannot be implemented because it requires the operating system to know the next pages to be referenced. It is instead used as a performance benchmark for other algorithms.
2. First-In, First-Out (FIFO) Algorithm
This solution selects the page to be replaced on a first-in, first-out basis. In other words, it selects the oldest page in main memory. Using the previous example, the simulation trace is
Times 1, 2, 3, 4: Similar to the optimal algorithm.
Times 5, 6, 7, 8: Pages 3, 1, 4, and 2 are referenced, respectively. There are no page faults and execution continues normally since all the referenced pages are in main memory.
Time 9: Page 5 is referenced. Since the page is not in main memory, a page fault is generated, which means that a page in main memory has to be replaced. The choices for page replacement are pages 1, 2, 3, and 4. The oldest among them is page 1. Page 5 replaces page 1 in frame 0 of the main memory.
Time 10: Page 2 is referenced. There is no page fault since page 2 is in main memory.
Time 11: Page 6 is referenced. Since the page is not in main memory, a page fault is generated, which means that a page in main memory has to be replaced. The choices for page replacement are pages 2, 3, 4, and 5. The oldest among them is page 2. Page 6 replaces page 2 in frame 1 of the main memory.
Time 12: Page 2 is referenced. Since the page is not in main memory, a page fault is generated, which means that a page in main memory has to be replaced. The choices for page replacement are pages 3, 4, 5, and 6. The oldest among them is page 3. Page 2 replaces page 3 in frame 2 of the main memory.
Time 13: Page 3 is referenced. Since the page is not in main memory, a page fault is generated, which means that a page in main memory has to be replaced. The choices for page replacement are pages 2, 4, 5, and 6. The oldest among them is page 4. Page 3 replaces page 4 in frame 3 of the main memory.
Time 14: Page 1 is referenced. Since the page is not in main memory, a page fault is generated, which means that a page in main memory has to be replaced. The choices for page replacement are pages 2, 3, 5, and 6. The oldest among them is page 5. Page 1 replaces page 5 in frame 0 of the main memory.

The total number of page faults for this algorithm is 5, excluding the initialization page faults.
In this algorithm, the operating system uses a
FIFO queue which records the page numbers
according to the time they are placed in main
memory.

The FIFO algorithm has low overhead since the CPU does not perform much computation to find the page to be replaced. However, removing the oldest page may not be the best choice because there is no guarantee that it will not be used again. The algorithm tends to replace old pages even if they are used frequently. Generally, this algorithm has a high page fault rate.
Intuitively, increasing the number of main memory frames should reduce the number of page faults. This is not always true for the FIFO algorithm. There are cases wherein the page fault rate increases as the number of main memory frames increases. This is called Belady's Anomaly.
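The FIFO trace can be checked with a simulation; a deque stands in for the FIFO queue of resident pages described above. The classic 12-reference string used at the end to exhibit Belady's Anomaly is a standard textbook example, not taken from this document:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Simulate FIFO replacement; return total page faults,
    including initialization faults."""
    queue = deque()          # front = oldest resident page
    faults = 0
    for page in refs:
        if page in queue:
            continue         # hit: FIFO order is NOT updated on a hit
        faults += 1
        if len(queue) >= n_frames:
            queue.popleft()  # evict the oldest page
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 3, 1, 4, 2, 5, 2, 6, 2, 3, 1]
print(fifo_faults(refs, 4) - 4)   # 5 faults, excluding the 4 initialization faults

# Belady's Anomaly: adding a frame can INCREASE total faults.
belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))   # 9 10
```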

3. Second-Chance Algorithm
This solution works in the same manner as the FIFO algorithm. However, every page is tied to a reference bit. If the reference bit is 1, the page has been used; if the reference bit is 0, the page has not been used. The operating system initially resets the reference bits of all pages to 0. The reference bit of a page is set to 1 whenever the page is used.

As in the FIFO algorithm, the oldest page is initially selected for page replacement. However, the algorithm first checks the page's reference bit. If the bit is 1, the page will not be replaced; its reference bit is reset to 0 and the page is placed at the rear of the queue. The next oldest page then goes through the same process until a page with a reference bit of 0 is selected.
The performance of the second-chance algorithm is much better than that of the FIFO algorithm. However, it has higher overhead since it involves processing the reference bits.

4. Least Recently Used (LRU) Algorithm

This solution selects the least recently used page for replacement. It is based on the assumption that recently used pages are more likely to be referenced again in the future (the locality of reference principle). Using the previous example, the simulation trace is
Times 1, 2, 3, 4: Similar to the optimal algorithm.
Times 5, 6, 7, 8: Pages 3, 1, 4, and 2 are referenced, respectively. There are no page faults and execution continues normally since all the referenced pages are in main memory.
Time 9: Page 5 is referenced. Since the page is not in main memory, a page fault is generated, which means that a page in main memory has to be replaced. The choices for page replacement are pages 1, 2, 3, and 4. The least recently used among them is page 3. Page 5 replaces page 3 in frame 2 of the main memory.
Time 10: Page 2 is referenced. There is no page fault since page 2 is in main memory.
Time 11: Page 6 is referenced. Since the page is not in main memory, a page fault is generated, which means that a page in main memory has to be replaced. The choices for page replacement are pages 1, 2, 4, and 5. The least recently used among them is page 1. Page 6 replaces page 1 in frame 0 of the main memory.
Time 12: Page 2 is referenced. There is no page fault since page 2 is in main memory.
Time 13: Page 3 is referenced. Since the page is not in main memory, a page fault is generated, which means that a page in main memory has to be replaced. The choices for page replacement are pages 2, 4, 5, and 6. The least recently used among them is page 4. Page 3 replaces page 4 in frame 3 of the main memory.
Time 14: Page 1 is referenced. Since the page is not in main memory, a page fault is generated, which means that a page in main memory has to be replaced. The choices for page replacement are pages 2, 3, 5, and 6. The least recently used among them is page 5. Page 1 replaces page 5 in frame 2 of the main memory.

The total number of page faults for this algorithm is 4, excluding the initialization page faults.
In the LRU algorithm, the operating system maintains a stack wherein the most recently used page is at the top of the stack.

Because it follows the locality of reference principle, this algorithm demonstrates relatively good performance. However, it has high overhead since it involves updating the stack for each page that is referenced.
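The LRU trace can be checked with the same kind of simulation. A sketch: a list kept in recency order stands in for the stack described above (least recently used at the front, most recently used at the back):

```python
def lru_faults(refs, n_frames):
    """Simulate LRU replacement; return total page faults,
    including initialization faults."""
    stack = []               # front = least recently used, back = most recent
    faults = 0
    for page in refs:
        if page in stack:
            stack.remove(page)   # hit: move the page to the top of the stack
        else:
            faults += 1
            if len(stack) >= n_frames:
                stack.pop(0)     # evict the least recently used page
        stack.append(page)
    return faults

refs = [1, 2, 3, 4, 3, 1, 4, 2, 5, 2, 6, 2, 3, 1]
print(lru_faults(refs, 4) - 4)   # 4 faults, excluding the 4 initialization faults
```

Unlike FIFO, the recency order is updated on every reference, including hits, which is exactly the stack-maintenance overhead noted above.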
THE END
