
Info 324 Operating Systems II

Dr. Samar TAWBI


1

Memory Management

Memory management: objectives


Optimize the use of the main memory (RAM): keep the largest possible number of active processes in memory, in order to make the most of multiprogramming. Keep the system, especially the CPU, as busy as possible.
Dynamic allocation when needed.

Memory/addresses physical and logical


Logical memory: the addressing space of a program.
Logical address: generated by the CPU; also referred to as a virtual address.

Physical memory:
the main memory (RAM) of the machine. Physical address: an address as seen by this memory.

These concepts are kept separate because programs are, in general, loaded at different memory positions.
So physical address ≠ logical address.

Translation: logical address → physical address


In early systems, a program was always loaded into the same memory zone. Multiprogramming and dynamic allocation led to the need to load programs at different positions; today this is done by the MMU while the program is running.
MMU = hardware device that maps virtual addresses to physical addresses.
In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
The user program deals with logical addresses; it never sees the real physical addresses.
5

The position and function of the MMU

Translation: logical address → physical address.


A translation mechanism is necessary
to transform the symbolic addresses into real addresses.

Base register (b): the lowest memory address used by a process. Length (l): the length of the allocated physical space. The translation can be written in this form:

1) if a < 0 then memory violation
2) else if a > l then memory violation
3) else f(a) = b + a
4) a is a valid address

f : S → R,  a ↦ b + a
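
As a rough illustration, here is a minimal C sketch of this base/limit check (function and variable names are invented for the example):

#include <stdio.h>
#include <stdlib.h>

/* Base/limit translation: b = base register, l = length of the allocated space. */
long translate(long a, long b, long l) {
    if (a < 0) {                          /* 1) below the allocated space  */
        fprintf(stderr, "memory violation\n");
        exit(1);
    }
    if (a > l) {                          /* 2) beyond the allocated space */
        fprintf(stderr, "memory violation\n");
        exit(1);
    }
    return b + a;                         /* 3) f(a) = b + a: valid address */
}

int main(void) {
    printf("%ld\n", translate(100, 4000, 500));   /* prints 4100 */
    return 0;
}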

Translation: logical address → physical address


Base register = the lowest memory address used by the process

Programs swapping
A program, or a part of a program, can be temporarily unloaded from memory to let other programs execute.
It is put in secondary memory, normally on disk.

Contiguous Allocation
Main memory is usually divided into two partitions:
the resident operating system, usually held in low memory;
user processes, then held in high memory.

Single-partition allocation
A relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data.
The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses: each logical address must be less than the limit register.

Contiguous Allocation
(Figure: memory layout — OS, prog. 1, prog. 2, available space, prog. 3.)
Here there are 4 partitions for the programs; each one is loaded in a single zone of the memory.
11

Relocation and Limit Registers

12

Contiguous Allocation
Multiple-partition allocation
Hole: a block of available memory; holes of various sizes are scattered throughout memory.
When a process arrives, it is allocated memory from a hole large enough to accommodate it.
The operating system maintains information about: a) allocated partitions, b) free partitions (holes).
(Figure: successive memory states as processes arrive and terminate, creating holes between the allocated partitions.)

13

Fragmentation: non used memory


External fragmentation: total memory space exists to satisfy a request, but it is not contiguous.
Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition, but not being used.
There is no solution for internal fragmentation.

14

Fixed Partitions
Main Memory divided into a certain number of partitions

Partitions may be of equal or unequal size. A process can be loaded into any partition large enough to hold it.

15

Loading Algorithm for fixed partitions


Partitions of unequal size: using many queues
Assign each process to the smallest partition it can fit in; one queue per partition size.

This tries to minimize the internal fragmentation.


Problem: certain queues will stay empty if there is no process of their size (external fragmentation).
16

Loading Algorithm for fixed partitions


Partitions of unequal size: using one queue
We choose the smallest free partition able to contain the next process; the multiprogramming level increases, in spite of the internal fragmentation.

17

Fixed partitions
Simple, but... inefficient use of memory: every program, however small, occupies one entire partition, so there is internal fragmentation.

Unequal-sized partitions reduce these problems, but they remain...

18

Dynamic partitions
Partitions are variable in number and size. Each process is allocated exactly the memory space it requires. Unused holes will form in the memory: this is external fragmentation.

19

dynamic partitions: example

(d) There is a hole of 64K after loading 3 processes: no more free space for other processes. If P2 is blocked (e.g. waiting for an event), it can be unloaded so that P4 (128K) can be loaded.
20

dynamic partitions: example

(e-f) P2 is suspended, P4 is loaded. A hole of 224-128=96K is created (external fragmentation). (g-h) P1 terminates, P2 is reloaded: another hole of 320-224=96K... We now have 3 small holes, probably useless. COMPACTION would create one hole of 256K.

96+96+64=256K external fragmentation

21

Compaction (compression)
A solution to external fragmentation: the programs are moved in memory so as to merge all the small available holes into one big hole. It is done when a program asking to execute cannot find a free hole that fits it, although its size is less than the total external fragmentation.

Disadvantages:
the transfer time of the programs;
the need to readjust all the address references inside the programs.
22

Allocation Algorithms
How to satisfy a request of size n from a list of free holes? Which hole should be allocated in order to reduce the need for compaction?

First-fit: allocate the first hole that is big enough.
Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
Next-fit: choose the first suitable hole after the last allocated place.
First-fit and best-fit are better than worst-fit in terms of speed and storage use.
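
A minimal C sketch of first-fit and best-fit over a list of free holes (the hole structure is an assumption made for the example, not something defined in these slides):

#include <stddef.h>

/* A free hole: start address and size, kept in a singly linked list. */
struct hole {
    size_t start, size;
    struct hole *next;
};

/* First-fit: return the first hole big enough for request, or NULL. */
struct hole *first_fit(struct hole *free_list, size_t request) {
    for (struct hole *h = free_list; h; h = h->next)
        if (h->size >= request)
            return h;
    return NULL;
}

/* Best-fit: return the smallest hole big enough for request, or NULL. */
struct hole *best_fit(struct hole *free_list, size_t request) {
    struct hole *best = NULL;
    for (struct hole *h = free_list; h; h = h->next)
        if (h->size >= request && (!best || h->size < best->size))
            best = h;
    return best;
}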

23

Non contiguous allocation


Used in order to reduce fragmentation.
Divide a program into parts and allow a separate allocation for each part. The parts are much smaller than the whole process, so more processes can be resident and the CPU is better used.

The small holes can be used more easily.

There are two techniques to do this:


Segmentation: uses parts of the program that have a logical meaning (modules).
Paging: uses arbitrary parts of the program (the program is divided into equal-sized pages).
The two can be combined.

24

Segments are logical parts of the program

(Figure: a main program A containing JUMP(D, 100) and LOAD(C, 250); data segments B and C; a subprogram D containing LOAD(B, 50).)

4 segments: A, B, C, D

25

The segments as memory allocation units


(Figure: segments 0-3 of the user space are placed in non-contiguous zones of the physical memory.)

Given that the segments are smaller than entire programs, this technique implies less fragmentation (external in this case)
26

Mechanism for the segmentation


A table contains the start address of every segment in a process; each address within a segment is added by the MMU to the start address of its segment.

(Figure: the segment descriptors table holds the addresses of segments 0-3; the entry of the current segment gives its location in physical memory.)

27

Details
The logical address consists of a pair: <seg nb, offset>
Where the offset is the address in the segment

The segment table contains the segment descriptors:


base address (register), segment length, protection info.

In the PCB (Process Control Block) of the process there is a pointer to the address in memory of the segment table, as well as the number of segments in the process. At context-switch time, this information is loaded into the appropriate registers of the CPU.
28

Address translation in segmentation

If d > length: error!
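
A small C sketch of this check and translation (the descriptor structure and names are illustrative, assuming the segment table layout described above):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical segment descriptor: base address and segment length. */
struct seg_desc {
    uint32_t base;
    uint32_t length;
};

/* Translate the logical address <s, d> through the segment table. */
uint32_t seg_translate(const struct seg_desc *table, uint32_t nsegs,
                       uint32_t s, uint32_t d) {
    if (s >= nsegs || d >= table[s].length) {   /* offset beyond the segment length: error */
        fprintf(stderr, "segmentation violation: <%u, %u>\n", s, d);
        exit(1);
    }
    return table[s].base + d;                   /* physical = base + offset */
}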

29

The mechanism in detail


(Figure: from the address <segment, offset> in the program to the final physical address.)

30

Segments sharing: segment 0 is shared


e.g. a word processor uses the same editor segment for different documents

31

Segmentation and protection


Each segment descriptor can contain protection info:
segment length;
the user's privileges on this segment: read, write, execute.
If, when calculating the address, we find that the user does not have the access right → interruption.
This information can vary from one user to another, for the same segment!

(Figure: each entry of the segment descriptors table holds the limit, the base and the read/write/execute rights.)


32

Evaluation of the simple segmentation


Advantages:
the memory allocation unit is smaller than the whole program, and it is a logical entity known by the program;
the segments may change location in memory;
protection and sharing of segments are easy (in principle).

Disadvantage: the problem of dynamic partitions:


external fragmentation is not eliminated:
holes in memory, compaction?

Another solution is to simplify the mechanism by using memory allocation units of equal size: paging.
33

Segmentation VS. Paging

The problem with segmentation is that the memory allocation unit (the segment) has a variable length. Paging uses fixed-size memory allocation units, which solves this problem.

34

Simple Paging
The memory is partitioned into small parts of the same size: the physical pages or frames. Each process is also partitioned into small parts of the same size, called (logical) pages. The logical pages of a process can be assigned to available frames anywhere in memory. Consequences:
a process can be scattered across the physical memory;
external fragmentation is eliminated.
35

example of process loading

36

example of process loading


We can now load a program D that needs 5 frames,
even if 5 free contiguous frames are not available.

External fragmentation is limited to the case where the number of pages of a process is greater than the number of available frames. Only the last page of a program can have internal fragmentation (on average: 1/2 frame per process).

37

Page Table

The entries in the page table are also called page descriptors.
38

Page table

The OS maintains a page table for each process.


Each page descriptor contains the frame number where the corresponding page is physically located. The page table is indexed by the (logical) page number in order to obtain the frame number. A list of the available frames is also maintained
(free frame list).
39

Address translation
The logical address is easily translated into a physical address:
page sizes are powers of 2, so pages always begin at addresses that are powers of 2, with as many zeros on the right as the offset length. These 0s are simply replaced by the offset.

Ex: if 16 bits are used for the addresses and the page size = 1K: 10 bits are needed for the offset, which leaves 6 bits for the page number. The logical address (n, m) is translated into the physical address (k, m) by using n as an index in the page table and replacing it by the frame number found there: k.
m does not change.
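
A small C sketch of this translation, using the slide's example sizes (16-bit addresses, 1K pages); the page-table contents are invented for illustration:

#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 10                    /* 1K pages -> 10-bit offset */
#define PAGE_SIZE   (1u << OFFSET_BITS)

/* Hypothetical page table: index = page number n, value = frame number k. */
static const uint32_t page_table[64] = { [0] = 5, [1] = 2, [2] = 7 };

uint32_t page_translate(uint16_t logical) {
    uint32_t n = logical >> OFFSET_BITS;       /* page number: the 6 high bits        */
    uint32_t m = logical & (PAGE_SIZE - 1);    /* offset: the 10 low bits, unchanged  */
    uint32_t k = page_table[n];                /* frame number from the table         */
    return (k << OFFSET_BITS) | m;             /* concatenate k and m                 */
}

int main(void) {
    /* logical (n = 1, m = 100) -> physical (k = 2, m = 100) = 2*1024 + 100 */
    printf("%u\n", page_translate((1u << OFFSET_BITS) | 100));
    return 0;
}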
40

Mechanism: hardware

41

Address Translation (logical-physical) for paging

42

Address translation: segmentation and paging


In the segmentation case, as well as in the paging case, we add the offset to the address found for the segment or the page. However, in paging, the addition can be done by simple concatenation:

1101 0000 + 1010 = 1101 1010

43

Simple segmentation vs. simple paging


Paging deals only with the loading problem, while segmentation also addresses the linking problem.
Segmentation is visible to the program, but paging is not.
The segment is a logical unit of protection and sharing, while the page is not,
so protection and sharing are easier with segmentation.

Segmentation requires more complex hardware for address translation (addition instead of concatenation).
Segmentation suffers from external fragmentation (dynamic partitions); paging produces internal fragmentation, but not too much (1/2 frame per program).
Fortunately, segmentation and paging can be combined.

44

Paging and segmentation combined


The programs are divided into segments, and the segments are paged. So each entry in the segment table is not a memory address, but the address of the page table of that segment. The segment and page tables may themselves be paged.

45

Addressing
(Figure: the logical address fields — segment number s, page number p, offsets d and d' — flowing through the segment table and the page table.)

segment table base register: a CPU register

46

Page Table Implementation

The page table is kept in main memory.

The page-table base register (PTBR) points to the page table. The page-table length register (PTLR) indicates the size of the page table.
Every data/instruction access then requires 2 memory accesses:
one for the page table entry, one for the data/instruction.

The two-memory-access problem is solved by the use of a special fast-lookup hardware cache (i.e. caching page-table entries in registers):


associative registers, or Translation Look-aside Buffers (TLBs) = associative memory.


47

TLB : Translation Look-aside Buffer


A small, fast lookup cache called the TLB, or ASSOCIATIVE MEMORY.
The TLB is used along with the page tables kept in memory. The logical address is first looked up in the TLB. If the page number is found, its frame number is immediately available => 1 memory access to get the information. If the page number is not in the TLB (a miss) => 2 memory accesses:
o 1 access to get the frame number (from the page table);
o 1 access to get the desired information;
o the page number and frame number are then added to the TLB for quick access on the next reference.

This procedure may be handled by the MMU, but today it is often handled by software, i.e. the operating system.
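
A schematic C sketch of this lookup (TLB size, replacement policy and table contents are illustrative assumptions, not part of any real API):

#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16

/* One TLB entry: a cached page -> frame mapping. */
struct tlb_entry { uint32_t page, frame; bool valid; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Hypothetical in-memory page table (index = page number). */
static uint32_t page_table[1024];

/* Return the frame for page: 1 memory access on a TLB hit, 2 on a miss. */
uint32_t lookup_frame(uint32_t page) {
    for (int i = 0; i < TLB_ENTRIES; i++)            /* associative search */
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;                     /* hit */

    uint32_t frame = page_table[page];               /* miss: extra memory access */
    tlb[page % TLB_ENTRIES] =                        /* cache it for the next reference */
        (struct tlb_entry){ .page = page, .frame = frame, .valid = true };
    return frame;
}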
48

Paging Hardware With TLB

49

Associative Registers : TLB


Each associative register holds a pair: Page # | Frame #

Address translation (A, A')

If A (the page number) is in an associative register, get the frame number A' out.
Otherwise, we need to go to the page table for the frame number,

which requires an additional memory reference.

Page hit ratio: the percentage of time a page number is found in the associative memory.


50

Performance Characteristics of TLB


Typical TLB:
Size: 8 - 4,096 entries
Hit time: 0.5 - 1 clock cycle
Miss penalty: 10 - 100 clock cycles
Miss rate: 0.01 - 10%

If a TLB hit takes 1 clock cycle, a miss takes 30 clock cycles, and the miss rate is 1%, the effective memory cycle rate for page mapping is
1 × 0.99 + (1 + 30) × 0.01 = 1.30 clock cycles per memory access.
51

Effective Access time


Associative lookup time = ε time units
Assume memory cycle time = 1 microsecond
Hit ratio = α
Effective access time (EAT):
EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α microseconds
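For example, with illustrative values ε = 0.2 microseconds and α = 0.9 (these numbers are not from the slide): EAT = (1 + 0.2) × 0.9 + (2 + 0.2) × 0.1 = 1.08 + 0.22 = 1.30 microseconds, which indeed equals 2 + ε − α = 2 + 0.2 − 0.9 = 1.3.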

52

Exercise 2-1
(Contiguous Allocation, dynamic partitions)
Consider the following sequence of allocation (+) and liberation (-) requests in memory space of 1000 blocs, using the contiguous allocation with dynamic partitions: +300, +200, +260, -200, +100, -300, +250, +400, -260, +150, +120, -100, -120, +200, -150, -250, +100, -400, +600, -100, -200, -600 Indicate how, starting from a free memory, the OS realizes the allocation using Best Fit, First Fit strategies.

53

Exercise 2-2
Segmentation
We consider the following segment table:

Segment   Base   Length
   0        540     234
   1       1234     128
   2         54     328
   3       2048    1024
   4        976     200

Calculate the real addresses corresponding to the following virtual addresses : (0, 128), (1, 99), (1, 100), (2, 465), (3, 888), (4, 100), (4, 344)
54

Exercise 2-3
Paging
In a paged system, the pages are 256 words each and the memory contains 4 frames. Consider the following page table (i = invalid, page not in memory):

Page:   0  1  2  3  4  5  6  7
Frame:  3  0  i  i  2  i  i  1

Calculate the size of the physical memory.

Give the number of bits needed for the program's logical addresses.

Calculate the real addresses corresponding to the virtual addresses:
(0,240), (2,34), (2,35), (6,42), (7,230)

Find the real address corresponding to the virtual address 456.

55

From Paging and segmentation to virtual memory


A process is composed of parts (pages or segments) and does not have to occupy a contiguous place in memory.
Virtual memory: the separation of user logical memory from physical memory.
Only part of the program needs to be in memory for execution.
The logical address space can therefore be much larger than the physical address space.
It allows address spaces to be shared by several processes, and allows more efficient process creation.
Virtual memory can be implemented via: demand paging, demand segmentation.

So the sum of the logical memories of the running processes may exceed the available physical memory.
The base concept of virtual memory:

an image of the whole addressing space of the process is kept in secondary memory (normally disk), from which the missing pages can be taken when needed:
the swapping mechanism.
56

Virtual memory: the result is a mechanism that


combines the main and secondary memories.

57

New format of the page table


(the same idea as the segment table)
Each entry holds the page (frame) address and a valid bit:
valid bit = 1 if the page is in memory, 0 if it is only in secondary memory.

At the beginning, the valid bit = 0 for all pages.
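
In C, such an entry could be sketched as follows (field names are illustrative):

#include <stdint.h>
#include <stdbool.h>

/* Page-table entry for demand paging (same idea as a segment descriptor). */
struct pte {
    uint32_t frame;   /* frame (page) address, meaningful only when valid = 1          */
    bool     valid;   /* 1: page is in main memory; 0: page only in secondary memory   */
};
/* Initially, valid = 0 for every page of the process. */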

58

Pages in RAM or on disk


Page A in RAM and on disk

Page E only on disk

59

Advantages of partial loading


It can enable processes to share memory.
Only a few parts of each process are loaded:

many pages or segments that are rarely used will not be loaded.
It is now possible to execute a set of processes even if their total size exceeds the memory size.
It is possible to use more bits for a logical address than for the addresses of the main memory.

60

virtual memory: could be huge!


Ex: 16 bits are required to address a memory of 64KB. Using pages of 1KB, 10 bits are required for the offset; for the page number of the logical address we can use more than 6 bits (more than 2^6 = 64 entries in the page table), because not all pages are loaded in memory simultaneously.

So the limit of the virtual memory is defined by the number of bits reserved for the address.
The logical memory is called virtual memory:
it is maintained in secondary memory, and its parts are loaded in main memory only on demand.
61

Process execution
The OS loads in memory only a few parts of the program (including its starting point). Each entry of the page (or segment) table has a valid bit that indicates whether this page or segment is in memory or not. The resident set is the portion of the process loaded in memory. An interruption is generated when a logical address refers to a part that is not in the resident set: a page fault.

62

Steps in Handling a Page Fault

virtual memory

63

When the RAM is full but we need a page not in RAM => page replacement

64

Pages Replacement
1. Find the location of the desired page on disk.
2. Find a free frame:
   - if there is a free frame, use it;
   - if there is no free frame, use a page replacement algorithm to select a victim frame.
3. Read the desired page into the (newly) free frame.
4. Update the page and frame tables.
5. Restart the process.
65

the victim page...

66

Algorithms for page replacement


The optimal algorithm (OPT): replaces the page that will not be used for the longest period of time.
4-frames example, reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: 6 page faults.

How do you know this? (It is impossible to know the future.)
OPT is used for measuring how well other algorithms perform.
67

Optimal Page Replacement

68

LRU Page Replacement


Chronological order (LRU): Least Recently Used.
Replace the page whose last reference is the oldest (the past is used to predict the future).

69

Least Recently Used (LRU) Algorithm


Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
(Figure: frame contents after each reference.)

Counter implementation:
every page-table entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.
When a page needs to be replaced, look at the counters to determine which page to change (the one with the smallest counter).
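
A minimal C sketch of this counter-based LRU (the frame count and structure names are illustrative):

#include <stdint.h>

#define NFRAMES 4

/* Each frame records the logical clock value of its last reference. */
struct frame { int page; uint64_t last_used; };
static struct frame frames[NFRAMES];
static uint64_t clock_ticks;              /* incremented at every reference */

/* Copy the clock into the counter of frame f when its page is referenced. */
void touch(int f) { frames[f].last_used = ++clock_ticks; }

/* When a page must be replaced, pick the frame with the smallest counter. */
int lru_victim(void) {
    int victim = 0;
    for (int f = 1; f < NFRAMES; f++)
        if (frames[f].last_used < frames[victim].last_used)
            victim = f;
    return victim;
}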
70

Comparison OPT - LRU
Example: a process of 5 pages when there are only 3 available physical frames. In this example, OPT has 3+3 faults, while LRU has 3+4.

71

First-In-First-Out (FIFO) Algorithm


Logic: the page that has been in memory the longest has had its chance to execute; when the memory is full, the oldest page is replaced. So: first-in, first-out. Simple to apply.

But: a page that is frequently used is often also the oldest; it will be replaced by FIFO!

First-In-First-Out (FIFO) Algorithm


Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
With 3 frames (3 pages can be in memory at a time per process): 9 page faults.
With 4 frames: 10 page faults.

FIFO Replacement (Belady's Anomaly)


more frames ⇒ more page faults (for some reference strings)
73

Comparison FIFO - LRU

Contrary to FIFO, LRU notices that pages 2 and 5 are the most frequently used. In this case, the performance of FIFO is worse than LRU: LRU = 3+4, FIFO = 3+6.

74

FIFO Implementation
Easy to implement using a queue of the memory frames, which is updated at each page fault.
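
A small self-contained C sketch that counts FIFO page faults for the reference string used above (3 frames); with a fixed set of frames, the queue reduces to a circular index over the frames:

#include <stdio.h>

#define NFRAMES 3

/* Count page faults for a reference string under FIFO replacement. */
int fifo_faults(const int *refs, int n) {
    int frames[NFRAMES];
    int oldest = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;

        faults++;
        if (used < NFRAMES)
            frames[used++] = refs[i];            /* a frame is still free   */
        else {
            frames[oldest] = refs[i];            /* evict the oldest page   */
            oldest = (oldest + 1) % NFRAMES;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    printf("%d page faults\n", fifo_faults(refs, 12));   /* 9, as on the slide */
    return 0;
}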

75

Second-Chance (clock) Page-Replacement Algorithm


Like FIFO, but it takes into account the recent use of the pages.
Circular list structure:

the frames conceptually form a circular buffer; when a page is loaded into a frame, a pointer moves to the next frame of the buffer.

For each frame of the buffer, a "used" bit is set to 1 (by the hardware) when:
a page is newly loaded into the frame, or
the page held in this frame is referenced.

The frames that have just been used (bit = 1) are not replaced (second chance).
The next frame of the buffer to be replaced is the first one found with used bit = 0;
during this search, every used bit = 1 encountered is set to 0.
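
A compact C sketch of this second-chance (clock) selection (the frame count is illustrative):

#include <stdbool.h>

#define NFRAMES 8

static bool used_bit[NFRAMES];   /* set to 1 by hardware on load or reference      */
static int hand = 0;             /* pointer into the circular buffer of frames     */

/* Pick the next victim: the first frame with used bit = 0, clearing 1s on the way. */
int clock_victim(void) {
    while (used_bit[hand]) {
        used_bit[hand] = false;            /* second chance: clear the bit and skip */
        hand = (hand + 1) % NFRAMES;
    }
    int victim = hand;
    used_bit[victim] = true;               /* the newly loaded page counts as used  */
    hand = (hand + 1) % NFRAMES;           /* pointer now points to the next frame  */
    return victim;
}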

76

Clock Algorithm: an example

The page 727 is loaded into frame 4. The next victim is frame 5, then frame 8.

77

Comparison: Clock, FIFO and LRU

* indicates that the used bit is 1. The clock algorithm protects the frequently used pages from replacement by setting their used bit to 1 at each reference. LRU = 3+4, FIFO = 3+6, Clock = 3+5.
78

Details for the Clock algorithm

All the bits were 1; while searching, all were changed to 0 => the first frame examined ends up being the one replaced (the algorithm degenerates to FIFO).
79

File management System

80

File management System


These no longer exist!

5.25-inch and 3.5-inch floppy disks

81

Disk Structure
Hard disk drives are organized as a concentric stack of disks, or platters.
Each platter has 2 surfaces.

82

View of a Hard Drive

83

84

Side View of Cylinders on Disk Drive

(Figure: a double-sided disk drive with cylinders 0 to 79; the matching tracks on heads/sides 0 and 1, around the spindle motor, together comprise cylinder 0.)

85

Cylinders

(Figure: the head stack assembly with heads 0 to 5; the set of tracks under all heads at one arm position forms a cylinder; each track is divided into sectors.)
86

Unix Examples
In the Unix system, dynamic process creation is simple:
creating a process that is an exact copy of the one asking for the creation. No parameters are necessary.

id = fork();

id is the identifier of the new process

The only distinction between the creator process (father) and the new process (child) is in the returned value:
id == -1 : creation failed
id == 0  : we are in the child process
id != 0  : id is the identity of the child process

After the fork() call, the two processes execute the remainder of the same program.
87

Unix Examples
If the father process terminates before its child, the child is adopted by the root process.

The father process can wait for the termination of one of its children using the C function:

id_child = wait(&status);

id_child: the identifier of the terminated child. The wait function puts in status the way the process terminated. If there are active children, the father process waits; if not, the function returns -1.
88

Process creation example


#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(void) {
    pid_t pid = fork();
    int reason;

    switch (pid) {
    case -1:                       /* creation failed */
        printf("Error in the creation\n");
        exit(1);
    case 0:                        /* we are in the child */
        printf("Pid child = %d\n", getpid());
        sleep(20);                 /* sleeps for 20 seconds */
        break;
    default:                       /* we are in the father */
        printf("Pid father = %d\n", getpid());
        printf("wait for child termination...\n");
        pid = wait(&reason);
    } /* switch */
    return 0;
}

89

End

90
