
Virtual Memory & Demand Paging

accommodating 32-bit (& larger) addresses

I wish we were still doing NAND gates...

Finally! A Lecture on Something I Understand PAGING!

6.004 Fall 97...

L41 11/13 1

SAW 8/7/00 10:07

Top 10 reasons for a BIG Address Space

10. Keeping TI's memory division in business.
9. Unique addresses within every internet host.
8. Generating good 6.004 Final problems.
7. Performing ADD via table lookup.
6. Support for meaningless advertising hype.
5. Emulation of a Turing Machine's tape.
4. Supporting lazy programmers.
3. Isolating ISA from ______________________
2. Usage _________________________________
1. Programming ____________________________


Squandering Address Space...


[Figure: one Address Space shared among code, stack, and data]

A COMPLEX, modular program (e.g., EMACS)... modules like DOCTOR, LIFE, convert-to-piglatin
STACK: How much to reserve? (consider RECURSION!)
DATA: N variable-size records... Bound N? Bound Size?

OBSERVATIONS:
Can't BOUND each usage... without compromising use.
Actual use is SPARSE.
Working set even MORE sparse.

Transparently Extending the Memory Hierarchy

[Figure: CPU <-> FAST STATIC "CACHE" <-> DYNAMIC RAM "MAIN MEMORY" <-> DISK "Secondary Storage"]

So, we've used SMALL fast memory + BIG slow memory to fake BIG FAST memory. Can we combine RAM and DISK to fake DISK size at RAM speeds?

VIRTUAL MEMORY

use of RAM as cache to a much larger storage pool, on slower devices
TRANSPARENCY - VM locations "look" the same to the program whether on DISK or in RAM
ISOLATION of RAM size from software


Virtual Memory
ILLUSION: Huge memory (2^32 bytes? 2^64 bytes?)
ACTIVE USAGE: Tiny fraction (2^20 bytes?)
HARDWARE: 2^20 bytes of RAM, 2^32 bytes of DISK... ...maybe much less!

ELEMENTS OF DECEIT:
Partition memory into Pages (2K-4K-8K)
MAP a few to RAM, others to DISK
Keep HOT pages in RAM.

[Figure: CPU issues VA -> MMU -> PA -> RAM]


Simple Pagemap Design


FUNCTION: Given a Virtual Adr, Map to a PHYSICAL adr OR Cause a PAGE FAULT allowing page replacement.

[Figure: the PAGEMAP, indexed by VirtPg #, holds D and R bits plus a PhysPg # per entry; resident pages of Virtual Memory map into Physical Memory, non-resident entries (X) fault]

PAGEMAP: "DIRTY" and "RESIDENT" bits in each entry.

Why use HIGH address bits to select the page? ... LOCALITY. Keep related data on the same page.
Why use LOW address bits to select the cache line? ... LOCALITY. Keep related data from competing for the same cache lines.

Virtual Memory vs. Cache

CACHE:
RELATIVELY SHORT BLOCKS
FEW LINES: SCARCE RESOURCE
MISS TIME: 3X-10X HIT TIMES.

[Figure: cache line = TAG + DATA <A>, <B>; the tag is compared (=?) against the address, with MAIN MEMORY behind]

VM:
VERY LONG ACCESS TIME, FAST TRANSFER => MISS TIME: ~10^5 x Hit Time (=> WRITE-BACK!) => Long Blocks (PAGES) in RAM.
PLENTIFUL LINES (e.g., a tag for each virtual page)
TAGs stored in PAGEMAP; DATA in Physical Mem

[Figure: virtual address = VPAGE NO. + OFFSET; the VPAGE NO. indexes the PAGEMAP, which points into PHYSICAL MEMORY]

Virtual Memory: the 6-1 view


The address translation:

[Figure: the PAGEMAP, with D and R columns (sample R bits 1 1 0 0 1 1 0), maps Virtual Memory pages into Physical Memory; non-resident entries (X) have no physical page]

Pagemap Characteristics:
One entry per ____________________ Page!
RESIDENT bit = 1 for pages stored in RAM, or 0 for non-resident (disk or unallocated)
Contains the PHYSICAL page number of each resident page
DIRTY bit says we've changed this page since loading it from disk
PAGE FAULT on R=0

Virtual Memory: the 6-3 view


Problem: Translate VIRTUAL ADDRESS to PHYSICAL ADDRESS (VirtPg # -> PhysPg #)

Algorithm: Demand Paging


int VtoP(VPageNo, PO) {
    if (!R[VPageNo]) PageFault(VPageNo);
    return (PA[VPageNo] << p) + PO;
}

/* Handle a missing page... */
PageFault(VPageNo) {
    int i = SelectLRUPage();            /* pick a victim page           */
    WritePage(DiskAdr[i], PA[i]);       /* write the victim back to disk */
    R[i] = 0;
    PA[VPageNo] = PA[i];                /* reuse the victim's physical page */
    ReadPage(DiskAdr[VPageNo], PA[VPageNo]);
    R[VPageNo] = 1;
}


The HW/SW Balance


IDEA: DEVOTE HARDWARE TO HIGH-TRAFFIC, PERFORMANCE-CRITICAL PATH USE (slow, cheap) SOFTWARE to handle exceptional cases.
int VtoP(VPageNo, PO) {                 /* HARDWARE: the high-traffic path */
    if (!R[VPageNo]) PageFault(VPageNo);
    return (PA[VPageNo] << p) + PO;
}

/* SOFTWARE: handle a missing page... */
PageFault(VPageNo) {
    int i = SelectLRUPage();
    WritePage(DiskAdr[i], PA[i]);
    R[i] = 0;
    PA[VPageNo] = PA[i];
    ReadPage(DiskAdr[VPageNo], PA[VPageNo]);
    R[VPageNo] = 1;
}

HARDWARE performs address translation and DETECTS page faults, whence:
The running program is suspended ("interrupted");
A PageFault(...) call is forced;
On return from PageFault, the running program continues.

Pagemap Arithmetic
[Figure: virtual address = v-bit VPageNo + p-bit PO; the PAGEMAP entry holds D and R bits plus an m-bit PPageNo; physical address = PPageNo + PO, into PHYSICAL MEMORY]
(v+p)        bits in virtual address
(m+p)        bits in physical address
2^v          number of VIRTUAL pages
2^m          # of main memory (PHYSICAL) pages
2^p          page size
2^(v+p)      virtual memory locations
2^(m+p)      physical memory locations
2^v x (m+2)  pagemap size, in bits (m-bit PPageNo + D + R per entry)

TYPICAL PAGE SIZE: 1K - 8K bytes.
TYPICAL (v+p): 32 (or more!) bits.
TYPICAL (m+p): 20-28 bits (1-256 MB).

Pagemap Arithmetic -- continued

SUPPOSE... 32-bit VA, 2^13-byte page size (8KB), 2^20 bytes of RAM (1MB)

THEN:
# Physical Pages = ____________________
# Virtual Pages = _____________________
# Page Map Entries = _________________

Use SRAM for the page map??? OUCH!

RAM-resident page maps


SMALL page maps can use dedicated SRAM... ... gets expensive for bigger ones. SOLUTION: Move page map to MAIN MEMORY.
[Figure: the pagemap now resides in Physical Memory alongside the data pages]

int VtoP(VPageNo, PO) {
    if (!R[VPageNo]) PageFault(VPageNo);
    return (PA[VPageNo] << p) + PO;     /* parentheses needed: + binds tighter than << */
}

PROBLEM: 2X performance hit! Each memory reference now takes 2 accesses!

Translation Lookaside Buffer


PROBLEM: 2X Performance hit... ... Each memory reference now takes 2 accesses! SOLUTION: CACHE the pagemap entries!

[Figure: CPU -> TLB (a dedicated cache of pagemap entries) -> Physical Memory, which holds the full pagemap]

IDEA: LOCALITY in memory reference patterns ==> SUPER locality in references to the pagemap.

MANY variations on this theme -- e.g.:
Sparse pagemap storage
Paging the pagemap



Optimizing for SPARSELY POPULATED VM

[Figure: CPU -> TLB (dedicated cache) -> Physical Memory, which holds a sparse pagemap data structure]

On TLB Miss:
Look up the VPN in a data structure (say, a list of VPN-PPN pairs);
Use (e.g.) hash coding to speed up the search.

IDEA (Demand Paging): Store only pagemap entries for ALLOCATED pages! Allocate new entries on demand (say, on stack overflow).

TIME PENALTY? LOW if the TLB hit rate is high!

Moving page map to DISK...


Given a HUGE virtual memory, even storing the pagemap in RAM may be too expensive... seems like we could store little-used parts of it on the disk. SAY, isn't that what VIRTUAL MEMORY is for???
[Figure: the pagemap stored at VA 0, with consecutive entries for VP 0, VP 1, VP 2, ...]

SUPPOSE we store the page map in virtual memory starting at (virtual) address 0, with 4 bytes/entry and 4KB pages. Then there's _____________ entries per page of pagemap... The pagemap entry for VP v is stored at virtual address v*4.

Virtual Storage of PageMap


Scheme:
Page table stored starting at VA 0
Mapping of page 0 "wired in": vpn 0 -> ppn 0
4096-byte (2^12) page size
Note the RECURSION to map PT adrs! Each level removes 10 VAdr bits (at 4 bytes per page table entry)

/* Translate VA to PA...
 * assumes page resident, 4096-byte pages */
int VtoP(vpn, PO) {
    if (vpn == 0) return PO;
    else return PO + (VFetch(vpn*4) << 12);   /* offset + PMap[vpn]; parentheses needed */
}

/* Fetch contents of word at VIRTUAL address vadr */
VFetch(vadr) { return PFetch(VtoP(vadr>>12, vadr&0xFFF)); }

/* Fetch contents of word at PHYSICAL address padr */
PFetch(padr) { ... }

Contexts
a CONTEXT is a mapping of VIRTUAL to PHYSICAL locations, as dictated by pagemap contents.
[Figure: a PAGEMAP (with D and R bits) maps Virtual Memory pages into Physical Memory; non-resident entries marked X]

SEVERAL programs may be simultaneously loaded into main memory, each in its separate context:

[Figure: Virtual Memory 1 and Virtual Memory 2, each with its own pagemap, sharing one Physical Memory]

________________ ________________: RELOAD PAGEMAP



Roles of CONTEXTs ... ... a preview


[Figure: Virtual Memory 1 and Virtual Memory 2 mapped into one Physical Memory]

1. TIMESHARING among several programs
   Separate context for each program
   OS loads the appropriate context into the pagemap when switching among pgms
2. Separate context for the OS Kernel (e.g., interrupt handlers)... Kernel vs User contexts
   Switch to the Kernel context on interrupt; switch back on interrupt return.
   HARDWARE SUPPORT: 2 HW pagemaps

Using caches with virtual memory

Virtual Cache: Tags match virtual addresses

[Figure: CPU -> CACHE -> MMU -> DYNAMIC RAM, with DISK behind]

FAST: No MMU time on a HIT.
Problem: Cache invalid after a context switch.

Physical Cache: Tags match physical addresses

[Figure: CPU -> MMU -> CACHE -> DYNAMIC RAM, with DISK behind]

Avoids STALE CACHE DATA after a context switch.
SLOW: MMU time on every HIT.


BEST OF BOTH WORLDS

[Figure: CPU drives the MMU and the CACHE in parallel, both feeding DYNAMIC RAM, with DISK behind]

Physical vs. Virtual Cache:

OBSERVATION: If cache line selection is based on unmapped page offset bits, RAM access in a physical cache can overlap pagemap access:

[Figure: the virtual address splits at bit p; the high bits feed the PAGEMAP lookup while the low (unmapped) bits supply the CACHE select bits, so the PHYSICAL cache access is overlapped with the pagemap lookup]

Virtual Memory: Issues


Integration with memory system hardware:
Translation Lookaside Buffers, caches
Multi-level memory mapping
Hardware support for multiple contexts

Integration with system software:
Page Fault handling
Working set management heuristics
Contexts as a programming construct
Transparency issues & compromises
