Saed R. Abed
[Computer Engineering Department, Hashemite University] [Adapted from Otmane Ait Mohamed Slides & Computer Organization and Design, Patterson & Hennessy, 2005, UCB]
Outline
- Cache write policy
- Fully Associative Cache
- N-Way Associative Cache
- Block Replacement Policy
- Multilevel Caches (if time)
Processor
- executes programs
- runs on the order of nanoseconds to picoseconds
- needs to access code and data for programs: where are these?
Disk
- HUGE capacity (virtually limitless)
- VERY slow: runs on the order of milliseconds
- so how do we account for this gap?
Memory (DRAM)
- smaller than disk (not limitless capacity)
- contains a subset of the data on disk: basically the portions of programs that are currently being run
- much faster than disk: memory accesses don't slow down the processor quite as much
- Problem: memory is still too slow (hundreds of nanoseconds)
- Solution: add more layers
[Figure: memory-hierarchy pyramid, from the highest level down to Level n (lower levels), showing the size of memory at each level.]
Present the user with as much memory as is available in the cheapest technology. Provide access at the speed offered by the fastest technology.
[Figure: a typical memory hierarchy. On-chip components: Control, Datapath, RegFile, ITLB/DTLB, and separate Instruction and Data Caches; below them a Second Level Cache (SRAM), then Main Memory, then Secondary Memory (Disk). Down the hierarchy, access times grow from 1s to 1,000s of cycles, sizes grow from Ks through 10Ks and Ms to Ts, and cost per byte is lowest at the bottom.]
Random is good: access time is the same for all locations.
DRAM: Dynamic Random Access Memory
- High density (1-transistor cells), low power, cheap, slow
- Dynamic: needs to be refreshed regularly (~ every 4 ms)
SRAM: Static Random Access Memory
- Low density (6-transistor cells), high power, expensive, fast
- Static: content will last "forever" (until power is turned off)
"Non-random" access technology: access time varies from location to location and from time to time (e.g., disk, CDROM)
Caches use SRAM for speed; Main Memory is DRAM for density.
- RAS or Row Access Strobe triggering row decoder - CAS or Column Access Strobe triggering column selector
- Access Time: time between a request and when the word is read or written (read access and write access times can differ)
- Cycle Time: time between successive (read or write) requests
- Usually cycle time > access time
Each upper level is a smaller, faster subset of all lower levels (it contains the most recently used data); the lower levels contain at least all the data in all higher levels.
[Figure: the classic components of a computer — Processor, Memory (passive: where programs and data live when running), and Devices (Input, Output).]
Purpose:
You're writing an assignment paper (Processor) at a table in the Library. The Library is equivalent to disk.
The table is memory:
- smaller capacity: you must return a book when the table fills up
- easier and faster to find a book there once you've already retrieved it
Open books on the table are the cache:
- smaller capacity: only a few open books fit on the table; again, when the table fills up, you must close a book
- much, much faster to retrieve data
- Keep as many recently used books open on the table as possible, since you're likely to use them again
- Also keep as many books on the table as possible, since that's faster than going to the library shelves
Disk contains everything. When the Processor needs something, bring it into all higher levels of memory. The on-chip Memory/Cache contains copies of data in memory that are being used. Memory contains copies of data on disk that are being used. The entire idea is based on Temporal Locality: if we use it now, we'll want to use it again soon (a Big Idea).
Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology at the speed offered by the fastest technology
[Figure: the memory hierarchy as increasing distance from the processor. The Processor exchanges 4-8 bytes (a word) with the L1$; the L1$ exchanges 8-32 bytes (a block) with the L2$; the L2$ exchanges 1 to 4 blocks with Main Memory; Main Memory exchanges data with Secondary Memory.]
Inclusive: what is in the L1$ is a subset of what is in the L2$, which is a subset of what is in Main Memory, which in turn is a subset of what is in Secondary Memory.
Hit: the data appears in some block in the upper level
- Hit Rate: the fraction of memory accesses found in the upper level
- Hit Time: the time to access the upper level, which consists of the RAM access time plus the time to determine hit/miss
Miss: the data is not in the upper level, so it needs to be retrieved from a block in the lower level (Blk Y)
- Miss Rate = 1 - (Hit Rate)
- Miss Penalty: the time to bring a block in from the lower level and replace a block in the upper level with it, plus the time to deliver the block to the processor
- Hit Time << Miss Penalty
How is the hierarchy managed?
- registers <-> memory: by the compiler (programmer?)
- memory <-> disks: by the operating system (virtual memory); virtual-to-physical address mapping assisted by the hardware (TLB); by the programmer (files)
- How do we organize the cache?
- Where does each memory address map to? (Remember that the cache is a subset of memory, so multiple memory addresses map to the same cache location.) (Books from many shelves end up on the same table.)
- How do we know which elements are in the cache?
- How do we quickly locate them?
In a direct-mapped cache, each memory address is associated with one possible block within the cache
- Therefore, we only need to look in a single location in the cache to see if the data exists in the cache
- The block is the unit of transfer between cache and memory
- Address mapping: (block address) modulo (# of blocks in the cache)
- First consider block sizes of one word
Direct-Mapped Cache (#2/2)
[Figure: a 4-byte direct-mapped cache (block size = 1 byte) against a 16-byte memory with locations 0-F. Cache location 0 can be occupied by data from memory locations 0, 4, 8, ... — in general, any memory location that is a multiple of 4.]
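The modulo mapping above can be sketched in a few lines of Python (a minimal sketch; the 4-block geometry mirrors the figure, and the function name is ours, not from the slides):

```python
# Direct-mapped placement: each memory block maps to exactly one cache
# index, computed as (block address) modulo (number of cache blocks).

NUM_BLOCKS = 4  # tiny 4-entry cache, as in the figure

def cache_index(block_address: int) -> int:
    """The single cache location this memory block can occupy."""
    return block_address % NUM_BLOCKS

# Memory blocks 0, 4, 8, 12 all compete for cache index 0:
print([cache_index(b) for b in (0, 4, 8, 12)])  # [0, 0, 0, 0]
```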
Since multiple memory blocks map to the same cache index, how do we tell which one is in there? And how do we select the bytes within the block? Result: divide the memory address into three fields.
tttttttttttttttttttttttttttttioo
- tag: to check if we have the correct block
- index: to select the block
- offset: to select the byte within the block
[Figure: direct-mapped cache hardware. The block index selects a row in the tag RAM and the data RAM; the stored tag is compared against the address tag to produce the hit signal, and a mux driven by the byte offset selects the data.]
All fields are read as unsigned integers.
- Index: specifies the cache index (which row or line of the cache we should look in)
- Offset: once we've found the correct block, specifies which byte within the block we want
- Tag: the remaining bits after offset and index are determined; these are used to distinguish between all the memory addresses that map to the same location
Suppose we have 16 KB of data in a direct-mapped cache with 4-word blocks. Determine the size of the tag, index and offset fields if we're using a 32-bit architecture (i.e., 32 address lines).
Offset:
- need to specify the correct byte within a block
- a block contains 4 words = 16 bytes = 2^4 bytes
- need 4 bits to specify the correct byte
Index:
- need to specify the correct row in the cache
- the cache contains 16 KB = 2^14 bytes
- a block contains 2^4 bytes (4 words)
- # rows/cache = # blocks/cache (since there's one block/row) = (bytes/cache) / (bytes/row) = 2^14 bytes/cache / 2^4 bytes/row = 2^10 rows/cache
- need 10 bits to specify this many rows
Tag:
- tag length = memory address length - offset - index = 32 - 4 - 10 = 18 bits
- so the tag is the leftmost 18 bits of the memory address
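The three field sizes above can be checked mechanically; a quick sketch (variable names are ours):

```python
from math import log2

# 16 KB direct-mapped cache, 4-word (16-byte) blocks, 32-bit addresses.
ADDR_BITS   = 32
CACHE_BYTES = 16 * 1024
BLOCK_BYTES = 4 * 4

offset_bits = int(log2(BLOCK_BYTES))                 # byte within the block
index_bits  = int(log2(CACHE_BYTES // BLOCK_BYTES))  # row (one block per row)
tag_bits    = ADDR_BITS - offset_bits - index_bits   # everything left over

print(offset_bits, index_bits, tag_bits)  # 4 10 18
```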
- All bytes within a block share the same block address, so the offset bits need not be stored in the tag (saving 4 bits)
- The index must be the same for every address within a block, so it's redundant in the tag check and can also be left off to save memory (saving 10 bits in this example)
Write-through: update both the word in the cache block and the corresponding word in memory
Write-back:
- update the word in the cache block only, allowing the memory word to be stale
- => add a dirty bit to each line indicating that memory needs to be updated when the block is replaced
- => the OS flushes the cache before I/O, so that cache values match the memory values changed by I/O
Performance trade-offs?
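The two write policies can be contrasted in a toy model (a hedged sketch — the class and function names are invented for illustration, not a real cache design):

```python
# Toy model of the two write policies. Write-through keeps memory up to
# date on every store; write-back marks the line dirty and defers the
# memory update until the block is replaced.

class Line:
    def __init__(self):
        self.data, self.dirty = None, False

def store_write_through(line, memory, addr, value):
    line.data = value
    memory[addr] = value        # memory never goes stale

def store_write_back(line, addr, value):
    line.data = value
    line.dirty = True           # memory word is now stale

def evict(line, memory, addr):
    if line.dirty:              # write-back: update memory on replacement
        memory[addr] = line.data
    line.data, line.dirty = None, False

mem = {0x10: 0}
wb = Line()
store_write_back(wb, 0x10, 42)
print(mem[0x10])                # still 0: memory is stale
evict(wb, mem, 0x10)
print(mem[0x10])                # 42: updated at replacement time
```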
The off-chip interconnect and memory architecture affects overall system performance dramatically
[Figure: CPU with on-chip cache connected by a bus to Main Memory]
Assume:
1. 1 clock cycle (1 ns) to send the address from the cache to Main Memory
2. 50 ns (50 processor clock cycles) for the DRAM first-word access time, and a 10 ns (10 clock cycle) cycle time for the remaining words in the burst (SDRAM)
3. 1 clock cycle (1 ns) to return a word of data from Main Memory to the cache
If the block size is one word, then for a memory access due to a cache miss, the pipeline will have to stall for the number of cycles required to return one data word from memory:
- 1 cycle to send the address
- 50 cycles to read the DRAM
- 1 cycle to return the data
- 52 total clock cycles miss penalty
What if the block size is four words and a (DDR) SDRAM is used?
- 1 cycle to send the first address
- 50 + 3*10 = 80 cycles to read the DRAM
- 1 cycle to return the last data word
- 82 total clock cycles miss penalty
The number of bytes transferred per clock cycle (bandwidth) for a single miss is
(4 x 4)/82 ≈ 0.195 bytes per clock
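The miss-penalty arithmetic above follows directly from the assumed timings; a quick check in Python:

```python
# Miss penalty = cycles to send the address + DRAM access cycles
# + cycles to return the (last) data word, using the slide's timings.

SEND_ADDR, FIRST_WORD, BURST_CYCLE, RETURN_DATA = 1, 50, 10, 1

def miss_penalty(words_per_block: int) -> int:
    dram_cycles = FIRST_WORD + (words_per_block - 1) * BURST_CYCLE
    return SEND_ADDR + dram_cycles + RETURN_DATA

print(miss_penalty(1))          # 52 cycles: one-word block
print(miss_penalty(4))          # 82 cycles: four-word burst
print(16 / miss_penalty(4))     # ~0.195 bytes per clock for a 16-byte block
```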
Hardware Issues
[Figure: two memory organizations on the bus — a single monolithic Memory versus four interleaved memory banks (bank 0 through bank 3).]
5.3 & 5.5 Improving Cache Performance: Types of Cache Misses (#1/2)
Compulsory Misses
- occur when a program is first started
- the cache does not contain any of that program's data yet, so misses are bound to occur
- can't be avoided easily!
Types of Cache Misses (#2/2)
Conflict Misses
- a miss that occurs because two distinct memory addresses map to the same cache location
- two blocks (which happen to map to the same location) can keep overwriting each other
- a big problem in direct-mapped caches
- how do we lessen the effect of these?
Solution 2: Multiple distinct blocks can fit in the same Cache Index?
Fully Associative Cache:
- no rows (no index field): any block can go anywhere in the cache
- must compare with all the tags in the entire cache to see if the data is there
[Figure: fully associative cache hardware — only a tag and a byte-offset field remain. The tag is matched in a CAM against every entry in parallel, and the matching entry's data RAM line is read out.]
We need a hardware comparator for every single entry: if we have 64 KB of data in the cache with 4 B entries, we need 16K comparators: infeasible.
Capacity Misses
- a miss that occurs because the cache has a limited size
- a miss that would not occur if we increased the size of the cache
- a sketchy definition, so just get the general idea
- Tag: same as before
- Offset: same as before
- Index: points us to the correct row (called a set in this case)
- each set contains multiple blocks
- once we've found the correct set, we must compare with all the tags in that set to find our data
[Figure: N-way set-associative cache hardware — a decoder selects the set from the index, the tag is compared against each way's stored tag, and muxes select the matching way's data from the data RAM.]
Summary:
- the cache is direct-mapped with respect to sets
- each set is fully associative
- basically N direct-mapped caches working in parallel: each has its own valid bit and data
1. Find the correct set using the Index value.
2. Compare the Tag with all the Tag values in the determined set.
3. If a match occurs, it's a hit; otherwise a miss.
4. Finally, use the Offset field as usual to find the desired data within the desired block.
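The steps above can be sketched as code (the geometry constants and names are illustrative assumptions, not from the slides):

```python
# N-way set-associative lookup: the index picks a set, then the tag is
# compared against every way in that set (hardware does this in
# parallel; the loop below just models it).

NUM_SETS, BLOCK_BYTES = 4, 16

def split(addr):
    offset = addr % BLOCK_BYTES
    index = (addr // BLOCK_BYTES) % NUM_SETS
    tag = addr // (BLOCK_BYTES * NUM_SETS)
    return tag, index, offset

def lookup(cache, addr):
    """cache[index] is a list of (valid, tag, block) ways."""
    tag, index, offset = split(addr)
    for valid, way_tag, block in cache[index]:
        if valid and way_tag == tag:      # step 3: tag match => hit
            return block[offset]          # step 4: offset selects the byte
    return None                           # miss

# A 2-way cache with one valid block (bytes 0..15) at tag 1, set 2:
cache = {i: [] for i in range(NUM_SETS)}
cache[2] = [(True, 1, bytes(range(16))), (False, 0, b"")]
addr = 1 * NUM_SETS * BLOCK_BYTES + 2 * BLOCK_BYTES + 5  # tag 1, set 2, offset 5
print(lookup(cache, addr))                 # 5 (hit)
print(lookup(cache, addr + BLOCK_BYTES))   # None (lands in empty set -> miss)
```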
- even a 2-way set-associative cache avoids a lot of conflict misses
- the hardware cost isn't that bad: we only need N comparators
- it's Direct-Mapped if it's 1-way set associative
- it's Fully Associative if it's M-way set associative (M = total number of blocks)
- so these two are just special cases of the more general set-associative design
- Direct-Mapped Cache: the index completely specifies which position a block can go in on a miss
- N-Way Set Associative (N > 1): the index specifies a set, but the block can occupy any position within the set on a miss
- Fully Associative: the block can be written into any position
Question: if we have the choice, where should we write an incoming block?
Solution:
- If there are any locations with the valid bit off (empty), then usually write the new block into the first one.
- If all possible locations already have a valid block, we must pick a replacement policy: the rule by which we determine which block gets cached out on a miss.
LRU (Least Recently Used)
- Idea: cache out the block which has been accessed (read or written) least recently
- Pro: temporal locality => recent past use implies likely future use; in fact, this is a very effective policy
- Con: with 2-way set associative it's easy to keep track (one LRU bit); with 4-way or greater, it requires complicated hardware and much time to keep track
We have a 2-way set-associative cache with a four-word total capacity and one-word blocks. We perform the following word accesses (ignore bytes for this problem): 0, 2, 0, 1, 4, 0, 2, 3, 5, 4. How many hits and how many misses will there be with the LRU block replacement policy?
Addresses: 0, 2, 0, 1, 4, 0, ...
- 0: miss, bring into set 0 (loc 0)
- 2: miss, bring into set 0 (loc 1)
- 0: hit
- 1: miss, bring into set 1 (loc 0)
- 4: miss, bring into set 0 (loc 1, replace 2)
- 0: hit
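Finishing the trace by hand continues the same pattern; a small simulator (a sketch, names ours) confirms the totals for the full access sequence:

```python
# 2-way set-associative cache, four one-word blocks total => 2 sets,
# LRU replacement. Each set is a list ordered least- to most-recently used.

NUM_SETS, WAYS = 2, 2

def run(accesses):
    sets = {s: [] for s in range(NUM_SETS)}
    hits = misses = 0
    for addr in accesses:
        ways = sets[addr % NUM_SETS]    # index = address mod #sets
        if addr in ways:
            hits += 1
            ways.remove(addr)           # re-insert as most recent
        else:
            misses += 1
            if len(ways) == WAYS:
                ways.pop(0)             # evict the LRU block
        ways.append(addr)
    return hits, misses

print(run([0, 2, 0, 1, 4, 0, 2, 3, 5, 4]))  # (2, 8): 2 hits, 8 misses
```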
Processor performs arithmetic Memory stores data Caches simply make data transfers go faster
- Each level of the memory hierarchy is just a subset of the level below it
- Caches speed things up due to temporal locality: store data used recently
- A block size > 1 word speeds things up due to spatial locality: store words adjacent to the ones used recently
- size of cache: speed vs. capacity
- direct-mapped vs. associative
- for N-way set associative: choice of N
- block replacement policy
- 2nd-level cache?
- write-through vs. write-back?
Use a performance model to pick between choices, depending on programs, technology, budget, ...
Addresses are divided (for convenience) into Tag, Index, and Byte Offset fields. Example addresses (Index and Offset shown):
- Index 0000000001, Offset 0100
- Index 0000000001, Offset 1100
- Index 0000000011, Offset 0100
- Index 0000000001, Offset 0100
Will see 3 types of events:
- cache miss: nothing in the cache at the appropriate block, so fetch from memory
- cache hit: the cache block is valid and contains the proper address, so read the desired word
- cache miss, block replacement: the wrong data is in the cache at the appropriate block, so discard it and fetch the desired data from memory
Valid bit: determines whether anything is stored in that row (when the computer is initially turned on, all entries are invalid)
[Slide figures: a step-by-step animation of the 1024-row direct-mapped cache (block fields 0x0-3, 0x4-7, 0x8-b, 0xc-f). A sequence of reads with tag 0...0 first miss on invalid rows ("No valid data"), load words a, b, d, e into the appropriate blocks ("So read block 3"), and then hit on re-access; rows 1022 and 1023 remain invalid throughout.]
Read address 0x00000030? Tag = 000000000000000000, Index = 0000000011, Offset = 0000
Read address 0x0000001c? Tag = 000000000000000000, Index = 0000000001, Offset = 1100
Possible outcomes for each access: Hit, Miss, or Miss with replace; the values returned come from a, b, c, d, e, ..., k, l.
Cache contents:
Index  Valid  Tag  0x0-3  0x4-7  0x8-b  0xc-f
0      0
1      1      2    i      j      k      l
2      0
3      1      0    e      f      g      h
4      0
5      0
6      0
7      0
Answers
- 0x00000030: Hit — index 3 is valid with a matching tag, so return the word at offset 0 of that block: e
- 0x0000001c: Miss with replacement — index 1 is valid but holds a different tag, so fetch the block from memory and return d
Memory:
Address   Value of Word
00000010  a
00000014  b
00000018  c
0000001c  d
00000030  e
00000034  f
00000038  g
0000003c  h
00008010  i
00008014  j
00008018  k
0000801c  l
Values read from the Cache must equal memory values whether or not cached:
- 0x00000030 = e
- 0x0000001c = d
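Both outcomes follow from splitting each address with the 18/10/4 field widths derived earlier; a quick check:

```python
# Split a 32-bit address into (tag, index, offset) for the 16 KB
# direct-mapped cache: 4 offset bits, 10 index bits, 18 tag bits.

OFFSET_BITS, INDEX_BITS = 4, 10

def split(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(split(0x00000030))  # (0, 3, 0): index 3 holds tag 0 -> hit, word e
print(split(0x0000001c))  # (0, 1, 12): index 1 holds tag 2 -> miss, replace
```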
Spatial Locality: if we access a given word, we're likely to access other nearby words soon (Another Big Idea). Very applicable with the Stored-Program Concept: if we execute a given instruction, it's likely that we'll execute the next few as well. Works nicely for sequential array accesses too.
Drawbacks of a larger block size:
- on a miss, it takes a longer time to load the new block from the next level
- if the block size is too big relative to the cache size, then there are too few blocks
- Hit Time = time to find and retrieve data from the current level cache
- Miss Penalty = average time to retrieve data on a current-level miss (includes the possibility of misses on successive levels of the memory hierarchy)
- Hit Rate = % of requests that are found in the current level cache
- Miss Rate = 1 - Hit Rate
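These definitions combine into the usual average memory access time formula, AMAT = Hit Time + Miss Rate x Miss Penalty (the numbers below are made up for illustration):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in the same units as the inputs."""
    return hit_time + miss_rate * miss_penalty

# e.g. a 1-cycle hit, 5% miss rate, 50-cycle miss penalty:
print(amat(1, 0.05, 50))  # 3.5 cycles on average
```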
[Figure: a cache line with a Tag and Cache Data bytes B3 B2 B1 B0; block size = 4 bytes.]
Continually loading data into the cache but discarding it (forcing it out) before using it again: the nightmare for a cache designer, the Ping Pong Effect.
Block Size Tradeoff Conclusions
[Figure: three curves versus block size. Miss Penalty rises with block size; Miss Rate first falls (exploits spatial locality) and then rises again (fewer blocks compromises temporal locality); Average Access Time therefore has a minimum, increasing at large block sizes due to the increased miss penalty and miss rate.]
Things to Remember
Cache Access involves 3 types of events:
- cache miss: nothing in the cache at the appropriate block, so fetch from memory
- cache hit: the cache block is valid and contains the proper address, so read the desired word
- cache miss, block replacement: the wrong data is in the cache at the appropriate block, so discard it and fetch the desired data from memory