
Student ID: 2486313 Cache Memory Simulator

Introduction:

The professor wants us to study the working of a cache memory simulator through a quantitative and qualitative assignment.
Cache is the simplest cost-effective way to achieve high-speed memory, and its performance is extremely important for high-speed computers.

Simulation is a very popular way to study and evaluate computer architectures, obtaining
an acceptable estimation of performance before a system is built. For the past two decades,
a primary means of cache memory analysis has been the use of traces of memory access
patterns to drive simulators that determine the miss rate of different cache designs.
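
The idea behind trace-driven simulation can be shown with a minimal sketch (hypothetical Python, not the simulator used in this assignment): a list of word addresses stands in for a memory trace and drives a small direct-mapped cache model that counts hits and misses.

# Minimal trace-driven cache sketch (illustrative Python, not the LSBU simulator).
# A direct-mapped cache with NUM_LINES lines and BLOCK_WORDS words per block.

NUM_LINES = 8       # hypothetical cache capacity in lines
BLOCK_WORDS = 1     # hypothetical block size in words

def simulate(trace):
    """Return (hits, misses) for a list of word addresses."""
    lines = [None] * NUM_LINES          # each entry holds the tag stored in that line
    hits = misses = 0
    for addr in trace:
        block = addr // BLOCK_WORDS     # which memory block the word belongs to
        index = block % NUM_LINES       # direct-mapped: the block's single candidate line
        tag = block // NUM_LINES
        if lines[index] == tag:
            hits += 1
        else:
            misses += 1
            lines[index] = tag          # fetch the block into its line
    return hits, misses

# A toy trace: two passes over 12 consecutive words, as a nested loop might produce.
hits, misses = simulate(list(range(12)) * 2)
print(f"hits={hits} misses={misses} hit%={100 * hits / (hits + misses):.0f}")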





The technical/observation sheets are attached below.

Replacement: LRU
Placement: Direct Map
Loop: Nested size 24,16,8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         1      Fast   37    144    26     107     144    74
16        1      Fast   128   144    89     16      144    11
32        1      Fast   128   144    89     16      144    11

(Chart: hit and miss percentages for the three rows in the table above.)

Observation:

If each set holds exactly one block, the cache is said to be direct-mapped. If a cache is neither direct-mapped nor fully associative, it is called set-associative.
When we change the loop sizes to 16 and 8, the hit percentage decreases by 1% and the miss ratio increases by 1%. Therefore, the hit and miss rates depend on the size of the loops and on the block size, as the address-splitting sketch below illustrates.
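
As a minimal sketch of how direct-mapped placement works (hypothetical Python with made-up sizes of 16 lines and 4 words per block), each address is split into a tag, a line index and a block offset, so every block has exactly one line it can occupy:

# Hypothetical address breakdown for a direct-mapped cache
# (illustrative sizes: 16 lines, 4 words per block).

LINES = 16
BLOCK_WORDS = 4

def split_address(addr):
    """Split a word address into (tag, index, offset) for a direct-mapped cache."""
    offset = addr % BLOCK_WORDS      # word within the block
    block = addr // BLOCK_WORDS      # memory block number
    index = block % LINES            # the single line this block may occupy
    tag = block // LINES             # distinguishes blocks that share that line
    return tag, index, offset

print(split_address(100))            # word 100 -> block 25 -> tag 1, line 9, offset 0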
Replacement: LRU
Placement: Fully Associate
Loop: Nested size 24,16,8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         1      Fast   42    53     79     11      53     21
16        1      Fast   42    53     79     11      53     21
32        1      Fast   42    53     79     11      53     21




(Chart: hit and miss percentages for the three rows in the table above.)

Observation:

If there is only one set, the cache is called fully associative, because every tag must be checked to determine whether a reference hits or misses.
Typically, fetch sizes of 8, 16 or 32 words work best, depending on the memory characteristics.
A loop buffer is a small, very high-speed memory maintained by the instruction fetch stage of the pipeline, containing the n most recently fetched instructions in sequence.

When we change the nested loop to outer 12 and inner 6, with a block size of 4 and a cache capacity of 8, the hit percentage decreases by 26% and the miss ratio increases by 26%.
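
A minimal sketch of a fully-associative cache with LRU replacement (hypothetical Python, not the assignment simulator) makes the two points above concrete: a lookup conceptually compares against every stored tag, and on a miss the least recently used block is evicted:

from collections import OrderedDict

# Minimal fully-associative cache with LRU replacement (illustrative sketch).
class FullyAssociativeLRU:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()          # block number -> None, ordered by recency

    def access(self, block):
        """Return True on a hit, False on a miss."""
        if block in self.blocks:             # conceptually: compare against every stored tag
            self.blocks.move_to_end(block)   # mark as most recently used
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        self.blocks[block] = None
        return False

cache = FullyAssociativeLRU(capacity_blocks=8)
hits = sum(cache.access(b) for b in [0, 1, 2, 0, 1, 9, 0])
print(hits)                                  # 3: blocks 0 and 1 stay resident and hit again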

Replacement: LRU
Placement: Direct Map
Loop: Nested size 24,16,8




Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         4      Fast   103   144    72     41      144    28
16        4      Fast   50    53     94     3       53     6
32        4      Fast   50    53     94     3       53     6

(Chart: hit and miss percentages for the three rows in the table above.)

Observation:

If each set holds exactly one block, the cache is said to be direct-mapped. If a cache is neither direct-mapped nor fully associative, it is called set-associative.
Typically, fetch sizes of 8, 16 or 32 words work best, depending on the memory characteristics.
A loop buffer is a small, very high-speed memory maintained by the instruction fetch stage of the pipeline, containing the n most recently fetched instructions in sequence.
After comparing the direct-mapped and fully-associative values with a block size of 4, the hit ratio increased by 47% and the miss ratio by 7%. Therefore, increasing the block size produces only minor further changes in the hit and miss figures, as the sketch below suggests.
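
The effect of block size on a sequential (nested-loop) access pattern can be illustrated with a small sketch (hypothetical Python, ignoring capacity limits): one miss loads a whole block, so the remaining words of that block hit.

# One miss loads a whole block of block_words consecutive words, so the remaining
# words of that block hit. Illustrative sketch; capacity limits are ignored.

def misses_for_sequential_walk(num_words, block_words):
    """Count compulsory misses for one pass over num_words consecutive words."""
    loaded = set()
    misses = 0
    for addr in range(num_words):
        block = addr // block_words
        if block not in loaded:
            misses += 1                  # first word of a block misses and loads it
            loaded.add(block)
    return misses

for bw in (1, 4):
    print(f"block={bw} word(s): {misses_for_sequential_walk(24, bw)} misses in 24 accesses")
# block=1 -> 24 misses; block=4 -> 6 misses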

Replacement: LRU
Placement: Fully Associate
Loop: Nested size 24,16,8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         4      Fast   103   144    72     41      144    28
16        4      Fast   140   144    97     4       144    3
32        4      Fast   140   144    97     4       144    3

(Chart: hit and miss percentages for the three rows in the table above.)

Observation:

If there is only one set, the cache is called fully associative, because every tag must be checked to determine whether a reference hits or misses.
Typically, fetch sizes of 8, 16 or 32 words work best, depending on the memory characteristics.
A loop buffer is a small, very high-speed memory maintained by the instruction fetch stage of the pipeline, containing the n most recently fetched instructions in sequence.
After comparing the direct-mapped and fully-associative values with a block size of 4, the hit ratio increased by 47% and the miss ratio by 7%. Therefore, increasing the block size produces only minor further changes in the hit and miss figures; the percentage columns themselves are derived as shown below.
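
The percentage columns in these tables follow directly from the hit and miss counts; for example, the 16-capacity, block-4, fully-associative row works out as follows (Python used only as a calculator):

# Deriving the % columns from the 16-capacity, block-4, fully-associative row:
# 140 hits and 4 misses out of 144 accesses.
hits, misses = 140, 4
total = hits + misses
print(f"hit%  = {100 * hits / total:.0f}")    # 97
print(f"miss% = {100 * misses / total:.0f}")  # 3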

Replacement: Random
Placement: Direct Map
Loop: Nested size 24,16,8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         1      Fast   42    53     79     11      53     21
16        1      Fast   42    53     79     11      53     21
32        1      Fast   42    53     79     11      53     21

(Chart: hit and miss percentages for the three rows in the table above.)

Observation:

Incorporating a static random access memory (SRAM) cache per memory bank can effectively double the peak data bandwidth.
When we change the loop sizes to 12 and 6, the hit percentage increases by 1% and the miss ratio decreases by 9%.
I think random replacement works best in combination with the 12, 6 loops.
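
A minimal sketch of random replacement (hypothetical Python) shows where the policy actually makes a choice; note that in a strictly direct-mapped cache each block has only one candidate line, so the replacement policy has no real decision to make and only matters once a set holds more than one line.

import random

# Illustrative random replacement within a set of candidate lines.
# In a direct-mapped cache the set holds exactly one line, so the choice below
# is trivial; the policy only matters when associativity is greater than one.

def choose_victim(set_lines):
    """Pick a line to evict: any empty line first, otherwise a random valid line."""
    for i, tag in enumerate(set_lines):
        if tag is None:
            return i                              # fill an empty line before evicting
    return random.randrange(len(set_lines))       # random replacement among valid lines

full_set = ["tagA", "tagB", "tagC", "tagD"]       # a full 4-way set with made-up tags
print("evict line", choose_victim(full_set))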

Replacement: Random
Placement: Fully Associate
Loop: Nested size 24,16,8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         1      Fast   42    53     79     14      53     26
16        1      Fast   42    53     79     11      53     21
32        1      Fast   42    53     79     11      53     21




(Chart: hit and miss percentages for the three rows in the table above.)

Observation:

After changing the loop sizes to outer 12 and inner 6, the hit percentage increased by 7% and the miss ratio decreased by 12%.
Hence, changing the loop size produces significant changes in the results.

Replacement: Random
Placement: Direct Map
Loop: Nested size 24,16,8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         4      Fast   50    53     94     3       53     6
16        4      Fast   50    53     94     3       53     6
32        4      Fast   50    53     94     3       53     6

(Chart: hit and miss percentages for the three rows in the table above.)




Observation:

After selecting a different loop size, the hit percentage decreased by 30% and the miss ratio increased by 6%.

Replacement: Random
Placement: Fully Associate
Loop: Nested size 24,16,8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         4      Fast   46    53     87     9       53     17
16        4      Fast   50    53     94     3       53     6
32        4      Fast   50    53     94     3       53     6

(Chart: hit and miss percentages for the three rows in the table above.)

Observation:




After comparing fully-associative random and direct-mapped random, only a minor difference is found between the two sets of readings. The hit ratio decreased by 7%, while the average miss ratio is about 9.6%, as the calculation below shows.
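
The averaged miss ratio quoted above can be reproduced directly from the miss percentages in the last table (Python used only as a calculator):

# Averaging the miss percentages from the three rows above (17, 6 and 6).
miss_percentages = [17, 6, 6]
print(sum(miss_percentages) / len(miss_percentages))   # 9.67, quoted above as roughly 9.6%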

Conclusion:

The simulator is a flexible, multi-lateral cache simulator developed to help designers in the middle of the design cycle make cache configuration decisions that best help attain the desired performance goals of the target processor. It is an event-driven, timing-sensitive simulator based on the Latency Effects cache timing model. It can easily be configured to model various multilateral cache configurations using its library of cache state and data movement routines, and it can easily be attached to a wide range of event-driven processor simulators.

We showed implementations of two different cache configurations and their resulting performance. These configurations included a direct-mapped cache and a fully-associative cache.

The simulator provides many statistics that help explain the performance of the potential cache configurations when running target workloads. Information regarding hit, miss and delayed-hit ratios describes each program's memory access characteristics, while block size and reuse information describes the actual data usage within each program. These statistics can all be used to explain the performance of each cache configuration, as well as to help drive the development of future cache designs that better handle the reference streams presented by the target workloads.




Reference:

1. http://myweb.lsbu.ac.uk/~chalkbs/research/CacheSimDescription.htm [cited 18 March 2006].

2. http://www.zib.de/schintke/ldasim/ [cited 18 March 2006].

3. http://www.cs.ucr.edu/~yluo/cs161L/labs/node6.html [cited 18 March 2006].

4. http://www.cs.ucsd.edu/users/calder/classes/win06/240A/projects/proj2.html [cited 18 March 2006].

Upload link:

