
ADVANCED COMPUTER ARCHITECTURE 12SCS23

Dept of CS&E, SJBIT Page 1



UNIVERSITY QUESTION PAPERS SOLUTIONS

UNIT I

1. Define Computer Architecture. Illustrate the seven dimensions of an ISA? (10 marks)
2012
Computer architecture is the task of identifying the attributes that are important for a
new computer and designing the system to maximize performance while staying within
cost, power and availability constraints. The task has a few important aspects, such as instruction
set design, functional organization, logic design and implementation.

Instruction Set Architecture (ISA)
ISA refers to the actual programmer-visible instruction set. The ISA serves as the boundary
between the software and hardware. The seven dimensions of the ISA are:

i) Class of ISA: Nearly all ISAs today are classified as General-Purpose-Register
architectures. The operands are either Registers or Memory locations. The two popular versions
of this class are:
Register-Memory ISAs, e.g. the ISA of the 80x86, which can access memory as part of many
instructions.
Load-Store ISAs, e.g. the ISA of MIPS, which can access memory only with load or store instructions.

ii) Memory addressing: Byte addressing scheme is most widely used in all desktop and
server computers. Both 80x86 and MIPS use byte addressing. In case of MIPS the object
must be aligned. An access to an object of s bytes at byte address A is aligned if A mod s = 0.
80x86 does not require alignment. Accesses are faster if operands are aligned.
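
The alignment rule above can be illustrated with a small C sketch (the helper name is just an illustration, not part of any ISA):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: returns 1 if an s-byte object at byte address a is aligned. */
static int is_aligned(uint64_t a, uint64_t s)
{
    return (a % s) == 0;      /* aligned if A mod s == 0 */
}

int main(void)
{
    printf("%d\n", is_aligned(24, 8));   /* 1: 24 mod 8 == 0 */
    printf("%d\n", is_aligned(30, 8));   /* 0: 30 mod 8 != 0 */
    return 0;
}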

iii) Addressing modes: Addressing modes specify the address of a memory object apart from register
and constant operands.
MIPS Addressing modes:
Register mode addressing
Immediate mode addressing
Displacement mode addressing
The 80x86, in addition to the above addressing modes, supports the following additional modes of
addressing:
i. Register Indirect
ii. Indexed
iii. Based with Scaled index

iv) Types and sizes of operands:
MIPS and x86 support:
8 bit (ASCII character), 16 bit(Unicode character)
32 bit (Integer/word )
64 bit (long integer/ Double word)
32 bit (IEEE-754 floating point)
64 bit (Double precision floating point)
80x86 also supports an 80-bit floating point operand (extended double precision).
v) Operations: The general categories of operations are:
Data Transfer
Arithmetic operations
Logic operations
Control operations
MIPS ISA: simple & easy to implement
x86 ISA: richer & larger set of operations

vi) Control flow instructions: All ISAs support conditional and unconditional branches, and
procedure calls and returns.

                         MIPS                               80x86
Conditional branches     test the contents of a register    test condition code bits
Procedure call           JAL                                CALLF
Return address           held in a register                 held on a stack in memory

vii) Encoding an ISA

Fixed-length ISA                         Variable-length ISA
MIPS: all instructions 32 bits long      80x86: instructions are 1-18 bytes
Simplifies decoding                      Takes less space

Number of Registers and number of Addressing modes have significant impact on the
length of instruction as the register field and addressing mode field can appear many times
in a single instruction.

2. What is dependability? Explain the two measures of Dependability? (06 marks) 2012
Infrastructure providers offer Service Level Agreements (SLAs) or Service Level Objectives
(SLOs) to guarantee that their networking or power services will be dependable.
Systems alternate between two states of service with respect to an SLA:
1. Service accomplishment, where the service is delivered as specified in the SLA
2. Service interruption, where the delivered service is different from the SLA
Failure = transition from state 1 to state 2
Restoration = transition from state 2 to state 1


The two main measures of Dependability are Module Reliability and Module Availability.
Module reliability is a measure of continuous service accomplishment (or time to failure) from
a reference initial instant.
1. Mean Time To Failure (MTTF) measures Reliability
2. Failures In Time (FIT) = 1/MTTF, the rate of failures, traditionally reported as failures per
billion hours of operation
Mean Time To Repair (MTTR) measures Service Interruption
Mean Time Between Failures (MTBF) = MTTF + MTTR
Module availability measures service with respect to the alternation between the two states of
accomplishment and interruption; it is a number between 0 and 1 (e.g. 0.9)
Module availability = MTTF / ( MTTF + MTTR)
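
As a minimal sketch of these measures in C (the MTTF and MTTR values are assumed purely for illustration):

#include <stdio.h>

int main(void)
{
    double mttf = 1000000.0;                  /* mean time to failure, hours (assumed value) */
    double mttr = 24.0;                       /* mean time to repair, hours (assumed value)  */

    double fit          = 1.0e9 / mttf;       /* 1/MTTF expressed as failures per billion hours */
    double mtbf         = mttf + mttr;        /* mean time between failures */
    double availability = mttf / (mttf + mttr);

    printf("FIT = %.1f, MTBF = %.1f h, availability = %.6f\n", fit, mtbf, availability);
    return 0;
}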

3. Given the following measurements: (06 marks) 2012
Frequency of FP operations=25%
Average CPI of other instructions=1.33
Average CPI of FP operations=4.0
Frequency of FPSQR=2%
CPI of FPSQR=20
Assume that the two design alternatives are to decrease the CPI of FPSQR to 2 or to
decrease the average CPI of all FP operations to 2.5. Compare the two design alternatives using
the processor performance equation.

Since the clock rate and instruction count are unchanged, it is enough to compare the CPIs.

CPI original = (25% x 4.0) + (75% x 1.33) = 1.0 + 1.0 (approximately) = 2.0

Alternative 1: decrease the CPI of FPSQR from 20 to 2.
CPI with new FPSQR = CPI original - 2% x (20 - 2) = 2.0 - 0.36 = 1.64

Alternative 2: decrease the average CPI of all FP operations from 4.0 to 2.5.
CPI with new FP = (75% x 1.33) + (25% x 2.5) = 1.0 + 0.625 = 1.62 (approximately)

Since the CPI obtained by improving all FP operations (1.62) is slightly lower than the CPI obtained
by improving only FPSQR (1.64), decreasing the CPI of all FP operations to 2.5 is marginally the
better design alternative; its speedup over the original design is about 2.0 / 1.62 = 1.23.
4. With a neat diagram explain the classic five stage pipeline for a RISC processor
(10 marks) 2012
Each instruction in a RISC implementation takes at most 5 clock cycles without pipelining.
The 5 clock cycles are:


1. Instruction fetch (IF) cycle:
Send the content of the program counter (PC) to memory and fetch the current instruction
from memory. Then update the PC to point to the next sequential instruction by adding 4 to it.



2. Instruction decode / Register fetch cycle (ID):

Decode the instruction and access the register file. Decoding is done in parallel with
reading registers, which is possible because the register specifiers are at a fixed location in a
RISC architecture. This corresponds to fixed-field decoding. In addition it involves:
- Perform equality test on the register as they are read for a possible branch.
- Sign-extend the offset field of the instruction in case it is needed.
- Compute the possible branch target address.

3. Execution / Effective address Cycle (EXE)

The ALU operates on the operands prepared in the previous cycle and performs one of
the following functions depending on the instruction type.


* Register- Register ALU instruction: ALU performs the operation specified in the instruction
using the values read from the register file.
* Register-Immediate ALU instruction: ALU performs the operation specified in the
instruction using the first value read from the register file and the sign-extended immediate.

4. Memory access (MEM)

For a load instruction, the memory is read using the effective address. For a store
instruction, the memory writes the data from the second register read, using the effective address.

5. Write back cycle (WB)
Write the result in to the register file, whether it comes from memory system (for a
LOAD instruction) or from the ALU.

Each instruction takes at most 5 clock cycles for execution:
* Instruction fetch cycle (IF)
* Instruction decode / register fetch cycle (ID)
* Execution / Effective address cycle (EX)
* Memory access (MEM)
* Write back cycle (WB)


The execution of the instruction comprising the above subtasks can be pipelined. Each of the
clock cycles from the previous section becomes a pipe stage, i.e., a cycle in the pipeline. A new
instruction can be started on each clock cycle, which results in the execution pattern shown in
figure 2.1. Though each instruction takes 5 clock cycles to complete, during each clock cycle
the hardware will initiate a new instruction and will be executing some part of five different
instructions, as illustrated in figure 2.1.



Each stage of the pipeline must be independent of the other stages. Also, two different
operations can't be performed with the same data path resource on the same clock. For
example, a single ALU cannot be used to compute the effective address and perform a subtract
operation during the same clock cycle. An adder is provided in stage 1 to compute the
new PC value and an ALU in stage 3 to perform the arithmetic indicated in the instruction
(see figure 2.2). Conflict should not arise out of the overlap of instructions using the pipeline. In other
words, the functional units of each stage need to be independent of the other functional units. There are
three observations due to which the risk of conflict is reduced:
Separate instruction and data memories at the level of the L1 cache eliminate a
conflict for a single memory that would arise between instruction fetch and data
access.
The register file is accessed during two stages, namely the ID stage and the WB stage. The hardware
should allow at most two reads and one write every clock cycle.
To start a new instruction every cycle, it is necessary to increment and store the
PC every cycle.





Buffers or registers are introduced between successive stages of the pipeline so that at the end
of a clock cycle the results from one stage are stored into a register (see figure 2.3). During the
next clock cycle, the next stage will use the content of these buffers as input. Figure 2.4
visualizes the pipeline activity.






5. What are the major hurdles of pipelining? Illustrate the branch hazard in detail?
(10 marks) 2012

Hazards may cause the pipeline to stall. When an instruction is stalled, all the
instructions issued later than the stalled instruction are also stalled. Instructions issued earlier
than the stalled instruction will continue in a normal way. No new instructions are fetched
during the stall. A hazard is a situation that prevents the next instruction in the instruction stream
from executing during its designated clock cycle. Hazards reduce the pipeline
performance.
Performance with Pipeline stall
A stall causes the pipeline performance to degrade from ideal performance. Performance
improvement from pipelining is obtained from:

Speedup from pipelining = Average instruction time unpipelined / Average instruction time pipelined
                        = (CPI unpipelined x Clock cycle unpipelined) / (CPI pipelined x Clock cycle pipelined)

Assume that,
i) Cycle time overhead of pipeline is ignored
ii) Stages are balanced

With these assumptions the clock cycles of the two machines can be equal, so

Speedup = CPI unpipelined / (1 + Pipeline stall cycles per instruction)

If all the instructions take the same number of cycles, equal to the number of pipeline
stages (the depth of the pipeline), then the unpipelined CPI equals the pipeline depth, so

Speedup = Pipeline depth / (1 + Pipeline stall cycles per instruction)

If there are no pipeline stalls,
Pipeline stall cycles per instruction = zero
Therefore,
Speedup = Depth of the pipeline.

When a branch is executed, it may or may not change the content of the PC. If a branch is taken,
the content of the PC is changed to the target address. If a branch is not taken, the content of the PC is not
changed.
The simple way of dealing with branches is to redo the fetch of the instruction following a
branch. The first IF cycle is essentially a stall, because it never performs useful work. One stall
cycle for every branch will yield a performance loss of 10% to 30%, depending on the branch
frequency.

Reducing the Branch Penalties
There are many methods for dealing with the pipeline stalls caused by branch delay
1. Freeze or Flush the pipeline, holding or deleting any instructions after the branch
until the branch destination is known. It is a simple scheme and branch penalty is fixed and
cannot be reduced by software
2. Treat every branch as not taken, simply allowing the hardware to continue as if the
branch were not executed. Care must be taken not to change the processor state until the
branch outcome is known.
Instructions are fetched as if the branch were a normal instruction. If the branch is
taken, it is necessary to turn the fetched instruction into a no-op instruction and restart the fetch
at the target address. Figure 2.8 shows the timing diagram of both the situations.



3. Treat every branch as taken: As soon as the branch is decoded and target address is
computed, begin fetching and executing at the target if the branch target is known before
branch outcome, then this scheme gets advantage.
For both predicated taken or predicated not taken scheme, the compiler can improve
performance by organizing the code so that the most frequent path matches the hardware
choice.
4. Delayed branch technique is commonly used in early RISC processors. In a delayed
branch, the execution order with a branch delay of one is:
Branch instruction
Sequential successor
Branch target (if taken)

The sequential successor is in the branch delay slot and it is executed irrespective of whether or
not the branch is taken. The pipeline behavior with a branch delay is shown in Figure 2.9.
Processors with delayed branch normally have a single instruction delay. The compiler has to make
the successor instruction valid and useful. There are three ways in which the delay slot can be
filled by the compiler.




The limitations on delayed branches arise from
i) Restrictions on the instructions that can be scheduled into delay slots.
ii) The ability to predict at compile time whether a branch is likely to be taken or not taken.
The delay slot can be filled by choosing an instruction
a) From before the branch instruction
b) From the target address
c) From the fall-through path.
The principle of scheduling the branch delay is shown in Figure 2.10.








6. Explain how an architect must plan for various technology changes that can increase the
lifetime of a successful computer. Further, discuss the role of bandwidth and latency in
determining the performance. (10 Marks) Dec.2013/Jan.2014


Integrated circuit logic technology: Transistor density increases by about 35% per year, quadrupling
in somewhat over four years. Increases in die size are less predictable and slower, ranging from 10% to
20% per year. The combined effect is a growth rate in transistor count on a chip of about 55% per year.
Device speed scales more slowly, as we discuss below.
Semiconductor DRAM (dynamic random-access memory): Density increases by between 40% and
60% per year, quadrupling in three to four years. Cycle time has improved very slowly, decreasing by
about one-third in 10 years. Bandwidth per chip increases about twice as fast as latency decreases. In
addition, changes to the DRAM interface have also improved the bandwidth.
Magnetic disk technology: Recently, disk density has been improving by more than 100% per year,
quadrupling in two years. Prior to 1990, density increased by about 30% per year, doubling in three
years. It appears that disk technology will continue the faster density growth rate for some time to come.
Access time has improved by one-third in 10 years.
Network technology: Network performance depends both on the performance of switches and on the
performance of the transmission system. Both latency and bandwidth can be improved, though recently
bandwidth has been the primary focus. For many years, networking technology appeared to improve
slowly: for example, it took about 10 years for Ethernet technology to move from 10 Mb to 100 Mb.
The increased importance of networking has led to a faster rate of progress with 1 Gb Ethernet
becoming available about five years after 100 Mb. The Internet infrastructure in the United States has
seen even faster growth (roughly doubling in bandwidth every year), both through the use of optical
media and through the deployment of much more switching hardware.

7. Briefly discuss the different types of dependencies that exist in a program along with indicating
explicitly the hazards that may occur. (10 Marks) Dec.2013/Jan.2014
Data Dependences
There are three different types of dependences: data dependences (also called true data dependences),
name dependences, and control dependences. An instruction j is data dependent on instruction i if either
of the following holds:
- Instruction i produces a result that may be used by instruction j, or
- Instruction j is data dependent on instruction k, and instruction k is data dependent on instruction i.
Name Dependences
The second type of dependence is a name dependence. A name dependence occurs when two
instructions use the same register or memory location, called a name, but there is no flow of data
between the instructions associated with that name. There are two types of name dependences between
an instruction i that precedes instruction j in program order:
1. An antidependence between instruction i and instruction j occurs when instruction j writes a register
or memory location that instruction i reads. The original ordering must be preserved to ensure that i
reads the correct value.
2. An output dependence occurs when instruction i and instruction j write the same register or memory
location. The ordering between the instructions must be preserved to ensure that the value finally written
corresponds to instruction j.
Data Hazards
RAW (read after write): j tries to read a source before i writes it, so j incorrectly gets the old value.
This hazard is the most common type and corresponds to a true data dependence. Program order must be
preserved to ensure that j receives the value from i. In the simple common five-stage static pipeline
a load instruction followed by an integer ALU instruction that directly uses the load result will lead to a
RAW hazard.
WAW (write after write): j tries to write an operand before it is written by i. The writes end up
being performed in the wrong order, leaving the value written by i rather than the value written by j in
the destination. This hazard corresponds to an output dependence. WAW hazards are present only in
pipelines that write in more than one pipe stage or allow an instruction to proceed even when a previous
instruction is stalled. The classic five-stage integer pipeline used in Appendix A writes a register only in
the WB stage and avoids this class of hazards, but this chapter explores pipelines that allow instructions
to be reordered, creating the possibility of WAW hazards. WAW hazards can also occur between a short
integer pipeline and a longer floating-point pipeline. For example, a floating-point multiply instruction
that writes F4, shortly followed by a load of F4 could yield a WAW hazard, since the load could
complete before the multiply completed.
WAR (write after read): j tries to write a destination before it is read by i, so i incorrectly gets the
new value. This hazard arises from an antidependence. WAR hazards cannot occur in most static issue
pipelines, even deeper pipelines or floating-point pipelines, because all reads are early (in ID) and all
writes are late (in WB). A WAR hazard occurs when there are some instructions that write results
early in the instruction pipeline and other instructions that read a source late in the pipeline.
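
A small illustrative C fragment (not from the question paper) shows where these dependences arise at the source level; the comments mark the hazard type that an overlapping pipeline would have to respect:

void example(void)
{
    int a, b = 1, c = 2, d, e = 3, f = 4, g = 5;

    a = b + c;   /* instruction i: writes a                                    */
    d = a + e;   /* instruction j: reads a  -> true (RAW) dependence on i      */
    a = f + g;   /* instruction k: writes a -> output (WAW) dependence with i,
                    and antidependence (WAR) with j, which reads a earlier     */
    (void)d;
}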

8. Define computer architecture. Illustrate the seven dimensions of ISA. (10 Marks)
Dec. 2013/Jan. 2014
The computer designer has to ascertain the attributes that are important for a new computer
and design the system to maximize the performance while staying within cost, power and
availability constraints. The task has a few important aspects, such as instruction set design, functional
organization, logic design and implementation.

Instruction Set Architecture (ISA)
ISA refers to the actual programmer-visible instruction set. The ISA serves as the boundary between the
software and hardware. The seven dimensions of the ISA are:
i) Class of ISA: Nearly all ISAs today are classified as General-Purpose-Register architectures. The
operands are either registers or memory locations. The two popular versions of this class are:
Register-Memory ISAs, e.g. the ISA of the 80x86, which can access memory as part of many instructions.
Load-Store ISAs, e.g. the ISA of MIPS, which can access memory only with load or store instructions.
ii) Memory addressing: The byte addressing scheme is most widely used in all desktop and server
computers. Both the 80x86 and MIPS use byte addressing. In the case of MIPS the object must be aligned.
An access to an object of s bytes at byte address A is aligned if A mod s = 0. The 80x86 does not require
alignment. Accesses are faster if operands are aligned.
iii) Addressing modes: Addressing modes specify the address of a memory object apart from register and constant operands.
MIPS Addressing modes:
Register mode addressing
Immediate mode addressing
Displacement mode addressing
The 80x86, in addition to the above addressing modes, supports the following additional modes of addressing:
i. Register Indirect
ii. Indexed
iii. Based with Scaled index
iv) Types and sizes of operands:
MIPS and x86 support:
8 bit (ASCII character), 16 bit(Unicode character)
32 bit (Integer/word )
64 bit (long integer/ Double word)
32 bit (IEEE-754 floating point)
64 bit (Double precision floating point)
80x86 also supports an 80-bit floating point operand (extended double precision).
v) Operations: The general categories of operations are:
Data Transfer
Arithmetic operations
Logic operations
Control operations
MIPS ISA: simple & easy to implement
x86 ISA: richer & larger set of operations
vi) Control flow instructions: All ISAs support conditional and unconditional branches, and
procedure calls and returns.

                         MIPS                               80x86
Conditional branches     test the contents of a register    test condition code bits
Procedure call           JAL                                CALLF
Return address           held in a register                 held on a stack in memory

vii) Encoding an ISA

Fixed-length ISA                         Variable-length ISA
MIPS: all instructions 32 bits long      80x86: instructions are 1-18 bytes
Simplifies decoding                      Takes less space


Number of Registers and number of Addressing modes have significant impact on the length
of instruction as the register field and addressing mode field can appear many times in a single
instruction.

9. Derive an expression for CPU time as a function of instruction count, CPI and clock cycle
time. (04 Marks) Dec. 2013/Jan. 2014

The CPU Performance Equation
Essentially all computers are constructed using a clock running at a constant rate. These discrete time
events are called ticks, clock ticks, clock periods, clocks, cycles, or clock cycles. Computer designers
refer to the time of a clock period by its duration (e.g., 1 ns) or by its rate (e.g., 1 GHz). CPU time for a
program can then be expressed two ways:
CPU time = CPU clock cycles for a program x Clock cycle time

In addition to the number of clock cycles needed to execute a program, we can also count the number of
instructions executedthe instruction path length or instruction count (IC). If we know the number of
clock cycles and the instruction count we can calculate the average number of clock cycles per
instruction (CPI). Because it is easier to work with and because we will deal with simple processors
we use CPI. Designers sometimes also use Instructions per Clock or IPC, which is the inverse of CPI.
CPI is computed as:
CPI = CPU clock cycles for a program/ Instruction Count

This CPU figure of merit provides insight into different styles of instruction sets and implementations.
By transposing instruction count in the above formula, CPU clock cycles can be defined as Instruction
Count x CPI. This allows us to use CPI in the execution time formula:
CPU time = Instruction Count x Cycles per Instruction x Clock cycle time
it is difficult to change one parameter in complete isolation from others because the basic technologies
involved in changing each characteristic are interdependent:
Clock cycle time: Hardware technology and organization
CPI: Organization and instruction set architecture
Instruction count: Instruction set architecture and compiler technology
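
A minimal C sketch of the performance equation (reusing the FP/FPSQR figures from question 3 above; the instruction count and clock cycle values are assumed for illustration):

#include <stdio.h>

/* CPU time = Instruction count x CPI x Clock cycle time */
static double cpu_time(double ic, double cpi, double cycle_time_sec)
{
    return ic * cpi * cycle_time_sec;
}

int main(void)
{
    double cpi_original = 0.25 * 4.0 + 0.75 * 1.33;             /* ~2.0  */
    double cpi_fpsqr    = cpi_original - 0.02 * (20.0 - 2.0);   /* ~1.64 */
    double cpi_all_fp   = 0.75 * 1.33 + 0.25 * 2.5;             /* ~1.62 */

    double ic = 1.0e9, cycle = 1.0e-9;   /* assumed: 10^9 instructions at a 1 GHz clock */
    printf("original:     %.3f s\n", cpu_time(ic, cpi_original, cycle));
    printf("faster FPSQR: %.3f s\n", cpu_time(ic, cpi_fpsqr, cycle));
    printf("faster FP:    %.3f s\n", cpu_time(ic, cpi_all_fp, cycle));
    return 0;
}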

10. Find the yield for a 300 mm wafer with dies that are 1.5 cm on a side and 1.0 cm on a
side, assuming a defect density of 0.4 per cm^2 and alpha as 4. (06 Marks) Dec. 2013/Jan. 2014
The die yield is given by:

Die yield = Wafer yield x (1 + (Defects per unit area x Die area) / alpha) ^ (-alpha)

Assuming a wafer yield of 100%:
For the 1.5 cm die: Die area = 1.5 x 1.5 = 2.25 cm^2
Die yield = (1 + (0.4 x 2.25) / 4) ^ (-4) = (1.225) ^ (-4) = 0.44
For the 1.0 cm die: Die area = 1.0 x 1.0 = 1.0 cm^2
Die yield = (1 + (0.4 x 1.0) / 4) ^ (-4) = (1.1) ^ (-4) = 0.68
That is, about 44% of the 1.5 cm dies and about 68% of the 1.0 cm dies are good.
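
The same yield calculation can be sketched in C (assuming, as above, a wafer yield of 100%; the function name is illustrative):

#include <math.h>
#include <stdio.h>

/* Die yield = (1 + (defect density x die area) / alpha) ^ (-alpha), assuming 100% wafer yield */
static double die_yield(double defects_per_cm2, double die_area_cm2, double alpha)
{
    return pow(1.0 + defects_per_cm2 * die_area_cm2 / alpha, -alpha);
}

int main(void)
{
    printf("1.5 cm die: %.2f\n", die_yield(0.4, 1.5 * 1.5, 4.0));   /* ~0.44 */
    printf("1.0 cm die: %.2f\n", die_yield(0.4, 1.0 * 1.0, 4.0));   /* ~0.68 */
    return 0;
}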

11. Write a note on:
Amdahl's Law Dec. 2013/Jan. 2014
The performance gain that can be obtained by improving some portion of a computer can be
calculated using Amdahl's Law. Amdahl's Law states that the performance improvement to be gained
from using some faster mode of execution is limited by the fraction of the time the faster mode can be
used.
Amdahl's Law defines the speedup that can be gained by using a particular feature. What is speedup?
Suppose that we can make an enhancement to a machine that will improve performance when it is used.
Speedup is the ratio
Speedup = Performance for entire task using the enhancement when possible / Performance for entire
task without using the enhancement.
Amdahl's Law gives us a quick way to find the speedup from some enhancement, which depends on
two factors:
1. The fraction of the computation time in the original machine that can be converted to take advantage
of the enhancement. For example, if 20 seconds of the execution time of a program that takes 60
seconds in total can use an enhancement, the fraction is 20/60. This value, which we will call
Fraction_enhanced, is always less than or equal to 1.
2. The improvement gained by the enhanced execution mode, that is, how much faster the task would
run if the enhanced mode were used for the entire program. This value is the time of the original
mode over the time of the enhanced mode: if the enhanced mode takes 2 seconds for some portion of
the program that can completely use the mode, while the original mode took 5 seconds for the same
portion, the improvement is 5/2. This value, which is always greater than 1, is called Speedup_enhanced.

Speedup_overall = Execution time_old / Execution time_new
                = 1 / ((1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced)
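
A short C sketch of Amdahl's Law, using the 20-seconds-out-of-60 example from the text (the function name is only illustrative):

#include <stdio.h>

/* Amdahl's Law: overall speedup from enhancing a fraction of the execution time */
static double amdahl(double fraction_enhanced, double speedup_enhanced)
{
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced);
}

int main(void)
{
    /* 20 of 60 seconds can use the enhancement, and the enhanced mode is 5/2 = 2.5x faster */
    double fraction = 20.0 / 60.0;
    double speedup  = 5.0 / 2.0;
    printf("overall speedup = %.3f\n", amdahl(fraction, speedup));   /* prints 1.250 */
    return 0;
}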











UNIT II

1. What are the techniques used to reduce branch costs? Explain both static and dynamic
branch prediction used for same?

To keep a pipeline full, parallelism among instructions must be exploited by finding
sequences of unrelated instructions that can be overlapped in the pipeline. To avoid a pipeline
stall, a dependent instruction must be separated from the source instruction by a distance in
clock cycles equal to the pipeline latency of that source instruction. A compiler's ability to
perform this scheduling depends both on the amount of ILP available in the program and on the
latencies of the functional units in the pipeline.

The compiler can increase the amount of available ILP by transforming loops.
for (i = 1000; i > 0; i = i - 1)
    X[i] = X[i] + s;
We see that this loop is parallel by noticing that the body of each iteration is independent.

The first step is to translate the above segment to MIPS assembly language
Loop: L.D F0, 0(R1) ; F0 = array element
ADD.D F4, F0, F2 ; add scalar in F2
S.D F4, 0(R1) ; store result
DADDUI R1, R1, #-8 ; decrement pointer by 8 bytes (per DW)
BNE R1, R2, Loop ; branch if R1 != R2

Without any Scheduling the loop will execute as follows and takes 9 cycles for each iteration.
1 Loop: L.D F0, 0(R1) ;F0=vector element
2 stall
3 ADD.D F4, F0, F2 ;add scalar in F2
4 stall
5 stall
6 S.D F4, 0(R1) ;store result
7 DADDUI R1, R1,# -8 ;decrement pointer 8B (DW)
8 stall ;assumes can't forward to branch
9 BNEZ R1, Loop ;branch R1!=zero

We can schedule the loop to obtain only two stalls and reduce the time to 7 cycles:
L.D F0, 0(R1)
DADDUI R1, R1, #-8
ADD.D F4, F0, F2
Stall
Stall
S.D F4, 0(R1)
BNE R1, R2, Loop

Loop unrolling can be used to minimize the number of stalls. Unrolling the body of the loop
four times, the execution of four iterations can be done in 27 clock cycles, or 6.75 clock cycles
per iteration.

1 Loop: L.D F0,0(R1)
3 ADD.D F4,F0,F2
6 S.D 0(R1),F4 ;drop DSUBUI & BNEZ
7 L.D F6,-8(R1)
9 ADD.D F8,F6,F2
12 S.D -8(R1),F8 ;drop DSUBUI & BNEZ
13 L.D F10,-16(R1)
15 ADD.D F12,F10,F2
18 S.D -16(R1),F12 ;drop DSUBUI & BNEZ
19 L.D F14,-24(R1)
21 ADD.D F16,F14,F2
24 S.D -24(R1),F16
25 DADDUI R1,R1,#-32 :alter to 4*8
26 BNEZ R1,LOOP

Unrolled loop that minimizes the stalls to 14 clock cycles for four iterations is given below:
1 Loop: L.D F0, 0(R1)
2 L.D F6, -8(R1)
3 L.D F10, -16(R1)
4 L.D F14, -24(R1)
5 ADD.D F4, F0, F2
6 ADD.D F8, F6, F2
7 ADD.D F12, F10, F2
8 ADD.D F16, F14, F2
9 S.D 0(R1), F4
10 S.D -8(R1), F8
11 S.D -16(R1), F12
12 DSUBUI R1, R1,#32
13 S.D 8(R1), F16 ;8-32 = -24
14 BNEZ R1, LOOP



Summary of Loop unrolling and scheduling

The loop unrolling requires understanding how one instruction depends on another and how the
instructions can be changed or reordered given the dependences:

1. Determine that loop unrolling is useful by finding that the loop iterations are independent (except for
the loop maintenance code)
2. Use different registers to avoid unnecessary constraints forced by using same registers for
different computations
3. Eliminate the extra test and branch instructions and adjust the loop termination and iteration
code
4. Determine that loads and stores in the unrolled loop can be interchanged by observing that loads
and stores from different iterations are independent. This transformation requires analyzing memory
addresses and finding that they do not refer to the same address.
5. Schedule the code, preserving any dependences needed to yield the same result as the
original code
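
A source-level sketch of the same idea in C, unrolling the X[i] = X[i] + s loop above by four (the array bound and element type are assumptions made for illustration):

#define N 1000

void scale_unrolled(double X[N + 1], double s)
{
    int i;
    /* four independent iterations per pass; N is assumed to be a multiple of 4 */
    for (i = N; i > 0; i = i - 4) {
        X[i]     = X[i]     + s;
        X[i - 1] = X[i - 1] + s;
        X[i - 2] = X[i - 2] + s;
        X[i - 3] = X[i - 3] + s;
    }
}

The three copies of the loop overhead (decrement and branch) per pass are eliminated, which is what makes the unrolled schedule shorter.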

To reduce the Branch cost, prediction of the outcome of the branch may be done. The
prediction may be done statically at compile time using compiler support or dynamically using
hardware support. Schemes to reduce the impact of control hazard are discussed below:

Static Branch Prediction

Assume that the branch will not be taken and continue execution down the sequential
instruction stream. If the branch is taken, the instructions that are being fetched and decoded
must be discarded. Execution continues at the branch target. Discarding instructions means we
must be able to flush instructions in the IF, ID and EXE stages.
Alternately, it is possible that the branch can be predicted as taken. As soon as the decoded
instruction is found to be a branch, at the earliest, start fetching the instruction from the target address.

Average misprediction = untaken branch frequency = 34% for SPEC programs.




The graph shows the misprediction rate for set of SPEC benchmark programs

Dynamic Branch Prediction

With deeper pipelines the branch penalty increases when measured in clock cycles.
Similarly, with multiple issues, the branch penalty increases in terms of instructions lost.
Hence, a simple static prediction scheme is inefficient or may not be efficient in most of the
situations. One approach is to look up the address of the instruction to see if a branch was taken
the last time this instruction was executed and if so, to begin fetching new instruction from the
target address.
This technique is called Dynamic branch prediction.
Why does prediction work?

Underlying algorithm has regularities
Data that is being operated on has regularities
Instruction sequence has redundancies that are artifacts of way that humans/compilers
think about problems.
There are a small number of important branches in programs which have dynamic
behavior for which dynamic branch prediction performance will be definitely better compared
to static branch prediction.

Performance = f(accuracy, cost of misprediction)
A Branch History Table (BHT) is used to dynamically predict the outcome of the current branch
instruction. The lower bits of the PC address index a table of 1-bit values, which say whether or not the
branch was taken the last time. No address check is done.


Problem: in a loop, a 1-bit BHT will cause two mispredictions (the average is 9 iterations before
exit):
- End of loop case, when it exits instead of looping as before
- First time through the loop on the next time through the code, when it predicts exit instead of
looping

A simple two-bit history table will give better performance. The four different states of the 2-bit
predictor are shown in the state transition diagram.
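
A minimal C sketch of one 2-bit saturating counter (a single BHT entry; the state encoding 0-1 = predict not taken, 2-3 = predict taken is an assumption consistent with the state diagram described above):

#include <stdio.h>

static int counter = 2;   /* one 2-bit BHT entry: 0,1 -> predict not taken; 2,3 -> predict taken */

static int predict(void)
{
    return counter >= 2;                     /* 1 = predict taken */
}

static void update(int taken)
{
    if (taken  && counter < 3) counter++;    /* saturate at 3 */
    if (!taken && counter > 0) counter--;    /* saturate at 0 */
}

int main(void)
{
    int outcomes[] = {1, 1, 1, 0, 1, 1};     /* a loop branch: taken several times, one exit */
    for (int i = 0; i < 6; i++) {
        printf("predict %d, actual %d\n", predict(), outcomes[i]);
        update(outcomes[i]);
    }
    return 0;
}

Running this on the loop-like outcome pattern shows a single misprediction at the loop exit, which is the improvement over the 1-bit table discussed above.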




Correlating Branch Predictor
It may be possible to improve the prediction accuracy by considering the recent behavior of
other branches rather than just the branch under consideration. Correlating predictors are two-
level predictors. Existing correlating predictors add information about the behavior of the most
recent branches to decide how to predict a given branch.

Idea: record m most recently executed branches as taken or not taken, and use that pattern to
select the proper n-bit branch history table (BHT)

In general, an (m,n) predictor means recording the last m branches to select between 2^m history tables,
each with n-bit counters.

Thus, old 2-bit BHT is a (0,2) predictor

-Global Branch History: m-bit shift register keeping T/NT status of last m branches.

Each entry in the table has 2^m n-bit predictors. In the case of a (2,2) predictor, the behavior of the two most
recent branches selects between four predictions of the next branch, updating just that prediction. The
scheme of the table is shown:
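
Separately from the table referred to above, one (2,2)-predictor BHT entry can be sketched in C as 2^2 = 4 two-bit counters selected by a 2-bit global history register (all names, sizes and initial values are illustrative assumptions):

#include <stdio.h>

static unsigned global_history = 0;           /* last m = 2 outcomes as a 2-bit shift register */
static unsigned counters[4] = {1, 1, 1, 1};   /* 2^m two-bit saturating counters in one BHT entry */

static int predict_correlating(void)
{
    return counters[global_history & 3] >= 2;     /* taken if the selected counter is 2 or 3 */
}

static void update_correlating(int taken)
{
    unsigned idx = global_history & 3;
    if (taken  && counters[idx] < 3) counters[idx]++;
    if (!taken && counters[idx] > 0) counters[idx]--;
    global_history = ((global_history << 1) | (taken ? 1u : 0u)) & 3;   /* shift in the outcome */
}

int main(void)
{
    int outcomes[] = {1, 0, 1, 0, 1, 0};     /* an alternating branch that correlation can capture */
    for (int i = 0; i < 6; i++) {
        printf("predict %d, actual %d\n", predict_correlating(), outcomes[i]);
        update_correlating(outcomes[i]);
    }
    return 0;
}

With the alternating outcome pattern in main, the predictor quickly learns the pattern from the history register, which a single uncorrelated 2-bit counter could not do.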




Comparisons of different schemes are shown in the graph.





A tournament predictor is a multilevel branch predictor and uses an n-bit saturating counter to
choose between predictors. The predictors used are a global predictor and a local predictor.

Advantage of tournament predictor is ability to select the right predictor for a particular
branch which is particularly crucial for integer benchmarks.

A typical tournament predictor will select the global predictor almost 40% of the time for the
SPEC integer benchmarks and less than 15% of the time for the SPEC FP benchmarks

Existing tournament predictors use a 2-bit saturating counter per branch to choose among two
different predictors based on which predictor was most effective in recent prediction.



Dynamic Branch Prediction Summary

Prediction is becoming important part of execution as it improves the performance of the
pipeline.
Branch History Table: 2 bits for loop accuracy
Correlation: Recently executed branches correlated with next branch
Either different branches (GA)
Or different executions of same branches (PA)
Tournament predictors take insight to next level, by using multiple predictors
usually one based on global information and one based on local information, and combining
them with a selector



2. With a neat diagram, give the basic structure of the Tomasulo-based MIPS FP unit and
explain the various fields of the reservation stations (10 marks)

Tomasulo algorithm and Reorder Buffer.

Tomasulo idea:
1. Have reservation stations where register renaming is possible
2. Results are forwarded directly to the reservation stations as well as to the final registers. This is
also called short-circuiting or bypassing.

ROB:
1. The instructions are stored sequentially, but we have indicators to say whether an instruction is
speculative or has completed execution.
2. If an instruction has completed execution, is not speculative, and has reached the head of the queue,
then we commit it.




Three components of hardware-based speculation

1. Dynamic branch prediction to pick branch outcome
2. Speculation to allow instructions to execute before control dependencies are resolved, i.e.,
before branch outcomes become known with ability to undo in case of incorrect speculation
3. Dynamic scheduling

Speculating with Tomasulo
Key ideas:
1. Separate execution from completion: instructions execute speculatively, but no instruction
updates registers or memory until it is no longer speculative
2. Therefore, add a final step after an instruction is no longer speculative, called instruction
commit, when it is allowed to make register and memory updates

3. Allow instructions to execute and complete out of order but force them to commit in order
4. Add hardware called the reorder buffer (ROB), with registers to hold the result of an
instruction between completions and commit

Tomasulo's Algorithm with Speculation: Four Stages

1. Issue: get instruction from Instruction Queue
- If a reservation station and a ROB slot are free (no structural hazard),
control issues the instruction to the reservation station and the ROB, and sends to the reservation
station the operand values (or the reservation station sources for the values) as well as the allocated ROB slot
number

2. Execution: operate on operands (EX)
-When both operands ready then execute; if not ready, watch CDB for result

3. Write result: finish execution (WB)
-write on CDB to all awaiting units and ROB; mark reservation station available

4. Commit: update register or memory with ROB result
-when instruction reaches head of ROB and results present, update register with result
or store to memory and remove instruction from ROB
-if an incorrectly predicted branch reaches the head of ROB, flush the ROB, and restart
at correct successor of branch

ROB Data Structure

ROB entry fields
Instruction type: branch, store, register operation (i.e., ALU or load)
State: indicates if instruction has completed and value is ready
Destination: where the result is to be written - the register number for a register operation (i.e. ALU or
load), or the memory address for a store
A branch has no destination result
Value: holds the value of instruction result till time to commit
Additional reservation station field
Destination: Corresponding ROB entry number
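
The ROB entry fields and the extra reservation-station field listed above can be sketched as C structures (the field names and types are illustrative assumptions, not the textbook's exact layout):

/* Illustrative data-structure sketch only; no executable logic. */
enum instr_type { BRANCH, STORE, REG_OP };        /* REG_OP covers ALU operations and loads */

struct rob_entry {
    enum instr_type type;     /* instruction type: branch, store, or register operation */
    int  ready;               /* has the instruction completed and is its value ready?  */
    int  dest_reg;            /* destination register number (register operations)      */
    long dest_addr;           /* destination memory address (stores)                    */
    long value;               /* result value held until the instruction commits        */
};

struct reservation_station {
    int  busy;                /* entry in use?                                    */
    int  op;                  /* operation to perform                             */
    long vj, vk;              /* operand values, when available                   */
    int  qj, qk;              /* ROB entries that will produce missing operands   */
    int  dest_rob;            /* additional field: the corresponding ROB entry    */
};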
Example
1. L.D F6, 34(R2)
2. L.D F2, 45(R3)
3. MUL.D F0, F2, F4
4. SUB.D F8, F2, F6
5. DIV.D F10, F0, F6
6. ADD.D F6, F8, F2

The positions of the reservation stations, ROB and FP registers are indicated below:

Assume latencies of 1 clock for a load, 2 clocks for an add, 10 clocks for a multiply, and 40 clocks
for a divide. Show the data structures just before MUL.D goes to commit.

Reservation Stations




At the time MUL.D is ready to commit, only the two L.D instructions have already committed,
though others have completed execution.
Actually, the MUL.D is at the head of the ROB; the L.D instructions are shown only for
understanding purposes. #X represents the value field of ROB entry number X.

Floating point registers




Reorder Buffer



Example
Loop: LD F0, 0(R1)
MULTD F4, F0, F2
SD F4, 0(R1)
SUBI R1, R1, #8
BNEZ R1, Loop

Assume instructions in the loop have been issued twice.
Assume L.D and MUL.D from the first iteration have committed and all other
instructions have completed.
Assume the effective address for the store is computed prior to its issue. Show the data structures.

3. With the aid of a neat functional schematic, describe the basic structure of a MIPS floating-
point unit using Tomasulo's algorithm that addresses dynamic scheduling. (10 Marks)
Dec.2013/Jan.2014
Tomasulo's Algorithm: A Loop-Based Example
To understand the full power of eliminating WAW and WAR hazards through dynamic renaming of
registers, we must look at a loop. Consider the following simple sequence for multiplying the
elements of an array by a scalar in
F2:
Loop: L.D F0,0(R1)
MUL.D F4,F0,F2
S.D F4,0(R1)
DADDUI R1,R1,-8
BNE R1,R2,Loop ; branches if R1 != R2
If we predict that branches are taken, using reservation stations will allow multiple executions of this
loop to proceed at once. This advantage is gained without changing the code; in effect, the loop is
unrolled dynamically by the hardware, using the reservation stations obtained by renaming to act as
additional registers. Let's assume we have issued all the instructions in two successive iterations of the
loop, but none of the floating-point loads, stores or operations has completed. The reservation stations,
register-status tables, and load and store buffers at this point are shown below. (The integer ALU operation
is ignored, and it is assumed the branch was predicted as taken.) Once the system reaches this state, two
copies of the loop could be sustained with a CPI close to 1.0, provided the multiplies could complete in
four clock cycles. When extended with multiple instruction issue, Tomasulo's approach can sustain more
than one instruction per clock.

4. Discuss how the following advanced techniques are useful in enhancing the performance of
ILP: i) Branch-target buffers; ii) Speculation; iii) Value prediction. (10 Marks) Dec.2013/Jan.2014

Dynamic Branch Prediction
With deeper pipelines the branch penalty increases when measured in clock cycles. Similarly,
with multiple issues, the branch penalty increases in terms of instructions lost. Hence, a simple static
prediction scheme is inefficient or may not be efficient in most of the situations. One approach is to look
up the address of the instruction to see if a branch was taken the last time this instruction was executed
and if so, to begin fetching new instruction from the target address.
This technique is called Dynamic branch prediction.
Why does prediction work?
Underlying algorithm has regularities
Data that is being operated on has regularities
Instruction sequence has redundancies that are artifacts of way that humans/compilers think
about problems.
There are a small number of important branches in programs which have dynamic behavior
for which dynamic branch prediction performance will be definitely better compared to static branch
prediction.
Performance = (accuracy, cost of misprediction)

Branch History Table (BHT) is used to dynamically predict the outcome of the current branch
instruction. Lower bits of PC address index table of 1-bit values
Says whether or not branch taken last time
- No address check
Problem: in a loop, 1-bit BHT will cause two mispredictions (average is 9 iterations before exit):
End of loop case, when it exits instead of looping as before
--First time through loop on next time through code, when it predicts exit instead of looping
Simple two bit history table will give better performance. The four different states of 2 bit predictor is
shown in the state transition diagram.


5. How does a 2 bit dynamic branch predictor work? (04 Marks) Dec.2013/Jan.2014
Dynamic Branch Prediction
With deeper pipelines the branch penalty increases when measured in clock cycles. Similarly,
with multiple issues, the branch penalty increases in terms of instructions lost. Hence, a simple static
prediction scheme is inefficient or may not be efficient in most of the situations. One approach is to look
up the address of the instruction to see if a branch was taken the last time this instruction was executed
and if so, to begin fetching new instruction from the target address.
This technique is called Dynamic branch prediction.
Why does prediction work?
Underlying algorithm has regularities
Data that is being operated on has regularities
Instruction sequence has redundancies that are artifacts of way that humans/compilers think
about problems.
There are a small number of important branches in programs which have dynamic behavior
for which dynamic branch prediction performance will be definitely better compared to static branch
prediction.
Performance = (accuracy, cost of misprediction)
Branch History Table (BHT) is used to dynamically predict the outcome of the current branch
instruction. Lower bits of PC address index table of 1-bit values
Says whether or not branch taken last time
- No address check
Problem: in a loop, 1-bit BHT will cause two mispredictions (average is 9 iterations before exit):
End of loop case, when it exits instead of looping as before
--First time through loop on next time through code, when it predicts exit instead of looping
Simple two bit history table will give better performance. The four different states of 2 bit predictor is
shown in the state transition diagram.



6. Explain with example, the various data dependencies and hazards associated with ILP.
(08 Marks) Dec.2013/Jan.2014
Data Dependences
There are three different types of dependences: data dependences (also called true data dependences),
name dependences, and control dependences. An instruction j is data dependent on instruction i if either
of the following holds:
- Instruction i produces a result that may be used by instruction j, or
- Instruction j is data dependent on instruction k, and instruction k is data dependent on instruction i.
Name Dependences
The second type of dependence is a name dependence. A name dependence occurs when two
instructions use the same register or memory location, called a name, but there is no flow of data
between the instructions associated with that name. There are two types of name dependences between
an instruction i that precedes instruction j in program order:
1. An antidependence between instruction i and instruction j occurs when instruction j writes a register
or memory location that instruction i reads. The original ordering must be preserved to ensure that i
reads the correct value.
2. An output dependence occurs when instruction i and instruction j write the same register or memory
location. The ordering between the instructions must be preserved to ensure that the value finally written
corresponds to instruction j.
Data Hazards
RAW (read after write): j tries to read a source before i writes it, so j incorrectly gets the old value.
This hazard is the most common type and corresponds to a true data dependence. Program order must be
preserved to ensure that j receives the value from i. In the simple common five-stage static pipeline
a load instruction followed by an integer ALU instruction that directly uses the load result will lead to a
RAW hazard.
WAW (write after write): j tries to write an operand before it is written by i. The writes end up
being performed in the wrong order, leaving the value written by i rather than the value written by j in
the destination. This hazard corresponds to an output dependence. WAW hazards are present only in
pipelines that write in more than one pipe stage or allow an instruction to proceed even when a previous
instruction is stalled. The classic five-stage integer pipeline used in Appendix A writes a register only in
the WB stage and avoids this class of hazards, but this chapter explores pipelines that allow instructions
to be reordered, creating the possibility of WAW hazards. WAW hazards can also occur between a short
integer pipeline and a longer floating-point pipeline. For example, a floating-point multiply instruction
that writes F4, shortly followed by a load of F4 could yield a WAW hazard, since the load could
complete before the multiply completed.
WAR (write after read): j tries to write a destination before it is read by i, so i incorrectly gets the
new value. This hazard arises from an antidependence. WAR hazards cannot occur in most static issue
pipelines, even deeper pipelines or floating-point pipelines, because all reads are early (in ID) and all
writes are late (in WB). A WAR hazard occurs when there are some instructions that write results
early in the instruction pipeline and other instructions that read a source late in the pipeline.


7. Write short notes on:
Pentium 4 processor architecture Dec.2013/Jan.2014

The microarchitecture of the Pentium 4, which is called NetBurst, is similar to that of the Pentium III
(called the P6 microarchitecture): both fetch up to three IA-32 instructions per cycle, decode them into
micro-ops, and send the uops to an out-of-order execution engine that can graduate up to three uops per
cycle. There are, however, many differences that are designed to allow the NetBurst microarchitecture
to operate at a significantly higher clock rate than the P6 microarchitecture and to help maintain or close
the gap between peak and sustained execution throughput. Among the most important of these are:
A much deeper pipeline: P6 requires about 10 clock cycles from the time a simple add
instruction is fetched until the availability of its results. In comparison, NetBurst takes about 20
cycles, including 2 cycles reserved simply to drive results across the chip!
NetBurst uses register renaming (like the MIPS R10K and the Alpha 21264) rather than the
reorder buffer, which is used in P6. Use of register renaming allows many more outstanding
results (potentially up to 128) in NetBurst versus the 40 that are permitted in P6.
There are seven integer execution units in NetBurst versus five in P6. The additions are an
additional integer ALU and an additional address computation unit.



















UNIT III

1. Explain the basic VLIW approach for exploiting ILP using multiple issues? (10 marks)
Exploiting ILP: Multiple Issue Computers
Multiple Issue Computers

Benefit
CPIs go below one, use IPC instead (instructions/cycle)
Example: Issue width = 3 instructions, Clock = 3GHz
Peak rate: 9 billion instructions/second, IPC = 3
For our 5 stage pipeline, 15 instructions in flight at any given time
Multiple Issue types
Static
Most instruction scheduling is done by the compiler
Dynamic (superscalar)
CPU makes most of the scheduling decisions
Challenge: overcoming instruction dependencies
Increased latency for loads
Control hazards become worse
Requires a more ambitious design
Compiler techniques for scheduling
-Complex instruction decoding logic

Exploiting ILP: Multiple Issue Computers Static Scheduling

Instruction Issuing
Have to decide which instruction types can issue in a cycle
Issue packet: instructions issued in a single clock cycle
Issue slot: portion of an issue packet
Compiler assumes a large responsibility for hazard checking, scheduling, etc.

Static Multiple Issue
For now, assume a souped-up 5-stage MIPS pipeline that can issue a packet with:
One slot is an ALU or branch instruction
-One slot is a load/store instruction



Static Multiple Issue



Static Multiple Issue Scheduling



Static Multiple Issue w/Loop Unrolling






Exploiting ILP: Multiple Issue Computers Dynamic Scheduling

Dynamic Multiple Issue Computers
Superscalar computers
CPU generally manages instruction issuing and ordering
Compiler helps, but CPU dominates
Process
Instructions issue in-order
Instructions can execute out-of-order
Execute once all operands are ready
Instructions commit in-order
Commit refers to when the architectural register file is updated (the current completed state of the
program)
Aside: Data Hazard Refresher
Two instructions (i and j), with j following i in program order:
Read after Read (RAR): not a hazard
Read after Write (RAW): type - true data dependence; problem - j tries to read a source before i writes it
Write after Read (WAR): type - antidependence (name dependence); problem - j tries to write a destination
before i reads it
Write after Write (WAW): type - output dependence (name dependence); problem - j tries to write an
operand before i writes it
Superscalar Processors
Register Renaming
Use more registers than are defined by the architecture
Architectural registers: defined by ISA
Physical registers: total registers
Help with name dependencies
Antidependence
Write after Read hazard
Output dependence
Write after Write hazard

Tomasulo's Superscalar Computers

R. M. Tomasulo, "An Efficient Algorithm for Exploiting Multiple Arithmetic Units", IBM J.
of Research and Development, Jan. 1967
See also: D. W. Anderson, F. J. Sparacio, and R. M. Tomasulo, "The IBM System/360 Model
91: Machine Philosophy and Instruction-Handling", IBM J. of Research and Development, Jan.
1967
Allows out-of-order execution
Tracks when operands are available
Minimizes RAW hazards
Introduced renaming for WAW and WAR hazards
Tomasulo's Superscalar Computers






Instruction Execution Process
Three parts, arbitrary number of cycles/part
Above does not allow for speculative execution
Issue (aka Dispatch)
- If there is an empty reservation station (RS) that matches the instruction, send the instruction to the RS
with its operands from the register file and/or a note of which functional unit will send each operand
- If no RS is empty, stall until one is available
- Rename registers as appropriate
Instruction Execution Process
Execute
All branches before instruction must be resolved
Preserves exception behavior
When all operands available for an instruction, send it to functional unit
Monitor common data bus (CDB) to see if result is needed by RS entry
For non-load/store reservation stations
If multiple instructions ready, have to pick one to send to functional unit
For load/store
Compute address, then place in buffer
Loads can execute once memory is free
Stores must wait for value to be stored, and then execute

Write Back
Functional unit places on CDB
Goes to both register file and reservation stations
Use of CDB enables forwarding for RAW hazards
Also introduces a latency between result and use of a value

2. What are the key issues in implementing advanced speculation techniques? Explain
them in detail? (10 marks)

Tomasulo's w/HW Speculation
Key aspects of this design
Separate forwarding (result bypassing) from actual instruction completion
Assuming instructions are executing speculatively
Can pass results to later instructions, but prevents an instruction from performing updates that
can't be undone
Once an instruction is no longer speculative it can update the register file/memory
New step in execution sequence: instruction commit
Requires instructions to wait until they can commit; commits still happen in order

Reorder Buffer (ROB)

Instructions hang out here before committing
Provides extra registers for RS/RegFile; is a source for operands
Four fields per entry:
- Instruction type: branch, store, or register operation (ALU & load)
- Destination field: register number or store address
- Value field: holds the value to write to the register or the data for the store
- Ready field: has the instruction finished executing?
Note: the store buffers from the previous version are now in the ROB
Instruction Execution Sequence
Issue
Issue instruction if opening in RS & ROB
Send operands to RS from RegFile and/or ROB
Execute: essentially the same as before
Write Result: similar to before, but put the result into the ROB
Commit: described below

Committing Instructions
Look at the head of the ROB. Three types of instructions:
- Incorrectly predicted branch: indicates speculation was wrong; flush the ROB; execution restarts
at the proper location
- Store: update memory; remove the store from the ROB
- Everything else: update the registers; remove the instruction from the ROB

RUU Superscalar Computers


The modeling tool SimpleScalar implements an RUU-style processor, with an architecture similar to
that of speculative Tomasulo.
Register Update Unit (RUU)
Controls instruction scheduling and dispatching to functional units
Stores intermediate source values for instructions
Ensures instruction commit occurs in order!
Needs to be of appropriate size
Minimum of issue width * number of pipeline stages
Too small of an RUU can be a structural hazard!
Result bus could be a structural hazard




3. Briefly explain the various aspects that affect the exploitation of instruction level parallelism
(ILP) in a computer system. (08 Marks) Dec.2013/Jan.2014

Without any Scheduling the loop will execute as follows and takes 9 cycles for each iteration.
1 Loop: L.D F0, 0(R1) ;F0=vector element
2 stall
3 ADD.D F4, F0, F2 ;add scalar in F2
4 stall
5 stall
6 S.D F4, 0(R1) ;store result
7 DADDUI R1, R1,# -8 ;decrement pointer 8B (DW)
8 stall ;assumes cant forward to branch
9 BNEZ R1, Loop ;branch R1!=zero

We can schedule the loop to obtain only two stalls and reduce the time to 7 cycles:
L.D F0, 0(R1)
DADDUI R1, R1, #-8
ADD.D F4, F0, F2
Stall
Stall
S.D F4, 0(R1)
BNE R1, R2, Loop


4. What are the features of an ideal or perfect processor? Explain. (10 Marks)
Dec.2013/Jan.2014
The assumptions made for an ideal or perfect processor are as follows:
1. Register renaming: There are an infinite number of virtual registers available and hence all WAW
and WAR hazards are avoided and an unbounded number of instructions can begin execution
simultaneously.
2. Branch prediction: Branch prediction is perfect. All conditional branches are predicted exactly.
3. Jump prediction: All jumps (including jump register used for return and computed jumps) are
perfectly predicted. When combined with perfect branch prediction, this is equivalent to having a
processor with perfect speculation and an unbounded buffer of instructions available for execution.
4. Memory-address alias analysis: All memory addresses are known exactly and a load can be moved
before a store provided that the addresses are not identical.
Assumptions 2 and 3 eliminate all control dependences. Likewise, assumptions 1 and 4 eliminate all but
the true data dependences.
Limitations on the Window Size and Maximum Issue Count
To build a processor that even comes close
to perfect branch prediction and perfect alias analysis requires extensive dynamic analysis, since static
compile-time schemes cannot be perfect. Of course, most realistic dynamic schemes will not be perfect,
but the use of dynamic schemes will provide the ability to uncover parallelism that cannot be analyzed
by static compile-time analysis. Thus, a dynamic processor might be able to more closely match the
amount of parallelism uncovered by our ideal processor.
How close could a real dynamically scheduled, speculative processor come to the ideal processor? To
gain insight into this question, consider what the perfect processor must do:
1. Look arbitrarily far ahead to find a set of instructions to issue, predicting all branches perfectly.
2. Rename all register uses to avoid WAR and WAW hazards.
3. Determine whether there are any data dependencies among the instructions in the issue packet; if so,
rename accordingly.
4. Determine if any memory dependences exist among the issuing instructions and handle them
appropriately.
5. Provide enough replicated functional units to allow all the ready instructions to issue.

5. Consider the following three hypothetical, but not atypical, processors, which we run with the
SPEC gcc benchmark. Determine their relative performance, given that the main memory time (which sets
the miss penalty) is 50 nsec. Processor 3 hides 30% of the miss penalty by dynamic scheduling.

Dec.2013/Jan.2014
Processor   Freq (GHz)   Cache misses/instr.   Pipeline CPI   Remarks
1           4            0.005                 0.8            MIPS, static issue
2           5            0.0055                1.0            Deeply pipelined
3           2.8          0.01                  0.22           Speculative superscalar

SPEC CINT2000 base tuning parameters/notes/summary of changes:
+FDO: PASS1=-Qprof_gen PASS2=-Qprof_use
Base tuning: -QxK -Qipo_wp shlW32M.lib +FDO

shlW32M.lib is the SmartHeap library V5.0 from MicroQuill www.microquill.com
Portability flags:
176.gcc: -Dalloca=_alloca /F10000000 -Op
186.crafty: -DNT_i386
253.perlbmk: -DSPEC_CPU2000_NTOS -DPERLDLL /MT
254.gap: -DSYS_HAS_CALLOC_PROTO -DSYS_HAS_MALLOC_PROTO
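The SPEC tuning notes above do not actually answer the question, so here is a minimal worked sketch in Python, assuming the "cache misses" column of the table gives misses per instruction and that the 50 ns memory time sets the miss penalty:

processors = {
    1: {"freq_ghz": 4.0, "misses_per_instr": 0.005,  "pipeline_cpi": 0.8,  "hidden": 0.0},
    2: {"freq_ghz": 5.0, "misses_per_instr": 0.0055, "pipeline_cpi": 1.0,  "hidden": 0.0},
    3: {"freq_ghz": 2.8, "misses_per_instr": 0.01,   "pipeline_cpi": 0.22, "hidden": 0.3},
}
for pid, p in processors.items():
    penalty_cycles = 50e-9 * p["freq_ghz"] * 1e9            # 50 ns expressed in clock cycles
    effective_penalty = penalty_cycles * (1 - p["hidden"])  # processor 3 hides 30% of it
    cpi = p["pipeline_cpi"] + p["misses_per_instr"] * effective_penalty
    print(f"Processor {pid}: CPI = {cpi:.2f}, time/instruction = {cpi / p['freq_ghz']:.3f} ns")

Under these assumptions processor 3 comes out fastest (about 0.43 ns per instruction), processor 1 next (0.45 ns) and processor 2 slowest (0.475 ns).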

6. Explain AMD Opteron memory hierarchy. (08 Marks) Dec.2013/Jan.2014
Consider the data cache in the Alpha 21264 microprocessor, which is found in the Compaq AlphaServer ES40, one of
several models that use it. The cache contains 65,536 (64K) bytes of data in 64-byte blocks with two-
way set-associative placement, write back, and write allocate on a write miss.
Let's trace a cache hit through the steps. On a hit, the 21264 processor presents a 48-bit virtual address
to the cache for tag comparison, which is simultaneously translated into a 44-bit physical address. (It
also optionally supports 43-bit virtual addresses with 41-bit physical addresses.) The reason the Alpha
doesn't use all 64 bits of virtual address is that its designers don't think anyone needs that large a virtual
address space yet, and the smaller size simplifies the Alpha virtual address mapping. The designers
planned to grow the virtual address in future microprocessors.
The physical address coming into the cache is divided into two fields: the 38-bit block address and the 6-bit
block offset (64 = 2^6 and 38 + 6 = 44). The block address is further divided into an address tag and a
cache index. Step 1 shows this division. The cache index selects the tag to be tested to see if the desired
block is in the cache. The size of the index depends on cache size, block size, and set associativity.
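To make the index/tag split concrete, here is a small illustrative Python sketch using the parameters quoted above (64 KB cache, 64-byte blocks, two-way set associative, 44-bit physical address); the helper name split_address is just for this example:

CACHE_BYTES, BLOCK_BYTES, ASSOCIATIVITY = 64 * 1024, 64, 2

offset_bits = BLOCK_BYTES.bit_length() - 1                  # 6 bits of block offset
num_sets    = CACHE_BYTES // (BLOCK_BYTES * ASSOCIATIVITY)  # 512 sets
index_bits  = num_sets.bit_length() - 1                     # 9 bits of cache index
tag_bits    = 44 - index_bits - offset_bits                 # 29 bits of address tag

def split_address(paddr):
    # Split a 44-bit physical address into (tag, index, block offset).
    offset = paddr & (BLOCK_BYTES - 1)
    index  = (paddr >> offset_bits) & (num_sets - 1)
    tag    = paddr >> (offset_bits + index_bits)
    return tag, index, offset

print(offset_bits, index_bits, tag_bits)   # 6 9 29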















UNIT IV

1. Explain the basic schemes for enforcing coherence in a shared memory multiprocessor
system? (10 marks)

Cache Coherence

Unfortunately, caching shared data introduces a new problem because the view of
memory held by two different processors is through their individual caches, which, without any
additional precautions, could end up seeing two different values. If two different processors
have two different values for the same location, this difficulty is generally referred to as the cache
coherence problem.





Informally:

Any read must return the most recent write
-Too strict and too difficult to implement

Better:
Any write must eventually be seen by a read
-All writes are seen in proper order (serialization)

Two rules to ensure this:
If P writes x and then P1 reads it, P's write will be seen by P1 if the read and write
are sufficiently far apart
Writes to a single location are serialized: seen in one order
Latest write will be seen

Otherwise could see writes in illogical order (could see older value after a newer
value)

The definition contains two different aspects of the memory system:
Coherence
Consistency
A memory system is coherent if,
Program order is preserved.
Processor should not continuously read the old data value.
Write to the same location are serialized.

The above three properties are sufficient to ensure coherence. When a written value will be
seen is also important. This issue is defined by the memory consistency model. Coherence and
consistency are complementary.

Basic schemes for enforcing coherence
Coherence cache provides:
migration: a data item can be moved to a local cache and used there in a transparent
fashion.
Replication for shared data that are being simultaneously read.
Both are critical to performance in accessing shared data.
To overcome these problems, a hardware solution is adopted by introducing a protocol
to maintain coherent caches, known as cache coherence protocols.
These protocols are implemented for tracking the state of any sharing of a data block.
Two classes of Protocols
Directory based
Snooping based

Directory based
Sharing status of a block of physical memory is kept in one location called the
directory.
Directory-based coherence has slightly higher implementation overhead than
snooping.
It can scale to larger processor count.

Snooping
Every cache that has a copy of data also has a copy of the sharing status of the block.
No centralized state is kept.
Caches are also accessible via some broadcast medium (bus or switch)

Cache controllers monitor or snoop on the medium to determine whether or not they
have a copy of a block that is requested on a bus or switch access.

Snooping protocols are popular with multiprocessors whose caches are attached to a single
shared memory, as they can use the existing physical connection (the bus to memory) to interrogate
the status of the caches. A snoop-based cache coherence scheme is implemented on a shared bus, or on
any communication medium that broadcasts cache misses to all the processors.

Basic Snoopy Protocols
Write strategies
Write-through: memory is always up-to-date
Write-back: snoop in caches to find most recent copy
Write Invalidate Protocol
Multiple readers, single writer
Write to shared data: an invalidate is sent to all caches which snoop and invalidate
any copies
Read miss: further read will miss in the cache and fetch a new copy of the data.
Write Broadcast/Update Protocol (typically write through)
Write to shared data: broadcast on bus, processors snoop, and update any copies
Read miss: memory/cache is always up-to-date.
Write serialization: bus serializes requests! -Bus is single point of arbitration


Examples of Basic Snooping Protocols

Write Invalidate

Write Update




Assume neither cache initially holds X and the value of X in memory is 0

Example Protocol

Snooping coherence protocol is usually implemented by incorporating a finite-state controller
in each node

Logically, think of a separate controller associated with each cache block
That is, snooping operations or cache requests for different blocks can proceed
independently
In implementations, a single controller allows multiple operations to distinct blocks to
proceed in interleaved fashion
that is, one operation may be initiated before another is completed, even though only
one cache access or one bus access is allowed at a time.

Example Write Back Snoopy Protocol

Invalidation protocol, write-back cache
Snoops every address on bus
If it has a dirty copy of requested block, provides that block in response to the read request
and aborts the memory access
Each memory block is in one state:
Clean in all caches and up-to-date in memory (Shared)
OR Dirty in exactly one cache (Exclusive)
OR Not in any caches
Each cache block is in one state (track these):

Shared: block can be read
OR Exclusive: the cache has the only copy, it's writable, and it is dirty
OR Invalid: block contains no data (in uniprocessor cache too)
Read misses: cause all caches to snoop bus
Writes to clean blocks are treated as misses
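As an illustration only, the small Python sketch below captures the per-block state transitions of the write-back invalidate protocol described above; the class and method names are made up for the example.

INVALID, SHARED, EXCLUSIVE = "Invalid", "Shared", "Exclusive"

class Bus:
    def __init__(self):
        self.caches = []
    def broadcast(self, request, origin):
        for cache in self.caches:          # every other cache snoops the request
            if cache is not origin:
                cache.snoop(request)

class CacheBlock:
    def __init__(self, bus):
        self.state, self.bus = INVALID, bus
        bus.caches.append(self)
    def cpu_read(self):
        if self.state == INVALID:
            self.bus.broadcast("read miss", self)
            self.state = SHARED
    def cpu_write(self):
        if self.state != EXCLUSIVE:        # writes to clean blocks are treated as misses
            self.bus.broadcast("write miss", self)
        self.state = EXCLUSIVE
    def snoop(self, request):
        if request == "write miss":
            self.state = INVALID           # another cache now owns the only valid copy
        elif request == "read miss" and self.state == EXCLUSIVE:
            self.state = SHARED            # supply the dirty block, drop back to Shared

bus = Bus()
a, b = CacheBlock(bus), CacheBlock(bus)
a.cpu_write()    # A becomes Exclusive
b.cpu_read()     # B's read miss forces A back to Shared
print(a.state, b.state)   # Shared Shared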


2. Explain the directory based coherence for a distributed memory multiprocessor
system? (10 marks)

Directory Protocols
Similar to Snoopy Protocol: Three states
Shared: 1 or more processors have the block cached, and the value in memory is up-
to-date (as well as in all the caches)
Uncached: no processor has a copy of the cache block (not valid in any cache)
Exclusive: Exactly one processor has a copy of the cache block, and it has written the
block, so the memory copy is out of date
The processor is called the owner of the block
In addition to tracking the state of each cache block, we must track the processors that have
copies of the block when it is shared (usually a bit vector for each memory block: 1 if processor
has copy)
Keep it simple(r):
Writes to non-exclusive data => write miss
Processor blocks until access completes
Assume messages received and acted upon in order sent


Local node: the node where a request originates
Home node: the node where the memory location and directory entry of an address reside
Remote node: the node that has a copy of a cache block (exclusive or shared)





Comparing to snooping protocols:
identical states
stimulus is almost identical
a write to a shared cache block is treated as a write miss (without fetching the block)
cache block must be in exclusive state when it is written
any shared block must be up to date in memory
write miss: data fetch and selective invalidate operations sent by the directory controller
(broadcast in snooping protocols)

Directory Operations: Requests and Actions
Message sent to directory causes two actions:
Update the directory
More messages to satisfy request
Block is in the Uncached state: the copy in memory is the current value; the only possible requests
for that block are:
Read miss: the requesting processor is sent the data from memory and the requestor is made the only
sharing node; the state of the block is made Shared.

Write miss: requesting processor is sent the value & becomes the Sharing node. The
block is made Exclusive to indicate that the only valid copy is cached. Sharers indicates the
identity of the owner.
Block is Shared => the memory value is up-to-date:
Read miss: requesting processor is sent back the data from memory & requesting
processor is added to the sharing set.
Write miss: requesting processor is sent the value. All processors in the set Sharers are sent
invalidate messages, & Sharers is set to identity of requesting processor. The state of the block
is made Exclusive.
Block is Exclusive: current value of the block is held in the cache of the processor
identified by the set Sharers (the owner) => three possible directory requests:
Read miss: the owner processor is sent a data fetch message, causing the state of the block in the owner's
cache to transition to Shared and causes owner to send data to directory, where it is written to
memory & sent back to requesting processor.
Identity of requesting processor is added to set Sharers, which still contains the identity
of the processor that was the owner (since it still has a readable copy). State is shared.
Data write-back: owner processor is replacing the block and hence must write it back,
making memory copy up-to-date (the home directory essentially becomes the owner), the block
is now Uncached, and the Sharer set is empty.
Write miss: block has a new owner. A message is sent to old owner causing the cache
to send the value of the block to the directory from which it is sent to the requesting processor,
which becomes the new owner. Sharers are set to identity of new owner, and state of block is
made Exclusive.
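A highly simplified Python sketch of the directory actions listed above; the actual data movement and network messages are only indicated by comments, and the names are illustrative.

UNCACHED, SHARED, EXCLUSIVE = "Uncached", "Shared", "Exclusive"

class DirectoryEntry:
    # Directory state for a single memory block.
    def __init__(self):
        self.state = UNCACHED
        self.sharers = set()          # processors holding a copy (the bit vector)

    def read_miss(self, proc):
        if self.state == EXCLUSIVE:
            pass                      # fetch the block from the owner; owner goes to Shared
        self.sharers.add(proc)
        self.state = SHARED

    def write_miss(self, proc):
        if self.state == SHARED:
            pass                      # send invalidate messages to every current sharer
        elif self.state == EXCLUSIVE:
            pass                      # fetch the block from the old owner and invalidate it
        self.sharers = {proc}         # the requesting processor becomes the new owner
        self.state = EXCLUSIVE

    def write_back(self, proc):
        self.sharers.discard(proc)    # the owner replaces the block; memory is now up to date
        self.state = UNCACHED

entry = DirectoryEntry()
entry.read_miss("P0"); entry.write_miss("P1")
print(entry.state, entry.sharers)     # Exclusive {'P1'}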


3. Assume we have a computer where the clocks per instruction (CPI) is 1.0 when all
memory accesses hit the cache. The only data accesses are loads and stores, and these total
50% of the instructions. If the miss penalty is 25 clock cycles and the miss rate is
2%, how much faster would the computer be if all instructions were cache hits?

FP programs on average: AMAT= 0.68 -> 0.52 -> 0.34 -> 0.26
Int programs on average: AMAT= 0.24 -> 0.20 -> 0.19 -> 0.19
8 KB Data Cache, Direct Mapped, 32B block, 16 cycle miss, SPEC 92
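The cache figures above do not answer the question directly; a minimal worked sketch using the standard CPI-with-memory-stalls formula (and assuming every instruction also makes one instruction fetch) is:

base_cpi     = 1.0
mem_refs     = 1.0 + 0.5    # one instruction fetch plus 0.5 data accesses per instruction
miss_rate    = 0.02
miss_penalty = 25

cpi_real = base_cpi + mem_refs * miss_rate * miss_penalty   # 1.0 + 1.5 * 0.02 * 25 = 1.75
print(f"CPI with misses = {cpi_real:.2f}; a perfect cache would be {cpi_real / base_cpi:.2f}x faster")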



4. Explain in brief the types of basic cache optimization? (10 marks)

Cache Optimizations: Six basic cache optimizations
1. Larger block size to reduce miss rate:
- To reduce miss rate through spatial locality.
- Increase block size.
- Larger block size reduce compulsory misses.
- But they increase the miss penalty.
2. Bigger caches to reduce miss rate:
- capacity misses can be reduced by increasing the cache capacity.
- Increases larger hit time for larger cache memory and higher cost and power.
3. Higher associativity to reduce miss rate:
- Increase in associativity reduces conflict misses.
4. Multilevel caches to reduce penalty:
- Introduces additional level cache
- Between original cache and memory.
L1- original cache
L2- added cache.
L1 cache: - small enough
- speed matches with clock cycle time.
L2 cache: - large enough
- capture many access that would go to main memory.
Average memory access time can be redefined as
Hit time(L1) + Miss rate(L1) × (Hit time(L2) + Miss rate(L2) × Miss penalty(L2))
(a small numeric sketch of this formula appears after item 6 below)
5. Giving priority to read misses over writes to reduce miss penalty:

- write buffer is a good place to implement this optimization.
- write buffer creates hazards: read after write hazard.
6. Avoiding address translation during indexing of the cache to reduce hit time:
- Caches must cope with the translation of a virtual address from the processor to a physical
address to access memory.
- Common optimization is to use the page offset.
- That is identical in both virtual and physical addresses- to index the cache.
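A small numeric sketch of the multilevel-cache formula from item 4, with made-up (purely illustrative) parameters:

hit_time_l1, miss_rate_l1 = 1, 0.05     # times in clock cycles
hit_time_l2, miss_rate_l2 = 10, 0.20    # local miss rate of L2
miss_penalty_l2 = 100

amat = hit_time_l1 + miss_rate_l1 * (hit_time_l2 + miss_rate_l2 * miss_penalty_l2)
print(amat)   # 1 + 0.05 * (10 + 0.20 * 100) = 2.5 cycles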

5. Which are the major categories of the advanced optimizations of cache performance?
Explain any one in detail. (10 marks)
Advanced Cache Optimizations
Reducing hit time
1. Small and simple caches
2. Way prediction
3. Trace caches
Increasing cache bandwidth
4. Pipelined caches
5. Multibanked caches
6. Nonblocking caches
Reducing Miss Penalty
7. Critical word first
8. Merging write buffers
Reducing Miss Rate
9. Compiler optimizations
Reducing miss penalty or miss rate via parallelism
10. Hardware prefetching
11. Compiler prefetching

Merging Write Buffer to Reduce Miss Penalty
Write buffer to allow processor to continue while waiting to write to memory
If the buffer contains modified blocks, the addresses can be checked to see if the address of the new data
matches the address of a valid write buffer entry; if so, the new data are combined with that entry.
This increases the effective size of a write for write-through caches when writes go to sequential words or bytes,
since multiword writes are more efficient to memory.
The Sun T1 (Niagara) processor, among many others, uses write merging
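A rough Python sketch of the merging behaviour described above; the buffer layout and names are invented for the illustration.

BLOCK_BYTES = 16

class WriteBuffer:
    def __init__(self, num_entries=4):
        self.entries = []                      # each entry: (block address, {offset: byte})
        self.num_entries = num_entries

    def write(self, address, data_byte):
        block  = address & ~(BLOCK_BYTES - 1)
        offset = address & (BLOCK_BYTES - 1)
        for blk, valid_bytes in self.entries:
            if blk == block:                      # address matches a valid entry:
                valid_bytes[offset] = data_byte   # merge the new data into that entry
                return
        if len(self.entries) == self.num_entries:
            self.entries.pop(0)                # buffer full: (stall and) drain oldest entry
        self.entries.append((block, {offset: data_byte}))

wb = WriteBuffer()
for addr in (0x100, 0x101, 0x102, 0x103):      # four sequential byte writes
    wb.write(addr, 0xAB)
print(len(wb.entries))                         # 1 entry: the writes were merged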





6. Explain in detail the architecture support for protecting processes from each other via
virtual memory (10 marks)
Virtual Memory and Virtual Machines
Slide sources: based on Computer Architecture by Hennessy/Patterson,
supplemented from various freely downloadable sources.
Security and Privacy
Innovations in Computer Architecture and System software
Protection through Virtual Memory
Protection from Virtual Machines
Architectural requirements
Performance
Protection via Virtual Memory
Processes
Running program
Environment (state) needed to continue running it
Protect Processes from each other
Page-based virtual memory, including a TLB which caches page table entries.
Example: segmentation and paging in the 80x86
Processes share hardware without interfering with each other
Provide User Process and Kernel Process
Readable portion of Processor state:
User supervisor mode bit

Exception enable/disable bit
Memory protection information
System call to transfer to supervisor mode
Return like normal subroutine to user mode
Mechanism to limit memory access
Memory protection
Virtual Memory
Restriction on each page entry in page table
Read, write, and execute privileges
Only OS can update page table
TLB entries also have protection field
Bugs in OS
Lead to compromising security
Bugs likely due to huge size of OS code
Protection via Virtual Machines
Virtualization
Goal:
Run multiple instances of different OS on the same hardware
Present a transparent view of one or more environments (M-to-N mapping of M real
resources, N virtual resources)
Protection via Virtual Machines
Virtualization- cont.
Challenges:
Have to split all resources (processor, memory, hard drive, graphics card, networking
card etc.) among the different OS -> virtualize the resources
The OS can not be aware that it is using virtual resources instead of real resources

Problems with virtualization
Two components when using virtualization:
Virtual Machine Monitor (VMM)
Virtual Machine(s) (VM)
Para-virtualization:
Operating System has been modified in order to run as a VM
Fully Virtualized:
No modification required of an OS to run as a VM

Virtual Machine Monitor (hypervisor)
Isolates the state of each guest OS from each other
Protects itself from guest software
Determines how to map virtual resources to physical resources

Access to privileged state
Address translation
I/O
Exceptions and interrupts
Relatively small code (compared to an OS)
VMM must run in a higher privilege mode than guest OS

Managing Virtual Memory
Virtual memory offers many of the features required for hardware virtualization
Separates the physical memory onto multiple processes
Each process thinks it has a linear address space of full size
Processor holds a page table translating virtual addresses used by a process and the according
physical memory
Additional information restricts what processes can do:
a process may be prevented from reading a page of another process, or
allowed to read but not modify a memory page, or
prevented from interpreting data in the memory page as instructions and executing them.
Virtual Memory management thus requires
Mechanisms to limit memory access to protected memory
At least two modes of execution for instructions
Privileged mode: an instruction is allowed to do whatever it wants -> kernel mode for
OS
Non-privileged mode: user-level processes
Intel x86 Architecture: processor supports four levels
Level 0 used by OS
Level 3 used by regular applications
Provide mechanisms to go from non-privileged mode to privileged mode -> system call
Provide a portion of processor state that a user process can read but not modify
E.g. memory protection information
Each guest OS maintains its page tables to do the mapping from virtual address to physical
address
Simplest solution: the VMM holds an additional table which maps the physical address
of a guest OS onto the machine address
Introduces a third level of redirection for every memory access
Alternative solution: VMM maintains a shadow page table of each guest OS
Copy of the page table of the OS
Page tables still works with regular physical addresses
Only modifications to the page table are intercepted by the VMM
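A toy Python sketch of the extra mapping level a VMM maintains (guest "physical" pages to machine pages); all names and the page size are illustrative.

PAGE = 4096

class VMM:
    def __init__(self):
        self.guest_to_machine = {}     # (vm id, guest-physical page) -> machine page

    def map_guest_page(self, vm_id, guest_ppn, machine_ppn):
        self.guest_to_machine[(vm_id, guest_ppn)] = machine_ppn

    def translate(self, vm_id, guest_paddr):
        # The additional translation step: guest-physical address -> machine address.
        ppn, offset = divmod(guest_paddr, PAGE)
        return self.guest_to_machine[(vm_id, ppn)] * PAGE + offset

vmm = VMM()
vmm.map_guest_page("VM1", guest_ppn=2, machine_ppn=7)
print(hex(vmm.translate("VM1", 2 * PAGE + 0x10)))   # 0x7010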


7. Define simultaneous multithreading (SMT). Discuss the design challenges in SMT and the
potential performance advantages from SMT. Give suitable illustrations. (12 Marks)
Dec.2013/Jan.2014
Thread-level and instruction-level parallelism exploit two different kinds of parallel structure in a
program. One natural question to ask is whether it is possible for a processor oriented at instruction level
parallelism to exploit thread level parallelism. The motivation for this question comes from the
observation that a data path designed to exploit higher amounts of ILP, will find that functional units are
often idle because of either stalls or dependences in the code. Could the parallelism among threads to be
used a source of independent instructions that might be used to keep the processor busy during stalls?
Could this thread-level parallelism be used to employ the functional units that would otherwise lie idle
when insufficient ILP exists?
Multithreading, and a variant called simultaneous multithreading, take advantage of these insights by
using thread level parallelism either as the primary form of parallelism exploitation (for example, on top
of a simple pipelined processor) or as a method that works in conjunction with ILP mechanisms. In both
cases, multiple threads are executed within a single processor by duplicating the thread-specific
state (program counter, registers, and so on) and sharing the other processor resources by multiplexing
them among the threads. Since multithreading is a method for exploiting thread level parallelism, it
complements rather than replaces the ILP techniques described earlier.

8. Explain the directory based cache coherence protocol in a distributed memory multi-processor,
indicating the state transition diagram explicitly. (10 Marks) Dec.2013/Jan.2014
Directory Protocols
Similar to Snoopy Protocol: Three states
Shared: 1 or more processors have the block cached, and the value in memory is up-to-date
(as well as in all the caches)
Uncached: no processor has a copy of the cache block (not valid in any cache)
Exclusive: Exactly one processor has a copy of the cache block, and it has written the block,
so the memory copy is out of date
The processor is called the owner of the block
In addition to tracking the state of each cache block, we must track the processors that have copies of
the block when it is shared (usually a bit vector for each memory block: 1 if processor has copy)
Keep it simple(r):
Writes to non-exclusive data => write miss
Processor blocks until access completes
Assume messages received and acted upon in order sent



Local node: the node where a request originates
Home node: the node where the memory location and directory entry of an address reside
Remote node: the node that has a copy of a cache block (exclusive or shared)





9. What do you understand by memory consistency? Explain. Also, discuss how relaxed
consistency models allow reads and writes to complete out of order. (10 Marks)
Dec.2013/Jan.2014
Cache coherence ensures that multiple processors see a consistent view of memory. It does not answer
the question of how consistent the view of memory must be. By "how consistent" we mean: when must
a processor see a value that has been updated by another processor? Since processors communicate
through shared variables (used both for data values and for synchronization), the question boils down to
this: In what order must a processor observe the data writes of another processor? Since the only way to
observe the writes of another processor is through reads, the question becomes, what properties must
be enforced among reads and writes to different locations by different processors?
Although the question of how consistent memory must be seems simple, it is remarkably complicated, as we
can see with a simple example. Here are two code segments from processes P1 and P2, shown side by
side:
P1: A = 0; P2: B = 0;
A = 1; B = 1;
L1: if (B == 0) ... L2: if (A == 0)...
Assume that the processes are running on different processors, and that locations A and B are originally
cached by both processors with the initial value of 0. If writes always take immediate effect and are
immediately seen by other processors, it will be impossible for both if-statements (labeled L1 and L2) to
evaluate their conditions as true, since reaching the if-statement means that either A or B must have
been assigned the value 1. But suppose the write invalidate is delayed, and the processor is allowed to
continue during this delay; then it is possible that both P1 and P2 have not seen the invalidations for B
and A (respectively) before they attempt to read the values.

10. Briefly explain the eleven advanced optimizations of cache performance. (11 Marks)
Dec.2013/Jan.2014
Cache Optimizations: Six basic cache optimizations
1. Larger block size to reduce miss rate:
- To reduce miss rate through spatial locality.
- Increase block size.
- Larger block size reduce compulsory misses.
- But they increase the miss penalty.

2. Bigger caches to reduce miss rate:
- capacity misses can be reduced by increasing the cache capacity.
- Increases larger hit time for larger cache memory and higher cost and power.
3. Higher associativity to reduce miss rate:
- Increase in associativity reduces conflict misses.
4. Multilevel caches to reduce penalty:
- Introduces additional level cache
- Between original cache and memory.
L1- original cache
L2- added cache.
L1 cache: - small enough
- speed matches with clock cycle time.
L2 cache: - large enough
- capture many access that would go to main memory.
Average memory access time can be redefined as
Hit time(L1) + Miss rate(L1) × (Hit time(L2) + Miss rate(L2) × Miss penalty(L2))
5. Giving priority to read misses over writes to reduce miss penalty:
- write buffer is a good place to implement this optimization.
- write buffer creates hazards: read after write hazard.
6. Avoiding address translation during indexing of the cache to reduce hit time:
- Caches must cope with the translation of a virtual address from the processor to a physical address to
access memory.
- Common optimization is to use the page offset.
- That is identical in both virtual and physical addresses- to index the cache.

11. What is RAID technology? Describe the different RAID levels, indicating their fault
tolerance, overhead and the salient features. (09 Marks) Dec.2013/Jan.2014
Although a disk array would have more faults than a smaller number of larger disks when each
disk has the same reliability, dependability can be improved by adding redundant disks to the array to
tolerate faults. That is, if a single disk fails, the lost information can be reconstructed from redundant
information. The only danger is in having another disk fail between the time the first disk fails and the
time it is replaced (termed mean time to repair, or MTTR). Since the mean time to failure (MTTF) of
disks is tens of years, and the MTTR is measured in hours, redundancy can make the measured
reliability of 100 disks much higher than that of a single disk. These systems have become known by the

acronym RAID, standing originally for redundant array of inexpensive disks, although some have renamed
it to redundant array of independent disks.
Another issue in the design of RAID systems is decreasing
the mean time to repair. This reduction is typically done by adding hot spares to the system: extra disks
that are not used in normal operation. When a failure occurs on an active disk in a RAID, an idle hot
spare is first pressed into service. The data missing from the failed disk is then reconstructed onto the
hot spare using the redundant data from the other RAID disks. If this process is performed
automatically, MTTR is significantly reduced because waiting for the operator in the repair process is
no longer the pacing item

No Redundancy (RAID 0)
This notation refers to a disk array in which data is striped but there is no redundancy to tolerate disk
failure. Striping across a set of disks makes the collection appear to software as a single large disk,
which simplifies storage management. It also improves performance for large accesses, since many
disks
can operate at once. Video editing systems, for example, often stripe their data. RAID 0 is something of a
misnomer, as there is no redundancy; it is not in the original RAID taxonomy, and striping predates
RAID. However, RAID levels are often left to the operator to set when creating a storage system, and
RAID 0 is often listed as one of the options. Hence, the term RAID 0 has become widely used.
Mirroring (RAID 1)
This traditional scheme for tolerating disk failure, called mirroring or shadowing, uses twice as many
disks as does RAID 0. Whenever data is written to one disk, that data is also written to a redundant disk,
so that there are always two copies of the information. If a disk fails, the system just goes to the
mirror to get the desired information. Mirroring is the most expensive RAID solution, since it requires
the most disks. One issue is how mirroring interacts with striping. Suppose you had, say, four disks
worth of data to store and eight physical disks to use. Would you create four pairs of disks, each
organized as RAID 1, and then stripe data across the four RAID 1 pairs? Alternatively, would you
create two sets of four disks, each organized as RAID 0, and then mirror writes to both RAID 0 sets?
The RAID terminology has evolved to call the former RAID 1+0 or RAID 10 ("striped mirrors") and
the latter RAID 0+1 or RAID 01 ("mirrored stripes").
Bit-Interleaved Parity (RAID 3)
The cost of higher availability can be reduced to 1/N, where N is the number of disks in a protection
group. Rather than have a complete copy of the original data for each disk, we need only add enough
redundant information to restore the lost information on a failure. Reads or writes go to all disks in the
group, with one extra disk to hold the check information in case there is a failure. RAID 3 is popular

in applications with large data sets, such as multimedia and some scientific codes.
Block-Interleaved Parity and Distributed Block-Interleaved Parity (RAID 4 and RAID 5)
Both these levels use the same ratio of data disks and check disks as RAID 3, but they access data
differently. The parity is stored as blocks and associated with a set of data blocks. In RAID 3, every
access went to all disks. Some applications would prefer to do smaller accesses, allowing independent
accesses to occur in parallel. That is the purpose of the next RAID levels. Since error-detection
information in each sector is checked on reads to see if data is correct, such small reads to each disk
can occur independently as long as the minimum access is one sector.
P+Q redundancy (RAID 6)
Parity-based schemes protect against a single, self-identifying failure. When a single failure is not
sufficient, parity can be generalized to have a second calculation over the data and another check disk of
information. Yet another parity block is added to allow recovery from a second failure. Thus, the
storage overhead is twice that of RAID 5.
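A small illustrative sketch of the parity idea behind RAID 3/4/5: the check disk holds the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. The block contents below are made up.

from functools import reduce

def parity(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"\x01\x02", b"\x10\x20", b"\xAA\x55"]    # three data disks
check = parity(data)                              # stored on the check disk

lost = 1                                          # pretend disk 1 fails
survivors = [blk for i, blk in enumerate(data) if i != lost] + [check]
rebuilt = parity(survivors)
assert rebuilt == data[lost]                      # the lost block is recovered
print(rebuilt)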

12. What is a queuing model? Explain. Consider an I/O system with a single disk getting an
average 50 I/O requests per second. Assume the average time for a disk to service an I/O request
is 10 msec. What is the utilization of the I/O system? (10 Marks) Dec.2013/Jan.2014
With I/O systems, however, we also have a mathematical tool to guide I/O design that is a little more
work and much more accurate than best case analysis, but much less work than full scale simulation.
Because of the probabilistic nature of I/O events and because of sharing of I/O resources, we can give a
set of simple theorems that will help calculate response time and throughput of an entire I/O system.
This helpful field is called queuing theory. This leads us to Little's Law, which relates the average
number of tasks in the system, the average arrival rate of new tasks, and the average time to perform a
task:
Little's Law applies to any system in equilibrium, as long as nothing inside the black box is creating
new tasks or destroying them. Note that the arrival rate and the response time must use the same time
unit; inconsistency in time units is a common cause of errors.
Let's try to derive Little's Law. Assume we observe a system for Timeobserve minutes. During that
observation, we record how long it took each task to be serviced, and then sum those times. The number
of tasks completed during Timeobserve is Numbertask, and the sum of waiting times is
Timeaccumulated. Then:
Mean number of tasks in system = Timeaccumulated / Timeobserve
Mean response time = Timeaccumulated / Numbertask
Arrival rate = Numbertask / Timeobserve
If we substitute these three definitions in the first formula and simplify, we get Little's Law:
Mean number of tasks in system = Arrival rate × Mean response time


Little's Law and a series of definitions lead to several useful equations:
Timeserver: average time to service a task; the average service rate is 1/Timeserver,
traditionally represented by the symbol μ in many queueing texts.
Timequeue: average time per task in the queue.
Timesystem: average time per task in the system, or the response time; the sum of
Timequeue and Timeserver.
Arrival rate: average number of arriving tasks per second, traditionally represented by the symbol λ in many
queueing texts.
Lengthserver: average number of tasks in service.
Lengthqueue: average length of the queue.
Lengthsystem: average number of tasks in the system; the sum of Lengthqueue and Lengthserver.
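For the numeric part of the question, a minimal worked sketch (server utilization = arrival rate × average service time):

arrival_rate = 50         # I/O requests per second
time_server  = 10e-3      # 10 ms average disk service time, in seconds

utilization = arrival_rate * time_server
print(f"Server utilization = {utilization:.2f}, i.e. {utilization:.0%}")   # 0.50, i.e. 50%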

13. Explain how the following techniques are useful in uncovering the parallelism among
instructions by creating longer sequences of straight line code:
i) Software pipelining ii) Trace scheduling (10 Marks) Dec.2013/Jan.2014

Software Pipelining: Symbolic Loop Unrolling
We have already seen that one compiler technique, loop unrolling, is useful to uncover parallelism
among instructions by creating longer sequences of straight-line code. There are two other important
techniques that have been developed for this purpose: software pipelining and trace scheduling.
Software pipelining is a technique for reorganizing loops such that each iteration in the software-
pipelined code is made from instructions chosen from different iterations of the original loop. This
approach is most easily understood by looking at the scheduled code for the superscalar version of
MIPS, which appeared
in Figure 4.2 on page 231. The scheduler in this example essentially interleaves instructions from
different loop iterations, so as to separate the dependent instructions that occur within a single loop
iteration. By choosing instructions from different iterations, dependent computations are separated from
one another by an entire loop body, increasing the possibility that the unrolled loop can be scheduled
without stalls.
A software-pipelined loop interleaves instructions from different iterations without unrolling the loop.
This technique is the software counterpart to what Tomasulo's algorithm does in hardware. The
software-pipelined loop for the earlier example would contain one load, one add, and one store, each

from a different iteration. There is also some start-up code that is needed before the loop begins as well
as code to finish up after the loop is completed.

Trace Scheduling: Focusing on the Critical Path
Trace scheduling is useful for processors with a large number of issues per clock, where conditional or
predicated execution is inappropriate or unsupported, and where simple loop unrolling may not be
sufficient by itself to uncover enough ILP to keep the processor busy. Trace scheduling is a way to
organize the global code motion process, so as to simplify the code scheduling by incurring the costs of
possible code motion on the less frequent paths. Because it can generate significant overheads on the
designated infrequent path, it is best used where profile information indicates significant differences in
frequency between different paths and where the profile information is highly indicative of program
behavior independent of the input. Of course, this limits its effective applicability to certain classes of
programs. There are two steps to trace scheduling. The first step, called trace selection, tries to find a
likely sequence of basic blocks whose operations will be put together into a smaller number of
instructions; this sequence is called a trace. Loop unrolling is used to generate long traces, since loop
branches are taken with high probability. Additionally, by using static branch prediction, other
conditional
branches are also chosen as taken or not taken, so that the resultant trace is a straight-line sequence
resulting from concatenating many basic blocks.

14. Explain the characteristics of transaction processing benchmarks. (08 Marks)
Dec.2013/Jan.2014
I/O benchmarks offer another perspective on the response time vs. throughput trade-off. The two
reporting approaches report maximum throughput given either that 90% of response times must be less
than a limit or that the average response time must be less than a limit.
Transaction Processing Benchmarks
Transaction processing (TP, or OLTP for on-line transaction processing) is chiefly concerned with I/O
rate: the number of disk accesses per second, as opposed to data rate, measured as bytes of data per
second. TP generally involves changes to a large body of shared information from many terminals, with
the TP system guaranteeing proper behavior on a failure. Suppose, for example, a bank's computer
fails when a customer tries to withdraw money. The TP system would guarantee that the account is
debited if the customer received the money and that the account is unchanged if the money was not
received. Airline reservations systems as well as banks are traditional customers for TP.

TPC-C uses a database to simulate an order-entry environment of a wholesale supplier, including
entering and delivering orders, recording payments, checking the status of orders, and monitoring the
level of
stock at the warehouses.

15. Explain the relation between faults, errors and failures. (04 Marks) Dec.2013/Jan.2014

To clarify, the relation between faults, errors, and failures is: A fault creates one or more latent errors.
The properties of errors are
a) a latent error becomes effective once activated;
b) an error may cycle between its latent and effective states;
c) an effective error often propagates from one component to another, thereby creating new errors.
Thus, an effective error is either a formerly latent error in that component or it has propagated from
another error in that component or from elsewhere.
A component failure occurs when the error affects the delivered service. These properties are recursive,
and apply to any component in the system.

16. Explain the intel IA 64 architecture, in detail. (12 Marks) Dec.2013/Jan.2014
The Intel IA-64 Instruction Set Architecture
The IA-64 is a RISC-style, register-register instruction set, but with many novel features designed to
support compiler-based exploitation of ILP. Our focus here is on the unique aspects of the IA-64 ISA.
Most of these aspects have been discussed already in this chapter, including predication, compiler-based
parallelism detection, and support for memory reference speculation.
The IA-64 Register Model
The components of the IA-64 register state are:
128 64-bit general-purpose registers, which as we will see shortly are actually 65 bits wide;
128 82-bit floating point registers, which provides two extra exponent bits over the standard 80-bit
IEEE format,
64 1-bit predicate registers,
8 64-bit branch registers, which are used for indirect branches, and
a variety of registers used for system control, memory mapping, performance counters, and
communication with the OS.
The integer registers are configured to help accelerate procedure calls using a register stack mechanism
similar to that developed in the Berkeley RISC-I processor and used in the SPARC architecture.

Registers 0-31 are always accessible and are addressed as 0-31. Registers 32-127 are used as a register
stack, and each procedure is allocated a set of registers (from 0 to 96) for its use.

17. What are the basic schemes for enforcing coherency in multiprocessor systems? (10 Marks)
I/O and Consistency of Cached Data Dec.2013/Jan.2014
Because of caches, data can be found in memory and in the cache. As long as the CPU is the sole device
changing or reading the data and the cache stands between the CPU and memory, there is little danger in
the CPU seeing the old or stale copy. I/O devices give the opportunity for other devices to cause copies
to be inconsistent or for other devices to read the stale copies. generally referred to as the cache-
coherency problem. The question is this: Where does the I/O occur in the computerbetween the I/O
device and the cache or between the I/O device and main memory? If input puts data into the cache and
output reads data from the cache, both I/O and the CPU see the same data, and the problem is solved.
The difficulty in this approach is that it interferes with the CPU. I/O competing with the CPU for cache
access will cause the CPU to stall for I/O. Input may also interfere with the cache by displacing some
information with new data that is unlikely to be accessed soon. For example, on a page fault the CPU
may need to access a few words in a page, but a program is not likely to access every word of the page
if it were loaded into the
cache. Given the integration of caches onto the same integrated circuit, it is also difficult for that
interface to be visible. The goal for the I/O system in a computer with a cache is to prevent the stale-data
problem while interfering with the CPU as little as possible. Many systems,

18. Write short notes on:

Queuing theory Dec.2013/Jan.2014

With I/O systems, however, we also have a mathematical tool to guide I/O design that is a little more
work and much more accurate than best case analysis, but much less work than full scale simulation.
Because of the probabilistic nature of I/O events and because of sharing of I/O resources, we can give a
set of simple theorems that will help calculate response time and throughput of an entire I/O system.
This helpful field is called queuing theory. This leads us to Little's Law, which relates the average
number of tasks in the system, the average arrival rate of new tasks, and the average time to perform a
task:
Little's Law applies to any system in equilibrium, as long as nothing inside the black box is creating
new tasks or destroying them. Note that the arrival rate and the response time must use the same time
unit; inconsistency in time units is a common cause of errors.

Let's try to derive Little's Law. Assume we observe a system for Timeobserve minutes. During that
observation, we record how long it took each task to be serviced, and then sum those times. The number
of tasks completed during Timeobserve is Numbertask, and the sum of waiting times is
Timeaccumulated. Then:
Mean number of tasks in system = Timeaccumulated / Timeobserve
Mean response time = Timeaccumulated / Numbertask
Arrival rate = Numbertask / Timeobserve
If we substitute these three definitions in the first formula and simplify, we get Little's Law:
Mean number of tasks in system = Arrival rate × Mean response time

Little's Law and a series of definitions lead to several useful equations:
Timeserver: average time to service a task; the average service rate is 1/Timeserver, traditionally
represented by the symbol μ in many queueing texts.
Timequeue: average time per task in the queue.
Timesystem: average time per task in the system, or the response time; the sum of Timequeue and
Timeserver.
Arrival rate: average number of arriving tasks per second, traditionally represented by the symbol λ in many
queueing texts.
Lengthserver: average number of tasks in service.
Lengthqueue: average length of the queue.
Lengthsystem: average number of tasks in the system; the sum of Lengthqueue and Lengthserver.
















UNIT V

1. Explain in detail the hardware support for preserving exception behavior during
speculation
H/W Support: Conditional Execution
Also known as Predicated Execution
Enhancement to instruction set
Can be used to eliminate branches
All control dependences are converted to data dependences
Instruction refers to a condition
Evaluated as part of the execution
True?
Executed normally
False?
Execution continues as if the instruction were a no-op
Example:
-Conditional move between registers

Example
if (A==0)
S = T;
Straightforward Code
BNEZ R1, L;
ADDU R2, R3, R0
L:
Conditional Code
CMOVZ R2, R3, R1
Annulled if R1 is not 0
Conditional Instruction
Can convert control to data dependence
In vector computing, it's called if-conversion.
Traditionally, in a pipelined system
Dependence has to be resolved closer to front of pipeline
For conditional execution
-Dependence is resolved at end of pipeline, closer to the register write
Another example
A = abs(B)
if (B < 0)
A = -B;
else

A = B;
Two conditional moves
One unconditional and one conditional move
The branch condition has moved into the instruction
-Control dependence becomes data dependence

Limitations of Conditional Moves
Conditional moves are the simplest form of predicated instructions
Useful for short sequences
For large code, this can be inefficient
Introduces many conditional moves
Some architectures support full predication
All instructions, not just moves
Very useful in global scheduling
Can handle non loop branches nicely
E.g., the whole if portion can be predicated if the frequent path is not taken


Assume: Two issues, one to ALU and one to memory; or branch by itself
Wastes a memory operation slot in second cycle
Can incur a data dependence stall if branch is not taken
-R9 depends on R8


Predicated Execution
Assume: LWC is predicated load and loads if third operand is not 0




One instruction issue slot is eliminated
On mispredicted branch, predicated instruction will not have any effect
If sequence following the branch is short, the entire block of the code can be predicated

Some Complications
Exception Behavior
Must not generate exception if the predicate is false
If R10 is zero in the previous example
LW R8, 0(R10) can cause a protection fault
If condition is satisfied
A page fault can still occur
Biggest issue: deciding when to annul an instruction
Can be done during issue
Early in pipeline
Value of condition must be known early, can induce stalls
Can be done before commit
Modern processors do this
Annulled instructions will use functional resources
Register forwarding and such can complicate implementation


2. Explain the prediction and Speculation support provided in IA64? (10 marks)

IA-64 Pipeline Features
Branch Prediction
Predicate Registers allow branches to be turned on or off
Compiler can provide branch prediction hints
Register Rotation
Allows faster loop execution in parallel
Predication Controls Pipeline Stages
Cache Features

L1 Cache
4 way associative
16Kb Instruction
16Kb Data
L2 Cache
Itanium
6 way associative
96 Kb
Itanium2
8 way associative
256 Kb Initially
256Kb Data and 1Mb Instruction on Montvale!
Cache Features
L3 Cache
Itanium
4 way associative
Accessible through FSB
2-4Mb
Itanium2
2 4 way associative
On Die
3Mb
Up to 24Mb on Montvale chips(12Mb/core)!
Register Specification
128 65-bit General Purpose Registers
128 82-bit Floating Point Registers
128 64-bit Application Registers
8 64-bit Branch Registers
64 1-bit Predicate Registers
Register Model
128 General and Floating Point Registers
32 always available, 96 on stack
As functions are called, compiler allocates a specific number of local and output
Registers to use in the function by using register allocation instruction Alloc.
Programs rename registers to start from 32 to 127.
Register Stack Engine (RSE) automatically saves/restores stack to memory when
needed
RSE may be designed to utilize unused memory bandwidth to perform register spill and
fill operations in the background



On function call, machine shifts register window such that previous output registers become
new locals starting at r32

Instruction Encoding
Each instruction includes the opcode and three operands
Each instruction holds the identifier for a corresponding Predicate Register
Each bundle contains 3 independent instructions
Each instruction is 41 bits wide
Each bundle also holds a 5 bit template field

Distributing Responsibility
ILP: instruction groups
Control flow parallelism: parallel comparisons, multiway branches
Influencing dynamic events:
Provides an extensive set of hints that the compiler uses to tell the hardware about likely branch
behavior (taken or not taken, amount to fetch at branch target) and memory operations (in what
level of the memory hierarchy to cache data).





Use predicates to eliminate branches, move instructions across branches
Conditional execution of an instruction based on predicate register (64 1-bit predicate
registers)
Predicates are set by compare instructions
Most instructions can be predicated; each instruction encoding contains a predicate field
If predicate is true, the instruction updates the computation state; otherwise, it behaves
like a nop

Scheduling and Speculation
Basic block: code with single entry and exit, exit point can be multiway branch
Improve ILP by statically moving long-latency code blocks ahead.
A path is a frequent execution path.
Schedule for control paths
Because of branches and loops, only small percentage of code is executed regularly
Analyze dependences in blocks and paths
Compiler can analyze more efficiently - more time, memory, larger view of the program
Compiler can locate and optimize the commonly executed blocks





Control speculation
Not all the branches can be removed using predication.
Loads have longer latency than most instructions and tend to start time-critical
chains of instructions
Constraints on code motion on loads limit parallelism
Non-EPIC architectures constrain motion of load instruction
IA-64: Speculative loads, can safely schedule load instruction before one or
more prior branches

Control Speculation
Exceptions are handled by setting NaT (Not a Thing) in target register
Check instruction-branch to fix-up code if NaT flag set
Fix-up code: generated by compiler, handles exceptions
NaT bit propagates in execution (almost all IA-64 instructions)
NaT propagation reduces required check points

Speculative Load

Load instruction (ld.s) can be moved outside of a basic block even if branch target is not
known
Speculative loads does not produce exception - it sets the NaT
Check instruction (chk.s) will jump to fix-up code if NaT is set
Data Speculation
The compiler may not be able to determine the location in memory being referenced
(pointers)
Want to move calculations ahead of a possible memory dependency. Traditionally, given
a store followed by a load, if the compiler cannot determine if the addresses will be
equal, the load cannot be moved ahead of the store.
IA-64: allows compiler to schedule a load before one or more stores
Use advance load (ld.a) and check (chk.a) to implement
ALAT (Advanced Load Address Table) records target register, memory address
accessed, and access size

Data Speculation
1. Allows for loads to be moved ahead of stores even if the compiler is unsure if addresses are
the same
2. A speculative load generates an entry in the ALAT
3. A store removes every entry in the ALAT that have the same address
4. Check instruction will branch to fix-up if the given address is not in the ALAT







Use address field as the key for comparison
If an address cannot be found, run recovery code
The ALAT is a smaller and simpler implementation than the equivalent structures for superscalars.
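A toy Python sketch of the ALAT bookkeeping just described (an advanced load inserts an entry, a store to the same address removes it, and the check looks the entry up); the names are illustrative only.

class ALAT:
    def __init__(self):
        self.entries = {}                       # memory address -> (target register, access size)

    def advanced_load(self, reg, address, size):
        self.entries[address] = (reg, size)     # ld.a records the speculative access

    def store(self, address):
        self.entries.pop(address, None)         # a store to the same address kills the entry

    def check(self, address):
        # chk.a: True means the speculated load is still valid; otherwise run recovery code.
        return address in self.entries

alat = ALAT()
alat.advanced_load("r4", 0x1000, 8)
alat.store(0x1000)
print(alat.check(0x1000))   # False -> branch to the compiler-generated fix-up code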




3. Two primary means of communicating data in a large-scale multiprocessor are message
passing and shared memory. Enlist the advantages and disadvantages of the above two
mechanisms. (08 Marks) Dec.2013/Jan.2014
For a multiprocessor with multiple address spaces, communication of data is done by explicitly
passing messages among the processors. Therefore, these multiprocessors are often called message
passing multiprocessors.
In message passing multiprocessors, communication occurs by sending messages that request action or
deliver data, just as with network protocols. If one processor wants to access or operate on data in a
remote memory, it can send a message to request the data or to perform some operation on the data. In
such cases, the message can be thought of as a remote procedure call (RPC). When the destination
processor receives the message, either by polling for it or via an interrupt, it performs the operation or
access on behalf of the remote processor and returns the result with a reply message. This type of
message passing is also called synchronous, since the initiating processor sends a request and waits until
the reply is returned before continuing. Software systems have been constructed to encapsulate the
details of
sending and receiving messages, including passing complex arguments or return values, presenting a
clean RPC facility to the programmer.
The second major challenge in parallel processing involves the large latency of remote access in a
parallel processor. In existing shared-memory multiprocessors, communication of data between
processors may cost anywhere from 100 clock cycles to over 1,000 clock cycles, depending on the
communication mechanism, the type of interconnection network, and the scale of the multiprocessor.

4. With suitable illustrations, briefly explain the performance of a scientific workload on a:
i) Symmetric shared-memory multiprocessor
ii) Distributed-memory multiprocessor. (12 Marks) Dec.2013/Jan.2014
Cache Coherence
Unfortunately, caching shared data introduces a new problem because the view of memory held
by two different processors is through their individual caches, which, without any additional
precautions, could end up seeing two different values. If two different processors have two different
values for the same location, this difficulty is generally referred to as the cache coherence problem.




Informally:

Any read must return the most recent write
-Too strict and too difficult to implement
Better:
Any write must eventually be seen by a read
-All writes are seen in proper order (serialization)

Two rules to ensure this:
If P writes x and then P1 reads it, P's write will be seen by P1 if the read and write are
sufficiently far apart
Writes to a single location are serialized: seen in one order
Latest write will be seen
Otherwise could see writes in illogical order (could see older value after a newer value)

The definition contains two different aspects of the memory system:
Coherence
Consistency
A memory system is coherent if,
Program order is preserved.
Processor should not continuously read the old data value.
Write to the same location are serialized.


The above three properties are sufficient to ensure coherence. When a written value will be seen is also
important; this issue is defined by the memory consistency model. Coherence and consistency are
complementary.

Directory Protocols
Similar to Snoopy Protocol: Three states
Shared: 1 or more processors have the block cached, and the value in memory is up-to-date
(as well as in all the caches)
Uncached: no processor has a copy of the cache block (not valid in any cache)
Exclusive: Exactly one processor has a copy of the cache block, and it has written the block,
so the memory copy is out of date
The processor is called the owner of the block
In addition to tracking the state of each cache block, we must track the processors that have copies of
the block when it is shared (usually a bit vector for each memory block: 1 if processor has copy)
Keep it simple(r):
Writes to non-exclusive data => write miss
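
A minimal C sketch of what one such directory entry might look like; the 64-processor limit and the names used here are assumptions for illustration (the text only specifies a state plus a presence bit vector):

#include <stdint.h>
#include <stdio.h>

/* One directory entry per memory block, as described above: a state field
   plus a presence bit vector with one bit per processor. */
enum dir_state { UNCACHED, SHARED, EXCLUSIVE };

struct dir_entry {
    enum dir_state state;    /* Uncached, Shared, or Exclusive */
    uint64_t       sharers;  /* bit i set => processor i holds a copy */
};

/* Record that processor p has obtained a shared copy of the block. */
static void add_sharer(struct dir_entry *e, int p) {
    e->state = SHARED;
    e->sharers |= (uint64_t)1 << p;
}

int main(void) {
    struct dir_entry e = { UNCACHED, 0 };
    add_sharer(&e, 3);
    add_sharer(&e, 7);
    printf("state=%d sharers=0x%llx\n", (int)e.state, (unsigned long long)e.sharers);  /* 0x88 */
    return 0;
}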


5. Explain how you detect and enhance loop-level parallelism. (08 Marks)
Dec.2013/Jan.2014
The simplest and most common way to increase the amount of parallelism available among instructions
is to exploit parallelism among iterations of a loop. This type of parallelism is often called loop-level
parallelism. Here is a simple example of a loop, which adds two 1000-element arrays, that is completely
parallel:
for (i = 1; i <= 1000; i = i + 1)
    x[i] = x[i] + y[i];
Every iteration of the loop can overlap with any other iteration, although within each loop iteration there
is little or no opportunity for overlap. There are a number of techniques we will examine for converting
such loop-level parallelism into instruction-level parallelism. Basically, such techniques work by
unrolling the loop either statically by the compiler or dynamically by the hardware.
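
As a hedged sketch (not from the text), static unrolling of the loop above by a factor of four might produce code like the following; the function name is an assumption, the arrays are taken to be indexed 1..n, and the clean-up loop a compiler would emit for leftover iterations is omitted:

/* Static unrolling by four of the loop above. The four element updates in
   the body are independent, so a scheduler can overlap them. */
void add_arrays(double *x, double *y, int n) {
    int i;
    for (i = 1; i <= n - 3; i = i + 4) {
        x[i]     = x[i]     + y[i];
        x[i + 1] = x[i + 1] + y[i + 1];
        x[i + 2] = x[i + 2] + y[i + 2];
        x[i + 3] = x[i + 3] + y[i + 3];
    }
}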

6. What are the two challenges of parallel processing? Explain. (04 Marks) Dec.2013/Jan.2014
Two important hurdles, both explainable with Amdahl's Law, make parallel processing challenging.

The first has to do with the limited parallelism available in programs and the second arises from the
relatively high cost of communications. Limitations in available parallelism make it difficult to achieve
good speedups in any parallel processor.
The second major challenge in parallel processing involves the large latency of remote access in a
parallel processor. In existing shared-memory multiprocessors, communication of data between
processors may cost anywhere from 100 clock cycles to over 1,000 clock cycles, depending on the
communication mechanism, the type of interconnection network, and the scale of the multiprocessor.
Typical round-trip delays to retrieve a word from a remote memory therefore differ widely across
different shared-memory parallel processors.
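
A small worked example of the first hurdle in C, using Amdahl's Law with assumed numbers (99% of the work parallelizable, 100 processors); the figures are illustrative, not from the text:

#include <stdio.h>

/* Amdahl's Law: speedup = 1 / ((1 - f) + f / n), where f is the parallel
   fraction of the program and n the number of processors. */
int main(void) {
    double f = 0.99;   /* assumed: 99% of the work can run in parallel */
    int    n = 100;    /* assumed: 100 processors */
    double speedup = 1.0 / ((1.0 - f) + f / n);
    printf("speedup with %d processors = %.1f\n", n, speedup);  /* about 50.3 */
    return 0;
}

Even with 99% of the work parallelizable, 100 processors yield only about a 50x speedup, which is why limited available parallelism makes good speedups hard to achieve.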

7. What is multiprocessor cache coherency? Explain. (06 Marks) Dec.2013/Jan.2014
Directory Protocols
Similar to Snoopy Protocol: Three states
Shared: 1 or more processors have the block cached, and the value in memory is up-to-date
(as well as in all the caches)
Uncached: no processor has a copy of the cache block (not valid in any cache)
Exclusive: Exactly one processor has a copy of the cache block, and it has written the block,
so the memory copy is out of date
The processor is called the owner of the block
In addition to tracking the state of each cache block, we must track the processors that have
copies of the block when it is shared (usually a bit vector for each memory block: 1 if processor
has copy)
Keep it simple(r):
Writes to non-exclusive data => write miss
Processor blocks until access completes
Assume messages received and acted upon in order sent



Local node: the node where a request originates
Home node: the node where the memory location and directory entry of an address reside
Remote node: the node that has a copy of a cache block (exclusive or shared)




















UNIT VI

1. Explain the steps involved in floating-point addition. Use the above algorithm to
compute the sum (-1.001₂ × 2^-2) + (-1.111₂ × 2^0). (10 Marks) Dec.2013/Jan.2014
First, consider the sum of the binary 6-bit numbers 1.10011₂ and 1.10001₂ × 2^-5. When the summands are
shifted so they have the same exponent, this is
1.10011
+ .0000110001
Using a 6-bit adder (and discarding the low-order bits of the second addend) gives
1.10011
+ .00001
1.10100
The first discarded bit is 1. This isn't enough to decide whether to round up. The rest of the discarded
bits, 0001, need to be examined. Or actually, we just need to record whether any of these bits are
nonzero, storing this fact in a sticky bit just as in the multiplication algorithm. So for adding two p-bit
numbers, a p-bit adder is sufficient, as long as the first discarded bit (round) and the OR of the rest of
the bits (sticky) are kept. The round and sticky bits then determine whether a roundup is necessary, just as
with multiplication. In the example above, sticky is 1, so a roundup is necessary. The final sum is
1.10101₂.
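
The following C sketch mimics this round-and-sticky bookkeeping for the 6-bit example above; the integer scaling, the function name, and the assumption 0 < d < 32 are illustrative, and a carry-out requiring renormalization is ignored:

#include <stdint.h>
#include <stdio.h>

/* Add significand a to significand b, where b must first be shifted right by
   d places. Only a round bit (first discarded bit) and a sticky bit (OR of
   the rest) are kept from the discarded part, then the sum is rounded to
   nearest even. */
static uint32_t add_with_round_sticky(uint32_t a, uint32_t b, unsigned d) {
    uint32_t discarded = b & ((1u << d) - 1u);
    uint32_t shifted   = b >> d;
    int round  = (discarded >> (d - 1)) & 1u;                 /* first discarded bit */
    int sticky = (discarded & ((1u << (d - 1)) - 1u)) != 0;   /* OR of remaining bits */
    uint32_t sum = a + shifted;
    if (round && (sticky || (sum & 1u)))                      /* round-to-nearest-even */
        sum += 1u;
    return sum;
}

int main(void) {
    /* 1.10011 and 1.10001 x 2^-5 from the example above, scaled by 2^5 */
    uint32_t s = add_with_round_sticky(0x33u /* 110011 */, 0x31u /* 110001 */, 5u);
    printf("sum = 0x%X (expected 0x35, i.e. 1.10101)\n", s);
    return 0;
}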

2. Discuss how multiplication is done faster by employing a simple array multiplier for
multiplying two 5-bit numbers, using three CSAs (carry save adders) and one propagate adder.
Give the functional block schematic of the above said multiplier. (10 Marks) Dec.2013/Jan.2014
Probably the most common use of floating-point units is performing matrix operations, and the most
frequent matrix operation is multiplying a matrix times a matrix (or vector), which boils down to
computing an inner product, x1·y1 + x2·y2 + ... + xn·yn. Computing this requires a series of
multiply-add combinations. Motivated by this, the IBM RS/6000 introduced a single instruction that
computes
ab + c, the fused multiply-add. Although this requires being able to read three operands in a single
instruction, it has the potential for improving the performance of computing inner products.
The fused multiply-add computes ab + c exactly and then rounds. Although rounding only once
increases the accuracy of inner products somewhat, that is not its primary motivation. There are two

main advantages of rounding once. First, as we saw in the previous sections, rounding is expensive to
implement because it may require an addition. By rounding only once, an addition operation
has been eliminated. Second, the extra accuracy of fused multiply-add can be used to compute correctly
rounded division and square root when these are not available directly in hardware. Fused multiply-add
can also be used to implement efficient floating-point multiple-precision packages. The implementation
of correctly rounded division using fused multiply-add has many details, but the main idea is simple.
Consider again the example from Section H.6 (page H-31), which was computing a/b with a = 13, b =
51. Then 1/b rounds to b' = .020, and a·b' rounds to q = .26, which is not the correctly rounded quotient.
Applying fused multiply-add twice will correctly adjust the result, via the formulas
r = a - b·q
q' = q + r·b'
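
A brief, self-contained C illustration of accumulating an inner product with the C99 fma() function from <math.h>, which performs a×b + c with a single rounding, mirroring the fused multiply-add described above; the data values are arbitrary:

#include <math.h>
#include <stdio.h>

/* Inner product accumulated with fma(): each step computes x[i]*y[i] + acc
   and rounds only once. (Link with -lm on most Unix toolchains.) */
static double dot(const double *x, const double *y, int n) {
    double acc = 0.0;
    for (int i = 0; i < n; i++)
        acc = fma(x[i], y[i], acc);
    return acc;
}

int main(void) {
    double x[] = {1.0, 2.0, 3.0}, y[] = {4.0, 5.0, 6.0};
    printf("%g\n", dot(x, y, 3));   /* prints 32 */
    return 0;
}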

Precisions
The standard specifies four precisions: single, single extended, double, and double extended.
Implementations are not required to have all four precisions, but are
encouraged to support either the combination of single and single extended or all of single, double, and
double extended. Because of the widespread use of double precision in scientific computing, double
precision is almost always implemented. Thus the computer designer usually only has to decide whether
to support double extended and, if so, how many bits it should have.

3. Explain the algorithm used for multiplication of two unsigned numbers, with a neat block
diagram. (08 Marks) Dec.2013/Jan.2014
Shifting over Zeros
Although the technique of shifting over zeros is not currently used much, it is instructive to consider. It
is distinguished by the fact that its execution time is operand dependent. Its lack of use is primarily
attributable to its failure to offer enough speedup over bit-at-a-time algorithms. In addition, pipelining,
synchronization with the CPU, and good compiler optimization are difficult with algorithms that run in
variable time. In multiplication, the idea behind shifting over zeros is to add logic that detects when the
low-order bit of the A register is 0 and, if so, skips the addition step and proceeds directly to the shift
step, hence the term shifting over zeros. What about shifting for division? In nonrestoring division, an
ALU operation (either an addition or subtraction) is performed at every step. There appears to be
no opportunity for skipping an operation. But think about division this way: To compute a/b, subtract
multiples of b from a, and then report how many subtractions were done. At each stage of the
subtraction process the remainder must fit into the P register
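
Since the question asks about multiplying two unsigned numbers, here is a hedged C sketch of the bit-at-a-time shift-and-add scheme in which a zero multiplier bit skips the addition and performs only the shift; the operand widths (32x32 -> 64) and names are illustrative assumptions:

#include <stdint.h>
#include <stdio.h>

/* Bit-at-a-time unsigned multiplication, with the addition skipped whenever
   the low-order multiplier bit is 0 (the idea behind shifting over zeros). */
static uint64_t umultiply(uint32_t multiplier, uint32_t multiplicand) {
    uint64_t product = 0;
    uint64_t addend  = multiplicand;
    for (int i = 0; i < 32; i++) {
        if (multiplier & 1u)        /* low-order bit 1: add, then shift */
            product += addend;
        multiplier >>= 1;           /* examine the next multiplier bit */
        addend <<= 1;               /* align the multiplicand for the next step */
    }
    return product;
}

int main(void) {
    printf("%llu\n", (unsigned long long)umultiply(23u, 19u));  /* prints 437 */
    return 0;
}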


4. What are the three techniques used for speeding up integer addition? (06 Marks)
Dec.2013/Jan.2014
Integer addition is the simplest operation and the most important. Even for programs that don't do
explicit arithmetic, addition must be performed to increment the program counter and to calculate
addresses. Despite the simplicity of addition, there isn't a single best way to perform high-speed
addition. We will
discuss three techniques that are in current use: carry-lookahead, carry-skip, and carry-select.
Carry-Lookahead
An n-bit adder is just a combinational circuit. It can therefore be written by a logic formula whose form
is a sum of products and can be computed by a circuit with two levels of logic. How do you figure out
what this circuit looks like? The formula for the ith sum bit can be written as
si = ai·bi·ci + ai·bi'·ci' + ai'·bi·ci' + ai'·bi'·ci
where ' denotes the complement, and ci is both the carry-in to the ith adder and the carry-out from the (i-1)st adder.
The problem with this formula is that although we know the values of ai and bi (they are inputs to the
circuit), we don't know ci. So our goal is to write ci in terms of ai and bi. To accomplish this, we first
rewrite the carry equation as
ci = g(i-1) + p(i-1)·c(i-1), where the generate term g(i-1) = a(i-1)·b(i-1) and the propagate term p(i-1) = a(i-1) + b(i-1)
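
The generate/propagate recurrence can be checked with a short C sketch; note that this loop evaluates the carries sequentially for clarity, whereas a real carry-lookahead circuit expands the recurrence so all carries come from two levels of logic in parallel. The 4-bit width and operand values are arbitrary assumptions:

#include <stdint.h>
#include <stdio.h>

/* gi = ai AND bi (generate), pi = ai OR bi (propagate),
   carry recurrence c(i+1) = gi OR (pi AND ci). */
int main(void) {
    uint8_t a = 0xB /* 1011 */, b = 0x6 /* 0110 */;
    uint8_t c = 0, sum = 0;
    for (int i = 0; i < 4; i++) {
        uint8_t ai = (a >> i) & 1u, bi = (b >> i) & 1u;
        uint8_t gi = ai & bi;                    /* generate */
        uint8_t pi = ai | bi;                    /* propagate */
        sum |= (uint8_t)((ai ^ bi ^ c) << i);    /* sum bit i */
        c = gi | (pi & c);                       /* carry into position i+1 */
    }
    printf("sum = 0x%X, carry-out = %u\n", sum, c);  /* 1011 + 0110 = 1 0001 */
    return 0;
}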

5. Explain the SRT algorithm for division. (06 Marks) Dec.2013/Jan.2014

Iterative Division
The differences between SRT division and ordinary nonrestoring division can be summarized as
follows:
1. ALU decision rule: In nonrestoring division, it is determined by the sign of P; in SRT, it is
determined by the two most-significant bits of P.
2. Final quotient: In nonrestoring division, it is immediate from the successive signs of P; in SRT, there
are three quotient digits (-1, 0, 1), and the final quotient must be computed in a full n-bit adder.
3. Speed: SRT division will be faster on operands that produce zero quotient bits.
The simple version of the SRT division algorithm given above does not offer enough of a speedup to be
practical in most cases. However, later on in this section we will study variants of SRT division that are
quite practical.
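
For contrast with SRT, here is a hedged C sketch of the ordinary nonrestoring division it improves on, using 4-bit operands; the function name and widths are assumptions, and the quotient-digit selection of SRT itself is not shown:

#include <stdint.h>
#include <stdio.h>

/* Nonrestoring division: every step shifts (P,A) left and then adds or
   subtracts the divisor according to the sign of P. Quotient bits land in A;
   a final correction fixes a negative remainder. */
static void nonrestoring_divide(uint8_t a, uint8_t b, uint8_t *q, uint8_t *r) {
    int p = 0;                                  /* signed partial remainder */
    uint8_t A = a;                              /* dividend, then quotient bits */
    for (int i = 0; i < 4; i++) {
        p = p * 2 + ((A >> 3) & 1);             /* shift (P,A) left one bit */
        A = (uint8_t)((A << 1) & 0xF);
        p += (p < 0) ? b : -b;                  /* add or subtract the divisor */
        if (p >= 0)
            A |= 1;                             /* quotient bit is 1, else 0 */
    }
    if (p < 0)
        p += b;                                 /* final remainder correction */
    *q = A;
    *r = (uint8_t)p;
}

int main(void) {
    uint8_t q, r;
    nonrestoring_divide(14, 3, &q, &r);
    printf("14 / 3 = %u remainder %u\n", q, r);  /* 4 remainder 2 */
    return 0;
}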




8) Write short notes on:
Floating point multiplication algorithm Dec.2013/Jan.2014
Example: How does the multiplication of the single-precision numbers
1 10000010 000. . . = -1 × 2^3
0 10000011 000. . . = 1 × 2^4
proceed in binary?
Answer: When unpacked, the significands are both 1.0, their product is 1.0, and so the result is of the
form
1 ???????? 000. . .
To compute the exponent, use the formula: biased exp(e1 + e2) = biased exp(e1) + biased exp(e2) - bias

The simplest floating-point operation is multiplication, so we discuss it first. A binary floating-point
number x is represented as a significand and an exponent, x = s × 2^e.
The formula (s1 × 2^e1) × (s2 × 2^e2) = (s1 × s2) × 2^(e1+e2) shows that a floating-point multiply algorithm
has several parts. The first part multiplies the significands using ordinary integer multiplication.
Because floating-point numbers are stored in sign magnitude form, the multiplier need only deal with
unsigned numbers (although we have seen that Booth recoding handles signed two's complement
numbers painlessly). The second part rounds the result. If the significands are unsigned p-bit numbers
(e.g., p = 24 for single precision), then the product can have as many as 2p bits and must be rounded to a
p-bit number.
The third part computes the new exponent. Because exponents are stored with a bias, this involves
subtracting the bias from the sum of the biased exponents.
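
A hedged C sketch of these three parts on IEEE single precision; rounding and special cases (overflow, subnormals, NaN) are ignored, and the bit manipulation shown is a simplified illustration, not a full implementation:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* XOR the sign bits, add the biased exponents and subtract the bias (127),
   and multiply the significands with the hidden 1 restored. */
int main(void) {
    float a = -8.0f, b = 16.0f;      /* -1 x 2^3 and 1 x 2^4, as in the example */
    uint32_t ua, ub;
    memcpy(&ua, &a, sizeof ua);
    memcpy(&ub, &b, sizeof ub);

    uint32_t sign = (ua >> 31) ^ (ub >> 31);                               /* sign of the product */
    int exp = (int)((ua >> 23) & 0xFF) + (int)((ub >> 23) & 0xFF) - 127;   /* re-bias the exponent */
    uint64_t siga = (ua & 0x7FFFFFu) | 0x800000u;                          /* 1.fraction, 24 bits */
    uint64_t sigb = (ub & 0x7FFFFFu) | 0x800000u;
    uint64_t prod = siga * sigb;                                           /* up to 48 bits */
    if (prod & (1ULL << 47)) { prod >>= 1; exp++; }                        /* renormalize if >= 2.0 */

    printf("sign=%u biased_exp=%d significand=0x%llx\n",
           sign, exp, (unsigned long long)prod);   /* sign=1, biased_exp=134 (= 7 + 127) */
    return 0;
}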
