
PERI INSTITUTE OF TECHNOLOGY

MANNIVAKKAM, CHENNAI-48

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CONTINUOUS ASSESSMENT TEST-I CS2253 - COMPUTER ORGANIZATION AND ARCHITECTURE ANSWER KEY
1. Define computer architecture and organization

In describing a computer system, a distinction is often made between computer architecture and computer organization. Computer architecture refers to those attributes of a system visible to a programmer, or put another way, those attributes that have a direct impact on the logical execution of a program. Computer organization refers to the operational units and their interconnections that realize the architecture specification.
2. What is an Opcode? How many bits are needed to specify 32 distinct operations?

In computer science, an opcode (operation code) is the portion of a machine language instruction that specifies the operation to be performed. Since 2^5 = 32, 5 bits are needed to specify 32 distinct operations.

3. What is SPEC? Specify the formula for SPEC rating

SPEC stands for Standard Performance Evaluation Corporation, which publishes standard benchmark suites used to compare computer performance. The SPEC rating of a computer on a given benchmark program is

SPEC rating = (running time on the reference computer) / (running time on the computer under test)

and the overall SPEC rating for n benchmark programs is the geometric mean of the individual ratings, (SPEC1 x SPEC2 x ... x SPECn)^(1/n).
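A small C sketch computing an overall SPEC rating as the geometric mean of per-benchmark ratings (the rating values in the array are made-up numbers used only to illustrate the formula):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Per-benchmark SPEC ratings: reference time / measured time (illustrative values). */
    double ratings[] = {12.0, 15.0, 9.0, 20.0};
    int n = sizeof ratings / sizeof ratings[0];

    double product = 1.0;
    for (int i = 0; i < n; i++)
        product *= ratings[i];

    double overall = pow(product, 1.0 / n);    /* geometric mean */
    printf("Overall SPEC rating: %.2f\n", overall);   /* approx. 13.42 for these values */
    return 0;
}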
4. Define MIPS

MIPS (Million Instructions Per Second) is a unit of computing speed equivalent to the execution of one million instructions per second.


5. What is Little Endian and Big Endian?

Big-endian systems are those in which the most significant byte (see most significant bit) of a word is stored at the smallest address and the least significant byte is stored at the largest. In contrast, little-endian systems are those in which the least significant byte is stored at the smallest address.
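A minimal C sketch that inspects the byte at the lowest address of a 32-bit value to report the endianness of the machine it runs on:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t word = 0x01020304;              /* most significant byte is 0x01 */
    uint8_t first_byte = *(uint8_t *)&word;  /* byte stored at the smallest address */

    if (first_byte == 0x04)
        printf("little endian: least significant byte 0x04 at lowest address\n");
    else
        printf("big endian: most significant byte 0x01 at lowest address\n");
    return 0;
}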

6. State and explain the performance equations

The basic performance equation gives the total amount of time T required to execute a program:

T = (N x S) / R

where N is the number of machine instructions executed, S is the average number of basic steps (clock cycles) needed per instruction, and R is the clock rate in cycles per second. To improve performance, T must be reduced, which means reducing N and S and increasing R.
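A short worked example of the basic performance equation in C; the values of N, S and R below are assumed figures chosen only for illustration:

#include <stdio.h>

int main(void)
{
    double N = 2.0e9;   /* instructions executed (assumed)              */
    double S = 1.5;     /* average clock cycles per instruction (assumed) */
    double R = 2.0e9;   /* clock rate in Hz, i.e. 2 GHz (assumed)        */

    double T = (N * S) / R;                  /* basic performance equation */
    printf("Execution time T = %.2f seconds\n", T);   /* prints 1.50 */
    return 0;
}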

7. What is the need for a reduced instruction set chip?

RISC (Reduced Instruction Set Computer) is a microprocessor that is designed to perform a smaller number of types of computer instructions so that it can operate at a higher speed.

8. What is meant by the stored program concept? Discuss.

A stored-program computer is one which stores program instructions in electronic memory. Often the definition is extended with the requirement that the treatment of programs and data in memory be interchangeable or uniform.

9. Define word length.

In computer architecture, a word is a unit of data of a defined bit length that can be addressed and moved between storage and the computer processor. Usually, the defined bit length of a word is equivalent to the width of the computer's data bus, so that a word can be moved in a single operation from storage to a processor register.

10. What are the four basic types of operations that need to be supported by an instruction set?

Data transfer between memory and processor registers
Arithmetic and logic operations on data
Program sequencing and control
I/O transfers

PART-B

11. (a) Describe the connections between the processor and memory with a neat structure diagram.

The PC (Program Counter) contains the memory address of the instruction to be executed. During execution, the contents of the PC are updated to point to the next instruction. Every time an instruction is to be executed, the program counter releases its contents onto the internal bus and sends it to the memory address register.

The MAR (Memory Address Register) holds the address of the location to or from which data are to be transferred. As can be seen from the figure, the connection of the MAR to the main memory is one-way (unidirectional).

The MDR (Memory Data Register) contains the data to be written to or read out of the addressed location.

During a fetch operation, the MDR contains the instruction to be executed or data needed during execution. In a write operation, the MDR holds the data to be written into the main memory.

The IR (Instruction Register) contains the instruction that is being executed. Before the instruction is executed it must be decoded. As soon as the content of the MDR is transferred to the IR, the decoding process commences; after decoding, execution of the instruction takes place.

Operating Steps
1. The PC is set to point to the first instruction of the program (the operating system loads the memory address of the first instruction).
2. The contents of the PC are transferred to the MAR (and automatically transmitted to the main memory) and a Read signal is sent to the main memory.
3. The addressed word is read out of main memory and loaded into the MDR.
4. The contents of the MDR are transferred to the IR. The instruction is now ready to be decoded and executed.
5. During execution, the contents of the PC are incremented or updated to point to the next instruction.

Example: Enumerate the different steps needed to execute the machine instruction ADD LOCA, R0. Assume that the instruction itself is stored in the main memory at location INSTR, and that this address is initially in register PC. The first two steps might be expressed as:
1. Transfer the contents of register PC to register MAR.
2. Issue a READ command to the main memory, and then wait until it has transferred the requested word into register MDR.

CPU Instruction Execution Steps
Instruction execution in a CPU can be summarized by the following steps:
1. Fetch the instruction from memory into the instruction register.
2. Increment the PC to point to the next instruction to be executed.
3. Determine the type of instruction fetched (instruction decoding).
4. If the instruction uses data, determine the location of the data in memory.
5. Fetch the required data into internal CPU registers.
6. Execute the instruction.
7. Store the results in the designated locations.
8. Return to step 1.

This is commonly referred to as the fetch-decode-execute cycle.
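A minimal C sketch of the fetch-decode-execute loop for a hypothetical accumulator machine; the opcodes, word layout and memory contents are invented for illustration and do not correspond to any real instruction set:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical 3-instruction accumulator machine: each word holds
   an opcode in the high byte and an address in the low byte. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };

int main(void)
{
    uint16_t mem[256] = {
        [0]  = (OP_LOAD << 8) | 10,   /* AC <- mem[10]        */
        [1]  = (OP_ADD  << 8) | 11,   /* AC <- AC + mem[11]   */
        [2]  = (OP_HALT << 8),        /* stop                 */
        [10] = 7, [11] = 5
    };
    uint16_t pc = 0, ir, mar, mdr, ac = 0;

    for (;;) {
        mar = pc;            /* 1. MAR <- PC                  */
        mdr = mem[mar];      /* 2. read memory into MDR       */
        ir  = mdr;           /* 3. IR <- MDR                  */
        pc++;                /* 4. increment PC               */
        switch (ir >> 8) {   /* 5. decode and execute         */
        case OP_LOAD: ac = mem[ir & 0xFF]; break;
        case OP_ADD:  ac += mem[ir & 0xFF]; break;
        case OP_HALT: printf("AC = %u\n", ac); return 0;   /* prints 12 */
        }
    }
}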

(b) With a neat diagram explain Von Neumann computer architecture

12. (a) Design a 4-bit binary adder/subtractor and explain its function

In digital circuits, an adder-subtractor is a circuit that is capable of adding or subtracting binary numbers; the circuit below adds or subtracts depending on a control signal. A 4-bit adder-subtractor is built from four full adders: a mode input M is XORed with each bit of the second operand B and is also fed into the carry-in of the least significant stage. When M = 0 the circuit computes A + B; when M = 1 the B operand is complemented and 1 is added, giving A + B' + 1 = A - B in 2's complement. It is also possible to construct a circuit that performs both addition and subtraction at the same time.
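A bit-level C model of the same 4-bit ripple-carry adder/subtractor, a sketch only: the XOR on each B input and the mode bit on the carry-in mirror the hardware described above.

#include <stdio.h>
#include <stdint.h>

/* mode = 0 -> result = a + b, mode = 1 -> result = a - b (2's complement). */
static uint8_t add_sub_4bit(uint8_t a, uint8_t b, int mode, int *carry_out)
{
    uint8_t sum = 0;
    int carry = mode;                     /* M feeds the carry-in of stage 0     */
    for (int i = 0; i < 4; i++) {
        int ai = (a >> i) & 1;
        int bi = ((b >> i) & 1) ^ mode;   /* XOR gate on each B input            */
        int s  = ai ^ bi ^ carry;         /* full-adder sum                      */
        carry  = (ai & bi) | (ai & carry) | (bi & carry);   /* full-adder carry  */
        sum   |= (uint8_t)(s << i);
    }
    *carry_out = carry;
    return sum;                           /* 4-bit result in the low nibble      */
}

int main(void)
{
    int c;
    printf("6 + 3 = %u\n", add_sub_4bit(6, 3, 0, &c));   /* prints 9 */
    printf("6 - 3 = %u\n", add_sub_4bit(6, 3, 1, &c));   /* prints 3 */
    return 0;
}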

(b) Perform subtraction of (28)10 - (50)10 using 6-bit 2's complement representation

28 (decimal) = 011100
50 (decimal) = 110010

To subtract, add the 2's complement of 50 to 28.
1's complement of 110010 = 001101
2's complement = 001101 + 1 = 001110

  011100
+ 001110
--------
  101010

The result 101010 has its sign bit set, so it is negative in 2's complement; its magnitude is 010101 + 1 = 010110 = 22. The answer is therefore -22, which agrees with 28 - 50 = -22.
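A short C check of the same computation, masking to 6 bits and sign-extending the result; this is a verification sketch, not part of the original answer key:

#include <stdio.h>

int main(void)
{
    unsigned a = 28, b = 50;
    unsigned twos_b = (~b + 1u) & 0x3Fu;      /* 6-bit 2's complement of 50 = 001110 */
    unsigned diff   = (a + twos_b) & 0x3Fu;   /* 011100 + 001110 = 101010            */

    /* Interpret the 6-bit pattern as a signed value. */
    int value = (diff & 0x20u) ? (int)diff - 64 : (int)diff;

    printf("bit pattern: ");
    for (int i = 5; i >= 0; i--)
        putchar(((diff >> i) & 1u) ? '1' : '0');   /* prints 101010 */
    printf("\nsigned value: %d\n", value);         /* prints -22    */
    return 0;
}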

13. (a) With examples, explain each of the data transfer, arithmetic, logic and program control instructions.

The example below evaluates X = (A + B) * (C + D) using three-, two-, one- and zero-address instruction formats.

Three-address instructions:
ADD R1, A, B    R1 ← M[A] + M[B]
ADD R2, C, D    R2 ← M[C] + M[D]
MUL X, R1, R2   M[X] ← R1 * R2

Two-address instructions:
MOV R1, A       R1 ← M[A]
ADD R1, B       R1 ← R1 + M[B]
MOV R2, C       R2 ← M[C]
ADD R2, D       R2 ← R2 + M[D]
MUL R1, R2      R1 ← R1 * R2
MOV X, R1       M[X] ← R1

One-address instructions:
LOAD A          AC ← M[A]
ADD B           AC ← AC + M[B]
STORE T         M[T] ← AC
LOAD C          AC ← M[C]
ADD D           AC ← AC + M[D]
MUL T           AC ← AC * M[T]
STORE X         M[X] ← AC

Zero-address instructions:
PUSH A          TOS ← A
PUSH B          TOS ← B
ADD             TOS ← (A + B)
PUSH C          TOS ← C
PUSH D          TOS ← D
ADD             TOS ← (C + D)
MUL             TOS ← (C + D) * (A + B)
POP X           M[X] ← TOS
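As an illustration of the zero-address style, a tiny C sketch that evaluates X = (A + B) * (C + D) on an explicit operand stack; the operand values and stack depth are arbitrary:

#include <stdio.h>

static int stack[8], sp = 0;                 /* tiny operand stack */
static void push(int v) { stack[sp++] = v; }
static int  pop(void)   { return stack[--sp]; }

int main(void)
{
    int A = 2, B = 3, C = 4, D = 5, X;

    push(A);                 /* PUSH A                          */
    push(B);                 /* PUSH B                          */
    push(pop() + pop());     /* ADD  -> TOS = A + B             */
    push(C);                 /* PUSH C                          */
    push(D);                 /* PUSH D                          */
    push(pop() + pop());     /* ADD  -> TOS = C + D             */
    push(pop() * pop());     /* MUL  -> TOS = (C + D) * (A + B) */
    X = pop();               /* POP X                           */

    printf("X = %d\n", X);   /* (2+3)*(4+5) = 45                */
    return 0;
}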

(b) (i) Differentiate RISC and CISC architectures. (ii) Explain the features of CISC and RISC processors.

RISC vs CISC is a topic quite popular on the Net. Every time Intel (CISC) or Apple (RISC) introduces a new CPU, the topic pops up again. But what are CISC and RISC exactly, and is one of them really better? This article tries to explain in simple terms what RISC and CISC are and what the future might bring for both of them. It is by no means intended as a pro-RISC or pro-CISC article; you can draw your own conclusions.

CISC
Pronounced "sisk", and stands for Complex Instruction Set Computer. Most PCs use a CPU based on this architecture; for instance, Intel and AMD CPUs are based on CISC architectures. Typically, CISC chips have a large number of different and complex instructions. The philosophy behind this is that hardware is always faster than software, so one should provide a powerful instruction set that lets programmers do a lot with short programs. In general, CISC chips are relatively slow per instruction (compared to RISC chips), but they need fewer instructions than RISC.

RISC
Pronounced "risk", and stands for Reduced Instruction Set Computer. RISC chips evolved around the mid-1980s as a reaction to CISC chips. The philosophy behind them is that almost no one uses the complex assembly language instructions provided by CISC; people mostly use compilers, which rarely emit complex instructions. Apple, for instance, uses RISC chips.

Therefore fewer, simpler and faster instructions would be better than the large, complex and slower CISC instructions; however, more instructions are then needed to accomplish a task. Another advantage of RISC is that, in theory, because of the simpler instructions, RISC chips require fewer transistors, which makes them easier to design and cheaper to produce. Finally, it is easier to write powerful optimising compilers, since fewer instructions exist.

RISC vs CISC
There is still considerable controversy among experts about which architecture is better. Some say that RISC is cheaper and faster and therefore the architecture of the future. Others note that by making the hardware simpler, RISC puts a greater burden on the software: software needs to become more complex, and software developers need to write more lines for the same tasks. They therefore argue that RISC is not the architecture of the future, since conventional CISC chips are becoming faster and cheaper anyway. RISC has now existed for more than 10 years and has not been able to kick CISC out of the market. If we forget about the embedded market and mainly look at the market for PCs, workstations and servers, at least 75% of the processors are based on the CISC architecture. Most of them follow the x86 standard (Intel, AMD, etc.), and even in mainframe territory CISC is dominant via the IBM/390 chip. It looks like CISC is here to stay.

Is RISC then really not better? The answer is not quite that simple. RISC and CISC architectures are becoming more and more alike. Many of today's RISC chips support just as many instructions as yesterday's CISC chips. The PowerPC 601, for example, supports more instructions than the Pentium, yet the 601 is considered a RISC chip while the Pentium is definitely CISC. Furthermore, today's CISC chips use many techniques formerly associated with RISC chips. Simply put, RISC and CISC are growing toward each other.

x86
An important factor is also that the x86 standard, as used by for instance Intel and AMD, is based on CISC architecture. x86 is the standard for home PCs; Windows 95 and 98 will not run on any other platform. Therefore companies like AMD and Intel will not abandon the x86 market overnight, even if RISC were more powerful. Changing their chips in such a way that on the outside they stay compatible with the CISC x86 standard, while using a RISC architecture inside, is difficult and introduces all kinds of overhead which could undo all the possible gains. Nevertheless, Intel and AMD are doing this more or less with their current CPUs. Most acceleration mechanisms available to RISC CPUs are now available to x86 CPUs as well.

Since competition in the x86 market is fierce, prices are low, even lower than for most RISC CPUs. Although RISC prices are dropping too, a SUN UltraSPARC, for instance, is still more expensive than an equally performing PII workstation, equal that is in terms of integer performance. In the floating-point area RISC still holds the crown; however, CISC's 7th-generation x86 chips like the K7 will catch up with that. The one exception to this might be the Alpha EV-6. Those machines are overall about twice as fast as the fastest x86 CPU available. However, this Alpha chip costs about 20000, not something you are willing to pay for a home PC. It is perhaps interesting to mention that it is no coincidence that AMD's K7 was developed in co-operation with Alpha and is for a large part based on the same Alpha EV-6 technology.

EPIC
The biggest threat to CISC and RISC might not be each other, but a new technology called EPIC. EPIC stands for Explicitly Parallel Instruction Computing. As the word "parallel" suggests, EPIC can execute many instructions in parallel with one another. EPIC was created by Intel and is in a way a combination of both CISC and RISC. In theory this will allow the processing of Windows-based as well as UNIX-based applications by the same CPU. It will not be until 2000 that we see an EPIC chip; Intel is working on it under the code name Merced, and Microsoft is already developing its Win64 standard for it. As the name says, Merced will be a 64-bit chip. If Intel's EPIC architecture is successful, it might be the biggest threat to RISC. All of the big CPU manufacturers except Sun and Motorola are now selling x86-based products, and some are just waiting for Merced to come out (HP, SGI). Because of the x86 market it is not likely that CISC will die soon, but RISC may. So the future might bring EPIC processors and more CISC processors, while RISC processors become extinct.

Conclusion
The difference between RISC and CISC chips is getting smaller and smaller. What counts is how fast a chip can execute the instructions it is given and how well it runs existing software. Today, both RISC and CISC manufacturers are doing everything to get an edge on the competition. The future might not bring victory to one of them, but may make both extinct: EPIC might first make RISC obsolete and later CISC too.

14. (a) Define addressing mode. Classify the addressing modes and explain each type with an example.

Each instruction of a computer specifies an operation on certain data. There are various ways of specifying the address of the data to be operated on; these different ways of specifying data are called addressing modes. The most common addressing modes are:

Immediate addressing mode
Direct addressing mode
Indirect addressing mode
Register addressing mode
Register indirect addressing mode
Displacement addressing mode
Stack addressing mode

To specify the addressing mode of an instruction several methods are used. Most often used are:
a) Different operands use different addressing modes.
b) One or more bits in the instruction format are used as a mode field. The value of the mode field determines which addressing mode is to be used. The effective address will be either a main memory address or a register.

Immediate Addressing: This is the simplest form of addressing. Here, the operand is given in the instruction itself. This mode is used to define a constant or to set initial values of variables. The advantage of this mode is that no memory reference other than the instruction fetch is required to obtain the operand. The disadvantage is that the size of the number is limited to the size of the address field, which in most instruction sets is small compared to the word length.

Direct Addressing: In direct addressing mode, the effective address of the operand is given in the address field of the instruction. It requires one memory reference to read the operand from the given location and provides only a limited address space. The length of the address field is usually less than the word length. Example: MOVE P, R0 and ADD Q, R0, where P and Q are the addresses of the operands.

Indirect Addressing: In indirect addressing mode, the address field of the instruction refers to the address of a word in memory, which in turn contains the full-length address of the operand. The advantage of this mode is that for a word length of N, an address space of 2^N can be addressed. The disadvantage is that instruction execution requires two memory references to fetch the operand. Multilevel or cascaded indirect addressing can also be used.

Register Addressing: Register addressing mode is similar to direct addressing. The only difference is that the address field of the instruction refers to a register rather than a memory location; 3 or 4 bits are used as the address field to reference 8 to 16 general purpose registers. The advantage of register addressing is that only a small address field is needed in the instruction.

Register Indirect Addressing: This mode is similar to indirect addressing. The address field of the instruction refers to a register, and the register contains the effective address of the operand. This mode uses one memory reference to obtain the operand. The address space is limited to the width of the registers available to store the effective address.

Displacement Addressing: Displacement addressing covers 3 types of addressing mode: 1) relative addressing, 2) base register addressing, and 3) indexed addressing. It is a combination of direct addressing and register indirect addressing: the value contained in one address field, A, is used directly, and the other address field refers to a register whose contents are added to A to produce the effective address.

Stack Addressing: A stack is a linear array of locations operated on in last-in first-out (LIFO) order. The stack is a reserved block of locations; items are appended or deleted only at the top of the stack. The stack pointer is a register which stores the address of the top-of-stack location. This mode of addressing is also known as implicit addressing.
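A small C analogy for several of these modes; the mapping from C constructs to addressing modes is illustrative only, not a literal machine encoding:

#include <stdio.h>

int main(void)
{
    int table[4] = {10, 20, 30, 40};
    int value    = 7;                  /* a variable held in memory                 */
    int *ptr     = &value;             /* holds the address of the operand          */
    int **pptr   = &ptr;               /* address of an address (indirection)       */
    int  reg     = 5;                  /* stands in for a processor register        */
    int  base    = 1;                  /* base/index for displacement addressing    */

    int imm      = 42;                 /* immediate: operand is in the "instruction" */
    int direct   = value;              /* direct: fetch from a named memory location */
    int indirect = **pptr;             /* indirect: two references reach the operand */
    int regmode  = reg;                /* register: operand already in a register    */
    int regind   = *ptr;               /* register indirect: register holds address  */
    int disp     = table[base + 2];    /* displacement: base + offset forms address  */

    printf("%d %d %d %d %d %d\n", imm, direct, indirect, regmode, regind, disp);
    return 0;
}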

(b) Write a detailed note on instruction set architecture.

Instruction Set Architecture (ISA)
The Instruction Set Architecture (ISA) is the part of the processor that is visible to the programmer or compiler writer. The ISA serves as the boundary between software and hardware. We will briefly describe the instruction sets found in many of the microprocessors used today. The ISA of a processor can be described using 5 categories:

Operand storage in the CPU - where are the operands kept other than in memory?
Number of explicitly named operands - how many operands are named in a typical instruction?
Operand location - can any ALU instruction operand be located in memory, or must all operands be kept internally in the CPU?
Operations - what operations are provided in the ISA?
Type and size of operands - what is the type and size of each operand and how is it specified?

Of all the above, the most distinguishing factor is the first. The 3 most common types of ISA are:
1. Stack - the operands are implicitly on top of the stack.
2. Accumulator - one operand is implicitly the accumulator.
3. General Purpose Register (GPR) - all operands are explicitly mentioned; they are either registers or memory locations.

Let's look at the assembly code of C = A + B in all 3 architectures:

Stack:        PUSH A, PUSH B, ADD, POP C
Accumulator:  LOAD A, ADD B, STORE C
GPR:          LOAD R1,A; ADD R1,B; STORE R1,C

Not all processors can be neatly tagged into one of the above categories. The i8086 has many instructions that use implicit operands although it has a general register set. The i8051 is another example: it has 4 banks of GPRs, but most instructions must have the A register as one of their operands. What are the advantages and disadvantages of each of these approaches?

Stack
Advantages: Simple model of expression evaluation (reverse Polish). Short instructions.
Disadvantages: A stack cannot be randomly accessed, which makes it hard to generate efficient code. The stack itself is accessed on every operation and becomes a bottleneck.

Accumulator
Advantages: Short instructions.
Disadvantages: The accumulator is only temporary storage, so memory traffic is the highest for this approach.

GPR
Advantages: Makes code generation easy. Data can be stored for long periods in registers.
Disadvantages: All operands must be named, leading to longer instructions.

Earlier CPUs were of the first 2 types, but in the last 15 years all CPUs made have been GPR processors. The 2 major reasons are that registers are faster than memory (the more data that can be kept internally in the CPU, the faster the program will run) and that registers are easier for a compiler to use.

15. (a) Explain instruction straight-line sequencing in detail, with a diagram, including interrupts.

This cycle is the logical basis of all stored-program computers. Instructions are stored in memory as machine language; they are fetched from memory and then executed. The common fetch cycle can be expressed in the following control sequence:

MAR ← PC    // The PC contains the address of the instruction; put the address into the MAR.
READ        // The memory read places the instruction into the MBR.
IR ← MBR    // Transfer the instruction from the MBR into the IR.

This cycle is described in many different ways, most of which serve to highlight additional steps required to execute the instruction. Examples of additional steps are: decode the instruction, fetch the arguments, store the result, etc. A stored-program computer is often called a von Neumann machine, after one of the originators of the EDVAC. This fetch-execute cycle is often called the von Neumann bottleneck, as the necessity of fetching every instruction from memory slows the computer.

Avoiding the Bottleneck
In the simple stored-program machine, the following loop is executed:

Fetch the first instruction
Loop until stop:
    Execute the instruction
    Fetch the next instruction
End loop

The first attempt to break out of this endless cycle was instruction prefetch: fetch the next instruction at the same time the current one is executing. As we can easily see, this concept can be extended.

Instruction-Level Parallelism: Instruction Prefetch
Break up the fetch-execute cycle and do the two parts in parallel. This dates to the IBM Stretch (1959).

The prefetch buffer is implemented in the CPU with on-chip registers, either as a single register or as a queue; the CDC 6600 buffer had a queue of length 8. Think of the prefetch buffer as containing the IR (Instruction Register): when the execution of one instruction completes, the next one is already in the buffer and does not need to be fetched. Any program branch (loop structure, conditional branch, etc.) will invalidate the contents of the prefetch buffer, which must then be reloaded.
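A toy C sketch of a prefetch queue and its invalidation on a branch; the queue size, memory size and "instruction" representation are made up for illustration:

#include <stdio.h>

#define QSIZE 8                      /* queue of length 8, as in the CDC 6600 example */

static int program[16];              /* pretend instruction memory                    */
static int queue[QSIZE], head = 0, count = 0;
static int fetch_pc = 0;             /* address the prefetcher reads next             */

/* Prefetcher: keep the queue topped up while the CPU executes. */
static void prefetch(void)
{
    while (count < QSIZE && fetch_pc < 16)
        queue[(head + count++) % QSIZE] = program[fetch_pc++];
}

/* A taken branch invalidates the queue and refills it from the target. */
static void branch_to(int target)
{
    count = 0;
    head = 0;
    fetch_pc = target;
    prefetch();
}

int main(void)
{
    for (int i = 0; i < 16; i++) program[i] = i;   /* "instruction" = its own address */

    prefetch();
    printf("next instruction in buffer: %d\n", queue[head]);   /* 0                   */

    branch_to(12);                                              /* branch taken        */
    printf("after branch, buffer holds: %d\n", queue[head]);    /* 12, queue reloaded  */
    return 0;
}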

(b) Discuss the operation of a bus. Why is the data bus bidirectional while the address bus is unidirectional in most processors?
BUS
Signals to and from the CPU travel along strips of conducting wire known as buses. A bus is the highway used to carry binary electrical signals (1s and 0s) from one device to another. Bidirectional buses allow information to flow in both directions, whereas unidirectional buses only allow data to flow in one direction.

(1) The Address Bus
To access a particular byte of information stored in memory or in the I/O devices, the CPU has to provide the address of that byte. The CPU outputs the address onto the address bus as a series of 1s and 0s, each carried on one track of the address bus. Decoding circuitry deciphers this pattern to activate the particular memory or I/O location. The address bus is unidirectional: it is broadcast from the CPU to all other devices. Number of addressable locations = 2^n, where n is the width of the address bus.
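A one-line check of the 2^n relationship in C; the bus widths in the loop are example values only:

#include <stdio.h>

int main(void)
{
    /* Addressable locations for a few example address-bus widths. */
    for (unsigned n = 16; n <= 32; n += 4)
        printf("%2u-bit address bus -> %llu locations\n", n, 1ULL << n);
    return 0;
}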

(2) The Data Bus
When a memory location is selected (through the address bus), data is transferred between the CPU and that memory location via the data bus. The data bus is a bidirectional bus: in a read operation, data is transferred from the device to the CPU, and the reverse happens in a write operation. The size of the data bus indicates the size of the data that can be accessed in one read or write operation. Earlier microprocessors had a 4- or 8-bit data bus, but subsequent CPUs have 16-, 32- and 64-bit data buses.

Some microprocessors are designed with an internal data bus wider than the external one. For example, the 8088 has an 8-bit external data bus while its internal data bus is 16 bits wide.
