
COMPUTER ARCHITECTURE
UNIT V ADVANCED ARCHITECTURE

PART-A
1. What is parallelism and pipelining in computer architecture?
2. What is instruction pipelining?
3. What is super pipelining?
4. Explain pipelining in CPU design.
5. Write a short note on hazards of pipelining.
6. What is a multiprocessor?
7. Explain about parallel computers.
8. What is parallelism?
9. What is pipelining in computer architecture?
10. What is RISC and CISC?

PART-B
1. What is parallel processing? Explain Flynn's classification of parallel processing.
2. What are the characteristics of RISC architecture?
3. What is a pipeline? Explain its simplified and expanded view.
4. Explain about the multiprocessor.
5. What are the reasons for pipeline conflicts in a pipelined processor? How are they resolved?

ANSWER KEY
UNIT V ADVANCED ARCHITECTURE

PART-A

1. LOOK AHEAD, PARALLELISM AND PIPELINING IN COMPUTER ARCHITECTURE
Look-ahead techniques were introduced to prefetch instructions in order to overlap instruction fetch/decode and execute operations and to enable functional parallelism. Functional parallelism was supported by two approaches: one is to use multiple functional units simultaneously, and the other is to practice pipelining at various processing levels. The latter includes pipelined instruction execution, pipelined arithmetic computation, and memory-access operations. Pipelining has proven especially attractive for performing identical operations repeatedly over vector data strings. Vector operations were originally carried out implicitly by software-controlled looping using scalar pipeline processors.

2. An instruction pipeline reads consecutive instructions from memory while previous instructions are being executed in other segments. Pipeline processing can occur not only in the data stream but in the instruction stream as well. This causes the instruction fetch and execute phases to overlap and operate simultaneously. Whenever a branch is taken, the pipeline must be emptied and all instructions that have been read from memory after the branch instruction must be discarded.
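The overlap described in answer 2 can be visualised with a short sketch. This is only an illustration added to this answer key; the three stage names and the four-instruction program are assumptions chosen for the example, not something fixed by the original text.

# Minimal sketch: cycle-by-cycle view of a hypothetical 3-stage instruction
# pipeline (Fetch, Decode, Execute) processing four instructions.
STAGES = ["Fetch", "Decode", "Execute"]
instructions = ["I1", "I2", "I3", "I4"]

# With overlapping, 4 instructions need only 4 + 3 - 1 = 6 clock cycles.
total_cycles = len(instructions) + len(STAGES) - 1
for cycle in range(total_cycles):
    busy = []
    for i, instr in enumerate(instructions):
        stage_index = cycle - i          # stage occupied by instruction i this cycle
        if 0 <= stage_index < len(STAGES):
            busy.append(f"{instr}:{STAGES[stage_index]}")
    print(f"cycle {cycle + 1}: " + ", ".join(busy))

From cycle 3 onwards the printout shows one instruction in each stage at the same time, which is exactly the fetch/execute overlap described above.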

3. Pipelining is the concept of overlapping multiple instructions during execution time. A pipeline splits one task into multiple subtasks, and these subtasks of two or more different tasks are executed in parallel by the hardware units. The concept of a pipeline is similar to a water tap: the amount of water coming out of the tap is equal to the amount of water entering the pipe of the tap. Suppose there are five tasks to be executed, and assume that each task can be divided into four subtasks so that each subtask is executed by one stage of hardware. The overlapped execution of these five tasks illustrates super pipelining (a space-time sketch of this example is given after answer 4 below).

4. Pipelining is a technique of decomposing a sequential process into sub-operations, with each sub-process being executed in a special dedicated segment that operates concurrently with all other segments. A pipeline is a collection of processing segments through which binary information flows; each segment performs the partial processing dictated by the way the task is partitioned. The result obtained from the computation in each segment is transferred to the next segment in the pipeline. The final result is obtained after the data have passed through all segments.
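The five-task, four-subtask example in answer 3 can be tabulated as a space-time diagram. The sketch below is only an added illustration (not part of the original answers); it also shows the standard result that an ideal k-stage pipeline finishes n tasks in k + n - 1 clock cycles instead of the k * n cycles a purely sequential execution would need.

# Space-time table for n = 5 tasks flowing through k = 4 pipeline stages.
k, n = 4, 5
total = k + n - 1                    # 8 clock cycles with overlapped execution

for stage in range(k):
    row = []
    for cycle in range(total):
        task = cycle - stage         # task occupying this stage in this cycle
        row.append(f"T{task + 1}" if 0 <= task < n else "--")
    print(f"stage S{stage + 1}: " + " ".join(row))

Each column of the printed table is one clock cycle; once the pipeline is full, every stage is busy with a different task in the same cycle.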

5. Pipelining is a technique of decomposing a sequential process into sub-operations, with each sub-process being executed in a special dedicated segment that operates concurrently with all other segments. A pipeline can be visualized as a collection of processing segments through which binary information flows. Each segment performs partial processing, and the result obtained from the computation in each segment is transferred to the next segment in the pipeline; the flow of information is analogous to an assembly line. The simplest way of viewing the pipeline structure is that each segment consists of an input register followed by a combinational circuit. The register holds the data and the combinational circuit performs the sub-operation of that particular segment. The output of the combinational circuit is applied to the input register of the next segment. A clock is applied to all registers after enough time has elapsed to perform all segment activity.
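The input-register-plus-combinational-circuit view in answer 5 can be sketched in code. This is a hedged illustration added to the answer key; the three segment functions chosen below are arbitrary assumptions used only to show how data moves one segment forward on every common clock tick.

# Each segment = an input register + a combinational circuit. On every clock
# tick all registers load at once, so data advances one segment per clock.
class Segment:
    def __init__(self, combinational_circuit):
        self.register = None                  # input register of this segment
        self.circuit = combinational_circuit  # combinational logic of this segment

segments = [Segment(lambda x: x * 2),         # arbitrary example sub-operations
            Segment(lambda x: x + 3),
            Segment(lambda x: -x)]

inputs = [1, 2, 3, 4]
results = []

for clock in range(len(inputs) + len(segments)):
    # Compute what each register will hold after this clock edge.
    new_values = [inputs[clock] if clock < len(inputs) else None]
    for seg in segments:
        new_values.append(seg.circuit(seg.register) if seg.register is not None else None)
    out = new_values.pop()                    # value leaving the last segment
    if out is not None:
        results.append(out)
    for seg, value in zip(segments, new_values):
        seg.register = value                  # common clock loads all registers

print(results)   # [-5, -7, -9, -11], i.e. -(2x + 3) for each input x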

6. A multiprocessor system has two or more processors; that is, a multiprocessor can execute more than one process at a time. The main feature of a multiprocessor system is that main memory and other resources are shared by all the processors.

7. Parallel computers provide parallelism either within a uniprocessor or across multiple processors to enhance the performance of the computer. Concurrency in a uniprocessor or superscalar processor, achieved through hardware and software implementation, can lead to faster execution of programs. Parallel processing provides simultaneous data processing to increase the computational speed of the computer.

8. LOOK AHEAD, PARALLELISM AND PIPELINING IN COMPUTER ARCHITECTURE
Look-ahead techniques were introduced to prefetch instructions in order to overlap instruction fetch/decode and execute operations and to enable functional parallelism. Functional parallelism was supported by two approaches: one is to use multiple functional units simultaneously, and the other is to practice pipelining at various processing levels.

9. Pipelining is an implementation technique where multiple instructions are overlapped in execution. The computer pipeline is divided into stages. Each stage completes a part of an instruction in parallel. The stages are connected one to the next to form a pipe: instructions enter at one end, progress through the stages, and exit at the other end.

10. RISC means Reduced Instruction Set Computing. RISC machines use simple addressing modes, and the logic needed to implement the instructions is simple because the instruction set is small. CISC means Complex Instruction Set Computing. It uses a wide range of instructions, and these instructions produce more efficient results. CISC uses a microprogrammed control unit, whereas RISC machines mostly use hardwired control units. CISC instructions resemble high-level statements, which makes them easier for humans to understand.

PART-B

1. Parallel processing is another method used to improve performance in a computer system: when a system processes two different instructions simultaneously, it is performing parallel processing.

Flynn's classification: Flynn's classification is based on the multiplicity of instruction streams and data streams observed by the CPU during program execution. Let Is and Ds be the minimum number of instruction and data streams flowing at any point in the execution; then the computer organization can be categorized as follows.

1) Single Instruction and Single Data stream (SISD)
In this organization, sequential execution of instructions is performed by one CPU containing a single processing element (PE), i.e., an ALU under one control unit. Therefore, SISD machines are conventional serial computers that process only one stream of instructions and one stream of data.

2) Single Instruction and Multiple Data stream (SIMD)

In this organization, multiple processing elements work under the control of a single control unit. There is one instruction stream and multiple data streams. All the processing elements of this organization receive the same instruction broadcast from the control unit. Main memory can also be divided into modules for generating multiple data streams, acting as a distributed memory. Therefore, all the processing elements simultaneously execute the same instruction and are said to be 'lock-stepped' together. Each processor takes the data from its own memory and hence operates on a distinct data stream. (Some systems also provide a shared global memory for communication.) Every processor must be allowed to complete its instruction before the next instruction is taken for execution, so the execution of instructions is synchronous. Examples of the SIMD organization are ILLIAC-IV, PEPE, BSP, STARAN, MPP, DAP and the Connection Machine (CM-1). (A small code sketch contrasting SISD-style and SIMD-style execution is given after this classification.)

3) Multiple Instruction and Single Data stream (MISD)
In this organization, multiple processing elements are organized under the control of multiple control units. Each control unit handles one instruction stream and processes it through its corresponding processing element, but each processing element processes only a single data stream at a time. Therefore, for handling multiple instruction streams and a single data stream, multiple control units and multiple processing elements are organized in this classification. All processing elements interact with the common shared memory for the organization of the single data stream. The only known example of a computer capable of MISD operation is the C.mmp built by Carnegie-Mellon University. This classification is not popular in commercial machines, as the concept of single data streams executing on multiple processors is rarely applied. But for specialized applications, the MISD organization can be very helpful. For example, real-time computers need to be fault tolerant, where several processors execute the same data to produce redundant results; this is also known as N-version programming. The redundant results are compared and should be the same; otherwise the faulty unit is replaced. Thus MISD machines can be applied to fault-tolerant real-time computers.

4) Multiple Instruction and Multiple Data stream (MIMD)
In this organization, multiple processing elements and multiple control units are organized as in MISD. But the difference is that now multiple instruction streams operate on multiple data streams. Therefore, for handling multiple instruction streams, multiple control units and multiple processing elements are organized such that multiple processing elements handle multiple data streams from main memory. The processors work on their own data with their own instructions. Tasks executed by different processors can start or finish at different times; they are not lock-stepped as in SIMD computers, but run asynchronously. This classification actually recognizes the parallel computer; that is, in the real sense the MIMD organization is said to be a parallel computer. All multiprocessor systems fall under this classification.
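As referenced above, the following sketch (an illustration added to this answer key, not part of the original material) contrasts SISD-style execution, where one instruction is applied to one data element at a time, with SIMD-style execution, where the same operation is applied to a whole array of data at once. NumPy's vectorised addition is used here only as a convenient stand-in for a machine that broadcasts one instruction to many processing elements.

import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([10, 20, 30, 40])

# SISD-style: a single instruction stream works through a single data stream,
# one element per step.
sisd_result = []
for i in range(len(a)):
    sisd_result.append(int(a[i]) + int(b[i]))

# SIMD-style: conceptually, one "add" instruction is broadcast to all
# processing elements, each operating on its own data element in lock step.
simd_result = a + b

print(sisd_result)            # [11, 22, 33, 44]
print(simd_result.tolist())   # [11, 22, 33, 44]

Both paths produce the same result; the difference lies in how many data elements one instruction touches, which is exactly the distinction Flynn's SISD and SIMD categories capture.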
