
HIPPI (HIgh Performance Parallel Interface) is a computer bus for the attachment of high-speed storage devices to supercomputers.[1] It was popular in the late 1980s and into the mid-to-late 1990s, but has since been replaced by ever-faster standard interfaces such as SCSI and Fibre Channel.

The first HIPPI standard defined a 50-wire twisted-pair cable running at 800 Mbit/s (100 MB/s), and was soon upgraded to include a 1600 Mbit/s (200 MB/s) mode running over fibre-optic cable. An effort to improve the speed resulted in HIPPI-6400,[2] later renamed GSN (Gigabyte System Network), which saw little use due to competing standards. GSN had a full-duplex bandwidth of 6400 Mbit/s (800 MB/s) in each direction.

To understand why HIPPI is no longer used, consider that Ultra3 SCSI offers rates of 160 MB/s and is available at almost any corner computer store. Meanwhile, Fibre Channel offered simple interconnection with both HIPPI and SCSI (it can run both protocols) and speeds of up to 400 MB/s on fibre and 100 MB/s on a single twisted pair of copper wires.

HIPPI was the first near-gigabit (0.8 Gbit/s) ANSI standard for network data transmission. It was designed specifically for supercomputers and was never intended for mass-market networks such as Ethernet. Many of the features developed for HIPPI are being integrated into technologies such as InfiniBand. What was remarkable about HIPPI is that it came out when Ethernet was still a 10 Mbit/s data link and SONET at OC-3 (155 Mbit/s) was considered leading-edge technology.
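The rate pairs quoted above (800 Mbit/s = 100 MB/s, and so on) follow from simple unit conversion: dividing a rate in megabits per second by 8 gives megabytes per second. A quick sketch to check the figures:

```python
def mbit_to_mbyte(mbit_per_s):
    """Convert a data rate in Mbit/s to MB/s (8 bits per byte)."""
    return mbit_per_s / 8

# The three HIPPI rates mentioned in the text: original HIPPI,
# the 1600 Mbit/s fibre mode, and HIPPI-6400/GSN.
for rate in (800, 1600, 6400):
    print(f"{rate} Mbit/s = {mbit_to_mbyte(rate):.0f} MB/s")
```

Running this prints 100, 200, and 800 MB/s, matching the figures in the text.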

Handler's Classification:
This classification was proposed by Wolfgang Handler. It identifies the degree of parallelism and the degree of pipelining built into the hardware structure of a computer. Handler considers parallel-pipeline processing at three subsystem levels:

1. Processor Control Unit (PCU)
2. Arithmetic Logic Unit (ALU)
3. Bit-Level Circuit (BLC)

Each PCU corresponds to one CPU. The ALU is an element much smaller than a central processor, with far fewer features, working under the control of the processor; it is generally used to perform arithmetic and logical calculations. A system usually contains many ALUs working in parallel to increase its speed. The BLC corresponds to the combinational logic circuitry needed to perform the bit operations in the ALU.

A computer system can then be characterized by a triple containing six independent entities:

T(C) = <K * K', D * D', W * W'>

where
K = number of PCUs (processors) within the computer,
K' = number of PCUs that can be pipelined,
D = number of ALUs under the control of one PCU,
D' = number of ALUs that can be pipelined,
W = word length of an ALU,
W' = number of pipeline stages in all ALUs.
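The triple above can be sketched as a small data structure. The product of its three pairs gives the machine's maximum degree of parallelism (an upper bound on the number of bits processed concurrently). The example machine below is a hypothetical simple uniprocessor, not a measurement of any real computer:

```python
from dataclasses import dataclass

@dataclass
class HandlerTriple:
    """Handler's triple T(C) = <K*K', D*D', W*W'>."""
    k: int        # K : number of PCUs (processors)
    k_pipe: int   # K': number of PCUs that can be pipelined
    d: int        # D : ALUs under the control of one PCU
    d_pipe: int   # D': ALUs that can be pipelined
    w: int        # W : word length of an ALU (bits)
    w_pipe: int   # W': pipeline stages in the ALUs

    def degree_of_parallelism(self):
        # Product of the three pairs: bits processed concurrently.
        return (self.k * self.k_pipe) * (self.d * self.d_pipe) * (self.w * self.w_pipe)

# Hypothetical simple machine: one PCU, one ALU, 32-bit words, no pipelining.
simple = HandlerTriple(k=1, k_pipe=1, d=1, d_pipe=1, w=32, w_pipe=1)
print(simple.degree_of_parallelism())  # 1*1 * 1*1 * 32*1 = 32
```

Pipelining a component multiplies its contribution, which is why each level appears as a product of a parallelism count and a pipelining count.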


Interleaved memory is a technique for compensating for the relatively slow speed of dynamic RAM (DRAM); other techniques include page-mode memory and memory caches. Multiple memory banks take turns supplying data, so the CPU can access alternate sections immediately without waiting for memory to catch up (through wait states). An interleaved memory with n banks is said to be n-way interleaved: memory location i resides in bank number i mod n. One way of allocating virtual addresses to memory modules is to divide the memory space into contiguous blocks. Interleaved memories implement the idea of accessing more words in a single memory access cycle. This can be achieved by partitioning the memory into, e.g., N separate memory modules, so that N accesses can be carried out simultaneously.
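The bank-selection rule above (location i resides in bank i mod n, often called low-order interleaving) can be sketched as follows; the 4-way example is illustrative:

```python
def bank_of(address, n_banks):
    """Bank holding a memory address under low-order interleaving."""
    return address % n_banks

def word_within_bank(address, n_banks):
    """Offset of the address within its bank."""
    return address // n_banks

n = 4  # a 4-way interleaved memory (illustrative)
for addr in range(8):
    print(f"address {addr} -> bank {bank_of(addr, n)}, "
          f"offset {word_within_bank(addr, n)}")
```

Because consecutive addresses map to different banks, a sequential read touches all n banks in rotation, letting their access cycles overlap.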