
CO NOTES

1) Difference between tightly coupled and loosely coupled multiprocessors


Tightly coupled multiprocessor:
1) A multiprocessor system with common shared memory is known as a tightly coupled multiprocessor.
2) It is also known as a shared-memory multiprocessor.
3) In this, each processor does not have its own local memory.
4) Tightly coupled multiprocessors provide a cache memory with each CPU.
5) It has a global common memory that all CPUs can access.
6) Information can therefore be shared among the CPUs by placing it in the common memory.
7) It can tolerate a higher degree of interaction between tasks.
8) Communication between processors is by means of an input/output (I/O) path.
9) A communication path between two CPUs can be established through a link between two IOPs attached to two different CPUs.
10) Communication can be initiated through software-initiated interprocessor interrupts: an instruction executed in the program of one processor produces an interrupt in the second processor.

Loosely coupled multiprocessor:
1) Each processor has its own local private memory.
2) It is also known as a distributed-memory multiprocessor.
3) In this, memory is not shared among the processors.
4) Information is passed from one processor to another in the form of packets.
5) A packet consists of an address, data content, and some error-detection code.
6) The packets are addressed to a specific processor, depending on the communication system used.
7) They are most efficient when interaction between tasks is minimal.
8) Communication between processors is by means of I/O channels.
9) A channel of communication is established when the sender processor and the receiving processor name each other as source and destination.
10) Communication can be initiated by one processor calling a procedure that resides in the memory of the processor with which it wishes to communicate.
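The packet format described for loosely coupled systems (address, data content, and an error-detection code) can be sketched as follows. This is a minimal illustration, not code from the notes; the trivial additive checksum stands in for whatever real error-detection code (e.g. a CRC) an actual system would use.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    address: int   # destination address in the receiver's memory
    data: int      # data content being transferred
    checksum: int  # simple error-detection code (illustrative only)

def make_packet(address, data):
    # A trivial additive checksum; a real system would use a stronger code.
    return Packet(address, data, (address + data) % 256)

def is_valid(packet):
    # Receiver recomputes the checksum to detect corruption in transit.
    return packet.checksum == (packet.address + packet.data) % 256

p = make_packet(0o1000, 0o3450)
assert is_valid(p)
```

The receiver verifies the checksum before acting on the packet; a mismatch signals a transmission error.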

2) Mapping and mapping techniques


Cache memory: If the active portions of the program and data are placed in a fast, small memory, the average memory access time can be reduced, thus reducing the total execution time of the program. Such a fast memory is referred to as a cache memory.

Mapping: The transformation of data from main memory to cache memory is referred to as a mapping process. There are three types of mapping techniques:
1) Associative mapping
2) Direct mapping
3) Set-associative mapping
To explain these techniques, consider the following figure, in which the main memory can store 32K words of 12 bits each and the cache memory is capable of storing 512 of these words.
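The claim that a cache reduces the average memory access time can be quantified with the standard weighted-average formula. The timings below are hypothetical, chosen only for illustration; the notes do not give numbers.

```python
def average_access_time(hit_ratio, cache_time_ns, main_time_ns):
    # On a hit we pay the cache access time; on a miss we pay the
    # main-memory access time (a simplified model: no miss penalty beyond
    # the main-memory access itself).
    return hit_ratio * cache_time_ns + (1 - hit_ratio) * main_time_ns

# Assumed example: 90% hit ratio, 10 ns cache, 100 ns main memory.
print(average_access_time(0.9, 10, 100))  # ~19 ns, far below 100 ns
```

Even with a cache ten times faster than main memory, the average is dominated by the hit ratio: at 50% hits the average would be 55 ns, so the cache only pays off when most accesses hit.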

[Figure: CPU connected to a main memory of 32K x 12 and a cache memory of 512 x 12.]

The CPU communicates with both memories. It first sends a 15-bit address to the cache. If there is a hit, the CPU accepts the 12-bit data from the cache. If there is a miss, the CPU reads the word from main memory, and the word is then transferred to the cache.

Associative mapping: The fastest and most flexible cache organization uses an associative memory. The associative memory stores both the address and the content of the memory word. Let us explain this mapping with an example in which the cache presently holds three words. The 15-bit address values and the corresponding 12-bit words are shown as five-digit and four-digit octal numbers respectively. A CPU address of 15 bits is placed in the argument register, and the associative memory is searched for a matching address.

Address   Data
01000     3450
02777     6710
22345     1234
If the address is found, the corresponding data is read and sent to the CPU; if not, main memory is accessed for the word, and the address-data pair is then transferred to the cache memory. If the cache is full, an address-data pair must be replaced. To decide which pair to replace, we use a replacement algorithm such as round-robin order, which constitutes a first-in first-out (FIFO) replacement policy.

Direct mapping: Associative memories are expensive compared to RAM because of the added logic. By using RAM we can design the cache as shown in the figure, where the 15-bit CPU address is divided into two fields: the nine least significant bits constitute the index field, and the remaining six bits form the tag. Main memory needs an address that includes both the tag and the index bits, while the number of bits in the index field is equal to the number of address bits required to access the cache memory. In general, if there are 2^k words in cache memory and 2^n words in main memory, the n-bit memory address is divided into two fields: k bits for the index and n-k bits for the tag. The direct-mapping cache organization uses the n-bit address to access main memory and the k-bit index to access the cache.

Tag (6 bits) | Index (9 bits)
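The tag/index split can be sketched with simple bit operations. This is an illustrative model, not code from the notes; it uses the notes' parameters (15-bit address, 9-bit index for a 512-word cache) and reproduces the cache state of the worked example.

```python
INDEX_BITS = 9   # 512-word cache requires a 9-bit index
TAG_BITS = 6     # 15-bit address minus 9 index bits leaves a 6-bit tag

def split_address(addr15):
    # Low 9 bits select the cache word; high 6 bits are stored as the tag.
    index = addr15 & ((1 << INDEX_BITS) - 1)
    tag = addr15 >> INDEX_BITS
    return tag, index

# As in the worked example: index 000 currently holds tag 00, data 1220.
cache = {0o000: (0o00, 0o1220)}

tag, index = split_address(0o02000)   # octal 02000 -> tag 02, index 000
stored_tag, data = cache[index]
hit = (stored_tag == tag)             # 00 != 02, so this access is a miss
```

On the miss, the cache word at index 000 would be overwritten with tag 02 and the word fetched from main memory, matching the example that follows.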

[Figure: main memory 32K x 12 (15-bit address, 12-bit data, octal addresses 00000 to 77777) and cache memory 512 x 12 (9-bit address, 12-bit data, octal addresses 000 to 777).]
To illustrate direct mapping, consider the example shown in the figure below. The word at address zero is presently stored in the cache (index = 000, tag = 00, data = 1220).

Main memory:
Memory address   Memory data
00000            1220
00777            2340
01000            3450
01777            4560
02000            5670
02777            6710

Cache memory:
Index   Tag   Data
000     00    1220
777     02    6710
Suppose the CPU wants to access the word at address 02000. The index address is 000, so it is used to access the cache. The two tags are then compared. The cache tag is 00 but the address tag is 02, which does not produce a match. Therefore, main memory is accessed and the data word 5670 is transferred to the CPU. The cache word at index address 000 is then replaced with a tag of 02 and the data word 5670. The main disadvantage of direct mapping is that two words with the same index cannot be stored in the cache at the same time.

Set-associative mapping: To overcome the disadvantage of direct mapping, a third type of cache organization, called set-associative mapping, has been introduced. In it, each word of cache can store two or more words of memory under the same index address. Each data word is stored together with its tag, and the number of tag-data items in one word of cache is said to form a set. Let us illustrate it with an example in which the size of the cache is 512 x 36. It can accommodate 1024 words of main memory, since each word of cache contains two data words.

Index   Tag   Data   Tag   Data
000     01    3450   02    5670
777     02    6710   00    2340
The words stored at addresses 01000 and 02000 of main memory are stored in cache memory at index address 000. Similarly, the words at addresses 02777 and 00777 are stored in cache at index address 777.

When the CPU generates a memory request, the index value of the address is used to access the cache. The tag field of the CPU address is then compared with both tags in the cache to determine whether a match occurs. The comparison is done by an associative search of the tags in the set, similar to an associative memory search, hence the name set-associative. If a miss occurs and the set is full, it is necessary to replace one of the tag-data items with a new value. The most common replacement algorithms used are random replacement, first-in first-out (FIFO), and least recently used (LRU).
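The two-way set-associative lookup and LRU replacement just described can be sketched as a small behavioral model. This is an illustration under the notes' parameters (9-bit index, two tag-data items per set), not code from the notes; in hardware the two tag comparisons happen in parallel.

```python
class TwoWaySetAssociativeCache:
    """Each index (set) holds up to two (tag, data) pairs; LRU replacement."""
    def __init__(self, index_bits=9):
        self.index_bits = index_bits
        self.index_mask = (1 << index_bits) - 1
        self.sets = {}  # index -> list of (tag, data), most recently used last

    def read(self, address, main_memory):
        index = address & self.index_mask
        tag = address >> self.index_bits
        ways = self.sets.setdefault(index, [])
        for i, (t, d) in enumerate(ways):     # compare against both tags
            if t == tag:
                ways.append(ways.pop(i))      # mark as most recently used
                return d, True                # hit
        data = main_memory[address]           # miss: fetch from main memory
        if len(ways) >= 2:
            ways.pop(0)                       # evict the least recently used
        ways.append((tag, data))
        return data, False

# Addresses 01000 and 02000 share index 000 (values from the notes' example).
main = {0o01000: 0o3450, 0o02000: 0o5670, 0o00000: 0o1220}
cache = TwoWaySetAssociativeCache()
cache.read(0o01000, main)             # miss: set 000 now holds tag 01
cache.read(0o02000, main)             # miss: set 000 holds tags 01 and 02
_, hit = cache.read(0o01000, main)    # hit: both words coexist at index 000
_, hit2 = cache.read(0o00000, main)   # miss: tag 02 (now LRU) is evicted
```

Unlike direct mapping, the second read of 01000 is a hit even though 02000 maps to the same index, which is exactly the disadvantage set-associative mapping removes.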

3) Difference between static RAM and dynamic RAM

Static RAM:
1) Static RAM consists of internal flip-flops that store the binary information.
2) The stored information is valid as long as power is applied to the unit.
3) Static RAM is easier to use and has shorter read and write cycles.
4) It is used in implementing cache memories.
5) It offers less storage capacity when compared with dynamic RAM.
6) It has higher power consumption.

Dynamic RAM:
1) Dynamic RAM stores the binary information in the form of electric charges on capacitors.
2) The data is stored only temporarily (the charge must be refreshed periodically).
3) It has longer read and write cycles.
4) It is used in implementing the main memory.
5) It offers higher storage capacity when compared with static RAM.
6) It offers reduced power consumption.
