3/7/13
Introduction to DSM
Shared memory is used in tightly coupled systems, while loosely coupled systems use message passing or RPC. But how can shared memory be provided on loosely coupled systems?
DSM History
Memory-mapped files started in the MULTICS operating system in the 1960s. One of the first DSM implementations was Apollo, and one of the first systems to use it was Integrated shared Virtual memory at Yale (IVY). DSM developed in parallel with shared-memory multiprocessors.
Topics covered:
Architecture
Design issues
Structure of the shared memory space
Replacement strategy
Thrashing
DSM is a software layer on top of a message-passing system that provides a shared-memory abstraction: it gives the illusion of physically shared memory. The layer can be implemented in the OS kernel, or as a runtime library with kernel support. DSM is the shared-memory paradigm applied to loosely coupled distributed systems.

That is, DSM is an abstraction that integrates the local memories of different machines in a network environment into a single logical entity shared by cooperating processes executing on multiple sites. The shared memory exists only virtually; sharing is achieved with the help of replication and migration of data.
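The abstraction described above can be sketched in a few lines. This is a toy illustration, not any real DSM system: `ToyDSM`, `BLOCK_SIZE`, and the fixed home-node assignment are all assumptions made for the example; a real implementation would fetch remote blocks over the network instead of indexing into a list.

```python
BLOCK_SIZE = 4  # words per block; real systems typically use page-sized blocks

class ToyDSM:
    """Illusion of one shared address space backed by per-node local memories."""
    def __init__(self, num_nodes):
        # each node's local memory holds part of the shared space
        self.local = [dict() for _ in range(num_nodes)]
        self.num_nodes = num_nodes

    def _home_node(self, addr):
        # fixed distribution: the block number determines the block's home node
        return (addr // BLOCK_SIZE) % self.num_nodes

    def read(self, addr):
        # in a real DSM a miss would trigger a network fetch; here it is a lookup
        return self.local[self._home_node(addr)].get(addr, 0)

    def write(self, addr, value):
        self.local[self._home_node(addr)][addr] = value

dsm = ToyDSM(num_nodes=3)
dsm.write(10, 42)
print(dsm.read(10))  # -> 42
```

The point of the sketch is that the caller sees a single flat address space; the partitioning across nodes is hidden inside the class.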
Shared memory (DSM) versus message passing:
Processes can cause errors to one another by altering shared data | Processes are protected from one another by having private address spaces
Processes may execute with non-overlapping lifetimes | Processes must execute at the same time
Communication cost is invisible | Cost of communication is obvious
Distributed Shared Memory (DSM) allows programs running on separate computers to share data without the programmer having to deal with sending messages. Instead, the underlying runtime sends the messages needed to keep the DSM consistent (or relatively consistent) between computers. DSM allows programs that used to operate on the same computer to be easily adapted to operate on separate computers.
A software memory-mapping manager routine in each node maps the local memory onto the shared virtual memory. To facilitate this mapping, the shared memory space is partitioned into blocks. Memory access latency due to the network can be reduced by data caching.
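A hypothetical sketch of the mapping manager's job described above: partition the shared space into blocks, and cache remote blocks locally so that repeated accesses avoid the network. `MappingManager`, `BLOCK_SIZE`, and the `fetch_remote` callback are illustrative assumptions standing in for the real message-passing layer.

```python
BLOCK_SIZE = 1024  # bytes per block (assumed; often a virtual-memory page)

class MappingManager:
    """Caches remotely fetched blocks to hide network latency."""
    def __init__(self, fetch_remote):
        self.cache = {}                    # block number -> block contents
        self.fetch_remote = fetch_remote   # stands in for message passing

    def access(self, addr):
        block, offset = divmod(addr, BLOCK_SIZE)
        if block not in self.cache:        # a "block fault"
            self.cache[block] = self.fetch_remote(block)  # network transfer
        return self.cache[block][offset]

# usage: count how often the "network" is actually used
calls = []
def fetch(block):
    calls.append(block)
    return [0] * BLOCK_SIZE

mm = MappingManager(fetch_remote=fetch)
mm.access(5)
mm.access(6)         # same block: served from the local cache, no new fetch
print(len(calls))    # -> 1
```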
[Figure: DSM architecture. Each node has CPUs and local memory managed by a memory-mapping manager; the nodes are connected by a communication network.]
Granularity: the unit of sharing, or unit of data transfer. Possible units are a few words, a page, or a few pages. Granularity determines the degree of parallelism and the amount of network traffic.
Memory coherence: coherence means consistency, and relates to replicated data. Since there may be concurrent access to the shared data, synchronization primitives such as semaphores, event counts, and locks may be used.
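A minimal illustration of the synchronization primitives mentioned above, using a binary semaphore to guard concurrent updates to shared data (a single-process stand-in for what a DSM would coordinate across nodes):

```python
import threading

counter = 0
sem = threading.Semaphore(1)  # binary semaphore used as a mutex

def deposit(n):
    global counter
    for _ in range(n):
        with sem:           # acquire before touching the shared data
            counter += 1    # critical section

threads = [threading.Thread(target=deposit, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 4000
```

Without the semaphore, interleaved read-modify-write sequences could lose updates; with it, every increment is applied exactly once.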
5. Replacement Strategy: when local memory is full, decide which block to replace and where to put the replaced block.
6. Thrashing: data blocks migrate between nodes on demand; if blocks are transferred back and forth at such a high rate that no node can get useful work done, the system is thrashing.
Structure and granularity are closely related. Commonly used approaches for structuring are:
No structuring: the shared memory is unstructured, so a suitable page size can be chosen as a fixed grain size.
Structuring as a database.
Replacement strategy
Which block should be replaced to make space for a newly required block? Where should the replaced block be placed?
Usage-based versus non-usage-based (LRU versus FIFO, random)
Fixed space versus variable space
Priority-based mechanism
Each block on a node is classified as one of:
Unused: a block that is not currently being used.
Nil: a block that has been invalidated.
Read-only: a block for which the node has only the read access right.
Read-owned: a block for which the node has read access and ownership.
Writable: a block for which the node has write access.
Replacement Priority
Unused and nil: highest replacement priority.
Read-only: next replacement priority; a copy of the block is available at the owner and can be brought back if required.
Read-owned and writable blocks for which replicas exist on other nodes: next replacement priority.
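The priority order above can be sketched as a victim-selection routine. The numeric encoding and state names here are assumptions made for the example, not taken from any particular DSM implementation; a lower number means "evict first".

```python
# assumed encoding of the replacement priorities described above
PRIORITY = {
    "unused": 0, "nil": 0,        # highest replacement priority
    "read_only": 1,               # the owner still holds a copy
    "read_owned_replicated": 2,   # replicas exist on other nodes
    "writable_replicated": 2,
}

def pick_victim(blocks):
    """blocks: dict of block id -> state; return the block to evict first."""
    return min(blocks, key=lambda b: PRIORITY[blocks[b]])

print(pick_victim({7: "read_only", 3: "nil", 9: "writable_replicated"}))  # -> 3
```

Because an unused or invalidated block carries no information worth keeping, it is always preferred over blocks whose loss would cost a network fetch later.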
Where should a replaced block be placed?
Blocks selected for replacement that hold useful information need not be discarded, e.g. writable or read-owned blocks. Approaches for storing replaced blocks are:
Thrashing
Thrashing is a serious performance problem caused by poor locality of reference: a large amount of time is wasted transferring shared data blocks from one node to another instead of doing useful work. Thrashing occurs when network resources are exhausted and more time is spent invalidating data and sending updates than doing useful computation.
Thrashing occurs..
When interleaved data accesses made by processes on two or more nodes cause a data block to move back and forth between nodes in quick succession.
When blocks with read-only permission are repeatedly invalidated soon after they are replicated.
Methods for reducing thrashing include:
Providing application-controlled locks.
Nailing a block to a node for a minimum amount of time.
Tailoring the coherence algorithm to the shared-data usage patterns.
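The second technique above, nailing a block to a node for a minimum time after it arrives, can be sketched with a timestamp check. `NailedBlocks` and the `NAIL_TIME` value are illustrative assumptions; a real system would tune the hold time to the workload.

```python
import time

NAIL_TIME = 0.05  # seconds a block must stay put after arriving (assumed value)

class NailedBlocks:
    """Refuses migration requests for freshly arrived blocks."""
    def __init__(self):
        self.arrived = {}  # block id -> arrival time on this node

    def receive(self, block):
        self.arrived[block] = time.monotonic()

    def may_migrate(self, block):
        # deny migration until the block has been held long enough
        return time.monotonic() - self.arrived[block] >= NAIL_TIME

nb = NailedBlocks()
nb.receive(1)
print(nb.may_migrate(1))   # -> False (just arrived)
time.sleep(NAIL_TIME)
print(nb.may_migrate(1))   # -> True
```

Holding a block in place for a minimum interval breaks the rapid ping-pong pattern described earlier, at the cost of delaying the other node's access.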