
Concurrency in Shared Memory Systems
Synchronization and Mutual Exclusion

Processes, Threads, Concurrency
• Traditional processes are sequential: one
instruction at a time is executed.
• Multithreaded processes may have
several sequential threads that can
execute concurrently.
• Processes (threads) are concurrent if their
executions overlap – start time of one
occurs before finish time of another.
Concurrent Execution
• On a uniprocessor, concurrency occurs
when the CPU is switched from one
process to another, so the instructions of
several threads are interleaved (alternate)
• On a multiprocessor, execution of
instructions in concurrent threads may be
overlapped (occur at same time) if the
threads are running on separate
processors.
Concurrent Execution

• An interrupt, followed by a context switch, can take place between any two instructions.
• Hence the pattern of instruction overlapping and
interleaving is unpredictable.
• Processes and threads execute asynchronously
– we cannot predict if event a in process i will
occur before event b in process j.
Sharing and Concurrency

• System resources (files, devices, even memory) are shared by processes, threads, and the OS. Uncontrolled access to shared entities can cause data integrity problems:
• Example: Suppose two threads (1 and 2)
have access to a shared (global) variable
“balance”, which represents a bank account.
• Each thread has its own private (local) variable "withdrawal_i", where i is the thread number.
Example
• Let balance = 100, withdrawal_1 = 50, and withdrawal_2 = 75.
• Thread i will execute the following algorithm:

  if (balance >= withdrawal_i)
      balance = balance – withdrawal_i
  else
      print "Can't overdraw account!"
• If thread1 executes first, balance will be 50
and thread2 can’t withdraw funds.
• If thread2 executes first, balance will be 25
and thread1 can’t withdraw funds.
• But --- what if the two threads execute
concurrently instead of sequentially?
• Break down into machine-level operations. The statement

  if (balance >= withdrawal_i)
      balance = balance – withdrawal_i

  becomes:

  move balance to a register
  compare register to withdrawal_i
  branch if less-than
  register = register – withdrawal_i
  store register contents in balance
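A minimal C sketch (assuming POSIX threads) of this unsynchronized withdrawal; the function name make_withdrawal and the main() driver are illustrative additions, not part of the original example:

#include <pthread.h>
#include <stdio.h>

int balance = 100;                     /* shared (global) account balance */

/* Unsynchronized withdrawal: the load/compare/subtract/store sequence
   can interleave with the other thread's and lose an update. */
void *make_withdrawal(void *arg)
{
    int amount = *(int *)arg;          /* this thread's private withdrawal */

    if (balance >= amount)
        balance = balance - amount;
    else
        printf("Can't overdraw account!\n");
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int w1 = 50, w2 = 75;

    pthread_create(&t1, NULL, make_withdrawal, &w1);
    pthread_create(&t2, NULL, make_withdrawal, &w2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* With an unlucky interleaving both withdrawals proceed and the
       final balance is 50 or 25 instead of one withdrawal being refused. */
    printf("final balance = %d\n", balance);
    return 0;
}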
Example – Multiprocessor
(A possible instruction sequence showing interleaved execution)

  (1) Thread 2: move balance to register2 (register2 = 100)
  (2) Thread 1: move balance to register1 (register1 = 100)
  (3) Thread 2: compare register2 to withdraw2
  (4) Thread 1: compare register1 to withdraw1
  (5) Thread 1: register1 = register1 – withdraw1 (100 – 50)
  (6) Thread 2: register2 = register2 – withdraw2 (100 – 75)
  (7) Thread 1: store register1 in balance (balance = 50)
  (8) Thread 2: store register2 in balance (balance = 25)

  Both threads see balance = 100, so both withdrawals proceed; Thread 2's store overwrites Thread 1's, leaving balance = 25 even though 125 was "withdrawn".
Example – Uniprocessor
(A possible instruction sequence showing interleaved execution)

  Thread 1: move balance to register (register = 100)
  Thread 1's time slice expires – its state is saved
  Thread 2: move balance to register
  Thread 2: balance >= withdraw2, so balance = balance – withdraw2 (100 – 75 = 25)
  Thread 1 is re-scheduled; its state is restored (register = 100)
  Thread 1: balance = balance – withdraw1 (100 – 50)

  Result: balance = 50 – Thread 2's withdrawal of 75 is lost.
Race Conditions
• The previous examples illustrate a race
condition (data race): an undesirable
condition that exists when several
processes access shared data, and
– At least one access is a write and
– The accesses are not mutually exclusive
• Race conditions can lead to inconsistent
results.
Mutual Exclusion
• Mutual exclusion forces serial resource access
as opposed to concurrent access.
• When one thread locks a critical resource, no
other thread can access it until the lock is
released.
• Critical section (CS): code that accesses shared
resources.
• Mutual exclusion guarantees that only one
process/thread at a time can execute its critical
section, with respect to a given resource.
Mutual Exclusion Requirements

• It must ensure that only one process/thread at a time can access a shared resource.
• In addition, a good solution will ensure that
  – If no thread is in the CS, a thread that wants to execute its CS must be allowed to do so
  – When 2 or more threads want to enter their CSs, the decision can't be postponed indefinitely
– Every thread should have a chance to execute its
critical section (no starvation)
Solution Model
• Begin_mutual_exclusion   /* some mutex primitive */
  execute critical section
  End_mutual_exclusion     /* some mutex primitive */
• The problem: how to implement the mutex primitives?
  – Busy-wait solutions (e.g., the test-and-set operation, spinlocks of various sorts, Peterson's algorithm), as sketched below
– Semaphores (OS feature usually, blocks waiting
process)
– Monitors (language feature – e.g. Java)
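As a sketch of the busy-wait approach mentioned above, here is a minimal spinlock built on the C11 atomic test-and-set primitive; acquire and release are illustrative names standing in for Begin_mutual_exclusion and End_mutual_exclusion:

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear = unlocked */

/* Begin_mutual_exclusion: spin (busy wait) until test-and-set
   finds the flag clear, meaning no other thread holds the lock. */
static void acquire(void)
{
    while (atomic_flag_test_and_set(&lock_flag))
        ;                                          /* spin */
}

/* End_mutual_exclusion: clear the flag so a spinning thread can enter. */
static void release(void)
{
    atomic_flag_clear(&lock_flag);
}

/* Usage:
     acquire();
     ... critical section ...
     release();                                                          */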
Semaphores
• Definition: an integer variable on which processes can perform two indivisible operations, P( ) and V( ), plus initialization. (P and V are sometimes called Wait and Signal.)
• Each semaphore has a wait queue
associated with it.
• Semaphores are protected by the
operating system.
Semaphores
• Binary semaphore: only values are 1 and
0
• Traditional semaphore: may be initialized
to any non-negative value; can count
down to zero.
• Counting semaphores: P & V operations
may reduce semaphore values below 0, in
which case the negative value records the
number of blocked processes. (See CS
490 textbook)
Semaphores
• Are used to synchronize and coordinate
processes and/or threads
• Calling the P (wait) operation may cause a
process to block
• Calling the V (signal) operation never
causes a process to block, but may wake
a process that has been blocked by a
previous P operation.
Traditional Semaphore

  P(S): if S >= 1
            then S = S – 1
            else block the process on S queue

  V(S): if some processes are blocked on S queue
            then unblock a process
            else S = S + 1

Counting Semaphore

  P(S): S = S – 1
        if (S < 0)
            then block the process on S queue

  V(S): S = S + 1
        if (S <= 0)
            then move a process from S queue to the Ready queue
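To show how the P and V bookkeeping might be realized, here is a sketch of a traditional semaphore built on a pthread mutex and condition variable; the mutex is what makes each P and V indivisible. The names csem_t, csem_init, csem_P, and csem_V are illustrative, not a standard API (POSIX itself provides real semaphores via sem_init/sem_wait/sem_post, used in later examples):

#include <pthread.h>

typedef struct {
    int value;                       /* semaphore value, never negative here */
    pthread_mutex_t lock;            /* makes each P and V indivisible       */
    pthread_cond_t  positive;        /* blocked callers wait here            */
} csem_t;

void csem_init(csem_t *s, int initial)
{
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->positive, NULL);
}

void csem_P(csem_t *s)               /* wait */
{
    pthread_mutex_lock(&s->lock);
    while (s->value < 1)             /* "block the process on S queue" */
        pthread_cond_wait(&s->positive, &s->lock);
    s->value--;                      /* S = S - 1 */
    pthread_mutex_unlock(&s->lock);
}

void csem_V(csem_t *s)               /* signal */
{
    pthread_mutex_lock(&s->lock);
    s->value++;                      /* S = S + 1 */
    pthread_cond_signal(&s->positive);   /* unblock a waiting process, if any */
    pthread_mutex_unlock(&s->lock);
}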
Usage – Mutual Exclusion
• Using a semaphore to enforce mutual exclusion.
P(mutex) // mutex initially = 1
execute CS;
V(mutex)
• Each process that uses a shared resource must first call P, which blocks it while another process is in the critical section, and then must use V to release the critical section when it is done.
Bank Problem Revisited
Semaphore S = 1
Thread 1:
  P(S)
  Move balance to register1
  Compare register1 to withdraw1
  register1 = register1 – withdraw1
  Store register1 in balance
  V(S)

Thread 2:
  P(S)
  Move balance to register2
  Compare register2 to withdraw2
  register2 = register2 – withdraw2
  Store register2 in balance
  V(S)
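A C sketch of the protected withdrawal using POSIX semaphores, where sem_init, sem_wait, and sem_post play the roles of initialization, P, and V; the driver code is an illustrative addition:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

int   balance = 100;   /* shared account balance */
sem_t S;               /* mutual-exclusion semaphore, initialized to 1 */

void *make_withdrawal(void *arg)
{
    int amount = *(int *)arg;

    sem_wait(&S);                      /* P(S): enter critical section */
    if (balance >= amount)
        balance = balance - amount;
    else
        printf("Can't overdraw account!\n");
    sem_post(&S);                      /* V(S): leave critical section */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int w1 = 50, w2 = 75;

    sem_init(&S, 0, 1);                /* shared between threads, value 1 */
    pthread_create(&t1, NULL, make_withdrawal, &w1);
    pthread_create(&t2, NULL, make_withdrawal, &w2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final balance = %d\n", balance);  /* 50 or 25: only one withdrawal succeeds */
    return 0;
}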
Example – Uniprocessor
  Thread 1: P(S) – S is decremented: S = 0; T1 continues to execute
  Thread 1: move balance to register (register = 100)
  T1's time slice expires – its state is saved
  Thread 2: P(S) – since S = 0, T2 is blocked
  T1 is re-scheduled; its state is restored (register = 100)
  Thread 1: balance = balance – withdraw1 (100 – 50)
  Thread 1: V(S) – Thread 2 returns to the run state; S remains 0
  T2 resumes executing some time after T1 executes V(S)
  Thread 2: move balance to register (50)
  Thread 2: balance >= withdraw2? Since !(50 >= 75), T2 does not make the withdrawal
  Thread 2: V(S) – since no thread is waiting, S is set back to 1
Critical Sections are Indivisible
• The effect of mutual exclusion is to make a
critical section appear to be “indivisible” –
much like a hardware instruction. (Recall
the atomic nature of a transaction)
• In the bank example, once T1 enters its critical section, no other thread is allowed to operate on balance until T1 signals that it has left the CS (this assumes that all users employ mutual exclusion).
Implementing Semaphores:
P and V Must Be Indivisible

• Semaphore operations themselves must be indivisible, or atomic; i.e., execute under mutual exclusion.
• Once OS begins to execute a P or V
operation, it cannot allow another P or V to
begin on the same semaphore.
P and V Must Be Indivisible
• The P operation must be indivisible; otherwise there is no guarantee that two processes won't test S at the "same" time and both find it equal to 1.
  – P(S): if S >= 1 then S = S – 1
          else block the process on S queue
• Two V operations executed at the same time could unblock two processes, leading to two processes in their critical sections concurrently.
  – V(S): if some processes are blocked on the queue for S then unblock a process
          else S = S + 1
• Putting it together, each process executes:

  P(S):  if S >= 1 then S = S – 1
         else block the process on S queue

  execute critical section

  V(S):  if processes are blocked on the queue for S then unblock a process
         else S = S + 1
Semaphore Usage – Event Wait
(synchronization that isn’t mutex)

• Suppose a process P2 wants to wait on an event of some sort (call it A) which is to be executed by another process P1
• Initialize a shared semaphore to 0
• By executing a wait (P) on the semaphore,
P2 will wait until P1 executes event A and
signals, using the V operation.
Event Wait – Example
semaphore signal = 0;

Process 1:
  ...
  execute event A
  V(signal)

Process 2:
  ...
  P(signal)    // blocks here until Process 1 executes V(signal)
  ...
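A C sketch of the event-wait pattern with a POSIX semaphore initialized to 0; the names event_done, process1, and process2 are illustrative:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t event_done;                      /* initialized to 0: event A not yet done */

void *process1(void *arg)              /* executes event A, then signals */
{
    printf("P1: executing event A\n");
    sem_post(&event_done);             /* V(signal) */
    return NULL;
}

void *process2(void *arg)              /* waits for event A */
{
    sem_wait(&event_done);             /* P(signal): blocks until P1 posts */
    printf("P2: event A has happened, continuing\n");
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    sem_init(&event_done, 0, 0);       /* value 0, shared between threads */
    pthread_create(&t2, NULL, process2, NULL);
    pthread_create(&t1, NULL, process1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}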
Semaphores Are Not Perfect
• Programmer must know something about
other processes using the semaphore
• Must use semaphores carefully (be sure to
use them when needed; don’t leave out a
V(), etc.)
• Hard to prove program correctness when
using semaphores.
Other Synchronization
Problems
(in addition to simple mutual exclusion)

• Dining Philosophers: resource deadlock
• Producer-consumer: buffering (as of messages, input data, etc.)
• Readers-writers: data base or file sharing
– Reader’s priority
– Writer’s priority
Producer-Consumer
• Producer processes and consumer
processes share a (usually finite) pool
of buffers.
• Producers add data to pool
• Consumers remove data, in FIFO order
Producer-Consumer Requirements
• The processes are asynchronous. A
solution must ensure producers don’t
deposit data if pool is full and consumers
don’t take data if pool is empty.
• Access to buffer pool must be mutually
exclusive since multiple consumers (or
producers) may try to access the pool
simultaneously.
Bounded Buffer P/C Algorithm

Initialization:
  s = 1;             // controls access to the buffer pool
  n = 0;             // number of full buffer slots
  e = sizeofbuffer;  // number of empty buffer slots

Producer:
  while (true)
    produce v;
    P(e);            // wait for an empty buffer slot
    P(s);            // wait for buffer pool access
    append(v);
    V(s);            // release buffer pool
    V(n);            // signal a full buffer

Consumer:
  while (true)
    P(n);            // wait for a full buffer
    P(s);            // wait for buffer pool access
    w := take();
    V(s);            // release buffer pool
    V(e);            // signal an empty buffer
    consume(w);
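A C sketch of the same algorithm using POSIX semaphores over a small circular buffer; BUFSIZE, buffer, in, and out are illustrative details not specified in the pseudocode:

#include <semaphore.h>

#define BUFSIZE 10

int buffer[BUFSIZE];                  /* the shared buffer pool              */
int in = 0, out = 0;                  /* next slot to fill / next to empty   */

sem_t s;                              /* buffer-pool mutex,  initial value 1       */
sem_t n;                              /* full buffer slots,  initial value 0       */
sem_t e;                              /* empty buffer slots, initial value BUFSIZE */

void producer_put(int v)
{
    sem_wait(&e);                     /* P(e): wait for an empty slot      */
    sem_wait(&s);                     /* P(s): wait for buffer pool access */
    buffer[in] = v;                   /* append(v)                         */
    in = (in + 1) % BUFSIZE;
    sem_post(&s);                     /* V(s): release buffer pool         */
    sem_post(&n);                     /* V(n): signal a full buffer        */
}

int consumer_take(void)
{
    int w;
    sem_wait(&n);                     /* P(n): wait for a full buffer      */
    sem_wait(&s);                     /* P(s): wait for buffer pool access */
    w = buffer[out];                  /* w := take()                       */
    out = (out + 1) % BUFSIZE;
    sem_post(&s);                     /* V(s): release buffer pool         */
    sem_post(&e);                     /* V(e): signal an empty buffer      */
    return w;
}

/* Initialization, done once before the threads start:
     sem_init(&s, 0, 1);  sem_init(&n, 0, 0);  sem_init(&e, 0, BUFSIZE);    */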
Readers and Writers Problem
• Characteristics:
– concurrent processes access shared data
area (files, block of memory, set of registers)
– some processes only read information, others
write (modify and add) information
• Restrictions:
– Multiple readers may read concurrently, but
when a writer is writing, there should be no
other writers or readers.
Compare to Prod/Cons
• Differences between Readers/Writers
(R/W) and Producer/Consumer (P/C):
– Data in P/C is ordered - placed into buffer and
retrieved according to FIFO discipline. All data
is read exactly once.
– In R/W, same data may be read many times
by many readers, or data may be written by
writer and changed before any reader reads.
No order enforced on reads.
Readers/Writers – Readers' Priority Solution

// Initialization code – done only once
integer readcount = 0;
semaphore x = 1, wsem = 1;

procedure writer;
begin
  repeat
    P(wsem);
    write data;
    V(wsem);
  forever
end;

procedure reader;
begin
  repeat
    P(x);
    readcount = readcount + 1;
    if readcount == 1 then P(wsem);
    V(x);
    read data;
    P(x);
    readcount = readcount - 1;
    if readcount == 0 then V(wsem);
    V(x);
  forever
end;
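A C sketch of this readers-priority solution using POSIX semaphores; reader() and writer() mirror the procedures above, and the actual read/write bodies are left as comments:

#include <semaphore.h>

int   readcount = 0;     /* number of readers currently reading   */
sem_t x;                 /* protects readcount; initialized to 1  */
sem_t wsem;              /* writer exclusion;   initialized to 1  */

void writer(void)
{
    sem_wait(&wsem);             /* P(wsem) */
    /* write data */
    sem_post(&wsem);             /* V(wsem) */
}

void reader(void)
{
    sem_wait(&x);
    readcount++;
    if (readcount == 1)          /* first reader locks out writers */
        sem_wait(&wsem);
    sem_post(&x);

    /* read data */

    sem_wait(&x);
    readcount--;
    if (readcount == 0)          /* last reader lets writers back in */
        sem_post(&wsem);
    sem_post(&x);
}

/* Initialization, done only once: sem_init(&x, 0, 1); sem_init(&wsem, 0, 1); */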
Any Questions?
Can you think of any real
examples of producer-consumer
or reader-writer situations?
Semaphores and User Thread Library
• Thread libraries can simulate real semaphores.
• In a multi-(user-level)-threaded process the OS only sees a single thread of execution; e.g.,
  T1, T1, T1, L, L, T2, T2, L, L, T1, T1, …   (L = thread-library code)
  – Library functions execute when a u-thread voluntarily yields control
• Use a variable as a semaphore, accessed via P & V functions. If a thread executes P(S) and finds S = 0, it yields control.
Semaphores and User Thread Library

• Why is this safe? Because there is really never more than one thread of control – violations of mutual exclusion happen when separate threads are scheduled concurrently.
• A user-level thread decides when to yield
control; kernel-level threads don’t.
• If the library is asked to execute P(S) or V(S)
it will not be interrupted by another thread in
the same process, so there is no danger.
