
Chapter 5

Concurrency: Mutual Exclusion and Synchronisation: Part 1
In this session…
• We will cover:
– Concurrency
  • Mutual exclusion, deadlock, livelock
  • Critical sections
– Software approaches to Mutual Exclusion
Multiple Processes
• Operating System design is concerned with the management of processes and threads:
– Multiprogramming: the management of multiple processes within a uniprocessor system
– Multiprocessing: the management of multiple processes within a multiprocessor
– Distributed Processing: the management of multiple processes executing on multiple, distributed computer systems (e.g., computer clusters)
Multiple Processes
• All of these involve cooperation, competition, and communication between processes
• This includes processes that either run simultaneously or are interleaved in different ways to give the appearance of running simultaneously
• Therefore concurrent processing is vitally important to operating systems and their design
• This is particularly challenging in the HPC domain, where CPUs run at different speeds
Typical HPC
[Figure: a typical HPC system (not reproduced)]
Concurrency
• Fundamental to all of these areas, and fundamental to OS design, is concurrency
• Concurrency is the interleaving of processes in time to give the appearance of simultaneous execution
• Concurrency encompasses a host of design issues, including:
– Communication among processes
– Sharing of and competing for resources (such as memory, files, and I/O access)
– Synchronisation of the activities of multiple processes
– Allocation of processor time to processes
• These issues arise not just in multiprocessing and distributed processing environments but even in single-processor multiprogramming systems
Concurrency Example
salary = user_salary;
tmp_salary = salary;
updateSalary(tmp_salary);

As both processes run, here's what happens:
• Process A runs the code, but only gets as far as the point where its user enters 15.50 into the shared variable salary
• Process B runs the code. It finishes just fine and winds up with a final value of 17.50
• Process A starts up again, but now tmp_salary is actually showing 17.50, which is not the value Process A's user entered (see the sketch below)
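
The snippet above is pseudocode from the slide. Below is a minimal, hypothetical C sketch of the same race, using POSIX threads to stand in for processes A and B. The updateSalary() stub, the 15.50/17.50 values, and the sleep() calls that force the unlucky interleaving are illustrative assumptions, not part of the original example.

/* Two threads share 'salary'. Thread A writes 15.50, loses the CPU
 * (simulated with sleep), and by the time it copies salary into
 * tmp_salary, thread B has overwritten it with 17.50.
 * Compile with: gcc race.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static double salary;                       /* shared between A and B       */

static void updateSalary(double s)          /* stand-in for the real update */
{
    printf("payroll updated with %.2f\n", s);
}

static void *process_a(void *arg)
{
    (void)arg;
    salary = 15.50;                         /* A's user enters 15.50        */
    sleep(2);                               /* A is preempted here          */
    double tmp_salary = salary;             /* now reads B's 17.50          */
    updateSalary(tmp_salary);               /* wrong value for A            */
    return NULL;
}

static void *process_b(void *arg)
{
    (void)arg;
    salary = 17.50;                         /* B's user enters 17.50        */
    double tmp_salary = salary;
    updateSalary(tmp_salary);               /* B finishes correctly         */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, process_a, NULL);
    sleep(1);                               /* ensure A writes before B     */
    pthread_create(&b, NULL, process_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}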
Concurrency Arises in Three Different Contexts
• Multiple Applications – invented to allow processing time to be shared among active applications
• Structured Applications – an extension of modular design and structured programming
• Operating System Structure – OSs are themselves implemented as a set of processes or threads
Three Contexts
• Multiple applications: Multiprogramming was invented to allow processing time to be dynamically shared among a number of active applications
• Structured applications: As an extension of the principles of modular design and structured programming, some applications can be effectively programmed as a set of concurrent processes
• Operating system structure: The same structuring advantages apply to systems programs, and we have seen that operating systems are themselves often implemented as a set of processes or threads
Key Terms Related to Concurrency
• Atomic operation: a function or action implemented as a sequence of one or more instructions that appears to be indivisible; that is, no other process can see an intermediate state or interrupt the operation. The sequence of instructions is guaranteed to execute as a group, or not execute at all (see the sketch below)
• Critical section: a section of code within a process that requires access to shared resources and that must not be executed while another process is in a corresponding section of code
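
To make "atomic operation" concrete, here is a small C11 illustration (not from the slides): two threads increment both a plain int and an atomic_int. The plain increment is really a load, add, and store that can interleave, so updates are lost; atomic_fetch_add appears indivisible, matching the definition above.

/* Compile with: gcc atomic_demo.c -pthread */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static int        plain_counter;            /* subject to lost updates      */
static atomic_int atomic_counter;           /* updated atomically           */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        plain_counter++;                    /* NOT atomic                   */
        atomic_fetch_add(&atomic_counter, 1);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("plain:  %d (often less than 200000)\n", plain_counter);
    printf("atomic: %d (always 200000)\n", atomic_load(&atomic_counter));
    return 0;
}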
Key Terms Related to Concurrency
• Deadlock: a situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something
• Livelock: a situation in which two or more processes continuously change their states in response to changes in the other process(es) without doing any useful work
• Mutual exclusion: in many ways the fundamental issue in concurrency. It is the requirement that when a process P is accessing a shared resource R, no other process should be able to access R until P has finished with R. Examples of such resources include files, I/O devices such as printers, and shared data structures (see the sketch below)
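
As a point of reference only (the slides go on to develop software-only solutions), here is a minimal sketch of the mutual exclusion requirement enforced with a POSIX mutex: the shared data structure R may only be touched by the thread currently holding the lock. The deposit() function and shared_balance are illustrative names, not from the slides.

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_balance;                 /* the shared resource R        */

void deposit(long amount)
{
    pthread_mutex_lock(&lock);              /* P gains exclusive access to R */
    shared_balance += amount;               /* critical section on R         */
    pthread_mutex_unlock(&lock);            /* others may now access R       */
}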
Requirements for Solving the Critical Section Problem
• The following conditions must be met:
– Mutual Exclusion – if a process is executing in its critical section, then no other process can execute in its critical section
– Progress – when no process is in a critical section, any process that requests entry must be permitted without delay
– Bounded Wait – there is an upper bound on the number of times a process can enter its critical section while another is waiting (no starvation)
Mutual Exclusion: Software Approaches
• Software approaches can be implemented for concurrent processes that execute on a single-processor or a multiprocessor machine with shared main memory
• These approaches usually assume elementary mutual exclusion at the memory access level
• Dijkstra reported an algorithm for mutual exclusion for two processes, called Dekker's algorithm
• There are also hardware solutions to ensure mutual exclusion
Dekker's Algorithm
• Dekker's algorithm is a concurrent programming algorithm for mutual exclusion devised by the Dutch mathematician T. J. Dekker in 1964
• It allows two threads to share a single-use resource without conflict, using only shared memory for communication
• It avoids the strict alternation of a turn-based algorithm
• If two processes attempt to enter the critical section at the same time, the algorithm will only allow one process in, based on whose turn it is
• If one process is already in the critical section, the other process will busy-wait for the first process to finish
• This is achieved by using two flags, f0 and f1, which indicate an intention to enter the critical section, and a turn variable which indicates who has priority between the two processes
First Attempt
[Figure: code for the first attempt (not reproduced; a sketch follows below). Annotations: check whether the other process is running its critical section and, if it is, wait, since we don't want two processes running the critical section at the same time; on leaving, update whose turn it is.]
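
Since the code figure is not reproduced, the following is a minimal C-style sketch of the usual first attempt: a single shared turn variable with busy waiting, consistent with the strict alternation described on the next slide. Variable names are illustrative, and the sketch assumes memory accesses occur in program order (real compilers and CPUs would need volatile/atomics or fences).

/* Shared between the two processes */
int turn = 0;                     /* whose turn it is to enter             */

/* Process 0 (process 1 is symmetric with the values swapped) */
while (turn != 0)
    ;                             /* busy-wait until it is our turn        */
/* critical section */
turn = 1;                         /* on exit, hand the turn to the other   */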
First Attempt Drawbacks
• First, processes must strictly alternate in their use of their critical section; therefore, the pace of execution is dictated by the slower of the two processes
– If P1 uses its critical section only once per hour, but P2 would like to use its critical section at a rate of 1,000 times per hour, P2 is forced to adopt the pace of P1
• A much more serious problem is that if one process fails, the other process is permanently blocked. This is true whether a process fails in its critical section or outside of it
Second Attempt
[Figure: code for the second attempt (not reproduced; a sketch follows below). Annotation: check whether the other process is running its critical section and, if it is, wait.]
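
Again the figure is not reproduced; the sketch below follows the c1/c2 naming used on the drawbacks slide that follows, where ci == 1 means process i is outside its critical section and ci == 0 means it is inside. Each process checks the other's flag first and only then sets its own, which is exactly what allows both to slip in together. As before, this is an illustrative fragment, not code from the slide.

/* Shared flags: ci == 1 means process i is outside its critical section */
int c1 = 1, c2 = 1;

/* Process 1 (process 2 is symmetric with c1 and c2 swapped) */
while (c2 == 0)
    ;                             /* wait while P2 is in its section       */
c1 = 0;                           /* too late: P2 may have passed its own
                                     check before we set this              */
/* critical section */
c1 = 1;                           /* leave the critical section            */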
Second Attempt Drawbacks
• If a process fails inside its critical section, or after setting its flag to true just before entering its critical section, then the other process is permanently blocked
• This solution may not even guarantee mutual exclusion:
– 1. P1 checks c2 and finds c2 == 1
– 2. P2 checks c1 and finds c1 == 1
– 3. P1 sets c1 to 0
– 4. P2 sets c2 to 0
– 5. P1 enters its critical section
– 6. P2 enters its critical section
Third Attempt
[Figure: code for the third attempt (not reproduced; a sketch follows below). Annotation: check whether the other process is running its critical section and, if it is, wait.]
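
As before, the figure is not reproduced; this is an illustrative sketch of the usual third attempt, in which each process sets its intention flag before checking the other's. Mutual exclusion now holds, but as the next slide notes, two simultaneous entries deadlock. The flag names follow the f0/f1 convention introduced on the Dekker's Algorithm slide.

/* Shared intention flags: fi == 1 means process i wants to enter */
int f0 = 0, f1 = 0;

/* Process 0 (process 1 is symmetric with f0 and f1 swapped) */
f0 = 1;                           /* announce intent before checking       */
while (f1)
    ;                             /* wait while process 1 also wants in;
                                     if both set their flags first, both
                                     wait here forever (deadlock)           */
/* critical section */
f0 = 0;                           /* withdraw intent on exit               */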
Third Attempt Drawbacks
• As before, if one process fails inside its critical section, including the flag-setting code controlling the critical section, then the other process is blocked
• Deadlock:
– If both processes set their flags to true before either has executed the while statement, then each will think that the other has entered its critical section, causing deadlock
Dekker's Algorithm
[Figure: Dekker's algorithm (code not reproduced; a sketch follows below)]
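
Since the code figure is not reproduced, here is a sketch of the standard formulation of Dekker's algorithm using the two flags f0, f1 and the turn variable described earlier. It shows the entry and exit protocol for process 0; process 1 is symmetric with the indices and turn values swapped. It assumes sequentially consistent memory accesses.

/* Shared: intention flags f0, f1 (1 = wants to enter) and turn (0 or 1) */
int f0 = 0, f1 = 0;
int turn = 0;

/* Process 0: entry protocol */
f0 = 1;                           /* announce intent                       */
while (f1) {                      /* while process 1 also wants in         */
    if (turn == 1) {              /* ...and it is process 1's turn         */
        f0 = 0;                   /* back off                              */
        while (turn == 1)
            ;                     /* wait for the turn to come back        */
        f0 = 1;                   /* re-announce intent                    */
    }
}
/* critical section */
turn = 1;                         /* give priority to process 1            */
f0 = 0;                           /* exit protocol                         */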
Dekker's Algorithm
• Dekker's algorithm solves the mutual exclusion problem, but with a rather complex program that is difficult to follow and whose correctness is tricky to prove
• Peterson has provided a simple, elegant solution
Peterson's Algorithm (1981)
[Figure: Peterson's algorithm (code not reproduced; a sketch follows below)]
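
The code figure is not reproduced, so here is a sketch of the standard formulation of Peterson's algorithm, again for process 0 (process 1 swaps the indices and turn values). Note how much shorter the entry protocol is than Dekker's: announce intent, give the other process priority, and wait only while the other process both wants in and holds the turn. It assumes sequentially consistent memory accesses.

/* Shared: intention flags f0, f1 (1 = wants to enter) and turn (0 or 1) */
int f0 = 0, f1 = 0;
int turn = 0;

/* Process 0: entry protocol */
f0 = 1;                           /* announce intent                       */
turn = 1;                         /* give the other process priority       */
while (f1 && turn == 1)
    ;                             /* wait only if process 1 wants in and
                                     it is its turn                         */
/* critical section */
f0 = 0;                           /* exit protocol                         */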
Summary
• In this session we have covered:
– Concurrency
– Mutual Exclusion
– Livelock
– Deadlock
– Software approaches to Mutual Exclusion
– Dekker's Algorithm
– Peterson's Algorithm
