

UNIT 3: Interprocess Communication and Synchronization

Structure

3.0 Objective
3.1 Introduction
3.2 The Need for Inter-process Synchronization
3.3 Mutual Exclusion
3.4 Semaphores
3.5 Hardware Support for Mutual Exclusion
3.6 Classical Problems in Concurrent Programming
3.7 Critical Region and Conditional Critical Region
3.8 Monitors
3.9 Messages
3.10 Deadlocks
3.11 Summary
3.12 Exercises
Suggested Reading

3.0 Objective

After reading this tutorial, the reader should be able to understand:


The need for inter-process synchronization
The concept of mutual exclusion
The concept of semaphores
Hardware implementation of mutual exclusion
The classical problems in concurrent programming
The critical region and conditional critical region
The concept of monitors
Role of messages in inter-process communication
Deadlocks

3.1 Introduction

The aim of this tutorial is to help the reader understand the various concepts of inter-process
synchronization. The different topics have been discussed under different sections. Section 3.2 gives
a brief introduction to the need for inter-process synchronization. Section 3.3 introduces the
concept of mutual exclusion. In Section 3.4, you will learn about semaphores. Section 3.5 discusses how
mutual exclusion can be achieved with the support of hardware. Section 3.6 explains the classical
problems in concurrent programming. Section 3.7 presents a brief discussion of the critical region and
the conditional critical region. Section 3.8 explains monitors as a synchronization tool. In Section 3.9,
you will learn about inter-process communication using messages. Finally, in Section 3.10, deadlock
is discussed in detail.

3.2 The Need For Inter-process Synchronization

Collaboration between individual processes (or between the threads of a single process) is required
most of the time. Whenever two processes communicate, you need to synchronize their actions.
The execution speeds of the processes are unpredictable. However, to communicate, one process has
to perform some event, for example setting the value of a variable or sending a message, which will be
detected by the other process. And this is possible only when the events performed by the processes
occur in a certain specific order. Therefore, synchronization can be considered a set of constraints
for ordering the events. The programmer employs a synchronization mechanism to delay the
execution of a process. Inter-process communication provides a mechanism for processes
to communicate and to synchronize their actions. For this to happen, resources, such as
memory, are shared between processes or threads. Accessing the shared resources requires proper care;
otherwise many problems may occur, for example starvation, deadlock, or data inconsistency.

3.3 Mutual Exclusion

This property says that if one process is accessing a shared variable, then all other processes
wanting to do the same at the same instant should wait. Only after the process has finished
accessing the shared variable should one of the waiting processes be permitted to proceed. This way,
every process that accesses the shared data (variables) prevents all others from doing so
simultaneously. You must note that mutual exclusion is required to be enforced only when processes
access shared modifiable data; when processes perform operations that do not
conflict with one another, concurrent execution should be allowed.
In concurrent programming, concurrent accesses to shared resources may
cause unanticipated or incorrect behavior; therefore it is required to protect the parts of the program
where the shared resource is accessed. That protected section is called the critical section
(CS) or critical region. The mutual exclusion property is necessary to handle the critical section
problem.
Here are some commonly used terms in inter-process communication and synchronization.

Critical section problem: when one process is executing in its critical section, no other process
should execute in its critical section.
Entry section: code for requesting entry to CS
Exit section: code for releasing CS
Remainder section: non-critical section code
The following three requirements must be fulfilled for the critical section solution.
1. Mutual Exclusion: when one process is in its critical section then no other processes can be in
their critical sections at the same time.
2. Progress: if no process is executing in its critical section and some processes desire to enter
their critical sections, then only those processes that are not executing in their remainder
section can participate in the decision on which will enter its critical section next, and this
selection cannot be postponed indefinitely.
3. Bounded Waiting: if a process P1 makes a request to enter its critical section, the request
may not be granted right away; other processes may be allowed to enter their CSs first, and P1
may have to wait. If there is a bound on the number of times the other processes may be
given access to their CSs before P1's request is granted, then the wait is bounded, i.e. waiting
is not indefinite.
To have a good solution for the critical section problem, the following four conditions must hold.
Only one process is allowed to be in the critical region at a time.
No assumptions should be made about the number of CPUs or the relative speeds of the
processes.
A process executing outside its critical section should not block other processes.
There should be a bound on how long a process must wait before it is allowed to enter its critical
section.
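The general structure of a participating process can be sketched as follows (a minimal outline in C-like pseudocode; entry_section(), critical_section(), exit_section() and remainder_section() are illustrative placeholders for the sections named above, not a specific API):

do {
    entry_section();        /* request permission to enter the CS */
    critical_section();     /* access the shared data */
    exit_section();         /* release the CS */
    remainder_section();    /* non-critical code */
} while (1);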

3.4 Semaphores

A semaphore is a protected integer variable which can simplify and limit access to shared resources in
a multi-processing environment. Semaphores are of two types, namely the binary semaphore and the
counting semaphore. A binary semaphore represents two possible states (commonly locked or
unlocked; 0 or 1), whereas a counting semaphore represents multiple resources. Semaphores were
invented by the late Edsger Dijkstra. A semaphore is a synchronization tool to manage concurrent
processes for complex mutual exclusion problems.
Semaphores can be viewed as a representation of a restricted number of resources, for example,
seating capacity at a lounge. If a lounge has a capacity of 40 people and nobody is there, the
semaphore would be initialized to 40. With each person arriving at the lounge, the seating
capacity decreases, and the semaphore in turn is decremented. After the maximum
capacity is reached, the semaphore will be at zero, and nobody else will be able to enter the lounge.
Instead, a person who wants to sit in the lounge must wait until someone is done with the resource.
When a person leaves, the semaphore is incremented and the resource becomes available again.
You can access the semaphore with the help of the wait() and signal() operations. wait() is called when a
process wants access to a resource. This is similar to a person arriving to sit in the lounge. If seats
are available in the lounge, i.e. the semaphore is greater than zero, then the person can take that
resource (seat) and sit in the lounge. If there is no available space to sit in the lounge and the
semaphore is zero, the process must wait until a seat becomes available. signal() is called when a process
is done using a resource, or when the person leaves the seat. The implementation of a counting
semaphore (where the value can be greater than 1) is as follows:
wait(Semaphore s) {
    while (s == 0)
        ;           // busy-wait until s > 0
    s = s - 1;
}

signal(Semaphore s) {
    s = s + 1;
}

init(Semaphore s, int v) {
    s = v;
}
Note that wait() is also called P (from the Dutch proberen, meaning to try) and signal() is also called V
(from the Dutch verhogen, meaning to increment). The standard Java library uses the name "acquire"
for P and "release" for V.
When P or V is executing, no other process can access the semaphore. This is implemented
with atomic hardware and code. An atomic operation is indivisible, that is, it can be considered to
execute as a unit.
When there is only one resource, for example only one seat in the lounge, a binary semaphore is used,
which can only take the values 0 and 1. Binary semaphores are often used as mutex locks. The
implementation of mutual exclusion using a binary semaphore is as follows:
do {
    wait(s);
    // critical section
    signal(s);
    // remainder section
} while (1);
Here, if a process wants to enter its critical section, it has to obtain the binary semaphore which gives
it mutual exclusion until it signals that it is done.

For example, suppose two processes, P1 and P2, want to enter their critical sections at the same
time, and we have semaphore s. P1 first calls wait(s). The value of s is therefore decremented to 0,
and P1 enters its critical section. While P1 is in its critical section, P2 calls wait(s); but as the
value of s is zero, it has to wait until P1 leaves its critical section and executes signal(s). When P1
calls signal, the value of s is incremented to 1, and then P2 can go on to execute in its critical section
by again decrementing the semaphore. As only one process can be in its critical section at any time,
mutual exclusion is achieved.
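As a concrete illustration, here is a minimal sketch of this scenario using POSIX threads and semaphores (this assumes a POSIX environment; sem_wait() and sem_post() are the library counterparts of wait() and signal()):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                          /* binary semaphore guarding the critical section */
int shared = 0;                   /* shared variable */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);             /* wait(s): enter the critical section */
        shared++;                 /* critical section */
        sem_post(&s);             /* signal(s): leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);           /* initial value 1 => binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* 200000, thanks to mutual exclusion */
    sem_destroy(&s);
    return 0;
}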
Disadvantage
As described in the above examples, processes waiting on a semaphore have to check constantly
whether the semaphore is still zero. This continuous looping is a problem in a real multiprogramming
system, where a single CPU is generally shared among multiple
processes. This is called busy waiting, and it wastes CPU cycles. A semaphore that busy-waits is
called a spinlock.
In order to avoid busy waiting, a semaphore may use an associated queue of processes that are
waiting on the semaphore, which allows the semaphore to block a process and then wake it when
the semaphore is incremented. A block() system call may be provided by the operating system,
which suspends the process that calls it, and a wakeup(P) system call which resumes the
execution of a blocked process P. When a process calls wait() on a semaphore with a value of zero,
that process is added to the semaphore's queue and then blocked. The state of the process is changed
to the waiting state, and control is transferred to the CPU scheduler, which selects another process to
execute. When another process increments the semaphore by calling signal() and there are tasks on
the queue, one of them is taken off the queue and resumed.
You can have an altered implementation in which a semaphore's value may be less than
zero. When a process executes wait(), the semaphore count is decremented. The
magnitude of the negative value indicates the number of processes waiting on the
semaphore:
wait(Semaphore s) {
    s = s - 1;
    if (s < 0) {
        // add this process to the queue
        block();
    }
}

signal(Semaphore s) {
    s = s + 1;
    if (s <= 0) {
        // remove a waiting process p from the queue
        wakeup(p);
    }
}

init(Semaphore s, int v) {
    s = v;
}
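Such a blocking semaphore can be sketched on top of POSIX threads, where the thread library maintains the wait queue. This is a minimal illustration under that assumption; the Semaphore type and function names below are chosen for this sketch, not a standard API:

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;     /* signaled when the count rises above 0 */
    int count;
} Semaphore;

void semaphore_init(Semaphore *s, int v) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->count = v;
}

void semaphore_wait(Semaphore *s) {
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)                          /* block instead of spinning */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void semaphore_signal(Semaphore *s) {
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);              /* wake one waiting thread */
    pthread_mutex_unlock(&s->lock);
}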

3.5 Hardware Support for Mutual Exclusion

Here, you will see some of the important hardware support approaches for mutual exclusion.
Interrupt disabling
In a uniprocessor environment, the execution of concurrent processes cannot overlap; the
processes can only be interleaved. Additionally, a process runs continuously until it invokes an
operating system service or is interrupted. Hence, it is sufficient to prevent a running process from
being interrupted in order to provide a mutual exclusion environment. The operating system kernel
provides such primitives by defining operations for enabling and disabling interrupts. To enforce
mutual exclusion, a process will do the following:
while (1) {
    /* disable interrupts */
    /* critical region */
    /* enable interrupts */
    /* remainder section */
}
But the price of this method is high. Execution efficiency is degraded because the processor's
ability to interleave processes is limited. A further problem is that this technique does not work in a
multiprocessor environment, where many processes run at the same time. In that situation,
disabling interrupts does not guarantee mutual exclusion.
Special machine instructions
At the hardware level, an access to a memory location excludes any other access to the same
memory location. On this basis, processor designers have defined machine instructions that
perform two operations atomically, such as reading and writing, or reading and testing, on a single
memory location within a single instruction fetch cycle. While such an instruction executes,
access to the memory location is blocked for any other instruction referencing the same
memory location. Here, you will see two of the most frequently used instructions.
Compare & swap instruction: This type of instruction is also called a compare and exchange
instruction. It is defined as follows:
int compare_and_swap(int *word, int testvalue, int newvalue)
{
    int oldvalue;
    oldvalue = *word;
    if (oldvalue == testvalue)
        *word = newvalue;
    return oldvalue;
}
The instruction checks a memory location (*word) against a test value (testvalue). If the present value
of the memory location is testvalue, it is replaced with newvalue; otherwise it remains the same. The
function returns the old memory value; therefore, the memory location has been updated if the returned
value is identical to the test value. This atomic instruction thus has two parts: the comparison of a
test value with the memory value and, in case the values are the same, a swap. The whole function is
carried out atomically, without interruption.
Another version of the instruction returns a Boolean value: true if the swap occurred,
false otherwise. A mutual exclusion protocol based on compare & swap is defined as follows:
const int n = /* number of processes */;
int bolt;

void P(int i)
{
    while (true) {
        while (compare_and_swap(&bolt, 0, 1) == 1)
            /* do nothing */;
        /* critical section */
        bolt = 0;
        /* remainder */
    }
}

void main()
{
    bolt = 0;
    parbegin(P(1), P(2), ..., P(n));
}
In the above code, bolt is a shared variable initialized to 0. If the value of bolt is 0, a process
can enter its critical section. All other processes trying to enter their critical sections must
go into a mode of busy waiting. Busy waiting, or spin waiting, is a technique in which a process
waiting for its critical section continuously executes an instruction that tests the bolt variable
until it may enter its critical section. When a process leaves its critical section, it resets bolt
to 0, and one of the waiting processes is granted access to its critical section. The choice depends
on which process executes the compare & swap instruction next.
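Modern C exposes this kind of hardware instruction portably through <stdatomic.h>. Here is a minimal sketch of the same bolt-based spinlock using C11 atomics (this assumes a C11-capable compiler; enter_cs() and leave_cs() are names chosen for the sketch):

#include <stdatomic.h>

atomic_int bolt = 0;                     /* 0 = free, 1 = held */

void enter_cs(void)
{
    int expected = 0;
    /* atomically: if (bolt == 0) bolt = 1; otherwise keep spinning */
    while (!atomic_compare_exchange_weak(&bolt, &expected, 1))
        expected = 0;                    /* the call overwrote expected; reset it */
}

void leave_cs(void)
{
    atomic_store(&bolt, 0);
}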

Exchange instruction
The exchange instruction is defined as follows:
void exchange(int *reg, int *memory)
{
    int temp;
    temp = *memory;
    *memory = *reg;
    *reg = temp;
}
It exchanges the contents of a register with the contents of a memory location. The code given below
shows a protocol of mutual exclusion based on the exchange instruction. Here, bolt is a shared variable
initialized to 0. Each process uses a local variable key initialized to 1. If the value of bolt is
0, a process can enter its critical section. It will exclude all the other processes from entering
their critical sections by setting bolt to 1. When a process leaves its critical section, it
resets bolt to 0, and one of the waiting processes is granted access to its critical section.
const int n = /* number of processes */;
int bolt;

void P(int i)
{
    while (true) {
        int keyi = 1;        /* reset the local key on every attempt */
        do
            exchange(&keyi, &bolt);
        while (keyi != 0);
        /* critical section */
        bolt = 0;
        /* remainder */
    }
}

void main()
{
    bolt = 0;
    parbegin(P(1), P(2), ..., P(n));
}
Here, the following expression always holds, because of the way the variables are initialized and
the nature of the algorithm:

bolt + Σi keyi = n

If bolt is 0, no process is in its critical section. If bolt is 1, exactly one process is in
its critical section, namely the process whose key value equals 0.
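The exchange instruction likewise has a portable C11 counterpart, atomic_exchange. A minimal sketch of the key/bolt protocol above (again assuming a C11 compiler, with names chosen for the sketch):

#include <stdatomic.h>

atomic_int bolt = 0;

void enter_cs(void)
{
    int key = 1;
    do
        key = atomic_exchange(&bolt, key);   /* atomically swap key and bolt */
    while (key != 0);                        /* the lock is ours once we got 0 */
}

void leave_cs(void)
{
    atomic_store(&bolt, 0);
}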

3.6 Classical Problems in Concurrent Programming

Here, we will discuss three different classical problems in concurrent programming. These problems
are as follows.
The dining philosophers problem
The producer-consumer problem (or bounded-buffer problem)
The readers-writers problem
The Dining Philosophers Problem (E. W. Dijkstra)
This is a classic synchronization problem which is used to assess scenarios where multiple
resources must be assigned to multiple processes.
Problem Statement
Five silent philosophers sit at a round table with five bowls of spaghetti. A chopstick is
placed between each pair of adjacent philosophers.
Each philosopher must alternately eat and think. A philosopher can only eat spaghetti when they
hold both the left and the right chopstick. Each chopstick can be held by only one philosopher, so a
philosopher can use a chopstick only if the other philosopher is not using it. When a philosopher
finishes eating, he or she must put down both chopsticks so that they become
available to others. A philosopher can take the chopstick on their right or left as they become
available. Philosophers can only start eating when they hold both chopsticks.
As it is assumed that there is an infinite amount of spaghetti and infinite stomach space available,
eating is not restricted by the availability of either of them. The problem is depicted below in Figure
3.1.

Figure 3.1: Dining philosophers problem depiction


Problem
The problem was posed to demonstrate the challenge of avoiding deadlock. Deadlock is a state in
which no progress is possible. As a proper solution to this problem is not obvious, we may
constrain each philosopher to behave as follows:
think until the left chopstick is available; when it is, pick it up;
think until the right chopstick is available; when it is, pick it up;
when both chopsticks are held, eat for a fixed amount of time;
then, put the right chopstick down;
then, put the left chopstick down;
repeat from the beginning.
The above solution fails because it allows the system to reach a deadlock state, in which no progress is
possible. This happens when each philosopher has picked up the chopstick to the left and is
waiting for the chopstick to the right to become available, or vice versa. With the given constraint, a
deadlock state can be reached in which the philosophers wait indefinitely for each other to
release the chopsticks.
Apart from the deadlock state, resource starvation may occur if a particular philosopher is never
able to obtain both chopsticks due to a timing problem. For example, there could be a rule that a
philosopher must put down a chopstick after waiting fifteen minutes for the other chopstick to
become available, and wait a further fifteen minutes before attempting again. This rule eliminates
the possibility of deadlock but may still suffer from the problem of livelock. This happens when
all five philosophers arrive in the dining room at the same time and each picks up the
left chopstick at the same time. The philosophers will then wait fifteen minutes until they all put their
chopsticks down, and then wait a further fifteen minutes before they all pick them up again.
The basic idea of the problem is mutual exclusion; the dining philosophers problem poses a general and
theoretical scenario which is very useful for explaining issues of this type. The failures experienced by
these philosophers are similar to the problems that occur in real computer programming when a
number of programs require exclusive access to shared resources. However, the problems
demonstrated by the dining philosophers problem occur far more often when multiple processes
access data that are being updated. Operating system kernels use thousands of locks and
synchronizations to avoid problems such as deadlock, starvation, and data corruption. A solution to the
dining philosophers problem using a monitor is given below.
monitor dining_controller;
cond ForkReady[5];
boolean fork[5] = {true};

void get_forks(int pid)
{
    int left = pid;
    int right = (pid + 1) % 5;
    if (!fork[left])
        cwait(ForkReady[left]);
    fork[left] = false;
    if (!fork[right])
        cwait(ForkReady[right]);
    fork[right] = false;
}

void release_forks(int pid)
{
    int left = pid;
    int right = (pid + 1) % 5;
    if (empty(ForkReady[left]))
        fork[left] = true;
    else
        csignal(ForkReady[left]);
    if (empty(ForkReady[right]))
        fork[right] = true;
    else
        csignal(ForkReady[right]);
}

void philosopher[k = 0 to 4]
{
    while (true)
    {
        <think>;
        get_forks(k);
        <eat spaghetti>;
        release_forks(k);
    }
}
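An alternative that is often cited (though not given in the text above) avoids deadlock with plain semaphores by letting at most four philosophers reach for chopsticks at once. A minimal sketch using POSIX semaphores, where think() and eat() are hypothetical placeholders:

#include <semaphore.h>

#define N 5
sem_t chopstick[N];   /* one binary semaphore per chopstick, each initialized to 1 */
sem_t room;           /* counting semaphore initialized to N - 1                   */

void philosopher(int i) {
    while (1) {
        /* think(); */
        sem_wait(&room);                    /* at most N-1 philosophers may compete */
        sem_wait(&chopstick[i]);            /* pick up the left chopstick  */
        sem_wait(&chopstick[(i + 1) % N]);  /* pick up the right chopstick */
        /* eat(); */
        sem_post(&chopstick[(i + 1) % N]);
        sem_post(&chopstick[i]);
        sem_post(&room);
    }
}

With at most four philosophers competing for five chopsticks, at least one of them can always acquire both chopsticks, so a circular wait cannot arise.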
Producer Consumer Problem
This problem is a very good example of a multi-process synchronization problem. The problem
defines two processes, a producer and a consumer, which share a common,
fixed-size buffer used as a queue. The producer's task is to generate data, put the data into the
buffer, and start again. Simultaneously, the consumer's task is to consume the data, i.e., remove it from
the buffer, one piece at a time. The main problem is to make sure that the producer does not
try to add data to the buffer when it is full, and that the consumer does not try to remove data
from an empty buffer. To ensure this, the producer should either discard data or go to sleep
when the buffer is full. When the consumer removes an item from the buffer, it informs the
producer, which can then fill the buffer again. Likewise, the consumer may go to sleep when it finds
that the buffer is empty; the next time the producer puts data into the buffer, it wakes up
the sleeping consumer. Semaphores can be used to reach a solution. A poor
solution may result in a deadlock in which both processes are waiting to be awakened. The
problem may also be generalized to multiple producers and consumers.
This problem may be solved using semaphores, using monitors, or without using semaphores
and monitors.

Using Semaphores
Here two semaphores are used: fillCnt and emptyCnt. fillCnt is the number of items already in
the buffer and available to be consumed, while emptyCnt is the number of available spaces in the
buffer where items may be placed. fillCnt is incremented and emptyCnt is decremented
when a new item is placed into the buffer. If the producer tries to decrement emptyCnt when its
value is zero, the producer is put to sleep. The next time an item is
consumed, emptyCnt is incremented and the producer wakes up. The consumer works in a
similar way. Here is the pseudocode for the same.

semaphore fillCnt = 0;                 // items in the buffer
semaphore emptyCnt = BUFFER_SIZE;      // remaining free slots in the buffer

procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCnt);
        putItemIntoBuffer(item);
        up(fillCnt);
    }
}

procedure consumer() {
    while (true) {
        down(fillCnt);
        item = removeItemFromBuffer();
        up(emptyCnt);
        consumeItem(item);
    }
}
The above solution works well when there is only one producer and one consumer. If multiple
producers share the same memory space for the item buffer, or multiple consumers share the
same memory space, the above solution contains a serious race condition:
two or more processes may read or write the same slot simultaneously. To see how this
is possible, imagine how the procedure putItemIntoBuffer() might be implemented. It
could consist of two actions: first determining the next available slot, and second,
writing into it. Concurrent execution of the procedure by multiple producers may cause the
following scenario:
Two producers decrement emptyCnt at the same time
One of the producers determines the next empty slot in the buffer
The second producer determines the next empty slot, getting the same result as the
first producer
Both producers write into the same slot
In order to solve this problem, you are required to find a way to make sure that only one producer is
executing putItemIntoBuffer() at a time. That is, you need a way to execute a critical section with
mutual exclusion. Here is the solution for multiple producers and consumers.

mutex buff_mutex;                      // same as "semaphore buff_mutex = 1"
semaphore fillCnt = 0;
semaphore emptyCnt = BUFFER_SIZE;
procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCnt);
        down(buff_mutex);
        putItemIntoBuffer(item);
        up(buff_mutex);
        up(fillCnt);
    }
}

procedure consumer() {
    while (true) {
        down(fillCnt);
        down(buff_mutex);
        item = removeItemFromBuffer();
        up(buff_mutex);
        up(emptyCnt);
        consumeItem(item);
    }
}
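A concrete version of this pseudocode can be sketched with POSIX threads and semaphores; the buffer layout and helper names below are chosen for illustration (initialize with sem_init(&emptyCnt, 0, BUFFER_SIZE) and sem_init(&fillCnt, 0, 0) before use):

#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 8
int buffer[BUFFER_SIZE];
int in = 0, out = 0;                       /* next write / next read positions */

sem_t fillCnt, emptyCnt;                   /* counting semaphores, as above */
pthread_mutex_t buff_mutex = PTHREAD_MUTEX_INITIALIZER;

void put_item(int item) {
    sem_wait(&emptyCnt);                   /* down(emptyCnt) */
    pthread_mutex_lock(&buff_mutex);
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    pthread_mutex_unlock(&buff_mutex);
    sem_post(&fillCnt);                    /* up(fillCnt) */
}

int get_item(void) {
    sem_wait(&fillCnt);                    /* down(fillCnt) */
    pthread_mutex_lock(&buff_mutex);
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    pthread_mutex_unlock(&buff_mutex);
    sem_post(&emptyCnt);                   /* up(emptyCnt) */
    return item;
}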
Note that the order in which the different semaphores are incremented or decremented is crucial:
changing the order may result in a deadlock. Although a mutex appears to work like a semaphore with
a value of 1 (a binary semaphore), the difference between a semaphore and a mutex lies in the
ownership concept associated with the mutex. A mutex can only be "incremented" back (set to 1)
by the same process that "decremented" it (set to 0), and all other jobs have to wait until the mutex
is available for decrement; this ensures mutual exclusivity and avoids deadlock. Improper
usage of mutexes may therefore stall several processes when exclusive access is not required but a
mutex is used instead of a semaphore.
Readers-Writers Problem
Suppose you have a file shared among many people.
If one person tries to edit the file, no other person should be allowed to read or write it
simultaneously; otherwise the modifications would not be visible to that person.
However, all the people may read the file at the same time.
In operating systems (OS) this situation is termed the readers-writers problem.
Problem parameters:
Several processes share the same set of data
When a writer is ready, it performs its write. Only one writer may write at a time
While a process is writing, no other process can read the data
If any reader is reading, no other process can write
Readers only read; they do not write
Solution when the Reader has Priority over the Writer
When the reader has priority, no reader should wait if the shared file is currently open for
reading. Three variables are used to implement the solution: mutex, wrt, and readcnt.
1. semaphore mutex, wrt; // the semaphore mutex is used to ensure mutual exclusion
when readcnt is updated, i.e. when any reader enters or exits the critical section; both
readers and writers use the semaphore wrt
2. int readcnt; // readcnt indicates the number of processes reading in the critical section,
initially 0
Semaphore functions:
wait(): this will decrement the semaphore value.
signal(): this will increment the semaphore value.
Writer process:
1. The writer requests entry to the critical section.
2. If permitted, i.e. wait() gives a true value, it enters and performs the write. If not
permitted, it keeps waiting.
3. It exits the critical section.
do {
    // the writer requests entry to the critical section
    wait(wrt);

    // perform the write

    // exit the critical section
    signal(wrt);
} while (true);
Reader process:
1. The reader requests entry to the critical section.
2. If permitted:
It increments the count of readers in the critical section. If this
reader is the first reader entering, it locks the wrt semaphore to prohibit the entry
of writers while any reader is inside.
It then signals mutex, as other readers are permitted to enter while it is
reading.
After carrying out the read, it exits the critical section. At exit time, if
no more readers are inside, it signals the semaphore wrt so that a writer may enter
the critical section.
3. If not permitted, it keeps waiting.
do {
    // a reader wishes to enter the critical section
    wait(mutex);

    // the number of readers is increased by 1
    readcnt++;

    // at least one reader is in the critical section;
    // this ensures that no writer can enter if there is at least one reader,
    // hence readers are given preference
    if (readcnt == 1)
        wait(wrt);

    // all other readers can enter while this reader is inside
    // the critical section
    signal(mutex);

    // the current reader performs the read

    wait(mutex);                 // a reader wishes to leave
    readcnt--;

    // no reader is left in the critical section
    if (readcnt == 0)
        signal(wrt);             // writers may now enter

    signal(mutex);               // the reader leaves
} while (true);

Thus, the semaphore wrt is waited on by both readers and writers in such a way that preference is
given to readers when writers are also present. Hence, no reader has to wait simply because a writer
requested to enter the critical section.
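This pseudocode translates almost directly to POSIX semaphores; a minimal sketch under that assumption (both semaphores initialized to 1, with the shared-file access left as placeholder comments):

#include <semaphore.h>

sem_t mutex, wrt;          /* sem_init(&mutex, 0, 1); sem_init(&wrt, 0, 1); */
int readcnt = 0;

void reader(void) {
    sem_wait(&mutex);
    if (++readcnt == 1)            /* the first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    /* ... read the shared data ... */

    sem_wait(&mutex);
    if (--readcnt == 0)            /* the last reader lets writers in */
        sem_post(&wrt);
    sem_post(&mutex);
}

void writer(void) {
    sem_wait(&wrt);
    /* ... write the shared data ... */
    sem_post(&wrt);
}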

3.7 Critical Region and Conditional Critical Region

Critical Region
It is a section of code which is executed under mutual exclusion. The burden of enforcing mutual
exclusion is shifted from the programmer to the compiler by using the critical region concept. A
critical region comprises the following two parts.
Variables (which will be accessed under mutual exclusion.)
A new language statement which identifies a critical region in which the variables are
accessed.
A variable v of type T is declared as follows:
VAR v: SHARED T;
(Note that the variable v will be shared among many processes)
And this variable v can only be accessed inside a region statement which has the form:
REGION v DO S;
The above statement implies that while statement S is being executed, no other process is
allowed to access the variable v. Therefore, if we execute the following two statements
concurrently, in distinct sequential processes, the result is equivalent to the sequential execution of
S1 followed by S2, or S2 followed by S1.
REGION v DO S1;
REGION v DO S2;
Implementation of the critical region construct by compiler
For each declaration
VAR v: SHARED T;
the compiler generates a semaphore v-mutex initialized to 1.
And for each statement of the form
REGION v DO S;
the compiler generates the following code:
P(v-mutex);
S;
V(v-mutex);
The critical regions that have been tagged with the same variable will have compiler-enforced
mutual exclusion and so only one of them can be executed at a time:

Process A:
    region V1 do
    begin
        {Do something}
    end;
    region V2 do
    begin
        {Do something}
    end;

Process B:
    region V1 do
    begin
        {Do something}
    end;
In the above code, process A can be executing in the V2 region while process B is executing in
the V1 region. But if both processes want to execute in their respective V1 regions, only one
process is allowed to do so. Here, each of the shared variables V1 and V2 has a queue associated with
it. If one process is executing code in a region tagged with a shared variable, then other processes
that try to enter a region tagged with the same variable are blocked and placed in the queue.
Critical regions may also be nested, but this may result in deadlocks.
Deadlock Example:
VAR u, v: SHARED T;
PARBEGIN
    Q: REGION u DO REGION v DO S1;
    R: REGION v DO REGION u DO S2;
PAREND;
The critical-region construct can shield against some simple programmer errors associated with the
semaphore solution to the critical section problem.
Conditional Critical Region
Critical regions and semaphores are not equivalent. As discussed, critical regions lack condition
synchronization. Semaphores can be used to put a process to sleep until some condition is met, but
this is not possible with critical regions. Therefore, we have another construct, called the conditional
critical region, which provides condition synchronization for critical regions:
region v when B do
begin
...
end;
where B denotes a boolean expression.
The working of conditional critical regions is as follows:
A process that desires to enter a region for v must obtain the mutex lock; otherwise, it is queued.
When the lock is obtained, the boolean expression B is tested. The process can only
proceed if B evaluates to true; otherwise the process releases the lock and is queued.
The next time it obtains the lock, it must retest B.
Implementation of conditional critical region
Every shared variable has two queues associated with it: the main queue and the
event queue. Processes that want to enter a critical region but find it locked reside in
the main queue. The event queue is for processes that have been blocked because the
condition evaluated to false. When a process leaves the conditional critical region, the processes on
the event queue join those in the main queue. Since these processes have to retest their condition,
they are doing something akin to busy-waiting, though the frequency with which they retest the
condition is very low: the condition is only retested when there is reason to believe that it may
have changed, for example when another process has finished accessing the shared variable. Although
this is more controlled than busy-waiting, it may still be unappealing.
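One way to realize this two-queue scheme today is with a POSIX mutex and condition variable, which maintain the queues internally. This is a minimal sketch of "region v when B do S"; B_holds() and S() are hypothetical placeholders for the guard and the region body:

#include <pthread.h>

extern int  B_holds(void);   /* the guard B (placeholder) */
extern void S(void);         /* the region body (placeholder) */

pthread_mutex_t v_mutex = PTHREAD_MUTEX_INITIALIZER;  /* the "main queue"  */
pthread_cond_t  v_event = PTHREAD_COND_INITIALIZER;   /* the "event queue" */

void conditional_region(void) {
    pthread_mutex_lock(&v_mutex);           /* obtain the region's lock */
    while (!B_holds())                      /* retest B each time we are woken */
        pthread_cond_wait(&v_event, &v_mutex);
    S();                                    /* execute the region body */
    pthread_cond_broadcast(&v_event);       /* let waiters retest their conditions */
    pthread_mutex_unlock(&v_mutex);
}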
Limitations
The efficient implementation of conditional critical regions is more difficult than that of
semaphores.
Conditional critical regions are still scattered throughout the program code.
Manipulation of the protected variables is not controlled; there is no information hiding or
encapsulation. Once a process is executing inside a critical region, it can do anything it likes
to the variables to which it has access.

3.8 Monitors

The main points regarding monitors are as follows.
Monitors are an advance over conditional critical regions, because all the code that
accesses the shared data is localized.
A monitor may contain procedures, constants, types, and variables.
It comprises private data and operations on that data.
The body of the monitor permits the initialization of the private data.
Only the explicitly exported procedures can be seen outside the monitor.
Mutual exclusion is enforced by the compiler on each monitor.
For each monitor, there is a boundary queue; if the monitor is already in use, the
processes that want to call a monitor routine join this queue.

Condition synchronization
Just as critical regions had to be extended to conditional critical regions, monitors also
need to be extended to make them as expressive as semaphores. Condition variables permit
processes to block until some condition is true and then be woken up:
var
c: condition;
For condition variables, the following two operations are defined.
(i) Delay
(ii) Resume
Delay
It is analogous to the semaphore wait operation.
delay(c) blocks the calling process on c and releases the lock on the monitor.
Resume
It is analogous to the semaphore signal operation.
resume(c) unblocks a process waiting on c.
If no processes are blocked, then resume is a nop (i.e., no operation). Note that this differs
from the signal operation, which always has an effect.
Once we call resume, there are potentially two processes inside the monitor:
The process that called delay and has been woken up.
The process that called resume.
Solutions:
Resume and continue: The woken process waits until the process that
called resume releases the monitor.
Immediate resumption: The process that called resume has to leave
the monitor immediately.
Resume-and-continue implies that processes that call delay should use

while not B do          rather than          if not B then
    delay(c);                                    delay(c);

because a resumer may proceed and alter the condition after calling resume but before exiting the
monitor.
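POSIX condition variables follow resume-and-continue semantics, which is why the while discipline applies to them as well. Here is a minimal monitor-style counter sketched with pthreads (the type and function names are chosen for this illustration; initialize the lock and condition with pthread_mutex_init and pthread_cond_init before use):

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;       /* the monitor's boundary lock */
    pthread_cond_t  nonzero;    /* condition: count > 0        */
    int count;                  /* private monitor data        */
} CounterMonitor;

void counter_inc(CounterMonitor *m) {
    pthread_mutex_lock(&m->lock);
    m->count++;
    pthread_cond_signal(&m->nonzero);              /* resume(nonzero) */
    pthread_mutex_unlock(&m->lock);
}

void counter_dec(CounterMonitor *m) {
    pthread_mutex_lock(&m->lock);
    while (m->count == 0)                          /* while, not if: retest on wakeup */
        pthread_cond_wait(&m->nonzero, &m->lock);  /* delay(nonzero) */
    m->count--;
    pthread_mutex_unlock(&m->lock);
}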
Immediate resumption variants

1. Resume and exit: The resumer is automatically forced to exit the monitor after
calling resume.
2. Resume and wait: The resumer is put back on the monitor boundary queue. When it gets back
in, it is permitted to continue from where it left off.
3. Resume and urgent wait: The resumer is put on a second queue which has priority
over the monitor boundary queue.
Nested monitor calls
When a procedure in monitor X calls a procedure in a different monitor, say monitor Y, this is
called a nested monitor call. It may cause problems: for example, what happens if the procedure
in monitor Y executes a delay statement?
To deal with this situation, the following measures can be taken.
1. Retain the lock on monitor X when calling the procedure in monitor Y; release only the
lock on monitor Y when calling delay in monitor Y.
2. Retain the lock on monitor X when calling the procedure in monitor Y; release
both locks when calling delay in monitor Y.
3. Release the lock on monitor X when making a nested call.
4. Completely prohibit nested calls.

3.9 Messages

Here, processes communicate with each other by passing messages between them. The
communication between two processes P1 and P2 proceeds as follows.
First, a communication link is established.
Then the messages are exchanged using the two basic primitives given below.
send(message, destination) or send(message)
receive(message, host) or receive(message)

Figure 3.2: Message passing mechanism



The size of the message can be fixed or variable. A standard message has two parts: a header
part and a body part.
The header part stores the message type, source id, destination id, message length, and control
information. The control information comprises information such as the sequence number, the
priority, and what should be done if the buffer space runs out. In general, messages are sent in FIFO
style.
Message Passing through Communication Link
There are two types of communication link: direct and indirect. While
implementing a communication link, the following questions need to be
considered.
1. Is the link unidirectional or bi-directional?
2. Can we associate a link with more than two processes?
3. How many links exist between every pair of communicating processes?
4. What is the capacity of a link? Is the size of a message fixed or variable?
The capacity of the link determines the number of messages that can temporarily reside in it.
For this, each link has an associated queue, which can be of zero, bounded, or unbounded
capacity. With a zero-capacity queue, the sender waits until the receiver informs it that it has
received the message. With a non-zero-capacity queue, a process cannot know whether
a message has been received after the send operation, so the sender has to
ask the receiver explicitly. Depending on the situation, the link is implemented as
either a direct communication link or an indirect communication link.
When processes use a particular process identifier for communication, but it is hard to identify
the sender ahead of time, direct communication links should be implemented.
Indirect communication is done with the help of a shared mailbox (port), which contains a queue of
messages. Senders put messages in the mailbox and receivers pick them up.
Message Passing through Exchanging the Messages
Synchronous and asynchronous message passing: A blocked process waits for some event to occur,
for example an I/O operation completing or a resource becoming available. Inter-process
communication (IPC) is possible between processes running on the same computer and also
between processes running on different computers, i.e. in a distributed or networked system. In both
cases, a process may or may not be blocked while sending a message or trying to receive a
message, so message passing may be blocking or non-blocking. Blocking is considered
synchronous: a blocking send implies that the sender is blocked until the message is received
by the receiver, and likewise a blocking receive implies that the receiver is blocked until a message
is available. Non-blocking is considered asynchronous: a non-blocking send implies that the sender
can continue sending messages, and a non-blocking receive implies that the receiver
continuously receives either a valid message or null. It is natural for
a sender to be non-blocking after passing a message, because the sender might need to send
messages to different processes, although the sender would require an acknowledgement from the
receiver if the send fails. Also, it is natural for a receiver to be blocking after it has requested a
message from the sender, since the information in the received message may be needed for further
execution. But at the same time, if the sent message keeps failing, the receiver would have to wait
indefinitely. For this reason, other combinations of message passing have also been considered. The
three most common combinations are as follows.
Blocking send and blocking receive
Non-blocking send and Non-blocking receive
Non-blocking send and Blocking receive (Mostly used)
In direct message passing, the process that wants to communicate must explicitly name the
recipient or sender of the communication, e.g. send(p1, message) means send the message to p1.
Likewise, receive(p2, message) means receive the message from p2. In direct message passing, the
communication link is established automatically; it can be either bidirectional or unidirectional,
but only one link is used between one pair of sender and receiver. Symmetry and
asymmetry between the sender and the receiver can also be implemented: either both
processes name each other to send and receive messages, or only the sender names the receiver to
send the message and the receiver does not need to name the sender. However, if the name of one
process changes, this method does not work.
In indirect message passing, mailboxes (also referred to as ports) are used by the processes to send
and receive messages. Each mailbox has a unique id, and processes communicate by
sharing a mailbox. A link can be established only if processes share a common mailbox, and
a single link may be associated with several processes. Every pair of processes can share several
communication links, and the links may be bi-directional or unidirectional. When two processes
want to communicate using indirect message passing, the operations needed are: creating a mailbox,
using this mailbox to send and receive messages, and destroying the mailbox. The primitives
used are send(X, message), which means send the message to mailbox X, and the receive
primitive, which works the same way, e.g. receive(X, message). Now let us see a
problem with the mailbox implementation. Assume more than two processes share the same mailbox,
and let process P1 send a message to the mailbox. Which process should receive
the message? To solve this problem we can either require that only two processes share a single
mailbox, or enforce that only one process is permitted to execute a receive at a given time, or
arbitrarily select any process and inform the sender about the receiver. A single sender/receiver pair
may have a private mailbox, or multiple sender/receiver pairs can share a single mailbox. A port is an
implementation of such a mailbox that can have multiple senders and a single receiver. It is used in
client/server applications (note that here the server is the receiver). The port is owned by the
receiving process and created by the operating system (OS) at the receiver process's request; it can be
destroyed at the request of the same receiver process, or when the receiver itself terminates. To
enforce that only one process is permitted to receive the sent message, we use the concept of
mutual exclusion: a mutex mailbox is created which is shared by n processes. The sender is
non-blocking and sends the message. The first process that receives the sent message
enters the critical section, and all other processes block and wait.

Now we will see the implementation of the producer-consumer problem using message passing. The
producer puts items (inside messages) in the mailbox, and the consumer can consume an item when
at least one message is present in the mailbox. Here is the code for both the producer and the
consumer.
Producer Code
void producer(void) {
    int item;
    Message msg;
    while (1) {
        receive(Consumer, &msg);       // wait for an empty message
        item = produce();
        build_message(&msg, item);
        send(Consumer, &msg);          // send the item to the consumer
    }
}
Consumer Code
void consumer(void) {
    int item, i;
    Message msg;
    for (i = 0; i < N; i++)
        send(Producer, &msg);          // prime the producer with N empty messages
    while (1) {
        receive(Producer, &msg);
        item = extract_item(&msg);
        send(Producer, &msg);          // return an empty message
        consume_item(item);
    }
}
Here are some examples of IPC systems.
1. Windows XP: uses message passing via local procedure calls
2. Mach: uses message passing
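As a concrete illustration of blocking send and receive between related processes, here is a minimal sketch using a POSIX pipe as the communication link (unidirectional, with bounded capacity; read() here is the blocking receive):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                            /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                   /* child acts as the receiver */
        char msg[64];
        close(fd[1]);
        ssize_t n = read(fd[0], msg, sizeof msg - 1);   /* blocking receive */
        msg[n] = '\0';
        printf("received: %s\n", msg);
        return 0;
    }
    close(fd[0]);                        /* parent acts as the sender */
    const char *text = "hello";
    write(fd[1], text, strlen(text));    /* send */
    close(fd[1]);
    wait(NULL);
    return 0;
}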

3.10 Deadlocks

In a multiprogramming environment, a number of processes may try to gain access to a finite number
of resources. A process requests resources; if the resources are not available at that
time, the process moves into a waiting state. Sometimes, a waiting process is never able to
change state again, because the resources it has requested are held by other
waiting processes. The waiting processes will then never gain access to the resources
held by the other processes. This situation is called deadlock. Example 1: Suppose a system has two
printers, and each of the two processes P0 and P1 holds one printer and needs the other one.

Example 2: Semaphores X and Y, initialized to 1.

P0              P1
wait(X);        wait(Y);
wait(Y);        wait(X);
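Example 2 can be reproduced with POSIX threads; in this sketch the sleep() calls merely widen the timing window so that each thread reliably holds one semaphore while requesting the other, after which the program hangs:

#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t X, Y;

void *p0(void *arg) {
    sem_wait(&X);
    sleep(1);                /* ensure p1 has taken Y by now */
    sem_wait(&Y);            /* blocks forever: p1 holds Y and waits for X */
    sem_post(&Y);
    sem_post(&X);
    return NULL;
}

void *p1(void *arg) {
    sem_wait(&Y);
    sleep(1);                /* ensure p0 has taken X by now */
    sem_wait(&X);            /* blocks forever: p0 holds X and waits for Y */
    sem_post(&X);
    sem_post(&Y);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&X, 0, 1);
    sem_init(&Y, 0, 1);
    pthread_create(&a, NULL, p0, NULL);
    pthread_create(&b, NULL, p1, NULL);
    pthread_join(a, NULL);   /* never returns: the two threads are deadlocked */
    pthread_join(b, NULL);
    return 0;
}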
System Model
In order to discuss deadlock, we can model a system as a collection of limited resources.
The resources can be divided into different categories and allocated to a number of
processes, each with different needs.
The resource categories may comprise printers, CPUs, memory, tape drives, CD-ROMS,
open files, etc.
It is assumed that all resources belonging to the same category are equivalent, and that a request
for a resource of this category can be satisfied equally well by any one of the resources in that
category. If this is not the case, that is, if there is some difference between the resources of the
same category, then that category must be further divided into distinct categories. For example,
"printers" can be further divided into "color inkjet printers" and "laser printers".
Normally, a process has to request a resource before using it, and release it when it is done,
in sequence as follows:
1. Request - When the request cannot be granted immediately, then the process has to
wait until the resource(s) it requires become available. For example the system calls
open( ), malloc( ), new( ), and request( ).
2. Use - The resource will be used by the process that has acquired it. For example, the
printer will be used for printing or the file will be used for reading.
3. Release - The process gives up the resource after using it, so as to make it available
for other processes. For example, close( ), free( ), delete( ), and release( ).
For all resources managed by the kernel, the kernel keeps track of which resources are allocated
and which are free, and to which process the allocated ones belong. The kernel also maintains a
queue of processes waiting for each resource. Resources managed by an
application can be controlled using mutexes or wait( ) and signal( ) calls (i.e. binary or
counting semaphores).
A set of processes is deadlocked if each process in the set is waiting for a resource that is
currently allocated to another process in the set, and resources can only be released when a
waiting process makes progress.
Necessary Conditions to achieve Deadlock
The following four conditions are necessary to achieve deadlock.

1. Mutual Exclusion - At least one resource must be held in a non-sharable
mode; therefore, if any other process requests this resource, that process must wait for
the resource to be released.
2. Hold and Wait - A process must simultaneously hold at least one resource and be waiting for at
least one resource that is currently being held by some other process.
3. No preemption - Once a process is holding a resource whose request has been
granted, that resource cannot be taken away from the process until the process willingly
releases it.
4. Circular Wait - A set of processes { P0, P1, P2, . . ., PN } must exist such that every P[ i ] is
waiting for P[ ( i + 1 ) % ( N + 1 ) ]. Note that the circular wait condition implies the hold-and-
wait condition, but it is easier to deal with the conditions if the four conditions are
considered separately.
Resource-Allocation Graph (RAG)
In some cases, we can understand deadlocks more clearly through the use of resource-allocation
graphs. The properties of the RAG are as follows:
A set of resource categories, {R1, R2, R3, . . ., RN}, is shown as square
nodes on the graph. The dots inside the resource nodes indicate particular instances of the
resource. For example, two dots may represent two laser printers.
The graph has a set of processes, {P1, P2, P3, . . ., PN}.
Request Edges - A directed arc from Pi to Rj indicates that process Pi has requested
resource Rj and is currently waiting for that resource to become available.
Assignment Edges - A directed arc from Rj to Pi indicates that resource Rj has been
allocated to process Pi, and that Pi is currently holding resource Rj.
Note that a request edge can be converted into an assignment edge by reversing the
direction of the arc when the request is granted. Also note that request edges point to the
category box, whereas assignment edges originate from a particular instance dot inside the
box.
For example:

Figure 3.3: Resource allocation graph with no deadlock



If there are no cycles in the resource-allocation graph, then the system is not in a deadlocked
state. Note that since the graph is directed, we must look for directed cycles. See the
example in Figure 3.3 above.
If there is a cycle in the resource-allocation graph AND each resource category (type)
contains only a single instance, then a deadlock certainly exists.
If a resource category contains more than one instance, then the presence of a cycle in the
resource-allocation graph indicates only the possibility of a deadlock; the state may or
may not be deadlocked. Consider, for example, Figures 3.4 and 3.5 below:

Figure 3.4: Resource allocation graph with a deadlock

Figure 3.5: Resource allocation graph with a cycle but no deadlock

Methods for Handling Deadlocks


There are three ways in which deadlocks may be handled.
1. Deadlock prevention or avoidance - Do not allow the system to get into a deadlocked
state.
2. Deadlock detection and recovery - When a deadlock is detected, a
process is aborted or some resources are pre-empted.
3. Ignore the problem altogether - If deadlocks do not occur often (say, once
a year), it might be better to simply let them happen and reboot when required than to
suffer the constant overhead and system performance penalties associated with deadlock
prevention or detection. This approach is taken by both Windows and UNIX.
To avoid deadlocks, the system must have additional information about all processes.
In particular, the system must know what resources a process will or may
request in the future. Depending on the particular algorithm, this may range from a simple
worst-case maximum to a complete resource request and release plan for each process.
Deadlock detection is fairly straightforward, but deadlock recovery requires
either pre-empting resources or aborting processes, both of which are cumbersome.
If neither prevention nor detection of deadlock is done, then on the occurrence of a
deadlock the system will gradually slow down, as more and more processes
become stuck waiting for resources currently held by the deadlocked and by
other waiting processes. Unfortunately, this slowdown can be indistinguishable from
a general system slowdown, such as when a real-time process has heavy computing requirements.
Deadlock Prevention
It is possible to prevent deadlock by preventing at least one of the four necessary conditions that are
discussed below.
1. Mutual Exclusion
Shared resources, for example read-only files, do not lead to deadlocks.
But some resources, such as tape drives and printers, require exclusive
access by a single process.
2. Hold and Wait
To prevent this condition, processes must be prevented from holding one or more resources
while simultaneously waiting for one or more others. This can be achieved in several ways,
as described below.
Require all processes to request all their resources at the same time. This can clearly
waste system resources, for example if a process requires one resource
early in its execution and does not require some other resource until much later.
Require processes holding resources to release them before requesting
new resources, and then re-acquire the released resources along with the new
ones in a single new request. This causes problems if a process has partially
completed an operation using a resource and is then unable to get it re-allocated after
releasing it.
Either of the methods described above can cause starvation if a process requires one or
more popular resources.
3. No Preemption
Deadlocks can be prevented by preempting process resource allocations.
One approach is that when a resource is requested and is not available, the
system looks to see which other processes currently hold those resources and are themselves
blocked, waiting for some other resource. When such processes are found,
some of their resources may be preempted and added to the list of resources for
which the requesting process is waiting.
The other approach is that if a process is made to wait involuntarily when
requesting a new resource, then all resources previously held by this process are
implicitly released (preempted), forcing the process to re-acquire the old resources along
with the new resources in a single request, as in the previous discussion.
Either of these approaches may be applicable to resources whose states are easily saved
and restored, such as registers and memory, but they are usually not applicable to other devices
such as tape drives and printers.
4. Circular Wait
Circular wait can be avoided by numbering all the resources and requiring that processes
request resources only in strictly increasing (or decreasing) order.
That is, before requesting resource Rj, a process must first release all Ri such that i >= j.
The most challenging task in this scheme is determining the relative ordering of the
different resources.
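As an illustration of resource ordering, two locks acquired in a fixed global order can never form a circular wait. The Account type and transfer() function below are hypothetical, chosen only for this sketch (it assumes a and b are distinct accounts):

#include <pthread.h>

typedef struct {
    int id;                   /* the resource number used for ordering */
    pthread_mutex_t lock;
} Account;

void transfer(Account *a, Account *b) {
    /* always take the lower-numbered resource first */
    Account *first  = (a->id < b->id) ? a : b;
    Account *second = (a->id < b->id) ? b : a;
    pthread_mutex_lock(&first->lock);
    pthread_mutex_lock(&second->lock);
    /* ... move money between the accounts ... */
    pthread_mutex_unlock(&second->lock);
    pthread_mutex_unlock(&first->lock);
}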
Deadlock Avoidance
The general idea behind deadlock avoidance is to prevent deadlocks from ever happening
by preventing at least one of the four conditions mentioned above.
This requires more information about each process and can cause low device utilization.
For some algorithms the scheduler only needs to know the maximum number
of units of each resource that a process may use. In more complex algorithms the scheduler can also
make use of a schedule of exactly what resources may be required in what particular order.
If the scheduler finds that granting a resource request or starting a process may lead to
a future deadlock, then the request is not granted or the process is not started.
A resource allocation state contains information about the number of available and allocated
resources and the maximum requirements of all processes in the system.
Safe State
A state is safe if the system can allocate all resources requested by all processes without
entering a deadlock state. That is, there exists a safe sequence of processes {P0, P1, P2, ...,
PN} such that all of the resource requests of each Pi can be satisfied using the currently
available resources together with the resources held by all processes Pj with j < i. In other
words, if all the processes before Pi finish and release their resources, then Pi will be able
to finish as well, using the resources they have released.
Interprocess Communication and Synchronization | 29

If no safe sequence exists, then the system is in an unsafe state, which may lead to
deadlock. It is important to note that all safe states are deadlock-free, but not all
unsafe states necessarily lead to deadlock.
The basic idea of the safe-state approach is that when a request for resources is made, the
request should be granted only if the resulting allocation state is a safe one. Figure 3.6
below shows the safe, unsafe, and deadlocked state spaces.
Figure 3.6: Safe, unsafe, and deadlocked state spaces
Resource-Allocation Graph Algorithm
When every resource category has only a single instance of its resource, deadlock
states can be detected by looking for a cycle in the resource-allocation graph (RAG).
For this case, one can recognize and avoid unsafe states by augmenting the resource-
allocation graph with claim edges, denoted by dashed lines. A claim edge points from a
process to a resource that the process may request in the future.
Note that for this technique to work, all claim edges for a particular process must be
added to the graph before that process is permitted to request any resources. In other
words, a process may only request resources for which it has already established claim
edges. Note also that claim edges cannot be added for a process that is currently
holding resources.
When process Pi makes a request for resource Rj, the claim edge Pi -> Rj is converted
to a request edge. Likewise, when a resource is freed, the assignment edge reverts to a
claim edge.
This approach denies any request that would produce a cycle in the resource-allocation
graph, taking the claim edges into account.
Consider, for example, what would happen if process P2 requested resource R2 in the
resource-allocation graph for deadlock avoidance shown below in Figure 3.7.
Figure 3.7: Resource allocation graph for deadlock avoidance
The resulting resource-allocation graph would have a cycle in it, and therefore the
request cannot be granted. Figure 3.8 below shows the resulting unsafe state in a
resource-allocation graph.
Figure 3.8: An unsafe state in a resource allocation graph
Banker's Algorithm
If resource categories have more than one instance, then the resource-allocation graph
method no longer works, and a more complex method is required.
This method is called the Banker's Algorithm, because a banker could use it to ensure
that, after lending out resources, the bank can still satisfy the needs of all its
clients.
When a process starts, it must declare in advance the maximum number of units of each
resource it may request. Obviously, this declared maximum cannot exceed the total
amount available in the system.
Before granting a request, the scheduler determines whether doing so would leave the
system in a safe state. If not, the process must wait until the request can be granted
safely.
The Banker's algorithm uses the following data structures, where n is the number of
processes and m is the number of resource categories:
o Available[m] - indicates the number of currently available units of each resource type.
o Max[n][m] - indicates the maximum demand of each process for each resource type.
o Allocation[n][m] - indicates the number of units of each resource category currently
allocated to each process.
o Need[n][m] - indicates the remaining resources required of each type by each
process. Note that Need[i][j] = Max[i][j] - Allocation[i][j] for all i, j.
To simplify the discussion, the following notation is used:
o Need[i] can be treated as a vector corresponding to the needs of process i, and
likewise for Max[i] and Allocation[i].
o Vector X is considered less than or equal to vector Y if X[i] <= Y[i] for all i.
Safety Algorithm
Before applying the Banker's algorithm, we need an algorithm to determine whether a
particular state is safe.
The safety algorithm determines whether the current state of the system is safe using the
following steps:
1. Let Work and Finish be vectors of length m and n respectively.
o Work is a working copy of the available resources, which will be modified
during the analysis.
o Finish is a vector of Booleans which specifies whether a particular process can
finish or has completed so far in the analysis.
o Initialize Work to Available, and Finish to false for all elements.
2. Find an i such that Finish[i] == false and Need[i] <= Work. Such a process has not
yet finished, but could finish using the currently available working set. If no such
i exists, go to step 4.
3. Set Work = Work + Allocation[i], and set Finish[i] to true. This models process i
completing and returning its resources to the work pool.
Then loop back to step 2.
4. If Finish[i] == true for all i, then the state is a safe state, because a safe
sequence has been found.
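The following is a minimal sketch of this safety check in Python; the function and variable names are our own, and the three-process, one-resource example state at the bottom is hypothetical.

def is_safe(available, max_demand, allocation):
    # Return (True, safe_sequence) if the state is safe, else (False, []).
    n, m = len(allocation), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)          # Step 1: Work = Available
    finish = [False] * n            #         Finish = false for all i
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):          # Step 2: find i with Need[i] <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):  # Step 3: process i finishes and returns
                    work[j] += allocation[i][j]     # its resources
                finish[i] = True
                sequence.append(i)
                progressed = True
    safe = all(finish)              # Step 4: safe iff every process can finish
    return (safe, sequence if safe else [])

# Hypothetical state: 12 units of one resource; P0 holds 5 (max 10),
# P1 holds 2 (max 4), P2 holds 2 (max 9), so 3 units are free.
print(is_safe([3], [[10], [4], [9]], [[5], [2], [2]]))  # (True, [1, 0, 2])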
Resource-Request Algorithm (The Banker's Algorithm)
Now that we have a way to determine whether a particular state is safe, we can look at
the Banker's algorithm itself.
The algorithm determines whether a new request can be granted safely, and grants it
only if so.
When a request is made that can be satisfied by the currently available resources, the
algorithm pretends it has been granted and then checks whether the resulting state is
safe. If it is safe, the request is granted; otherwise the request is denied, as follows:
1. Let Request[n][m] indicate the number of resources of each type currently requested by
each process. If Request[i] > Need[i] for any process i, raise an error condition, since
the process has exceeded its declared maximum claim.
2. If Request[i] > Available, then process i must wait for resources to become
available. Otherwise the process can continue to step 3.
3. Determine whether the request can be granted safely, by pretending it has been granted
and then checking whether the resulting state is safe. If it is safe, grant the request;
if not, the process must wait until its request can be granted safely. The pretended
grant is computed as follows:
o Available = Available - Request[i]
o Allocation[i] = Allocation[i] + Request[i]
o Need[i] = Need[i] - Request[i]
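Continuing the sketch above, a hedged Python version of the request check might look as follows; is_safe is the function from the safety-algorithm sketch, and all names are our own rather than any standard API.

def request_resources(pid, request, available, max_demand, allocation):
    # Grant `request` for process `pid` only if the resulting state is safe.
    # Updates the state in place and returns True if granted, else False.
    m = len(available)
    need = [max_demand[pid][j] - allocation[pid][j] for j in range(m)]
    # Step 1: the process may not exceed its declared maximum claim.
    if any(request[j] > need[j] for j in range(m)):
        raise ValueError("process exceeded its declared maximum claim")
    # Step 2: the request must fit within the currently available resources.
    if any(request[j] > available[j] for j in range(m)):
        return False                        # the process must wait
    # Step 3: pretend to grant the request ...
    for j in range(m):
        available[j] -= request[j]
        allocation[pid][j] += request[j]
    safe, _ = is_safe(available, max_demand, allocation)
    if not safe:
        # ... and roll it back if the resulting state would be unsafe.
        for j in range(m):
            available[j] += request[j]
            allocation[pid][j] -= request[j]
        return False
    return True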
Deadlock Detection
If deadlocks are neither prevented nor avoided, then the system must detect them when
they occur and then recover.
This entails the overhead of regularly checking for deadlocks, and in addition work may
be lost when processes are aborted or their resources are preempted during recovery.
Here, we discuss deadlock detection in two cases: when there is a single instance of each
resource type, and when there are several instances of each resource type.
Single Instance of Each Resource Type
If there is a single instance of each resource category, then we can use a wait-for
graph, which is a variation of the resource-allocation graph.
A wait-for graph is constructed from a resource-allocation graph by eliminating the
resource nodes and collapsing the corresponding edges, as depicted by the figure
below.
An edge from Pi to Pj in a wait-for graph indicates that process Pi is waiting for a
resource that process Pj is currently holding. A resource-allocation graph and its
corresponding wait-for graph are shown in Figure 3.9 below.
Figure 3.9: (a) Resource allocation graph (b) Corresponding wait-for graph
As with the RAG, cycles in the wait-for graph indicate deadlocks.
The detection algorithm must maintain the wait-for graph and periodically search it for
cycles; a sketch of such a check is given below.
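A minimal sketch of this cycle check in Python follows; the dictionary representation of the wait-for graph and the process names are our own assumptions.

def has_cycle(wait_for):
    # Detect a cycle in a wait-for graph given as a dict mapping each
    # process to the set of processes it is waiting for.
    visiting, done = set(), set()

    def dfs(p):
        if p in visiting:       # back edge: p is already on the DFS path
            return True
        if p in done:
            return False
        visiting.add(p)
        if any(dfs(q) for q in wait_for.get(p, ())):
            return True
        visiting.remove(p)
        done.add(p)
        return False

    return any(dfs(p) for p in wait_for)

# Hypothetical wait-for graph: P1 -> P2 -> P3 -> P1 forms a deadlock cycle.
print(has_cycle({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}))  # True
print(has_cycle({"P1": {"P2"}, "P2": {"P3"}}))                # False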
Several Instances of a Resource Type
Here we present a detection algorithm which is essentially the same as the safety
algorithm used by the Banker's algorithm, with the following major differences:
In step 1, the safety algorithm sets Finish[i] to false for all i. Here, the
detection algorithm sets Finish[i] to false only if Allocation[i] is non-zero; if the
resources currently allocated to a process are zero, the algorithm sets Finish[i] to
true. The underlying assumption is that if all the other processes can complete, then
this process can complete as well. Moreover, this algorithm is looking specifically
for processes involved in a deadlock, and a process holding no resources cannot be
involved in one, so it can be ruled out from further consideration.
Steps 2 and 3 are otherwise unchanged, except that step 2 compares Work against Request[i], the vector of resources process i is currently requesting, rather than Need[i], since processes declare no maximum claims in the detection setting.
In step 4, if Finish[i] == true for all i, then there is no deadlock. Otherwise,
every process Pi for which Finish[i] == false is involved in the deadlock that has
been detected. A sketch of this detection variant is given below.
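Reusing the conventions of the earlier sketches, a hedged Python version of this detection variant might look like the following; the Request matrix holds each process's currently outstanding requests, and all names are our own.

def find_deadlocked(available, allocation, request):
    # Return the list of process indices involved in a deadlock;
    # an empty list means the system is currently deadlock-free.
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding no resources cannot be part of a deadlock.
    finish = [all(allocation[i][j] == 0 for j in range(m)) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):      # i can finish; reclaim its resources
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]

# Hypothetical: P0 and P1 each hold one unit of a resource the other requests.
print(find_deadlocked([0], [[1], [1]], [[1], [1]]))  # [0, 1]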
Recovery from Deadlock
The following are the approaches for recovering from a deadlock situation:
1. Resource preemption.
2. Manual intervention by the system operator.
3. Termination of one or more processes that are involved in the deadlock.
Resource Preemption
Three significant issues must be addressed when resource preemption is used to get rid of
a deadlock. These are as follows.
1. Selecting a victim - Deciding which resources to preempt from which processes
involves many of the same decision criteria discussed above.
2. Rollback - Ideally, one would roll back a preempted process to a safe state just
before the point at which the resource was initially allocated to it. However, it is
usually impossible to determine what such a safe state is, and therefore the only safe
rollback is a total rollback: abort the process and start it again.
3. Starvation - It is quite possible for a process to starve because its resources are
continually being preempted. To guarantee that a process will not starve, one solution
is a priority scheme in which the priority of a process increases each time its
resources are preempted; eventually the priority becomes so high that the process is no
longer chosen as a victim.
Process Termination
Following are two simple approaches for recovering the resources allocated to
deadlocked processes.
o Terminate the processes one by one until the deadlock is broken. This approach is
more conservative, but it requires the deadlock-detection algorithm to be re-run
after each termination.
o Simply terminate all processes involved in the deadlock. This clearly resolves the
deadlock, but at the cost of terminating more processes than necessary.
In the first approach, where processes are terminated one by one until the deadlock is
broken, various factors may decide which process should be terminated next. These
factors are as follows.
1. The type and number of resources the process is holding, and whether they can
easily be preempted and restored.
2. Process priorities.
3. How long the process has been running, and how much longer it needs in order to
finish.
4. Whether the process is batch or interactive.
5. What other resources the process needs in order to finish.
6. How many processes will need to be terminated.
3.11 Summary
Inter-process communication provides a mechanism for processes to communicate and to
synchronize their actions.
When one process is in its critical section then no other processes can be in their critical
sections at the same time.
A semaphore is a protected integer variable which can simplify and limit access to shared
resources in a multiprocessing environment. Semaphores are of two types, namely binary
semaphores and counting semaphores.
Interrupt disabling and special machine instructions are two important hardware support
approaches to achieve mutual exclusion.
The famous classical problems in concurrent programming are the dining-philosophers
problem, the producer-consumer (or bounded-buffer) problem, and the readers-writers
problem.
Critical region is a section of code which is executed under mutual exclusion. Conditional
critical regions provide conditional synchronization for critical regions.
Monitors are an advancement over conditional critical regions, because all the code that
accesses the shared data is localized in one place.
Processes communicate with each other by passing messages between them. In direct message
passing, a process that wishes to communicate must explicitly name the recipient or
sender of the communication. In indirect message passing, processes send and receive
messages through mailboxes (also referred to as ports).
The necessary conditions for deadlock are: mutual exclusion, hold and wait, no
preemption, and circular wait.
3.12 Exercises
1. Which process can be affected by other processes executing in the system?
a) cooperating process
b) child process
c) parent process
d) init process
2. When several processes access the same data concurrently and the outcome of the
execution depends on the particular order in which the accesses take place, this situation is called
a) dynamic condition
b) race condition
c) essential condition
d) critical condition
3. If a process is executing in its critical section, then no other processes can be
executing in their critical section. This condition is called
a) mutual exclusion
b) critical exclusion
c) synchronous exclusion
d) asynchronous exclusion
4. Which one of the following is a synchronization tool?
a) thread
b) pipe
c) semaphore
d) socket
5. A semaphore is a shared integer variable
a) that cannot drop below zero
b) that cannot be more than zero
c) that cannot drop below one
d) that cannot be more than one
6. Mutual exclusion can be provided by the
a) mutex locks
b) binary semaphores
c) both (a) and (b)
d) none of the mentioned
7. In the non-blocking send
a) the sending process keeps sending until the message is received
b) the sending process sends the message and resumes operation
c) the sending process keeps sending until it receives a message
d) None of these
8. Process synchronization can be done on
a) hardware level
b) software level
c) both (a) and (b)
d) none of the mentioned
9. A monitor is a module that encapsulates
a) shared data structures
b) procedures that operate on shared data structure
c) synchronization between concurrent procedure invocation
d) all of the mentioned
10. A system is in the safe state if
a) the system can allocate resources to each process in some order and still avoid a
deadlock
b) there exists a safe sequence
c) both (a) and (b)
d) none of the mentioned
11. Define inter-process communication. What are cooperating processes?
12. Explain in brief the two models of IPC.
13. What is the need for inter-process synchronization?
14. Explain the critical-section problem. Explain two methods to overcome this problem.
15. Define the term mutual exclusion.
16. Explain binary semaphores and counting semaphores. Explain implementation of
wait() and signal() operation.
17. Explain in brief hardware synchronization for solving the critical-section problem.
18. Explain the bounded-buffer problem.
19. Explain reader-writer problem and provide a semaphore based solution to it.
20. Explain monitors. Compare them with semaphores with respect to their advantages
and disadvantages.
21. Explain the solution to dining philosophers problem using monitors.
22. What are deadlocks? Explain various methods for preventing them.
23. Explain various methods of recovery from deadlock.
24. Give necessary conditions for the occurrence of deadlock.
25. Consider a system with 12 tape drives, allocated as follows. Is this a safe state? What
is the safe sequence?
        Maximum Needs    Current Allocation
P0            10                  5
P1             4                  2
P2             9                  2
What happens to the above table if process P2 requests and is granted one more tape
drive?
Suggested Reading
Silberschatz, Abraham, et al. Operating System Concepts. Vol. 4. Reading: Addison-Wesley, 1998.
Romero, Fernando. "Operating Systems. A concept-based approach." Journal of Computer
Science & Technology 9 (2009).
www.slideshare.net
www.tutorialspoint.com