
TOPIC : HOW DEADLOCKS DEGRADE THE PERFORMANCE OF THE CPU

Submitted To :- Sahil Rampal
Submitted By :- Ravi Kant

Roll No. RTb805b14
Reg. No. 10808662
MCA-IInd

LOVELY INSTITUTE OF MANAGEMENT


Acknowledgement

I hereby submit the term paper assigned to me by my teacher, Mr.
Sahil Rampal, for the subject 'Operating System' on the topic 'How
deadlocks degrade the performance of the CPU'. I have prepared this
term paper under the guidance of my subject teacher, and I have
acquired additional knowledge on the topic from the Internet and
from related books available in the market. I have not used any
unfair means to accomplish this term paper.

In the end, I would like to thank my class teacher, my subject
teacher and my friends who have helped me to complete the term
paper. I am also highly thankful to all the staff and executives of
the esteemed university, namely 'LOVELY PROFESSIONAL
UNIVERSITY, PHAGWARA, JALANDHAR'.
INTRODUCTION:-
Deadlock can be defined formally as follows:

A set of processes is deadlocked if each process in the set is
waiting for an event that only another process in the set can
cause.

Resources can be preemptable or non-preemptable. A resource is
preemptable if it can be taken away from the process that is
holding it (we can think of the original holder as waiting, frozen,
until the resource is returned to it). Memory is an example of a
preemptable resource. Of course, one may choose to treat
intrinsically preemptable resources as if they were non-
preemptable. In our discussion we only consider non-preemptable
resources.

Resources can be reusable or consumable. They are reusable if
they can be used again after a process is done using them.
Memory, printers and tape drives are examples of reusable resources.
Consumable resources are resources that can be used only once,
for example a message or an event. If two processes are waiting
for a message and one receives it, then the other process remains
waiting. Reasoning about deadlocks when dealing with
consumable resources is extremely difficult, so we will restrict
our discussion to reusable resources.

Resources usually have a multiplicity, i.e. an indication of how
many copies of the resource exist. So we may have 3 tape drives,
2 printers, etc. We normally assume that resources may have a
multiplicity different from 1; if every multiplicity were 1, the
study of deadlocks could be simplified.

Because all the processes are waiting, none of them will ever
cause any of the events that could wake up any of the other
members of the set, and all the processes continue to wait
forever. For this model, we assume that processes have only a
single thread and that there are no interrupts possible to wake up
a blocked process. The no-interrupts condition is needed to
prevent an otherwise deadlocked process from being awakened
by, say, an alarm, and then causing events that release other
processes in the set.

In most cases, the event that each process is waiting for is the
release of some resource currently possessed by another member
of the set. In other words, each member of the set of deadlocked
processes is waiting for a resource that is owned by a deadlocked
process. None of the processes can run, none of them can release
any resources, and none of them can be awakened. The number
of processes and the number and kind of resources possessed
and requested are unimportant. This result holds for any kind of
resource, including both hardware and software.
Dining Philosophers:-
The dining philosophers problem is summarized as five
philosophers sitting at a table doing one of two things – eating or
thinking. While eating, they are not thinking, and while thinking,
they are not eating. The five philosophers sit at a circular table
with a large bowl of spaghetti in the center. A fork is placed
between each pair of adjacent philosophers, and as such, each philosopher has one
fork to his or her left and one fork to his or her right. As spaghetti
is difficult to serve and eat with a single fork, it is assumed that a
philosopher must eat with two forks. The philosopher can only use
the fork on his or her immediate left or right.

In some cases, the dining philosophers problem is explained using
rice and chopsticks as opposed to spaghetti and forks, as it is
generally easier to understand that two chopsticks are required,
whereas one could arguably eat spaghetti using a single fork, or
using a fork and a spoon.
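To make the scenario concrete, here is a minimal sketch in Python, assuming only the standard threading module; the number of meals, the sleep times and the variable names are illustrative. It uses the simple trick of always picking up the lower-numbered fork first, which is one way of avoiding the circular wait discussed in the next section; the naive version, where everyone grabs the left fork first, can deadlock.

    # A minimal dining-philosophers sketch (assumption: standard Python threading).
    # Each fork is a Lock. Philosophers always pick up the lower-numbered fork
    # first; without this ordering, all five could grab their left fork at once
    # and wait forever for the right one (a circular wait, i.e. a deadlock).
    import threading
    import time

    N = 5                                    # five philosophers, five forks
    forks = [threading.Lock() for _ in range(N)]

    def philosopher(i, meals=3):
        left, right = i, (i + 1) % N
        first, second = min(left, right), max(left, right)   # global ordering
        for _ in range(meals):
            time.sleep(0.01)                 # thinking
            with forks[first]:
                with forks[second]:
                    time.sleep(0.01)         # eating with both forks held

    threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("all philosophers finished eating")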
Conditions for Deadlock:-
Coffman (1971) showed that four conditions must hold for there
to be a deadlock:

1. Mutual exclusion condition. Each resource is either currently
assigned to exactly one process or is available.

2. Hold and wait condition. Processes currently holding resources
granted earlier can request new resources.

3. No preemption condition. Resources previously granted cannot
be forcibly taken away from a process. They must be explicitly
released by the process holding them.

4. Circular wait condition. There must be a circular chain of two or
more processes, each of which is waiting for a resource held by
the next member of the chain.

All four of these conditions must be present for a deadlock to
occur. If one of them is absent, no deadlock is possible.

Resource Allocation Graphs:-

Resource Allocation Graphs (RAGs) are directed labeled graphs
used to represent, from the point of view of deadlocks, the
current state of a system.

State transitions can be represented as transitions between the
corresponding resource allocation graphs. Here are the rules for
state transitions (a sketch of these rules follows the list):

• REQUEST: if process Pi has no outstanding request, it can
request simultaneously any number (up to the multiplicity) of
resources R1, R2, ..., Rm. The request is represented by
adding the appropriate request edges to the RAG of the current
state.

• ACQUISITION: if process Pi has outstanding requests and
they can all be simultaneously satisfied, then the request
edges of these requests are replaced by assignment edges
in the RAG of the current state.

• RELEASE: if process Pi has no outstanding request, then it
can release any of the resources it is holding, and remove
the corresponding assignment edges from the RAG of the
current state.
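The following is a minimal sketch of how these three transitions could be represented in code. It is an illustration only, assuming Python; the class name, method names and the example resources ("printer", "tape") are not from any particular textbook or library.

    # A minimal sketch of a resource allocation graph (RAG) and its three
    # transitions: REQUEST, ACQUISITION and RELEASE.
    from collections import Counter, defaultdict

    class RAG:
        def __init__(self, multiplicity):
            # multiplicity: dict mapping resource name -> number of identical copies
            self.free = dict(multiplicity)
            self.request_edges = defaultdict(list)     # process -> requested resources
            self.assignment_edges = defaultdict(list)  # process -> held resources

        def request(self, proc, resources):
            # REQUEST: only allowed if the process has no outstanding request
            assert not self.request_edges[proc]
            self.request_edges[proc] = list(resources)

        def acquire(self, proc):
            # ACQUISITION: grant only if every requested copy is currently free
            need = Counter(self.request_edges[proc])
            if all(self.free[r] >= n for r, n in need.items()):
                for r in self.request_edges[proc]:
                    self.free[r] -= 1
                    self.assignment_edges[proc].append(r)
                self.request_edges[proc] = []
                return True
            return False

        def release(self, proc, resource):
            # RELEASE: only a process with no outstanding request may release
            assert not self.request_edges[proc]
            self.assignment_edges[proc].remove(resource)
            self.free[resource] += 1

    g = RAG({"printer": 1, "tape": 2})
    g.request("P1", ["printer", "tape"])
    print(g.acquire("P1"))        # True: request edges become assignment edges
    g.release("P1", "printer")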

Deadlock Modeling:-

Resource allocation graphs: (a) holding a resource, (b) requesting a
resource, (c) deadlock.

State Graphs:-

Where a resource allocation graph describes the current state of a
system, a State Graph describes, from the point of view of
deadlocks, all the states of a system.

We distinguish the following kinds of states:

• Secure: no matter which operation we do, we will not get
deadlocked.

• Safe: there is a way to complete all processes without
getting into a deadlock state.

• Unsafe: the state is not safe. Unsafe states are further
distinguished into semi-deadlock (we are not deadlocked yet,
but we certainly will be), deadlock (some processes are
deadlocked), and total deadlock (all processes are
deadlocked).

Basic approaches to deadlock handling:-

One basic strategy for handling deadlocks is to ensure violation
of at least one of the conditions necessary for deadlock
(exclusive control, hold and wait, no preemption). This method
is usually referred to as deadlock prevention; when its primary
aim is instead to avoid deadlock by using information about the
processes' future intentions regarding resource requirements, it
is called deadlock avoidance.

A totally different strategy examines the process/resource
relationships from time to time in order to identify the
existence of a deadlock. This latter method presumes that the
system can subsequently do something about the problem.

Detection techniques:-

These techniques assume that all resource requests will be
granted eventually. A periodically invoked algorithm examines
current resource allocations and outstanding requests to
determine if any processes or resources are deadlocked. If a
deadlock is discovered, the system must recover as gracefully as
possible.
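As an illustration of such a detection pass, here is a minimal sketch assuming each resource has a single instance, so that a deadlock is exactly a cycle in the wait-for graph; the function name and the example graph are hypothetical.

    # Minimal deadlock-detection sketch: take a wait-for graph (process -> the
    # set of processes it is waiting on) and look for a cycle with a DFS.
    def find_deadlocked(wait_for):
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {}

        def dfs(p, path):
            colour[p] = GREY
            for q in wait_for.get(p, ()):
                if colour.get(q, WHITE) == GREY:      # back edge: a cycle
                    return path[path.index(q):] + [q]
                if colour.get(q, WHITE) == WHITE:
                    cycle = dfs(q, path + [q])
                    if cycle:
                        return cycle
            colour[p] = BLACK
            return []

        for p in list(wait_for):
            if colour.get(p, WHITE) == WHITE:
                cycle = dfs(p, [p])
                if cycle:
                    return cycle
        return []

    # P1 waits on P2, P2 waits on P3, P3 waits on P1: a deadlock cycle.
    print(find_deadlocked({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}))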

Types of deadlocks:-

• Distributed deadlock

Distributed deadlocks can occur in distributed systems when
distributed transactions or concurrency control is being used.
Distributed deadlocks can be detected either by constructing a
global wait-for graph from the local wait-for graphs at a deadlock
detector, or by a distributed algorithm like edge chasing.

Phantom deadlocks are deadlocks that are detected in a
distributed system but no longer actually exist.

• Livelock

A livelock is similar to a deadlock, except that the states of the
processes involved in the livelock constantly change with
regard to one another, with none progressing. Livelock is a special
case of resource starvation; the general definition of starvation only
states that a specific process is not progressing.

A real-world example of livelock occurs when two people meet
in a narrow corridor, and each tries to be polite by moving
aside to let the other pass, but they end up swaying from side
to side without making any progress because they both
repeatedly move the same way at the same time.

Livelock is a risk with some algorithms that detect and recover
from deadlock. If more than one process takes action, the
deadlock detection algorithm can repeatedly trigger. This can
be avoided by ensuring that only one process (chosen
randomly or by priority) takes action.
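A minimal sketch of this idea follows, assuming Python threading; the lock names and the delay values are purely illustrative. Two workers each hold one lock, try the other without blocking, and back off on failure; if both back off and retry in lockstep, that is a livelock, and randomising the back-off is one simple way to break the symmetry.

    # Livelock-prone "back off and retry" locking, with the usual randomised
    # back-off so the two sides stop moving in lockstep.
    import random
    import threading
    import time

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    def worker(first, second, name):
        while True:
            with first:
                if second.acquire(blocking=False):   # try the other lock politely
                    try:
                        print(f"{name}: got both locks, doing work")
                        return
                    finally:
                        second.release()
            # If both sides fail here and retry at exactly the same moment,
            # forever, neither makes progress: that is a livelock.
            time.sleep(random.uniform(0.001, 0.01))  # random back-off breaks the tie

    t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "T1"))
    t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "T2"))
    t1.start()
    t2.start()
    t1.join()
    t2.join()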

Deadlocks example:-

When two or more processes are interacting, they can
sometimes get themselves into a stalemate situation that they
cannot get out of. Such a situation is called a deadlock.

Deadlocks can best be introduced with a real-world example
everyone is familiar with: deadlock in traffic. Consider the
situation of Fig. 1-13(a). Here four buses are approaching an
intersection. Behind each one are more buses (not shown).
With a little bit of bad luck, the first four could all arrive at the
intersection simultaneously, leading to the situation of Fig. 1-13(b),
in which they are deadlocked because none of them can
go forward. Each one is blocking one of the others. They cannot
go backward due to the other buses behind them. There is no easy
way out.

Figure 1-13. (a) A potential deadlock. (b) An actual deadlock.

Processes in a computer can experience an analogous situation
in which they cannot make any progress. For example, imagine
a computer with a tape drive and a CD-recorder. Now imagine
that two processes each need to produce a CD-ROM from data
on a tape. Process 1 requests and is granted the tape drive.
Next, process 2 requests and is granted the CD-recorder. Then
process 1 requests the CD-recorder and is suspended until
process 2 returns it. Finally, process 2 requests the tape drive
and is also suspended because process 1 already has it. Here
we have a deadlock from which there is no escape.
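The same scenario can be reproduced with two threads and two locks standing in for the two devices. This is only an illustration, not how a real driver allocates hardware; the lock names, the sleep and the timeout are assumptions, and the timeout is there only so the demo reports the deadlock instead of hanging forever.

    # A sketch of the tape-drive / CD-recorder deadlock described above.
    import threading
    import time

    tape_drive = threading.Lock()
    cd_recorder = threading.Lock()

    def process(name, first, second):
        first.acquire()                       # granted its first device
        time.sleep(0.1)                       # give the other process time to grab its own
        if second.acquire(timeout=1):         # then ask for the second device...
            print(f"{name}: got both devices, burning the CD from tape")
            second.release()
        else:
            print(f"{name}: still waiting for the second device -> deadlocked")
        first.release()

    p1 = threading.Thread(target=process, args=("process 1", tape_drive, cd_recorder))
    p2 = threading.Thread(target=process, args=("process 2", cd_recorder, tape_drive))
    p1.start()
    p2.start()
    p1.join()
    p2.join()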
A real-world example:-

For application programming, as opposed to server
implementation, thread pools pose some concurrency risks.
The reason is that the tasks making up an application tend to
be dependent on each other. In particular, deadlock is a
significant concern. A deadlock occurs when a set of threads
creates a cycle of waiting. For example, suppose that thread 1
holds mutex lock A and is waiting to acquire mutex B, thread 2
is holding mutex B and is waiting to acquire mutex C, and
thread 3 holds mutex C and is waiting to acquire mutex A. In this
situation, none of the three threads can proceed. Although
deadlock is a concern in any asynchronous concurrency
platform, thread pools escalate the concern. In particular, a
deadlock can occur if all threads are executing tasks that are
waiting for another task on the work queue in order to produce
a result.

Banker's Algorithm for Deadlock Avoidance

When a request is made, check to see if, after the request is
satisfied, there is at least one sequence of moves that can
satisfy all the requests, i.e. the new state is safe. If so, satisfy
the request; otherwise make the request wait.

The Banker's algorithm is a resource allocation and deadlock
avoidance algorithm developed by Edsger Dijkstra that tests for
safety by simulating the allocation of the pre-determined maximum
possible amounts of all resources, and then makes a "safe-
state" check to test for possible deadlock conditions for all
other pending activities, before deciding whether the allocation
should be allowed to continue.

The algorithm was developed in the design process for the THE
operating system and was originally described (in Dutch) in
EWD108 [1]. The name is by analogy with the way that bankers
account for liquidity constraints.

Algorithm

The Banker's algorithm is run by the operating system
whenever a process requests resources. The algorithm prevents
deadlock by denying or postponing the request if it determines
that accepting the request could put the system in an unsafe
state (one where deadlock could occur).

Resources

For the Banker's algorithm to work, it needs to know three
things:

• How much of each resource each process could possibly
request

• How much of each resource each process is currently
holding

• How much of each resource the system has available

Some of the resources that are tracked in real systems are
memory, semaphores and interface access.

Example:-

Assuming that the system distinguishes between four types of
resources (A, B, C and D), the following is an example of how
those resources could be distributed. Note that this example
shows the system at an instant before a new request for
resources arrives. Also, the types and number of resources are
abstracted. Real systems, for example, would deal with much
larger quantities of each resource.

• Available system resources:

        A  B  C  D
        3  1  1  2

• Processes (currently allocated resources):

            A  B  C  D
        P1  1  2  2  1
        P2  1  0  3  3
        P3  1  1  1  0

• Processes (maximum resources):

            A  B  C  D
        P1  3  3  2  2
        P2  1  2  3  4
        P3  1  1  5  0
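A minimal sketch of the Banker's safety check follows, using the numbers from the tables above; the function and variable names are illustrative. It simply looks for an order in which every process can obtain its maximum claim and finish; for this data it finds the safe sequence P1, P2, P3.

    # Banker's safety check on the example above (sketch; names are illustrative).
    def is_safe(available, allocated, maximum):
        need = {p: [m - a for m, a in zip(maximum[p], allocated[p])] for p in allocated}
        work = list(available)
        finished = set()
        order = []
        while len(finished) < len(allocated):
            progressed = False
            for p in allocated:
                if p not in finished and all(n <= w for n, w in zip(need[p], work)):
                    # p can run to completion and give back everything it holds
                    work = [w + a for w, a in zip(work, allocated[p])]
                    finished.add(p)
                    order.append(p)
                    progressed = True
            if not progressed:
                return False, order          # no process can finish: unsafe
        return True, order

    available = [3, 1, 1, 2]
    allocated = {"P1": [1, 2, 2, 1], "P2": [1, 0, 3, 3], "P3": [1, 1, 1, 0]}
    maximum   = {"P1": [3, 3, 2, 2], "P2": [1, 2, 3, 4], "P3": [1, 1, 5, 0]}
    print(is_safe(available, allocated, maximum))   # (True, ['P1', 'P2', 'P3'])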
Safe and Unsafe States

A state (as in the above example) is considered safe if it is
possible for all processes to finish executing (terminate). Since
the system cannot know when a process will terminate, or how
many resources it will have requested by then, the system
assumes that all processes will eventually attempt to acquire
their stated maximum resources and terminate soon afterward.
This is a reasonable assumption in most cases, since the system
is not particularly concerned with how long each process runs
(at least not from a deadlock avoidance perspective). Also, if a
process terminates without acquiring its maximum resources, it
only makes it easier on the system.

Safe State

A safe state is one where:

1. It is not a deadlocked state.

2. There is some sequence by which all requests can be
satisfied.

To avoid deadlocks, we try to make only those transitions that
take us from one safe state to another. We avoid transitions to
an unsafe state (a state that is not deadlocked, but is not safe).

e.g.

Total instances of the resource = 12

(Max, Allocated, Still Needs)

P0 (10, 5, 5)   P1 (4, 2, 2)   P2 (9, 2, 7)   Free = 3  -  Safe

The sequence P1, P0, P2 is a reducible sequence (P1 can finish with
the 3 free instances and then releases its 2, after which P0 can
finish, and then P2), so the first state is safe.

What if P2 requests 1 more instance and is allocated it?

- This results in an unsafe state: only 2 instances remain free, so
after P1 finishes only 4 are free, which is not enough for either P0
(which still needs 5) or P2 (which now needs 6).

So we do not allow P2's request to be satisfied.
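The same check can be run mechanically on this single-resource example. The tiny sketch below (function name illustrative, numbers taken straight from the text) reports that the original state is safe and the state after granting P2 one more instance is not.

    # Single-resource safety check for the worked example above (sketch).
    def is_safe_single(free, alloc, max_need):
        need = {p: max_need[p] - alloc[p] for p in alloc}
        finished = set()
        while len(finished) < len(alloc):
            runnable = [p for p in alloc if p not in finished and need[p] <= free]
            if not runnable:
                return False                 # nobody can finish: unsafe
            p = runnable[0]
            free += alloc[p]                 # p finishes and releases its instances
            finished.add(p)
        return True

    alloc    = {"P0": 5, "P1": 2, "P2": 2}
    max_need = {"P0": 10, "P1": 4, "P2": 9}
    print(is_safe_single(3, alloc, max_need))                # True: safe
    print(is_safe_single(2, {**alloc, "P2": 3}, max_need))   # False: unsafe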

Deadlock Prevention:-

Deadlock prevention means using resources in such a way that we
cannot get into deadlocks. In real life we may decide that left
turns are too dangerous, so we only make right turns. It takes
longer to get there, but it works. In terms of deadlocks, we may
constrain our use of resources so that we do not have to worry
about deadlocks.

Linear Ordering of Resources

Assume that all resources are totally ordered from 1 to r. We
may impose the following constraint:

# A process cannot request a resource Rk if it is holding a
resource Rh with k < h.

It is easy to see that with this rule we will not get into
deadlocks. [Proof by contradiction.]

Here is an example of how we apply this rule: consider a
process that uses resources ordered as A, B, C, D, E; a sketch of
acquiring them in that order is given below.
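The sketch below is illustrative only (the class, the resource names A–E and their numbering are assumptions): each resource is a lock with a fixed number, and a request is refused if it would violate the ordering rule.

    # A minimal sketch of the linear-ordering rule: a process may only request
    # a resource with a higher number than anything it already holds.
    import threading

    ORDER = {name: i for i, name in enumerate(["A", "B", "C", "D", "E"], start=1)}
    locks = {name: threading.Lock() for name in ORDER}

    class OrderedProcess:
        def __init__(self):
            self.held = []                       # resources held, in acquisition order

        def request(self, name):
            if self.held and ORDER[name] <= ORDER[self.held[-1]]:
                raise RuntimeError(f"rule violated: cannot request {name} "
                                   f"while holding {self.held[-1]}")
            locks[name].acquire()
            self.held.append(name)

        def release_all(self):
            for name in reversed(self.held):
                locks[name].release()
            self.held = []

    p = OrderedProcess()
    for r in ["A", "B", "C", "D", "E"]:          # requests follow the total order
        p.request(r)
    print("acquired:", p.held)
    p.release_all()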

A strategy such as this can be used when we have a few
resources. It is easy to apply and does not reduce the degree of
concurrency too much.

Hierarchical Ordering of Resources

Another strategy we may use, in the case that resources are
hierarchically structured, is to lock them in hierarchical order.
We assume that the resources are organized in a tree (or a
forest) representing containment. We can lock any node or
group of nodes in the tree. The resources we are interested in
are nodes in the tree, usually leaves. Then the following rule
will guarantee avoidance of deadlocks:

# The nodes currently locked by a process must lie on all
paths from the root to the desired resources.
Deadlock Prevention Algorithms

One-shot Algorithm

Given a request from process P for resources R1, R2, ..., Rn, the
resource manager follows these rules:

    if any resource R1, ..., Rn does not exist or is not free, then
        refuse the request
    else
        grant process P exclusive access to resources R1, ..., Rn
    end if

Repeated One-shot (Multishot) Algorithm

Given a request from process P for resources R1, R2, ..., Rn, the
resource manager follows the same rule as for the one-shot
algorithm.

If a process P wants to request resources while already holding
resources, it follows these steps:

1. P frees all resources it is holding

2. P requests all resources previously held plus the new
resources it wants to acquire

Hierarchical Algorithm

Given a request from process P for resource R, the resource
manager follows these rules:

    if the resource R does not exist or is in use, then
        refuse the request
    else
        if process P is holding a resource R' with higher priority than
        resource R, then
            refuse the request
        else
            grant process P exclusive access to resource R
        end if
    end if
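A minimal sketch of these rules follows; it is an illustration only, and the class name, method names and the example resources ("disk", "printer", "scanner") with their priorities are assumptions, not part of any real operating system API.

    # Sketch of the hierarchical resource-manager rules above.
    class HierarchicalManager:
        def __init__(self, priorities):
            # priorities: dict resource name -> numeric priority (higher number = higher priority)
            self.priority = dict(priorities)
            self.owner = {}                   # resource -> owning process
            self.held = {}                    # process -> set of held resources

        def request(self, proc, res):
            if res not in self.priority or res in self.owner:
                return False                  # does not exist or is in use: refuse
            if any(self.priority[r] > self.priority[res]
                   for r in self.held.get(proc, ())):
                return False                  # holding a higher-priority resource: refuse
            self.owner[res] = proc
            self.held.setdefault(proc, set()).add(res)
            return True

        def release(self, proc, res):
            if self.owner.get(res) == proc:
                del self.owner[res]
                self.held[proc].discard(res)

    m = HierarchicalManager({"disk": 3, "printer": 2, "scanner": 1})
    print(m.request("P1", "disk"))      # True
    print(m.request("P1", "printer"))   # False: P1 already holds the higher-priority disk
    print(m.request("P2", "printer"))   # True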
How deadlocks degrade the performance of the CPU

Deeper Pipelines

o 1984: many cycles per instruction

o 2005: many instructions per cycle

o 20-stage pipelines

o CPU logic executes instructions out-of-order to keep the
pipeline full

o Synchronization instructions must not be reordered

o Otherwise you could execute instructions inside a critical
section without completing the entry instructions

o So synchronization stalls the pipeline
Performance

o Main issue with lock performance used to be contention

o Techniques were developed to reduce overheads in the
contended case

o Today, the issue is degraded performance even when locks
are always available

o Together with other concerns about locks

o Quick look at lock performance…

Hash Table Microbenchmark

o Read-only workload

o Best case with brlock gets only a 2X speedup on 4 CPUs

o brlock is the Linux "Big Reader Lock": a per-CPU reader lock
where writers must acquire all of the per-CPU locks
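To get a feel for the uncontended-lock overhead mentioned above, here is a rough, purely illustrative microbenchmark sketch in Python. The table size, iteration count and names are arbitrary, and Python's own interpreter overheads dominate, so treat the numbers as qualitative only: the point is simply that taking a lock on every read costs something even when the lock is always available.

    # Read a shared hash table (dict) with and without a lock around each lookup.
    import threading
    import time

    table = {i: i * i for i in range(1024)}
    lock = threading.Lock()
    N = 200_000

    def reads_unlocked():
        for i in range(N):
            _ = table[i % 1024]

    def reads_locked():
        for i in range(N):
            with lock:                   # the lock is never contended here
                _ = table[i % 1024]

    for fn in (reads_unlocked, reads_locked):
        start = time.perf_counter()
        fn()
        print(f"{fn.__name__}: {time.perf_counter() - start:.3f} s")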

References:-

1. E. W. Dijkstra, "EWD108: Een algorithme ter voorkoming van de
dodelijke omarming" (in Dutch; "An algorithm for the prevention of
the deadly embrace").

2. Lubomir F. Bic; Alan C. Shaw (2003). Operating System
Principles. Prentice Hall. ISBN 0-13-026611-6.
http://vig.prenhall.com/catalog/academic/product/0,1144,0130266116,00.html

3. Concurrency

4. A Treasury of Railroad Folklore, B. A. Botkin & A. F. Harlow, p. 381.

5. Mogul, Jeffrey C.; K. K. Ramakrishnan (2007). "Eliminating
receive livelock in an interrupt-driven kernel".
http://citeseer.ist.psu.edu/326777.html

6. Anderson, James H.; Yong-Jik Kim (2001). "Shared-memory
mutual exclusion: Major research trends since 1986".
http://citeseer.ist.psu.edu/anderson01sharedmemory.html
