
Pune Vidyarthi Griha’s

COLLEGE OF ENGINEERING, NASHIK.

“ OPERATING SYSTEM”

By
Prof. Anand N. Gharu
(Assistant Professor)
PVGCOE Computer Dept.

05 FEB 2018
CONTENTS :-
1. Introduction to different types of OS
2. Real time OS, System Component
3. OS Services, System Structure layer approach
4. Process management: Process concept-state
5. PCB, Thread
6. Process scheduling : preemptive & Non-preemptive
algorithms: FCFS, SJF, RR, Priority
7. Deadlock : Methods of handling deadlock, deadlock
prevention, avoidance, detection & recovery.
8. Case study : Process management in multicore OS
Prof. Gharu Anand N. 2
What is an operating system?
• An Operating System (OS) is an interface between a computer user and
computer hardware. An operating system is software that performs all
the basic tasks like file management, memory management, process
management, handling input and output, and controlling peripheral devices
such as disk drives and printers.



FUNCTION OF OPERATING SYSTEM
Services of Operating System
• Program execution

• I/O operations

• File System manipulation

• Communication

• Error Detection

• Resource Allocation

• Protection
• Program execution
• Operating systems handle many kinds of activities from user programs to
system programs like printer spooler, name servers, file server, etc. Each
of these activities is encapsulated as a process.
• A process includes the complete execution context (code to execute, data
to manipulate, registers, OS resources in use). Following are the major
activities of an operating system with respect to program management −
• Loads a program into memory.
• Executes the program.
• Handles program's execution.
• Provides a mechanism for process synchronization.
• Provides a mechanism for process communication.
• Provides a mechanism for deadlock handling.
• I/O Operation
• An I/O subsystem comprises I/O devices and their
corresponding driver software. Drivers hide the peculiarities
of specific hardware devices from the users.
• An Operating System manages the communication between
user and device drivers.
• I/O operation means read or write operation with any file or
any specific I/O device.
• Operating system provides the access to the required I/O
device when required.
• File system manipulation
• A file represents a collection of related information. Computers can
store files on the disk (secondary storage), for long-term storage
purpose. Examples of storage media include magnetic tape, magnetic
disk and optical disk drives like CD, DVD. Each of these media has its
own properties like speed, capacity, data transfer rate and data access
methods.
• Following are the major activities of an operating system with respect
to file management −
• Program needs to read a file or write a file.
• The operating system gives the permission to the program for
operation on file.
• Permission varies from read-only, read-write, denied and so on.
• Operating System provides an interface to the user to create/delete
files.
• Operating System provides an interface to the user to create/delete
directories.
• Operating System provides an interface to create the backup of files.
• Communication
• In case of distributed systems which are a collection of processors that
do not share memory, peripheral devices, or a clock, the operating
system manages communications between all the processes. Multiple
processes communicate with one another through communication lines
in the network.

• Some activities are:

• Two processes often require data to be transferred between them.

• Both the processes can be on one computer or on different computers,
but are connected through a computer network.

• Communication may be implemented by two methods, either by
Shared Memory or by Message Passing.
• Error handling
• Errors can occur anytime and anywhere. An error may occur in CPU,
in I/O devices or in the memory hardware. Following are the major
activities of an operating system with respect to error handling −
• The OS constantly checks for possible errors.
• The OS takes an appropriate action to ensure correct and consistent
computing.
• Resource Management

• In the case of a multi-user or multi-tasking environment, resources such as
main memory, CPU cycles and file storage are to be allocated to each
user or job. Following are the major activities of an operating system
with respect to resource management −

• The OS manages all kinds of resources using schedulers.

• CPU scheduling algorithms are used for better utilization of the CPU.
• Protection
• Considering a computer system having multiple users and concurrent
execution of multiple processes, the various processes must be
protected from each other's activities.

• Protection refers to a mechanism or a way to control the access of
programs, processes, or users to the resources defined by a computer
system.

• The OS ensures that all access to system resources is controlled.

• The OS ensures that external I/O devices are protected from invalid
access attempts.

• The OS provides authentication features for each user by means of
passwords.
Function of Operating System
• Memory Management
• Processor Management
• Device Management
• File Management
• Security
• Resource sharing & protection
• Job accounting
• Error detection
• Coordination between other software and users
Memory management
Memory management refers to management of Primary Memory or Main Memory.
Main memory is a large array of words or bytes where each word or byte has its own
address.

- Main memory provides fast storage that can be accessed directly by the CPU. For
a program to be executed, it must be in main memory. An Operating System does
the following activities for memory management −

- Keeps track of primary memory, i.e., which parts of it are in use and by whom,
and which parts are not in use.

- In multiprogramming, the OS decides which process will get memory when and
how much.

- Allocates the memory when a process requests it to do so.

- De-allocates the memory when a process no longer needs it or has been terminated.
Process management
In multiprogramming environment, the OS decides which process
gets the processor when and for how much time. This function is
called process scheduling.

An Operating System does the following activities for
processor management −

- Keeps track of the processor and the status of processes. The program
responsible for this task is known as the traffic controller.

- Allocates the processor (CPU) to a process.

- De-allocates the processor when a process is no longer required.
Device management
An Operating System manages device communication via their
respective drivers.

It does the following activities for device management −

- Keeps track of all devices. The program responsible for this task is
known as the I/O controller.

- Decides which process gets the device, when, and for how much time.

- Allocates the device in an efficient way.

- De-allocates devices.
File management
A file system is normally organized into directories for easy
navigation and usage. These directories may contain files and other
directories.

An Operating System does the following activities for file
management −
- Keeps track of information, location, uses, status etc. The
collective facilities are often known as file system.
- Decides who gets the resources.
- Allocates the resources.
- De-allocates the resources.
Other activities are :
Security − By means of password and similar other techniques, it
prevents unauthorized access to programs and data.
Control over system performance − Recording delays between
request for a service and response from the system.
Job accounting − Keeping track of time and resources used by
various jobs and users.
Error detecting aids − Production of dumps, traces, error
messages, and other debugging and error detecting aids.
Coordination between other software and users − Coordination
and assignment of compilers, interpreters, assemblers and other
software to the various users of the computer system.
TYPES OF OPERATING SYSTEM
Batch Operating System
The users of a batch operating system do not interact with the
computer directly. Each user prepares his job on an off-line device
like punch cards and submits it to the computer operator. To speed
up processing, jobs with similar needs are batched together and run
as a group. The programmers leave their programs with the
operator and the operator then sorts the programs with similar
requirements into batches.
The problems with Batch Systems are as follows −
Lack of interaction between the user and the job.

CPU is often idle, because the speed of the mechanical I/O devices
is slower than the CPU.

Difficult to provide the desired priority.
Multitasking
Multitasking is when multiple jobs are executed by the CPU
simultaneously by switching between them. Switches occur so frequently
that the users may interact with each program while it is running. An OS
does the following activities related to multitasking −
The user gives instructions to the operating system or to a program
directly, and receives an immediate response.
The OS handles multitasking in the way that it can handle multiple
operations/executes multiple programs at a time.
Multitasking Operating Systems are also known as Time-sharing systems.
These Operating Systems were developed to provide interactive use of a
computer system at a reasonable cost.
A time-shared operating system uses the concept of CPU scheduling and
multiprogramming to provide each user with a small portion of a time-
shared CPU.
Each user has at least one separate program in memory.
Multiprogramming
When two or more programs reside in memory at the same time, is referred
as multiprogramming. Multiprogramming assumes a single shared
processor. Multiprogramming increases CPU utilization by organizing jobs
so that the CPU always has one to execute.

An OS does the following activities related to multiprogramming.

The operating system keeps several jobs in memory at a time.

This set of jobs is a subset of the jobs kept in the job pool.

The operating system picks and begins to execute one of the jobs in the
memory.

Multiprogramming operating systems monitor the state of all active
programs and system resources using memory management programs,
ensuring that the CPU is never idle unless there are no jobs to process.
Spooling
Spooling is an acronym for simultaneous peripheral operations on line.
Spooling refers to putting data of various I/O jobs in a buffer. This buffer is
a special area in memory or hard disk which is accessible to I/O devices.

An operating system does the following activities related to
spooling −

Handles I/O device data spooling as devices have different data access rates.

Maintains the spooling buffer, which provides a waiting station where data
can rest while the slower device catches up.

Maintains parallel computation: because of the spooling process, a computer
can perform I/O in a parallel fashion. It becomes possible to have the
computer read data from a tape, write data to disk, and write out to a tape
printer while it is doing its computing task.
Time sharing Operating System
Time-sharing is a technique which enables many people, located at
various terminals, to use a particular computer system at the same
time. Time-sharing or multitasking is a logical extension of
multiprogramming. Processor's time which is shared among
multiple users simultaneously is termed as time-sharing.
Advantages of Timesharing operating systems are as follows −
Provides the advantage of quick response.
Avoids duplication of software.
Reduces CPU idle time.
Disadvantages of Time-sharing operating systems are as follows −
Problem of reliability.
Question of security and integrity of user programs and data.
Problem of data communication.
Distributed Operating System
Distributed systems use multiple central processors to serve multiple real-
time applications and multiple users. Data processing jobs are distributed
among the processors accordingly.
The processors communicate with one another through various
communication lines (such as high-speed buses or telephone lines). These
are referred as loosely coupled systems or distributed systems. Processors
in a distributed system may vary in size and function. These processors are
referred as sites, nodes, computers, and so on.
The advantages of distributed systems are as follows −
- Speedup the exchange of data with one another via electronic mail.
- If one site fails in a distributed system, the remaining sites can potentially
continue operating.
- Better service to the customers.
- Reduction of the load on the host computer.
- Reduction of delays in data processing.
Network Operating System
A Network Operating System runs on a server and provides the
server the capability to manage data, users, groups, security,
applications, and other networking functions. The primary purpose
of the network operating system is to allow shared file and printer
access among multiple computers in a network, typically a local
area network (LAN), a private network or to other networks.

Examples of network operating systems include Microsoft


Windows Server 2003, Microsoft Windows Server 2008, UNIX,
Linux, Mac OS X, Novell NetWare, and BSD.
Real time Operating System
A real-time system is defined as a data processing system in which
the time interval required to process and respond to inputs is so
small that it controls the environment. The time taken by the
system to respond to an input and display the required updated
information is termed the response time. So in this method, the
response time is very small compared to online processing.

Examples: scientific experiments, medical imaging systems,
industrial control systems, weapon systems, robots, air traffic
control systems, etc.
Component of Operating System
• Process management
• I/O management
• Main Memory management
• File & Storage Management
• Protection
• Networking
• Command Interpreter
Component of OS
• Interpreter (shell) :
A command interpreter is the part of a computer
operating system that understands and executes
commands that are entered interactively by a human
being or from a program. In some operating systems,
the command interpreter is called the shell.



Component of OS
• System call :
In computing, a system call is the programmatic way in which a computer
program requests a service from the kernel of the operating system it is
executed on. A system call is a way for programs to interact with the operating
system. A computer program makes a system call when it makes a request to
the operating system’s kernel. System call provides the services of the
operating system to the user programs via Application Program Interface(API). It
provides an interface between a process and operating system to allow user-
level processes to request services of the operating system. System calls are
the only entry points into the kernel system. All programs needing resources
must use system calls
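As an illustrative sketch, Python's os module is one such user-level wrapper over kernel system calls (the temporary file path below is made up for the example):

```python
import os
import tempfile

# Python's os module exposes thin wrappers over kernel system calls.
# The temporary file path here is only for the demonstration.
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open(2)
written = os.write(fd, b"hello")                            # write(2)
os.close(fd)                                                # close(2)

pid = os.getpid()                                           # getpid(2)
os.unlink(path)                                             # unlink(2)
```

Each call above crosses from user mode into the kernel and back, which is exactly the entry point the slide describes.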
System call
Services Provided by System Calls :
• Process creation and management
• Main memory management
• File Access, Directory and File system management
• Device handling(I/O)
• Protection
• Networking, etc.
Types of System Calls : There are 5 different categories of system calls –
Process control: end, abort, create, terminate, allocate and free memory.
File management: create, open, close, delete, read file etc.
Device management: request and release devices, read, write, reposition.
Information maintenance: get/set time or date, get/set process attributes.
Communication: create/delete communication connections, send/receive messages.
System call
Examples of UNIX and Windows system calls:

Category                 UNIX                              WINDOWS
Process Control          fork(), exit(), wait()            CreateProcess(), ExitProcess(), WaitForSingleObject()
File Manipulation        open(), read(), write(), close()  CreateFile(), ReadFile(), WriteFile(), CloseHandle()
Device Manipulation      ioctl(), read(), write()          SetConsoleMode(), ReadConsole(), WriteConsole()
Information Maintenance  getpid(), alarm(), sleep()        GetCurrentProcessID(), SetTimer(), Sleep()
Communication            pipe(), shmget(), mmap()          CreatePipe(), CreateFileMapping(), MapViewOfFile()
Protection               chmod(), umask(), chown()         SetFileSecurity(), InitializeSecurityDescriptor(), SetSecurityDescriptorGroup()
Operating system structure
When DOS was originally written its developers had no idea how
big and important it would eventually become. It was written by a
few programmers in a relatively short amount of time, without the
benefit of modern software engineering techniques, and then
gradually grew over time to exceed its original expectations. It does
not break the system into subsystems, and has no distinction
between user and kernel modes, allowing all programs direct access
to the underlying hardware. (Note that user versus kernel mode
was not supported by the 8088 chip set anyway, so that really
wasn't an option back then.)
Monolithic structure
It is the oldest architecture used for developing operating systems.
The operating system resides in the kernel for anyone to execute. A system
call is involved, i.e., switching from user mode to kernel mode and
transferring control to the operating system, shown as event 1. Many CPUs
have two modes: kernel mode, for the operating system, in which all
instructions are allowed; and user mode, for user programs, in which I/O
and certain other instructions are not allowed. The operating system then
examines the parameters of the call to determine which system call is to
be carried out, shown as event 2. Next, the operating system indexes into
a table that contains the procedures that carry out system calls, shown
as event 3. Finally, when the work has been completed and the system call
is finished, control is given back to user mode, shown as event 4.
Layered system structure
Layer 0 – deals with hardware

Layer 1 – deals with allocation of CPU to processes

Layer 2 – implements memory management, i.e. paging, segmentation, etc.

Layer 3 – deals with device drivers; device drivers provide
device-dependent interaction with devices

Layer 4 – deals with input/output buffering

Layer 5 – deals with the user interface
Virtual Machine structure
The system, originally called CP/CMS and later renamed VM/370, was based
on an astute observation: a time-sharing system provides multiprogramming
and an extended machine with a more convenient interface than the bare
hardware.

The heart of the system, known as the virtual machine monitor, runs on
the bare hardware and does the multiprogramming, providing several
virtual machines to the next layer up, as shown in the given figure.

These virtual machines are not extended machines with files and other
nice features. They are exact copies of the bare hardware, including
kernel/user mode, input/output, interrupts, and everything else the
real machine has.
Microkernel structure
This structures the operating system by removing all nonessential portions
of the kernel and implementing them as system- and user-level programs.
• Generally they provide minimal process and memory management, and a
communications facility.
• Communication between components of the OS is provided by message
passing.
The benefits of the microkernel are as follows:
• Extending the operating system becomes much easier.
• Any changes to the kernel tend to be fewer, since the kernel is smaller.
• The microkernel also provides more security and reliability.
The main disadvantage is poor performance due to increased system overhead
from message passing.
Client server structure
In the client-server model, as shown in the figure given below, all the
kernel does is handle the communication between the clients and the
servers.

By splitting the operating system up into parts, each of which only
handles one facet of the system, such as file service, terminal service,
process service, or memory service, each part becomes small and
manageable.

The adaptability of the client-server model for use in distributed
systems is the advantage of this model.


PROCESS MANAGEMENT CONCEPTS
Process
• A process is a program in execution. A process is not the same as its
program code, but a lot more than that. A process is an 'active' entity,
as opposed to a program, which is considered a 'passive' entity.
Attributes held by a process include hardware state, memory, CPU, etc.

• Process memory is divided into four sections for efficient working :


• The text section is made up of the compiled program code, read in
from non-volatile storage when the program is launched.
• The data section is made up of the global and static variables, allocated
and initialized prior to executing main.
• The heap is used for the dynamic memory allocation, and is
managed via calls to new, delete, malloc, free, etc.
• The stack is used for local variables. Space on the stack is reserved
for local variables when they are declared.
Process state
New - The process is being created.

Ready - The process is waiting to be assigned to a processor.

Running - Instructions are being executed.

Waiting - The process is waiting for some event to occur (such
as an I/O completion or reception of a signal).

Terminated - The process has finished execution.
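The allowed moves between these states can be sketched as a small transition table (a sketch only; the state names follow the list above):

```python
# Legal transitions in the five-state process model described above.
TRANSITIONS = {
    "new": {"ready"},                               # admitted by the OS
    "ready": {"running"},                           # dispatched to the CPU
    "running": {"ready", "waiting", "terminated"},  # preempted / blocked / exit
    "waiting": {"ready"},                           # awaited event occurs
    "terminated": set(),                            # no further transitions
}

def can_move(src, dst):
    """Return True if the five-state model allows the transition src -> dst."""
    return dst in TRANSITIONS[src]
```

Note that a waiting process cannot be dispatched directly: when its I/O completes it first moves back to the ready queue.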



Process Control Block
Process State - Running, waiting, etc., as discussed above.

Process ID, and parent process ID.

CPU registers and Program Counter - These need to be saved and
restored when swapping processes in and out of the CPU.

CPU-Scheduling information - Such as priority information and
pointers to scheduling queues.

Memory-Management information - E.g. page tables or segment
tables.

Accounting information - User and kernel CPU time consumed,
account numbers, limits, etc.
I/O Status information - Devices allocated, open file tables, etc.
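These fields can be sketched as a data structure (an illustrative sketch only; the field names are ours, not taken from any particular kernel):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A sketch of the fields a Process Control Block might carry."""
    pid: int
    parent_pid: int
    state: str = "new"        # new / ready / running / waiting / terminated
    program_counter: int = 0  # saved and restored on a context switch
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42, parent_pid=1)
pcb.state = "ready"           # process admitted to the ready queue
```

On a context switch, the kernel saves the running process's registers and program counter into its PCB and loads those of the next process.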
Process Control Block

CPU switch from process to process using PCB


THREAD IN OS
Thread concepts
Thread is an execution unit which consists of its own program counter, a stack, and a
set of registers. Threads are also known as Lightweight processes. Threads are popular
way to improve application through parallelism. The CPU switches rapidly back and
forth among the threads giving illusion that the threads are running in parallel.
As each thread has its own independent resources for execution, multiple
tasks can be executed in parallel by increasing the number of threads.
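A small sketch of threads within one process sharing its memory, using Python's threading module (the thread names and workload are invented for the example):

```python
import threading

results = {}   # one dictionary shared by every thread in the process

def worker(name, n):
    # Threads share the process's address space, so each thread can
    # write its answer directly into the common dictionary.
    results[name] = sum(range(n))

threads = [threading.Thread(target=worker, args=(f"t{i}", 10)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for all the lightweight processes to finish
```

Unlike separate processes, these threads needed no explicit IPC mechanism to exchange data, which is what makes them "lightweight".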



Types of Thread
There are two types of threads :

User Threads

Kernel Threads

User threads are implemented above the kernel, without kernel support. These are
the threads that application programmers use in their programs.

Kernel threads are supported within the kernel of the OS itself. All modern
OSs support kernel level threads, allowing the kernel to perform multiple
simultaneous tasks and/or to service multiple kernel system calls
simultaneously.



Advantages of Thread
Responsiveness

Resource sharing, hence allowing better utilization of resources.

Economy. Creating and managing threads becomes easier.

Scalability. One thread runs on one CPU. In multithreaded
processes, threads can be distributed over a series of processors to
scale.

Context switching is smooth. Context switching refers to the
procedure followed by the CPU to change from one task to another.
PROCESS SCHEDULING


Process Scheduling
Process scheduling is the activity of the process manager that
handles the removal of the running process from the CPU and the
selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming
operating systems.


Process Scheduling
An operating system uses two types of process scheduling:
preemptive and non-preemptive.

1. Preemptive scheduling:
In a preemptive scheduling policy, a low-priority process has to suspend
its execution if a high-priority process is waiting in the same queue for
its execution.

2. Non-preemptive scheduling:
In a non-preemptive scheduling policy, processes are executed on a first
come, first served basis, which means the next process is executed only
when the currently running process finishes its execution.
Types of scheduler
1) Long-Term Scheduler

It selects the processes that are to be placed in the ready queue. The
long-term scheduler basically decides the priority in which processes
must be placed in main memory. These processes are placed in the ready
state because, in this state, a process is ready to execute and waits for
a call to execution from the CPU, which takes time; that is why this is
known as the long-term scheduler.


Types of scheduler
2) Mid – Term Scheduler
It places the blocked and suspended processes in the secondary memory of
a computer system. The task of moving a process from main memory to
secondary memory is called swapping out. The task of moving a swapped-out
process back from secondary memory to main memory is known as swapping
in. The swapping of processes is performed to ensure the best utilization
of main memory.

3) Short Term Scheduler

It decides the priority in which processes in the ready queue are
allocated central processing unit (CPU) time for their execution. The
short-term scheduler is also referred to as the central processing unit
(CPU) scheduler.
Compare types of scheduler
1. The long-term scheduler is a job scheduler; the short-term scheduler is
a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler's speed is lesser than the short-term
scheduler's; the short-term scheduler is the fastest of the three; the
medium-term scheduler's speed is in between the two.
3. The long-term scheduler controls the degree of multiprogramming; the
short-term scheduler provides lesser control over the degree of
multiprogramming; the medium-term scheduler reduces the degree of
multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing
systems; the short-term scheduler is also minimal in time-sharing systems;
the medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them
into memory for execution; the short-term scheduler selects those
processes which are ready to execute; the medium-term scheduler can
re-introduce a process into memory so that its execution can be continued.
Scheduling criteria
CPU utilization : To make the best use of the CPU and not waste any
CPU cycle, the CPU should be working most of the time (ideally 100% of the
time). In a real system, CPU usage should range from 40%
(lightly loaded) to 90% (heavily loaded).

Throughput : The total number of processes completed per unit time,
or rather the total amount of work done in a unit of time. This may range
from 10/second to 1/hour depending on the specific processes.

Turnaround time : The amount of time taken to execute a particular
process, i.e. the interval from the time of submission of the process to
the time of completion of the process (wall-clock time).
Scheduling criteria
Waiting time : The sum of the periods a process spends waiting in the
ready queue to acquire control of the CPU.

Load average : The average number of processes residing in the ready
queue, waiting for their turn to get onto the CPU.

Response time : The amount of time from when a request is submitted
until the first response is produced. Remember, it is the time until
the first response, not the completion of process execution (the final
response).
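These definitions can be computed directly from per-process times (a sketch; the sample arrival, burst, and completion values are invented):

```python
def metrics(arrival, burst, completion):
    """Turnaround = completion - arrival; waiting = turnaround - burst."""
    turnaround = [c - a for c, a in zip(completion, arrival)]
    waiting = [t - b for t, b in zip(turnaround, burst)]
    return turnaround, waiting

# Three invented processes: arrival times, CPU bursts, completion times.
tat, wt = metrics([0, 1, 2], [5, 3, 1], [5, 8, 9])
```

These two formulas are the ones every scheduling-algorithm example that follows relies on.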


SCHEDULING ALGORITHMS
Scheduling Algorithms
First-Come, First-Served (FCFS) Scheduling

Shortest-Job-First (SJF) Scheduling

Priority Scheduling

Round Robin(RR) Scheduling

SJF Preemptive Scheduling



FCFS Algorithms
 Jobs are executed on first come, first serve basis.

 It is a non-preemptive scheduling algorithm.

 Easy to understand and implement.

 Its implementation is based on FIFO queue.

 Poor in performance as average wait time is high
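A minimal FCFS simulation, assuming all processes arrive at time 0 (the burst times are a common textbook example, not taken from these slides):

```python
def fcfs(bursts):
    """Run jobs in arrival order; return per-process (waiting, turnaround) times."""
    waiting, turnaround = [], []
    clock = 0
    for burst in bursts:
        waiting.append(clock)      # time spent queued before getting the CPU
        clock += burst             # non-preemptive: run to completion
        turnaround.append(clock)   # completion time (arrival assumed at 0)
    return waiting, turnaround

wait, tat = fcfs([24, 3, 3])
avg_wait = sum(wait) / len(wait)   # (0 + 24 + 27) / 3 = 17.0
```

With the long job first, the average wait is 17 time units, illustrating the poor average waiting time noted above.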

SJF(SJN) Algorithms
 This is also known as shortest job first, or SJF.

 This is a non-preemptive scheduling algorithm (a preemptive
variant, shortest remaining time first, also exists).

 Best approach to minimize waiting time.

 Easy to implement in batch systems where the required CPU time
is known in advance.

 Impossible to implement in interactive systems where the required
CPU time is not known.

 The processor should know in advance how much time the process
will take.
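Under that assumption (all burst times known in advance, all jobs arriving at time 0), non-preemptive SJF can be sketched as:

```python
def sjf(bursts):
    """Non-preemptive SJF: run the shortest known burst first.
    Returns the waiting time of each process, indexed by arrival order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waiting = [0] * len(bursts)
    clock = 0
    for i in order:
        waiting[i] = clock   # everything run so far was shorter
        clock += bursts[i]
    return waiting

w = sjf([6, 8, 7, 3])        # the 3-unit job goes first
avg = sum(w) / len(w)        # (3 + 16 + 9 + 0) / 4 = 7.0
```

Running shortest jobs first is what minimizes the average waiting time, as claimed above.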
RR Algorithms
 Round Robin is the preemptive process scheduling algorithm.

 Each process is provided a fixed time to execute, called a quantum.

 Once a process has executed for a given time period, it is preempted
and another process executes for a given time period.

 Context switching is used to save states of preempted processes.
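The mechanism can be sketched as a queue-based simulation (all processes assumed to arrive at time 0; the bursts and quantum are example values of our own):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the completion time of each process under Round Robin."""
    ready = deque(range(len(bursts)))      # FIFO ready queue of process indices
    remaining = list(bursts)
    completion = [0] * len(bursts)
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])   # run for at most one quantum
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                # preempted: back of the queue
        else:
            completion[i] = clock
    return completion

done = round_robin([24, 3, 3], quantum=4)
```

The short jobs finish early (times 7 and 10) instead of waiting behind the 24-unit job, which is Round Robin's main advantage over FCFS.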

(The worked Round Robin examples in the figures use a time quantum of 5.)


SJF Preemption Algorithms

(Worked examples of preemptive SJF are shown as figures in the slides.)
Priority Algorithms
 Priority scheduling is a non-preemptive algorithm and one of the most
common scheduling algorithms in batch systems.

 Each process is assigned a priority. The process with the highest
priority is to be executed first, and so on.

 Processes with same priority are executed on first come first served
basis.

 Priority can be decided based on memory requirements, time
requirements or any other resource requirement.
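The selection rule can be sketched as follows (we use the convention that a lower number means higher priority; the jobs and their priorities are invented):

```python
def priority_order(jobs):
    """jobs: list of (name, priority); lower number means higher priority.
    Equal priorities fall back to first come, first served (list order)."""
    ranked = sorted(range(len(jobs)), key=lambda i: (jobs[i][1], i))
    return [jobs[i][0] for i in ranked]

# P2 and P4 share the highest priority (1), so they run in arrival order.
order = priority_order([("P1", 3), ("P2", 1), ("P3", 4), ("P4", 1)])
```

The secondary sort key (arrival index) implements the "same priority → first come, first served" rule stated above.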



Interprocess communication
In computer science, inter-process communication or interprocess
communication (IPC) allows communicating processes to exchange
data and information.

There are two methods of IPC :

1. Shared memory

2. Message passing





Interprocess communication
 Shared memory :
In this model, processes interact with each other through shared
variables; they exchange information by reading and writing data in
the shared variables.

 Message passing :
In this model, instead of reading or writing shared data, processes
send and receive messages. The send and receive functions are
implemented in the OS:

SEND (B, message)
RECEIVE (A, memory address)
Prof. Gharu Anand N. 94
Critical section
A critical section is a piece of code that accesses a shared resource (data
structure or device) that must not be concurrently accessed by more than
one thread of execution.



Mutual Exclusion
Mutual exclusion (mutex) is the requirement that a shared resource not be
accessed by more than one process at the same time.



Semaphore in IPC
In computer science, a semaphore is a variable or abstract data
type used to control access to a common resource by multiple processes in
a concurrent system such as a multiprogramming operating system.

 A semaphore is simply a variable. This variable is used to solve the
critical section problem and to achieve process synchronization in a
multiprocessing environment.

The two most common kinds of semaphores are counting semaphores and
binary semaphores. A counting semaphore can take non-negative integer
values, and a binary semaphore can take only the values 0 and 1.
Types of Semaphore
Semaphores are a useful tool in the prevention of race conditions; however,
their use is by no means a guarantee that a program is free from these
problems. Semaphores which allow an arbitrary resource count are
called counting semaphores, while semaphores which are restricted to the
values 0 and 1 (or locked/unlocked, unavailable/available) are
called binary semaphores and are used to implement locks.



Primitives of Semaphore
There are two primitive operations:

1. wait() : decrements the semaphore; blocks the caller if the value is 0.

2. signal() : increments the semaphore; wakes a blocked process, if any.

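In Python's threading API these primitives appear as acquire() (wait) and signal as release(). A minimal sketch of a binary semaphore protecting a shared counter:

```python
import threading

# A binary semaphore (initial value 1) guarding a critical section.
sem = threading.Semaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(100_000):
        sem.acquire()          # wait(): decrement, block if value is 0
        counter += 1           # critical section
        sem.release()          # signal(): increment, wake a waiting thread

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 400000 -- no increment is lost
```

Without the semaphore, concurrent increments could interleave and updates would be lost; with it, the critical section runs under mutual exclusion.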


Monitor
 A monitor is a synchronization construct that allows threads to have both mutual
exclusion and the ability to wait (block) for a certain condition to become true.
 Monitors also have a mechanism for signaling other threads that their condition has
been met.
 A monitor consists of a mutex (lock) object and condition variables. A condition
variable is basically a container of threads that are waiting for a certain condition.
 Monitors provide a mechanism for threads to temporarily give up exclusive access
in order to wait for some condition to be met, before regaining exclusive access and
resuming their task.



Monitor

ASPECT              SEMAPHORE                         MONITOR
Basic               Semaphore is an integer           Monitor is an abstract
                    variable S.                       data type.
Action              The value of semaphore S          The monitor type contains
                    indicates the number of shared    shared variables and the set
                    resources available in the        of procedures that operate
                    system.                           on the shared variables.
Access              When a process accesses the       When a process wants to
                    shared resource it performs       access the shared variables
                    wait() on S, and when it          in the monitor, it must do
                    releases the resource it          so through the procedures.
                    performs signal() on S.
Condition variable  Semaphore does not have           Monitor has condition
                    condition variables.              variables.
IPC Problem
(Classical Problem of
Synchronization)
IPC Problem
1. Producer Consumer Problem
2. Reader Writer Problem
3. Dining Philosopher Problem
4. Sleeping Barber Problem



Producer-consumer problem
In computing, the producer–consumer problem (also known as
the bounded-buffer problem) is a classic example of a multi-
process synchronization problem. The problem describes two
processes, the producer and the consumer, who share a common,
fixed-size buffer used as a queue. The producer's job is to generate
data, put it into the buffer, and start again. At the same time, the
consumer is consuming the data (i.e., removing it from the buffer),
one piece at a time. The problem is to make sure that the producer
won't try to add data into the buffer if it's full and that the consumer
won't try to remove data from an empty buffer.
Producer-consumer problem
The solution for the producer is to either go to sleep or discard data
if the buffer is full. The next time the consumer removes an item
from the buffer, it notifies the producer, who starts to fill the buffer
again. In the same way, the consumer can go to sleep if it finds the
buffer to be empty. The next time the producer puts data into the
buffer, it wakes up the sleeping consumer. The solution can be
reached by means of inter-process communication, typically
using semaphores. An inadequate solution could result in
a deadlock where both processes are waiting to be awakened. The
problem can also be generalized to have multiple producers and consumers.
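The semaphore-based solution just described is commonly sketched with two counting semaphores (free slots and filled slots) plus a mutex protecting the buffer. The buffer size and item count below are arbitrary choices for illustration.

```python
import threading
from collections import deque

BUF_SIZE = 5
buffer = deque()
empty = threading.Semaphore(BUF_SIZE)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # protects the buffer itself
consumed = []

def producer(n):
    for i in range(n):
        empty.acquire()                # wait for a free slot
        with mutex:
            buffer.append(i)
        full.release()                 # announce a filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()                 # wait for an item (sleeps if empty)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # announce a free slot

p = threading.Thread(target=producer, args=(20,))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))  # True: nothing lost, nothing duplicated
```

The producer blocks on empty when the buffer is full, and the consumer blocks on full when it is empty, exactly the sleep/wake behaviour described above.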
Reader - writer problem
The readers-writers (R-W) problem is another classic problem against which
the design of synchronization and concurrency mechanisms can be tested,
alongside the producer/consumer and dining philosophers problems.
Definition
 There is a data area that is shared among a number of processes.
 Any number of readers may simultaneously read from the data area.
 Only one writer at a time may write to the data area.
 If a writer is writing to the data area, no reader may read it.
 If there is at least one reader reading the data area, no writer may write to
it.
 Readers only read and writers only write
 A process that reads and writes to a data area must be considered a writer
(consider producer or consumer)
Dining philosopher problem
The Dining Philosopher Problem states that K philosophers are seated
around a circular table with one chopstick between each pair of
philosophers. A philosopher may eat only if he can pick up the two
chopsticks adjacent to him. A chopstick may be picked up by either of its
adjacent philosophers, but not by both at once.

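One standard deadlock-free solution sketch imposes a global order on the chopsticks and makes every philosopher pick up the lower-numbered one first (resource ordering, which breaks the circular-wait condition). The philosopher count and number of rounds below are arbitrary.

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    first, second = sorted((left, right))   # always lock lower index first
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1               # eat while holding both chopsticks

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [100, 100, 100, 100, 100] -- everyone ate, no deadlock
```

If every philosopher instead grabbed the left chopstick first, all five could hold one chopstick and wait forever for the other: a circular wait, and hence deadlock.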


Sleeping Barber Problem (figure omitted)


DEADLOCK
• A deadlock is a situation in which two computer programs sharing the
same resources are effectively preventing each other from accessing those
resources, causing both programs to stop functioning. The earliest
computer operating systems ran only one program at a time, so deadlock
between programs was not possible.



DEADLOCK condition
Mutual Exclusion: One or more resources are non-sharable
(only one process can use a resource at a time).

Hold and Wait: A process is holding at least one resource

and waiting for additional resources.



DEADLOCK condition
No Preemption: A resource cannot be taken from a process
unless the process releases the resource.

Circular Wait: A set of processes are waiting for each other


in circular form.



Methods for handling
deadlock
• There are three ways to handle deadlock:
1) Deadlock prevention or avoidance: The idea is to not
let the system enter a deadlock state.

• 2) Deadlock detection and recovery: Let deadlock occur,

then use preemption to handle it once it has occurred.

• 3) Ignore the problem altogether: If deadlock is very

rare, then let it happen and reboot the system. This is the
approach that both Windows and UNIX take.
Deadlock recovery
• Preemption: We can take a resource from one process and give it to
another. This resolves the deadlock situation, but it sometimes causes
problems.

• Rollback: In situations where deadlock is a real possibility, the system

can periodically record the state of each process and, when deadlock
occurs, roll everything back to the last checkpoint and restart, allocating
resources differently so that deadlock does not occur.

• Kill one or more processes: This is the simplest way, and it works.


Deadlock prevention
We can prevent Deadlock by eliminating any of the above four condition.

Eliminate Mutual Exclusion


It is not possible to violate mutual exclusion because some resources, such as the tape drive and
printer, are inherently non-shareable.

Eliminate Hold and wait


1. Allocate all required resources to the process before the start of its execution; this eliminates the
hold-and-wait condition, but it leads to low device utilization. For example, if a process needs the
printer only at a later time and we allocate the printer before the start of its execution, the printer
remains blocked until the process has completed its execution.
2. A process must release its current set of resources before making a new request for resources. This
solution may lead to starvation.

Eliminate No Preemption
Preempt resources from process when resources required by other high priority process.

Eliminate Circular Wait


Each resource is assigned a numerical number. A process can request resources only
in increasing order of numbering.
For example, if process P1 is allocated resource R5, a later request by P1 for R4 or R3 (numbered
lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted.
Deadlock Avoidance
Banker’s Algorithm

Banker's algorithm is a deadlock avoidance algorithm. It is named so


because this algorithm is used in banking systems to determine whether a
loan can be granted or not.

Consider there are n account holders in a bank and the sum of the money in
all of their accounts is S. Every time a loan has to be granted by the bank, it
subtracts the loan amount from the total money the bank has and checks
that the remainder is at least S, because only then would the bank have
enough money even if all n account holders drew all their money at once.
Banker's algorithm works in a similar way in computers. Whenever a new
process is created, it must specify the maximum instances of each resource
type that it will ever need.
Deadlock Avoidance
Let us assume that there are n processes and m resource types. Some data structures
are used to implement the banker's algorithm. They are:

Available: It is an array of length m. It represents the number of available resources


of each type. If Available[j] = k, then there are k instances available, of resource
type Rj.

Max: It is an n x m matrix which represents the maximum number of instances of

each resource that a process can request. If Max[i][j] = k, then the process Pi can
request at most k instances of resource type Rj.
Allocation: It is an n x m matrix which represents the number of resources of each
type currently allocated to each process. If Allocation[i][j] = k, then process Pi is
currently allocated k instances of resource type Rj.
Need: It is an n x m matrix which indicates the remaining resource needs of each
process. If Need[i][j] = k, then process Pi may need k more instances of resource
type Rj to complete its task.

Need[i][j] = Max[i][j] - Allocation[i][j]
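The safety check at the heart of the Banker's algorithm can be sketched directly from these structures. The sample matrices below are the classic textbook instance, assumed here purely for illustration.

```python
# Banker's safety algorithm: the state is safe if some order exists in which
# every process can obtain its remaining Need and then release everything.
def is_safe(available, max_need, allocation):
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)          # resources currently free
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pi can run to completion; reclaim its resources
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return False, []        # no runnable process left -> unsafe state
    return True, sequence

safe, seq = is_safe(
    available=[3, 3, 2],
    max_need=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
    allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
)
print(safe, seq)  # True [1, 3, 0, 2, 4] -- a safe sequence exists
```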
Bankers Algorithms: worked example (figures omitted)


pipe
• A pipe is a connection between two processes, such that the standard
output of one process becomes the standard input of the other process.
In the UNIX operating system, pipes are useful for communication
between related processes (inter-process communication).
• A pipe is one-way communication only, i.e. one process writes to the
pipe and the other process reads from it. Opening a pipe creates an area
of main memory that is treated as a “virtual file”.
• The pipe can be used by the creating process, as well as all its child
processes, for reading and writing. One process can write to this “virtual
file” or pipe and another related process can read from it.
• If a process tries to read before something is written to the pipe, the
process is suspended until something is written.
• The pipe system call finds the first two available positions in the
process’s open file table and allocates them for the read and write ends
of the pipe.
Anand N. 121
Thank You
Gharu.anand@gmail.com

2/22/2018 Prof. Gharu Anand N.
