
Computer Organization and Software Systems

CONTACT SESSION 8

Process Management
Prof. C R Sarma
Guest Faculty, BITS Pilani, Pilani Campus
7.1. Concept of Process 3.1 (105-107)
7.2. Process State Diagram 3.1.2 (107-110)
7.7.* Process Scheduling criteria 3.2 (110-115)
7.3. Operations on Processes 3.3 (115-122)
7.4. Inter-process communications 3.4-3.5 (122-130)
7.4. Inter-process communications with examples 3.4-3.5 (130-136)
7.5. Process vs. Threads 4.1 (163-166)
7.6. Multithreading Models 4.3 (169-171)
7.8. Process Scheduling Algorithms -FCFS, SJF, Priority 6.3 (266-270)
7.8. contd.. - RR, Multilevel Queue, Multilevel Feedback Queue 6.3.3 (270-277)
Process Management

• To introduce the notion of a process—a program in execution, which forms the basis of all computation.
• To describe the various features of processes, including scheduling, creation, and termination.
• To explore interprocess communication using shared memory and message passing.



Concept of Processes
A process is a program in execution. A process includes:
- the program code, sometimes known as the text section;
- the current activity, represented by the value of the program counter and the contents of the processor’s registers;
- the process stack, which contains temporary data (such as function parameters, return addresses, and local variables);
- a data section, which contains global variables;
- possibly a heap, which is memory dynamically allocated during process run time.
A program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file). In contrast, a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.
A program becomes a process when an executable file is loaded into memory.



Concept of Processes
As a process executes, it changes state.
The state of a process is defined by the current activity of that process.
A process may be in one of the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.



Concept of Processes (105-107)

The Process Control Block (PCB)—also called a task control block—represents each process in the OS. It contains:
• Process state: new, ready, running, waiting, halted, etc.
• Process number: the process id (PID).
• Program counter: the address of the next instruction to be executed for this process.
• CPU registers: together with the program counter, these hold the saved state of the process (state-save) across interrupts and context switches.
• CPU-scheduling information: process priority, pointers to scheduling queues and other scheduling parameters.
• Memory-management information: base and limit registers, page tables, segment tables.
• Accounting information: CPU time used, time limits, account numbers, process numbers, and so on.
• I/O status information: the list of I/O devices allocated to the process, open files, and so on.
A thread is a single flow of execution within a process; it performs one sequence of operations and is scheduled much as if it were a process.
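
To make the PCB fields concrete, here is a minimal illustrative sketch in Python; the field names are chosen for this example and are not a real kernel structure (Linux, for instance, keeps this information in a C struct called task_struct).

from dataclasses import dataclass, field

@dataclass
class PCB:
    """Per-process bookkeeping kept by the OS; fields mirror the list above."""
    pid: int                                          # process number / process id
    state: str = "new"                                # new, ready, running, waiting, terminated
    program_counter: int = 0                          # address of the next instruction
    registers: dict = field(default_factory=dict)     # saved CPU register contents
    priority: int = 0                                 # CPU-scheduling information
    base: int = 0                                     # memory-management: base register
    limit: int = 0                                    # memory-management: limit register
    cpu_time_used: float = 0.0                        # accounting information
    open_files: list = field(default_factory=list)    # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"    # the OS updates the PCB as the process moves between states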
Process & Multiprogramming
In a multiprogramming environment, many jobs can be in memory at the same time.

Jobs in memory are ready to execute.

At any point in time, one job is running and the others are in a not-running state.

Two State Process Model

A process may be in either of two states:



– Running
– Not running



Three State Model

Running: the process that is currently being executed.
Ready: a process that is prepared to execute when given the opportunity.
Blocked: a process that cannot execute until some event occurs.

The occurrence of the event is usually indicated by an interrupt signal.
Queuing Diagram



Five State Model

Why a “New” state?

When a job is submitted, the OS creates a data structure for keeping track of the process context and then tries to load the process.
While loading the process:

– the system may not have enough memory to hold the process;

– if we attempt to load jobs as they are submitted, there may not be enough resources to execute processes in an efficient manner;

– to execute processes efficiently, the system may put a maximum limit on the number of processes in the ready queue.
Valid State Transitions
• New to Ready
• Ready to Running
• Running to Exit
• Running to Ready
• Running to Blocked
• Blocked to Ready
• Ready to Exit
• Blocked to Exit

Swapping (an I/O operation, used to enhance performance): when the ready processes get blocked on I/O one by one, the system tries to bring in a process from New to Ready and may find that no memory is available to accommodate it. Swapping to disk (offline) is then performed. The operating system can suspend a process by putting it in the Suspended state and transferring it to disk; the space freed in memory can then be used to bring in another process. This is needed when all the ready processes get blocked.
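
As a teaching aid, the transitions listed above can be captured in a small Python sketch (the state names follow the slide; move() is a hypothetical helper, not an OS API):

# Valid transitions of the five-state model, keyed by current state.
VALID_TRANSITIONS = {
    "new":     {"ready"},
    "ready":   {"running", "exit"},
    "running": {"ready", "blocked", "exit"},
    "blocked": {"ready", "exit"},
    "exit":    set(),
}

def move(state, target):
    """Return the new state, or raise if the transition is not in the model."""
    if target not in VALID_TRANSITIONS[state]:
        raise ValueError(f"invalid transition {state} -> {target}")
    return target

# Example: a process is admitted, dispatched, blocks on I/O, and becomes ready again.
s = "new"
for nxt in ("ready", "running", "blocked", "ready"):
    s = move(s, nxt)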

One & Two Suspend States

Reasons for process suspension:
• Swapping: the operating system needs to release sufficient main memory to bring in a process that is ready to execute.
• Other OS reasons: the operating system may suspend a background or utility process, or a process that is suspected of causing a problem.

Blocked → Blocked/Suspended:

– if the ready queue is empty and insufficient memory is available, one of the blocked processes can be swapped out;

– if the currently running process requires more memory;

– if the OS determines that a ready process will require more memory.



CPU Scheduling
Maximize CPU utilization with multiprogramming. Process execution consists of CPU–I/O burst cycles: a cycle of CPU execution and I/O wait.

CPU Scheduler: selects a process from the ready queue for execution and allocates the CPU to it.

Scheduling decisions may take place when a process:
1. switches from the running to the waiting state;
2. switches from the running to the ready state;
3. switches from the waiting to the ready state;
4. terminates.
For 1 and 4 the scheduling is non-preemptive; the rest are preemptive.

Scheduling types: non-preemptive and preemptive; real-time scheduling: hard real time and soft real time.

Dispatcher: must be fast (minimum latency); it performs context switching, switches to user mode, and jumps to the proper location in the user program to restart that program.

Scheduling criteria: CPU utilisation, throughput, turnaround time, waiting time, response time.

Goals of the CPU scheduler: fairness, policy enforcement, efficiency, minimum response time, minimum turnaround time, minimum waiting time, maximum throughput, load balancing, predictability.

• CPU bound: a process spends the bulk of its time executing on the processor and does very little I/O.
• I/O bound: a process spends very little time executing on the processor and does a lot of I/O.
• The mix of processes in the system cannot be predicted.



Performance Measures

CPU utilization
– keep the CPU as busy as possible. CPU utilization can range from 0 to 100 percent; in a real system it typically varies from about 40% (lightly loaded) to 90% (heavily loaded).

Throughput
– the number of processes that complete their execution per time unit.

Turnaround time
– the amount of time to execute a particular process (the interval from the time of submission to the time of completion of the process).
CPU Scheduling
Short-term scheduler
Allows processes to use the CPU in an efficient, fast and fair manner. The OS selects one of the processes in the ready queue to be executed; the short-term scheduler (or CPU scheduler) implements this selection.

CPU Scheduling: Dispatcher
This module gives control of the CPU to the process selected by the short-term scheduler. Its functions involve:
• switching context;
• switching to user mode;
• jumping to the proper location in the user program to resume it.
Since it is invoked during every process switch, the dispatcher should be as fast as possible. The time taken by the dispatcher to stop one process and start another is known as the dispatch latency.
Types of CPU Scheduling

a) A process switches from the running to the waiting state.
b) A process switches from the running to the ready state.
c) A process switches from the waiting to the ready state.
d) A process terminates.
For (a) and (d) there is no choice in scheduling, so the scheme is non-preemptive; (b) and (c) give scope for preemption, so the scheme is preemptive.

Non-Preemptive Scheduling
Once allotted the CPU, a process keeps it until it terminates or switches to the waiting state. Used on platforms that do not have the special hardware needed for preemptive scheduling.
Preemptive Scheduling
Tasks are usually assigned priorities. It is sometimes necessary to run a task with a higher priority before another task even though that other task is currently running; the running task is interrupted and resumed after the higher-priority task has finished its execution.
Scheduling Methods
CPU Utilization
Optimal use of the CPU without wasting any CPU cycle (ideally close to 100%; around 40% is typical on a lightly loaded system).
Throughput
Total number of processes completed per unit time (for example, 10 per second).
Turnaround Time
Time taken to execute a particular process: time of completion minus time of admission.
Waiting Time
Amount of time a process has been waiting in the ready queue to get control of the CPU.
Load Average
Average number of processes residing in the ready queue waiting for their turn.
Response Time
Time from when a request was submitted until the first response is produced (not the completion of process execution).
For proper optimization, CPU utilization and throughput are maximised while the remaining measures are minimised.

Scheduling Algorithms
First Come First Serve (FCFS), Shortest Job First (SJF), Priority Scheduling, Round Robin (RR), Multilevel Queue Scheduling, Multilevel Feedback Queue Scheduling.
CPU Scheduling
Long-Term Scheduling
Decides which programs are admitted for processing; it controls the degree of multiprogramming. Admitting more processes means each process gets a smaller share of execution time.

Medium-Term Scheduling
Part of the swapping function; needed to manage the degree of multiprogramming.

Short-Term Scheduling
Known as the dispatcher; it executes most frequently and is invoked when an event occurs:
– clock interrupts
– I/O interrupts
– operating system calls
– signals
First Come First Serve (FCFS) Scheduling
The process arriving first gets executed first (First In First Out); used in batch systems.

Process  Burst Time  Waiting time for its turn
P1       25          0
P2       5           0 + 25 = 25
P3       7           0 + 25 + 5 = 30
P4       3           0 + 25 + 5 + 7 = 37

Gantt chart (graphical depiction): P1 (0–25), P2 (25–30), P3 (30–37), P4 (37–40)

Total waiting time up to P4: 0 + 25 + 30 + 37 = 92, so the average waiting time is 92/4 = 23.
Completion time: the time at which a process finishes execution (here 25, 30, 37 and 40).
Average turnaround time (time taken to complete after arrival): (25 + 30 + 37 + 40)/4 = 132/4 = 33.
Waiting time: the difference between the turnaround time and the burst time.
Here all the processes have arrival time 0.

Problems with FCFS scheduling: it is a non-preemptive algorithm, so improper ordering of processes gives a non-optimal average waiting time and leads to the convoy effect.
Convoy effect: many processes needing a resource for a short time are blocked by a process holding that resource for a long time.
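
A minimal sketch of the FCFS computation above, assuming all processes arrive at time 0 and are served in submission order (fcfs() is an illustrative helper, not a library function):

def fcfs(bursts):
    """bursts: list of (name, burst_time) in arrival order; arrival time 0 for all."""
    clock, waits, turnarounds = 0, {}, {}
    for name, bt in bursts:
        waits[name] = clock          # waits until all earlier processes finish
        clock += bt
        turnarounds[name] = clock    # with arrival time 0, turnaround = completion time
    return waits, turnarounds

waits, tats = fcfs([("P1", 25), ("P2", 5), ("P3", 7), ("P4", 3)])
print(waits)                         # {'P1': 0, 'P2': 25, 'P3': 30, 'P4': 37}
print(sum(waits.values()) / 4)       # 23.0
print(sum(tats.values()) / 4)        # 33.0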
Shortest Job First (SJF) – Non-Preemptive

The process with the shortest burst time goes first. Minimises waiting time; used in batch systems.
There are two types: 1. non-preemptive, 2. preemptive.
It requires knowing the burst time (duration) of processes ahead of time, which is generally not feasible.
It is optimal if all the jobs/processes are available at the same time, i.e. each arrival time is 0 (or the same).

Process  Burst Time  Waiting time for its turn
P4       3           0
P2       5           0 + 3 = 3
P3       7           0 + 3 + 5 = 8
P1       25          0 + 3 + 5 + 7 = 15

Gantt chart: P4 (0–3), P2 (3–8), P3 (8–15), P1 (15–40)

Average waiting time = (0 + 3 + 8 + 15)/4 = 26/4 = 6.5, which is better than FCFS (23).

Problems with non-preemptive SJF: arrival times differ between processes, so not all of them are in the ready queue at time 0; some arrive later, which can force a process with a short burst time to wait for the current process's execution to finish.
This leads to the problem of starvation: a long process may wait indefinitely if shorter jobs keep coming. Starvation can be addressed using the concept of aging.
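
A sketch of non-preemptive SJF for the same workload, again assuming every process is available at time 0 and burst times are known in advance (sjf_nonpreemptive() is illustrative only):

def sjf_nonpreemptive(bursts):
    """bursts: list of (name, burst_time); returns the run order and waiting times."""
    order = sorted(bursts, key=lambda p: p[1])   # shortest burst first
    clock, waits = 0, {}
    for name, bt in order:
        waits[name] = clock
        clock += bt
    return order, waits

order, waits = sjf_nonpreemptive([("P1", 25), ("P2", 5), ("P3", 7), ("P4", 3)])
print([name for name, _ in order])       # ['P4', 'P2', 'P3', 'P1']
print(waits)                             # {'P4': 0, 'P2': 3, 'P3': 8, 'P1': 15}
print(sum(waits.values()) / 4)           # 6.5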
Shortest Job First (SJF) – Preemptive

Processes are put into the ready queue as they arrive. The running process is preempted when a process with a shorter remaining burst time arrives.

Process  Burst Time  Arrival Time  Waiting Time  Turnaround Time
P1       25          0             15            40    (P1 waits from 1 to 16)
P2       5           1             3             8     (P2 waits from 3 to 6)
P3       7           2             7             14    (P3 waits from 2 to 9)
P4       3           3             0             3     (P4 does not wait)

Average waiting time = (15 + 3 + 7 + 0)/4 = 6.25
Average turnaround time = (40 + 8 + 14 + 3)/4 = 16.25

Gantt chart: P1 (1 unit, 0–1), P2 (2 units, 1–3), P4 (3 units, 3–6), P2 (3 units, 6–9), P3 (7 units, 9–16), P1 (24 units, 16–40)
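
A sketch of this preemptive variant (shortest remaining time first) for the table above, simulated one time unit at a time. Ties in remaining time are broken here by arrival order, so the exact interleaving can differ from the Gantt chart while the average waiting and turnaround times come out the same (srtf() is an illustrative helper):

def srtf(procs):
    """procs: dict name -> (arrival, burst); returns completion times."""
    remaining = {n: b for n, (a, b) in procs.items()}
    completion, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:                       # no process has arrived yet: CPU idles
            t += 1
            continue
        cur = min(ready, key=lambda n: (remaining[n], procs[n][0]))
        remaining[cur] -= 1                 # run the chosen process for one time unit
        t += 1
        if remaining[cur] == 0:
            completion[cur] = t
            del remaining[cur]
    return completion

procs = {"P1": (0, 25), "P2": (1, 5), "P3": (2, 7), "P4": (3, 3)}
ct = srtf(procs)                                           # P1: 40, P2: 6, P3: 16, P4: 9
wt = {n: ct[n] - procs[n][0] - procs[n][1] for n in procs}
print(sum(wt.values()) / 4)                                # 6.25, matching the average above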

Example: Shortest Job First (SJF) – Preemptive
(From the Gantt chart, figure out BT and fill in the table. At each unit of time the scheduler checks which job is the shortest.)

Process  AT  BT  CT  TAT  WT  RT        TAT = CT − AT
P1       0   ?   9   9    4   0         WT = TAT − BT
P2       1   ?   4   3    0   0         RT = (time CPU first allotted) − AT
P3       2   ?   13  11   7   7         RT = WT in the non-preemptive case
P4       4   ?   5   1    0   0

Gantt chart:
P1  P2  P2  P2  P4  P1  P1  P1  P1  P3
0   1   2   3   4   5   6   7   8   9   13



Priority Scheduling
Each process is assigned its own priority; the highest-priority process executes first. Processes with the same priority execute in FCFS order. Priority can be based on memory, time or other resource requirements.

Process  Burst Time  Priority
P1       25          2
P2       5           1
P3       7           4
P4       3           3

Gantt chart: P2 (5 units, 0–5), P1 (25 units, 5–30), P4 (3 units, 30–33), P3 (7 units, 33–40)

Waiting times: P2 = 0, P1 = 5, P4 = 30, P3 = 33.
Average waiting time = (0 + 5 + 30 + 33)/4 = 68/4 = 17
Average turnaround time = (5 + 30 + 33 + 40)/4 = 108/4 = 27
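
A sketch of non-preemptive priority scheduling for this table, assuming all arrivals at time 0 and that a smaller priority number means higher priority (priority_schedule() is illustrative only):

def priority_schedule(procs):
    """procs: list of (name, burst_time, priority); lower priority value runs first."""
    order = sorted(procs, key=lambda p: p[2])
    clock, waits = 0, {}
    for name, bt, _ in order:
        waits[name] = clock
        clock += bt
    return order, waits

order, waits = priority_schedule([("P1", 25, 2), ("P2", 5, 1), ("P3", 7, 4), ("P4", 3, 3)])
print([name for name, _, _ in order])    # ['P2', 'P1', 'P4', 'P3']
print(waits)                             # {'P2': 0, 'P1': 5, 'P4': 30, 'P3': 33}
print(sum(waits.values()) / 4)           # 17.0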
Round Robin Scheduling
A fixed time, called the quantum, is allotted to each process for execution. Once a process has executed for the given time it is preempted and the next process executes for its time. Context switching is used to save the states of preempted processes.

Process  Burst Time  Round-Robin order  Allotted Time
P1       25          1                  6
P2       5           2                  4
P3       7           3                  5
P4       3           4                  3

Gantt chart: P1 (6 units), P2 (4), P3 (5), P4 (3), P1 (6), … over time slots 0–5, 6–9, 10–14, 15–17, 18–23, 24–…

P1 waits ….., P2 waits ….., P3 waits ….., P4 waits …..
Average waiting time = ( /4) =
Average turnaround time = ( )/4 =

Under the RR scheduling algorithm, no process is allocated the CPU for more than one time quantum in a row.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in
chunks of at most q time units.
Each process must wait no longer than (n − 1) × q time units until its next time quantum. For example, with five processes
and a time quantum of 20 milliseconds, each process will get up to 20 milliseconds every 100 milliseconds.
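
A sketch of textbook round robin with a single fixed quantum q for every process (unlike the per-visit allotments used in the table above); all processes are assumed to arrive at time 0, and round_robin() is an illustrative helper:

from collections import deque

def round_robin(bursts, q):
    """bursts: list of (name, burst_time); returns completion times."""
    remaining = dict(bursts)
    queue = deque(name for name, _ in bursts)
    t, completion = 0, {}
    while queue:
        name = queue.popleft()
        run = min(q, remaining[name])     # run for one quantum, or less if the process finishes
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = t
        else:
            queue.append(name)            # preempted: back to the tail of the ready queue
    return completion

print(round_robin([("P1", 25), ("P2", 5), ("P3", 7), ("P4", 3)], q=5))
# with q=5: {'P2': 10, 'P4': 18, 'P3': 25, 'P1': 40}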
Round Robin Scheduling (continued)
A smaller time quantum increases context switches. In practice, we need to balance short-job performance against long-job throughput:
– a typical time slice today is between 10 ms and 100 ms;
– typical context-switching overhead is 0.1 ms to 1 ms;
– this gives roughly 1% overhead due to context switching.

Response Ratio = (time spent waiting + expected service time) / expected service time

This method accounts for the age of the process, and shorter jobs are favored.
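
A tiny illustration of how this ratio (highest response ratio next) picks the next process; the waiting and service times below are invented for the example:

def response_ratio(waiting_time, expected_service_time):
    return (waiting_time + expected_service_time) / expected_service_time

ready = {"A": (4, 8), "B": (6, 2), "C": (0, 1)}    # name -> (time waited, expected service time)
best = max(ready, key=lambda n: response_ratio(*ready[n]))
print(best)   # 'B' (ratio 4.0): a short job that has already waited overtakes the others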
Round robin favors CPU-bound processes:
– an I/O-bound process uses the CPU for less than the time quantum and then blocks for I/O;
– a CPU-bound process runs for its complete time slice and is put back into the ready queue (thus getting in front of the blocked processes).
A solution: Virtual Round Robin.
– When its I/O has completed, the blocked process is moved to an auxiliary queue, which gets preference over the main ready queue.
– A process dispatched from the auxiliary queue runs no longer than the basic time quantum minus the time it spent running since it was last selected from the ready queue.
Multilevel Queue Scheduling
Created for situations in which processes are easily classified into different groups.
For example: A common division is made between foreground(or interactive) processes and
background (or batch) processes.
These two types of processes have different response-time requirements, and so might have
different scheduling needs.
In addition, foreground processes may have priority over background processes.
Partitions the ready queue into several separate queues. The processes are permanently
assigned to one queue based on memory size, process priority, or process type.
Each queue has its own scheduling algorithm.
For example: separate queues might be used for foreground and background processes. The
foreground queue might be scheduled by Round Robin algorithm, while the background queue
is scheduled by an FCFS algorithm.
In addition, there must be scheduling among the queues, which is commonly implemented as
fixed-priority preemptive scheduling. For example: The foreground queue may have absolute
priority over the background queue.

Let us consider an example of a multilevel queue-scheduling algorithm with five queues:

• System processes
• Interactive processes
• Interactive editing processes
• Batch processes
• Student processes
Multilevel Queue Scheduling
Multilevel queue-scheduling algorithm with five queues:
• System processes
• Interactive processes
• Interactive editing processes
• Batch processes
• Student processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.
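
A sketch of this fixed-priority selection between queues: the dispatcher always serves the highest-priority non-empty queue. Queue names follow the five-queue example; the per-queue algorithms (RR, FCFS) are omitted, and pick_next() is an illustrative helper:

from collections import deque

PRIORITY_ORDER = ["system", "interactive", "interactive_editing", "batch", "student"]
queues = {name: deque() for name in PRIORITY_ORDER}

def pick_next():
    """Return (queue_name, process) from the highest-priority non-empty queue."""
    for name in PRIORITY_ORDER:
        if queues[name]:
            return name, queues[name].popleft()
    return None, None                     # all queues empty: the CPU idles

queues["batch"].append("payroll_job")
queues["interactive"].append("editor")
print(pick_next())                        # ('interactive', 'editor') runs before the batch job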


Multilevel Feedback Queue Scheduling
Multilevel queue-scheduling algorithm: processes are permanently assigned to a queue on
entry to the system. Processes do not move between queues. This setup has the advantage of
low scheduling overhead, but the disadvantage of being inflexible.
Multilevel feedback queue scheduling, however, allows a process to move between queues.
The idea is to separate processes with different CPU-burst characteristics.
If a process uses too much CPU time, it will be moved to a lower-priority queue. Similarly, a
process that waits too long in a lower-priority queue may be moved to a higher-priority queue.
This form of aging prevents starvation.

This scheduler is defined by the following:


•The number of queues.
•The scheduling algorithm for each queue.
•The method used to determine when to upgrade a process to a higher-priority queue.
•The method used to determine when to demote a process to a lower-priority queue.
•The method used to determine which queue a process will enter when that process
needs service.
This makes it the most general CPU-scheduling algorithm: it requires selecting values for all of these parameters to define the best scheduler.
It is by far the most general scheme, and it is also the most complex.
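
A sketch of the mechanism with three feedback levels; the quanta, the number of levels and the helper names are illustrative choices, not fixed parameters of the algorithm:

from collections import deque

QUANTA = [4, 8, 16]                       # time quantum per level; level 0 is the highest priority
levels = [deque() for _ in QUANTA]

def pick_next():
    """Dispatch from the highest non-empty level, together with that level's quantum."""
    for lvl, q in enumerate(levels):
        if q:
            return q.popleft(), lvl, QUANTA[lvl]
    return None, None, None

def demote(proc, level):
    """The process used its whole quantum: push it one level down (if possible)."""
    levels[min(level + 1, len(levels) - 1)].append(proc)

def promote(proc, level):
    """The process waited too long at a low level: move it one level up (aging)."""
    levels[max(level - 1, 0)].append(proc)

levels[0].append("P1")
proc, lvl, quantum = pick_next()          # P1 runs at level 0 with quantum 4
demote(proc, lvl)                         # suppose it used the full quantum: it drops to level 1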
Process Management, IPC & Scheduling
Done



The following slides are not as per the HO



Fair-Share Scheduling (6.7.3)

Fairness?
• A user's application runs as a collection (set) of processes.
• The user is concerned about the performance of the application made up of that set of processes.
• So we need to make scheduling decisions based on process sets (groups): think of processes as part of a group.
• Each group is allowed to use a specified share of the processor.
Fair Share Scheduling

Values defined:
– Pj(i) = priority of process j at the beginning of the ith interval
– Uj(i) = processor use by process j during the ith interval
– GUk(i) = processor use by group k during the ith interval
– CPUj(i) = exponentially weighted average for process j from the beginning up to the start of the ith interval
– GCPUk(i) = exponentially weighted average for group k from the beginning up to the start of the ith interval
– Wk = weight assigned to group k, with 0 ≤ Wk ≤ 1 and Σk Wk = 1
Initially CPUj(1) = 0 and GCPUk(1) = 0, for i = 1, 2, 3, …
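
The recurrences themselves did not survive extraction; the standard fair-share formulas (as given in Stallings), which reproduce the example table below under integer division, are CPUj(i) = CPUj(i−1)/2 + Uj(i−1)/2, GCPUk(i) = GCPUk(i−1)/2 + GUk(i−1)/2 and Pj(i) = Basej + CPUj(i)/2 + GCPUk(i)/(4·Wk). A small Python check (fair_share_priority() is an illustrative helper):

def fair_share_priority(base, cpu_prev, u_prev, gcpu_prev, gu_prev, weight):
    """One recomputation step of the fair-share priority, with integer division."""
    cpu = (cpu_prev + u_prev) // 2
    gcpu = (gcpu_prev + gu_prev) // 2
    priority = base + cpu // 2 + int(gcpu / (4 * weight))
    return priority, cpu, gcpu

# Process A after it used all 60 clock ticks of the first interval (group weight 0.5):
print(fair_share_priority(60, 0, 60, 0, 60, 0.5))    # (90, 30, 30), matching t=1 in the table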
Fair Share Example

Three processes A, B, C; B and C are in one group; A is in a group by itself. Both groups get a 50% weighting. The process that runs in each one-second interval accumulates 60 ticks, which feed its process and group CPU counts; the process with the lowest priority value is dispatched next.

Time | A: Priority / CPU / GCPU | B: Priority / CPU / GCPU | C: Priority / CPU / GCPU | Runs next
t=0  | 60 / 0 / 0    | 60 / 0 / 0    | 60 / 0 / 0    | A
t=1  | 90 / 30 / 30  | 60 / 0 / 0    | 60 / 0 / 0    | B
t=2  | 74 / 15 / 15  | 90 / 30 / 30  | 75 / 0 / 30   | A
t=3  | 96 / 37 / 37  | 74 / 15 / 15  | 67 / 0 / 15   | C
t=4  | 78 / 18 / 18  | 81 / 7 / 37   | 93 / 30 / 37  | A
t=5  | 98 / 39 / 39  | 70 / 3 / 18   | 76 / 15 / 18  | B
t=6  | 78 / 19 / 19  | 94 / 31 / 39  | 82 / 7 / 39   | A
t=7  | 98 / 39 / 39  | 76 / 15 / 19  | 70 / 3 / 19   | C
t=8  | 78 / 19 / 19  | 82 / 7 / 39   | 94 / 31 / 39  | A
t=9  | 98 / 39 / 39  | 70 / 3 / 19   | 76 / 15 / 19  | B
t=10 | 78 / 19 / 19  | 94 / 31 / 39  | 82 / 7 / 39   | A
t=11 | 98 / 39 / 39  | 76 / 15 / 19  | 70 / 3 / 19   | C
t=12 | 78 / 19 / 19  | 82 / 7 / 39   | 94 / 31 / 39  |
Traditional UNIX Scheduling

• Multilevel queue using round robin within each of the priority queues.
• Priorities are recomputed once per second.
• The base priority divides all processes into fixed bands of priority levels.
• An adjustment factor (called nice) is used to keep a process in its assigned band.


Bands

In decreasing order of priority:
– Swapper
– Block I/O device control
– File manipulation
– Character I/O device control
– User processes

Values:
– Pj(i) = priority of process j at the start of the ith interval
– Uj(i) = processor use by process j during the ith interval

Calculations (done each second):
– CPUj(i) = Uj(i−1)/2 + CPUj(i−1)/2
– Pj(i) = Basej + CPUj(i)/2 + nicej
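
A sketch of that once-per-second recomputation, assuming a 60-tick clock second and a base priority of 60 purely for illustration (recompute() is a hypothetical helper, not a UNIX API):

def recompute(base, nice, cpu_prev, ticks_used_last_second):
    """One per-second priority recomputation; a larger value means lower priority."""
    cpu = ticks_used_last_second // 2 + cpu_prev // 2     # CPUj(i) = Uj(i-1)/2 + CPUj(i-1)/2
    priority = base + cpu // 2 + nice                     # Pj(i) = Basej + CPUj(i)/2 + nicej
    return priority, cpu

# A process that used the whole previous second (60 ticks), base 60, nice 0:
print(recompute(60, 0, 0, 60))    # (75, 30): its priority value rises, so it is de-prioritised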
Multiprocessor Scheduling Issues

Are the processors functionally identical?
– Homogeneous
– Heterogeneous

Some systems have I/O attached to the private bus of one processor.

Keep all processors equally busy:
– Load balancing



Approaches to MP Scheduling
Asymmetric multiprocessing
– All scheduling, I/O handling and system activity is handled by one processor.
– The other processors are used only for executing user code.
– Simple, as only one processor accesses the system data structures (reduced need for data sharing).
Symmetric multiprocessing
– Each processor is self-scheduling.
– There can be a single common ready queue for all processors, or alternatively an individual queue per processor.
– Processor affinity: soft affinity and hard affinity.
– NUMA & CPU scheduling.
Load Balancing (6.5.3)

In SMP, load balancing is necessary only when each processor has an independent (private) ready queue.

Push/pull migration:
– A specific task periodically checks the load of each processor; in case of imbalance, it pushes processes from an overloaded processor to an idle one.
– Pull migration occurs when an idle processor pulls a process from a busy processor.

Load balancing counteracts the benefits of processor affinity.



Questions?



Process  AT  BT  CT  TAT  WT  RT        TAT = CT − AT
P1       0   ?   9   9    4   0         WT = TAT − BT
P2       1   ?   4   3    0   0         RT = (time CPU first allotted) − AT
P3       2   ?   13  11   7   7         RT = WT in the non-preemptive case
P4       4   ?   5   1    0   0

Gantt chart:
P1  P2  P2  P2  P4  P1  P1  P1  P1  P3
0   1   2   3   4   5   6   7   8   9   13



Process  AT  BT  CT  TAT  WT  RT        TAT = CT − AT
P1                                      WT = TAT − BT
P2                                      RT = (time CPU first allotted) − AT
P3                                      Average times = (sum of times)/(number of processes)
P4

Gantt chart:
P1  P2  P2  P2  P4  P1  P1  P1  P1  P3
0   1   2   3   4   5   6   7   8   9   13

