
Amity School of Engineering & Technology

CPU Scheduling

What is CPU Scheduling?

• CPU scheduling is the activity that allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.
• Whenever the CPU becomes idle, the operating
system must select one of the processes in the ready
queue to be executed. The selection process is
carried out by the short-term scheduler (or CPU
scheduler). The scheduler selects from among the
processes in memory that are ready to execute, and
allocates the CPU to one of them.

CPU Scheduling: Dispatcher


• Another component involved in the CPU scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program from where it left off.
• The dispatcher should be as fast as possible, given that it is invoked during every process switch. The time taken by the dispatcher to stop one process and start another is known as the Dispatch Latency.

Dispatch Latency

Types of CPU Scheduling

CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (e.g., for an I/O request, or an invocation of wait for the termination of one of its child processes).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on completion of I/O).
4. When a process terminates.

• In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, in circumstances 2 and 3.
• When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is non-preemptive; otherwise, the scheduling scheme is preemptive.

Non-Preemptive Scheduling

• Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
• This scheduling method was used by Microsoft Windows 3.1 and by the Apple Macintosh operating system.
• It is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) needed for preemptive scheduling.

Preemptive Scheduling
• In this type of scheduling, tasks are usually assigned priorities. At times it is necessary to run a certain task with a higher priority before another task, even though that task is currently running. The running task is therefore interrupted for some time and resumed later, when the higher-priority task has finished its execution.

CPU Scheduling: Scheduling Criteria


There are many different criteria to consider when evaluating the "best" scheduling algorithm:
CPU Utilization
• To make the best use of the CPU and not waste any CPU cycles, the CPU should be working most of the time (ideally 100% of the time). In a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
Throughput
• The total number of processes completed per unit time, or, put differently, the total amount of work done in a unit of time. This may range from 10/second to 1/hour depending on the specific processes.

Turnaround Time
• The amount of time taken to execute a particular process, i.e., the interval from the time of submission of the process to the time of its completion (wall-clock time).
Waiting Time
• The sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.
Load Average
• The average number of processes residing in the ready queue, waiting for their turn to get the CPU.

Response Time
• The amount of time from when a request was submitted until the first response is produced. Remember, it is the time until the first response, not the completion of process execution (final response).
• In general, CPU utilization and throughput are maximized and the other factors are minimized for proper optimization.

Important CPU scheduling Terminologies

• Burst Time/Execution Time: the time required by the process to complete its execution. It is also called running time.
• Arrival Time: the time at which a process enters the ready state.
• Finish Time: the time at which a process completes and exits the system.
• Multiprogramming: the number of programs that can be present in memory at the same time.

• Jobs: a type of program that runs without any kind of user interaction.
• User: a kind of program that involves user interaction.
• Process: the term used to refer to both jobs and user programs.
• CPU/I/O burst cycle: characterizes process execution, which alternates between CPU activity and I/O activity. CPU bursts are usually much shorter than I/O bursts.

A CPU scheduling algorithm tries to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and response time.

Types of CPU scheduling Algorithm

There are mainly six types of process scheduling algorithms:
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling

First Come First Serve Scheduling

In the "First come first serve" scheduling algorithm, as the name suggests, the process that arrives first gets executed first; in other words, the process that requests the CPU first gets the CPU allocated first.

• First Come First Serve works just like a FIFO (First In First Out) queue data structure: the data element added to the queue first is the one that leaves the queue first.
• It is used in batch systems.

• It is easy to understand and implement programmatically, using a queue data structure: a new process enters through the tail of the queue, and the scheduler selects a process from the head of the queue.
• A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.

Calculating Average Waiting Time

• For every scheduling algorithm, average waiting time is a crucial parameter to judge its performance.
• AWT, or average waiting time, is the average of the waiting times of the processes in the queue, waiting for the scheduler to pick them for execution.
• The lower the average waiting time, the better the scheduling algorithm.
Example: Consider the processes P1, P2, P3, P4 given in the table below, arriving for execution in the same order, all with arrival time 0 and the given burst times. Let's find the average waiting time using the FCFS scheduling algorithm.

Process  Burst Time
P1       21
P2       3
P3       6
P4       2

For the above processes, P1 will be provided with the CPU resources first.

• Hence, the waiting time for P1 will be 0.
• P1 requires 21 ms for completion, hence the waiting time for P2 will be 21 ms.
• Similarly, the waiting time for P3 will be the execution time of P1 + the execution time of P2, which is (21 + 3) ms = 24 ms.
• For P4 it will be the sum of the execution times of P1, P2 and P3, i.e. (21 + 3 + 6) ms = 30 ms.

The average waiting time is therefore (0 + 21 + 24 + 30)/4 = 18.75 ms. The GANTT chart on the previous slide represents the waiting time of each process.
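The walkthrough above can be sketched in a few lines of code. A minimal Python sketch, assuming the burst times used in this example (21, 3, 6 and 2 ms, all processes arriving at time 0):

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process under FCFS when all arrive at time 0."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)    # a process waits for all bursts queued before it
        elapsed += burst
    return waits

bursts = [21, 3, 6, 2]           # P1..P4 burst times from the example
waits = fcfs_waiting_times(bursts)
print(waits)                     # [0, 21, 24, 30]
print(sum(waits) / len(waits))   # 18.75
```

Because FCFS never reorders, the single long burst at the head (P1 here) inflates every later process's wait.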

How FCFS Works? Calculating Average Waiting Time


Ex. Here is an example of five processes arriving at different times. Each
process has a different burst time.

Ex

Wait time of each process is as follows:

Process   Wait Time = Service Time - Arrival Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75

Problems with FCFS Scheduling


Below are a few shortcomings or problems with the FCFS scheduling algorithm:
1. It is a non-preemptive algorithm, which means process priority doesn't matter. If a process with very low priority is being executed (such as a daily routine backup process, which takes a long time) and some high-priority process suddenly arrives (such as an interrupt needed to avoid a system crash), the high-priority process will have to wait, and in that case the system may crash, just because of improper process scheduling.
2. The average waiting time is not optimal.
3. Resources cannot be utilized in parallel, which leads to the Convoy Effect and hence poor resource (CPU, I/O, etc.) utilization.

What is Convoy Effect?


• The Convoy Effect is a situation where many processes that need a resource for only a short time are blocked by one process holding that resource for a long time.
• This essentially leads to poor utilization of resources and hence poor performance.

Here are simple formulas for calculating the various times for given processes:
• Completion Time (CT): the time at which the process completes its execution.
• Turn Around Time (TAT): the time taken to complete after arrival. In simple words, it is the difference between the completion time and the arrival time (TAT = CT - AT).
• Waiting Time (WT): the total time the process has to wait before its execution begins. It is the difference between the turnaround time and the burst time of the process (WT = TAT - BT).
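These formulas translate directly into code. A minimal Python sketch; the sample values are P2 from the earlier four-process FCFS table (AT = 2, CT = 16), assuming its burst is 8 ms, which is consistent with the service times in that table:

```python
def turnaround_and_waiting(arrival, burst, completion):
    """TAT = CT - AT;  WT = TAT - BT."""
    tat = completion - arrival
    wt = tat - burst
    return tat, wt

# P2 from the earlier FCFS table: arrives at 2, starts service at 8, completes at 16
tat, wt = turnaround_and_waiting(arrival=2, burst=8, completion=16)
print(tat, wt)  # 14 6  (WT 6 matches the 8 - 2 = 6 computed in that table)
```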

Shortest Job First(SJF) Scheduling


• Shortest Job First scheduling picks the process with the shortest burst time or duration first.
• This is the best approach to minimize waiting time.
• It is used in batch systems.
• It is of two types:
  • Non-preemptive
  • Pre-emptive

• To implement it successfully, the burst time/duration of the processes should be known to the processor in advance, which is practically not feasible all the time.
• This scheduling algorithm is optimal if all the jobs/processes are available at the same time (either the arrival time is 0 for all, or the arrival time is the same for all).

Non Pre-emptive Shortest Job First


Consider the processes below, available in the ready queue for execution, all with arrival time 0 and the given burst times (the same set used in the FCFS example):

Process  Burst Time
P1       21
P2       3
P3       6
P4       2

• As you can see in the GANTT chart above, process P4 is picked up first, as it has the shortest burst time, then P2, followed by P3 and at last P1.
• We scheduled the same set of processes using the First Come First Serve algorithm and got an average waiting time of 18.75 ms, whereas with SJF the average waiting time comes out to 4.5 ms.
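With every process available at time 0, non-preemptive SJF is simply FCFS applied to the bursts sorted in ascending order. A minimal Python sketch, assuming the burst times 21, 3, 6 and 2 ms from this example:

```python
def sjf_average_wait(burst_times):
    """Average waiting time under non-preemptive SJF, all arrivals at t = 0."""
    waits, elapsed = [], 0
    for burst in sorted(burst_times):   # shortest burst scheduled first
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

print(sjf_average_wait([21, 3, 6, 2]))  # 4.5 -- versus 18.75 under FCFS
```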

Problem with Non Pre-emptive SJF


• If the arrival times of the processes are different, meaning all the processes are not available in the ready queue at time 0 and some jobs arrive after some time, then a process with a short burst time may have to wait for the current process's execution to finish. In non-preemptive SJF, on the arrival of a process with a short duration, the existing job/process's execution is not halted/stopped to execute the short job first.

• This leads to the problem of starvation, where a shorter process has to wait a long time until the current longer process finishes. This happens if shorter jobs keep coming, but it can be solved using the concept of aging.

Pre-emptive Shortest Job First


• In preemptive Shortest Job First scheduling, jobs are put into the ready queue as they arrive, but when a process with a shorter burst time arrives, the existing process is preempted (removed from execution) and the shorter job is executed first.

As you can see in the GANTT chart above, P1 arrives first, so its execution starts immediately. But just after 1 ms, process P2 arrives with a burst time of 3 ms, which is less than P1's remaining time, hence P1 (1 ms done, 20 ms left) is preempted and P2 is executed.

As P2 is executing, after 1 ms P3 arrives, but it has a burst time greater than P2's remaining time, hence the execution of P2 continues. After another millisecond, P4 arrives with a burst time of 2 ms; by then, however, P2 (2 ms done, 1 ms left) has the shortest remaining time, so it continues and completes.

After the completion of P2, process P4 is picked up and finishes, then P3 is executed, and at last P1.

Pre-emptive SJF is also known as Shortest Remaining Time First, because at any given point of time the job with the shortest remaining time is executed first.

Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2 and 6, respectively. How many context switches are needed if the operating system implements a shortest remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end.
A. 1
B. 2
C. 3
D. 4

Ans: option (B)
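The question above can be checked by simulating SRTF one time unit at a time and counting how often the running process changes. A minimal Python sketch (excluding the dispatch at time zero, as the question requires):

```python
def srtf_context_switches(jobs):
    """jobs: list of (arrival, burst) tuples.
    Counts process changes under SRTF, excluding the dispatch at time 0."""
    remaining = [burst for _, burst in jobs]
    time, switches, running = 0, 0, None
    while any(r > 0 for r in remaining):
        ready = [i for i, (arr, _) in enumerate(jobs)
                 if arr <= time and remaining[i] > 0]
        if not ready:
            time += 1                      # CPU idle, nothing has arrived yet
            continue
        shortest = min(ready, key=lambda i: remaining[i])
        if running is not None and shortest != running:
            switches += 1                  # a real context switch
        running = shortest
        remaining[shortest] -= 1
        time += 1
    return switches

# processes needing 10, 20 and 30 units, arriving at 0, 2 and 6
print(srtf_context_switches([(0, 10), (2, 20), (6, 30)]))  # 2
```

The first process's remaining time always beats the newcomers, so the only switches are from the first process to the second at t = 10 and from the second to the third at t = 30.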

Which of the following process scheduling algorithms may lead to starvation?

(A) FIFO
(B) Round Robin
(C) Shortest Job Next
(D) None of the above

Ans: (C) Shortest Job Next

Priority Scheduling Algorithm: Preemptive, Non-Preemptive

Priority Scheduling is a method of scheduling processes based on priority. In this algorithm, the scheduler selects the tasks to work on as per their priority.

Processes with higher priority are carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS basis. Priority may depend upon memory requirements, time requirements, and so on.

Preemptive Scheduling
• In preemptive scheduling, the tasks are mostly assigned priorities. Sometimes it is important to run a higher-priority task before a lower-priority task, even if the lower-priority task is still running. The lower-priority task is put on hold for some time and resumes when the higher-priority task finishes its execution.

Non-Preemptive Scheduling
• In this type of scheduling method, once the CPU has been allocated to a specific process, that process keeps the CPU until it releases it, either by switching context or by terminating. It is the only method that can be used on various hardware platforms, because it doesn't need special hardware (for example, a timer) like preemptive scheduling does.

Characteristics of Priority Scheduling


• A CPU algorithm that schedules processes based on priority.
• It is used in operating systems for performing batch processes.
• If two jobs with the same priority are READY, it works on a FIRST COME, FIRST SERVED basis.
• In priority scheduling, a number is assigned to each process to indicate its priority level.
• The lower the number, the higher the priority.
• In this type of scheduling algorithm, if a newer process arrives that has a higher priority than the currently running process, the currently running process is preempted.

Example of Priority Scheduling


Consider the following five processes P1 to P5. Each process has its own priority, burst time, and arrival time.

Process  Priority  Burst Time  Arrival Time
P1       1         4           0
P2       2         3           0
P3       1         7           6
P4       3         4           11
P5       2         2           12

Step 18) Let's calculate the average waiting time for the above example.

Waiting Time = start time - arrival time + wait time for the next burst

P1 = 0 - 0 = 0
P2 = 4 - 0 + 7 = 11
P3 = 6 - 6 = 0
P4 = 16 - 11 = 5
P5 = 14 - 12 = 2
Average Waiting Time = (0 + 11 + 0 + 5 + 2)/5 = 18/5 = 3.6
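The schedule above can be reproduced with a unit-time simulation. A Python sketch of preemptive priority scheduling for the five processes in this example (lower number = higher priority; FCFS order is assumed for ties):

```python
def preemptive_priority_waits(procs):
    """procs: name -> (priority, burst, arrival); a lower priority number wins.
    Simulates one time unit at a time; returns name -> waiting time (TAT - BT)."""
    remaining = {name: burst for name, (_, burst, _) in procs.items()}
    completion, time = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][2] <= time]
        if not ready:
            time += 1
            continue
        run = min(ready, key=lambda p: procs[p][0])  # highest-priority ready process
        remaining[run] -= 1
        time += 1
        if remaining[run] == 0:
            del remaining[run]
            completion[run] = time
    return {p: completion[p] - procs[p][2] - procs[p][1] for p in procs}

procs = {"P1": (1, 4, 0), "P2": (2, 3, 0), "P3": (1, 7, 6),
         "P4": (3, 4, 11), "P5": (2, 2, 12)}
waits = preemptive_priority_waits(procs)
print(waits)                    # {'P1': 0, 'P2': 11, 'P3': 0, 'P4': 5, 'P5': 2}
print(sum(waits.values()) / 5)  # 3.6
```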

Advantages of priority scheduling

• An easy-to-use scheduling method.
• Processes are executed on the basis of priority, so a high-priority process does not need to wait long, which saves time.
• This method provides a good mechanism by which the relative importance of each process may be precisely defined.
• Suitable for applications with fluctuating time and resource requirements.

Disadvantages of priority scheduling

• If the system eventually crashes, all low-priority processes get lost.
• If high-priority processes take lots of CPU time, the lower-priority processes may starve and be postponed for an indefinite time.
• This scheduling algorithm may leave some low-priority processes waiting indefinitely.
• A process will be blocked when it is ready to run but has to wait for the CPU because some other process is running currently.
• If new higher-priority processes keep coming into the ready queue, a process in the waiting state may need to wait for a long duration of time.

Ex: Priority CPU Scheduling

Input:
process no   -> 1 2 3 4 5
arrival time -> 0 1 3 2 4
burst time   -> 3 6 1 2 4
priority     -> 3 4 9 7 8
Output:
Process  Start_Time  Complete_Time  Turn_Around_Time  Waiting_Time
1        0           3              3                 0
2        3           9              8                 2
4        9           11             9                 7
3        11          12             9                 8
5        12          16             12                8

Average Waiting Time is: 5.0
Average Turn Around Time is: 8.2

Preemptive Priority Scheduling


In preemptive priority scheduling, at the time of arrival of a process in the ready queue, its priority is compared with the priorities of the other processes present in the ready queue, as well as with that of the process being executed by the CPU at that point of time. The one with the highest priority among all the available processes will be given the CPU next.

The difference between preemptive and non-preemptive priority scheduling is that, in preemptive priority scheduling, the job being executed can be stopped at the arrival of a higher-priority job.

Once all the jobs are available in the ready queue, the algorithm behaves as non-preemptive priority scheduling: the scheduled job runs to completion and no preemption is done.

Example
There are 7 processes P1, P2, P3, P4, P5, P6 and P7. Their respective priorities, arrival times and burst times are given in the table below (a lower number means a higher priority).

Process Id  Priority  Arrival Time  Burst Time
1           2 (H)     0             1
2           6         1             7
3           3         2             3
4           5         3             6
5           4         4             5
6           10 (L)    5             15
7           9         6             8

GANTT chart Preparation


At time 0, P1 arrives with a burst time of 1 unit and priority 2. Since no other process is available, it will be scheduled until the next job arrives or until its completion (whichever is earlier).

At time 1, P2 arrives. P1 has completed its execution and no other process is available at this time, hence the operating system schedules P2 regardless of the priority assigned to it.
The next process, P3, arrives at time unit 2; the priority of P3 is higher than that of P2. Hence the execution of P2 will be stopped and P3 will be scheduled on the CPU.

During the execution of P3, three more processes P4, P5 and P6 become available. Since all three have priorities lower than that of the process in execution, none of them can preempt it. P3 will complete its execution, and then P5 will be scheduled, as it has the highest priority among the available processes.

During the execution of P5, all the processes become available in the ready queue. At this point, the algorithm starts behaving as non-preemptive priority scheduling: from now on, the OS simply takes the process with the highest priority and executes it to completion. Accordingly, P4 will be scheduled next and executed until completion.

Once P4 is completed, the process with the highest priority available in the ready queue is P2. Hence P2 will be scheduled next.

P2 is given the CPU until completion. Since its remaining burst time is 6 units, P7 will be scheduled after it.

The only remaining process is P6, with the lowest priority; the operating system has no choice but to execute it. It will be executed last.

The completion time of each process is determined with the help of the GANTT chart. The turnaround time and the waiting time can be calculated by the following formulas:

Turn Around Time = Completion Time - Arrival Time
Waiting Time = Turn Around Time - Burst Time

Process Id  Priority  Arrival Time  Burst Time  Completion Time  Turn Around Time  Waiting Time
1           2         0             1           1                1                 0
2           6         1             7           22               21                14
3           3         2             3           5                3                 0
4           5         3             6           16               13                7
5           4         4             5           10               6                 1
6           10        5             15          45               40                25
7           9         6             8           30               24                16

Avg Waiting Time = (0+14+0+7+1+25+16)/7 = 63/7 = 9 units

Round Robin CPU Scheduling


• The name of this algorithm comes from the round-robin principle, where each person gets an equal share of something in turns. It is the oldest and simplest scheduling algorithm, and it is mostly used for multitasking.
• In round-robin scheduling, each ready task runs turn by turn in a cyclic queue for a limited time slice. This algorithm also offers starvation-free execution of processes.

Characteristics of Round-Robin Scheduling


• Round robin is a pre-emptive algorithm.
• The CPU is shifted to the next process after a fixed interval of time, called the time quantum (time slice).
• The process that is preempted is added to the end of the queue.
• Round robin is a hybrid, clock-driven model.
• The time slice should be the minimum that suits the specific tasks to be processed; it may differ from OS to OS.
• It is a real-time algorithm in the sense that it responds to events within a specific time limit.
• Round robin is one of the oldest, fairest, and easiest algorithms.
• It is a widely used scheduling method in traditional operating systems.

Example of Round-robin Scheduling



Step 7) Let's calculate the average waiting time for the above example.

Wait time:
P1 = 0 + 4 = 4
P2 = 2 + 4 = 6
P3 = 4 + 3 = 7
Average waiting time = (4 + 6 + 7)/3 ≈ 5.67

Advantage of Round-robin Scheduling


• It doesn't face the issues of starvation or the convoy effect.
• All the jobs get a fair allocation of CPU.
• It deals with all processes without any priority.
• This scheduling method does not depend upon burst time, which is why it is easily implementable on the system.

• Once a process has executed for a specific period of time, it is preempted, and another process executes for that same time period.
• It allows the OS to use the context-switching method to save the states of preempted processes.
• It gives the best performance in terms of average response time.

Disadvantages of Round-robin Scheduling


• If the slicing time of the OS is low, processor output will be reduced.
• This method spends more time on context switching.
• Its performance heavily depends on the time quantum.
• Priorities cannot be set for the processes.
• Round-robin scheduling doesn't give special priority to more important tasks.
• A lower time quantum results in higher context-switching overhead in the system.
• Finding a correct time quantum is quite a difficult task in this system.

Ex: In the following example, there are six processes named P1, P2, P3, P4, P5 and P6. Their arrival times and burst times are given below in the table. The time quantum of the system is 4 units.

Process ID  Arrival Time  Burst Time
1           0             5
2           1             6
3           2             3
4           3             1
5           4             5
6           6             4

According to the algorithm, we have to maintain the ready queue and the Gantt chart. The structure of both data structures will change after every scheduling event.

Ready Queue:
Initially, at time 0, process P1 arrives and is scheduled for a time slice of 4 units. Hence, at the start the ready queue contains only process P1, with a CPU burst time of 5 units.

Ready Queue
During the execution of P1, four more processes P2, P3, P4 and P5 arrive in the ready queue. P1 has not completed yet; it needs another 1 unit of time, hence it is added back to the ready queue.

GANTT chart
After P1, P2 will be executed for 4 units of time, as shown in the Gantt chart.

Ready Queue
During the execution of P2, one more process, P6, arrives in the ready queue. Since P2 has not completed yet, P2 is also added back to the ready queue with a remaining burst time of 2 units.

GANTT chart
After P1 and P2, P3 will be executed for 3 units of time, since its CPU burst time is only 3 units.

Ready Queue
Since P3 has completed, it is terminated and not added back to the ready queue. The next process to be executed is P4.

P4  P5  P1  P6  P2
1   5   1   4   2

GANTT chart
After P1, P2 and P3, P4 will be executed. Its burst time is only 1 unit, which is less than the time quantum, hence it will be completed.
Ready Queue
The next process in the ready queue is P5, with 5 units of burst time. Since P4 is completed, it is not added back to the queue.

P5  P1  P6  P2
5   1   4   2

GANTT chart
P5 will be executed for the whole time slice, because its burst time of 5 units is higher than the time slice.

Ready Queue
P5 has not completed yet; it is added back to the queue with a remaining burst time of 1 unit.

P1  P6  P2  P5
1   4   2   1

GANTT Chart
Process P1 is given the next turn to complete its execution. Since it requires only 1 unit of burst time, it will be completed.

Ready Queue
P1 is completed and is not added back to the ready queue. The next process, P6, requires only 4 units of burst time and will be executed next.

P6  P2  P5
4   2   1

GANTT chart
P6 will be executed for 4 units of time, until completion.

Ready Queue
Since P6 is completed, it is not added back to the queue. There are only two processes left in the ready queue. The next process, P2, requires only 2 units of time.

P2  P5
2   1

GANTT Chart
P2 will be executed again; since it requires only 2 units of time, it will be completed.

Ready Queue
Now the only available process in the queue is P5, which requires 1 unit of burst time. Since the time slice is 4 units, it will be completed in the next burst.

P5
1

GANTT chart
P5 will be executed till completion.

Process ID  Arrival Time  Burst Time  Completion Time  Turn Around Time  Waiting Time
1           0             5           17               17                12
2           1             6           23               22                16
3           2             3           11               9                 6
4           3             1           12               9                 8
5           4             5           24               20                15
6           6             4           21               15                11

Avg Waiting Time = (12+16+6+8+15+11)/6 = 76/6 ≈ 12.67 units

Which scheduling policy is most suitable for a time-shared operating system?
(a) Shortest Job First        (b) Round Robin
(c) First Come First Served   (d) Elevator

Ans: (b) Round Robin

Which of the following scheduling algorithms is non-preemptive?
(a) Round-Robin                    (b) First In First Out
(c) Multilevel Queue Scheduling    (d) Multilevel Queue Scheduling with Feedback

Ans: (b) First In First Out

Assume that the following jobs are to be executed on a single-processor system:
-----------------------
Job-Id   CPU-Burst Time
-----------------------
 p           4
 q           1
 r           8
 s           1
 t           2
-----------------------
The jobs are assumed to have arrived at time 0 and in the order p, q, r, s, t.
Calculate the departure time (completion time) for job p if scheduling is round
robin with time slice 1.
(a) 4   (b) 10   (c) 11   (d) 12

Ans: option (c)

Gantt chart (slice end times below each slice; r runs alone from 11 to 16):
p q r s t p r t p r  p  r
1 2 3 4 5 6 7 8 9 10 11 16

Consider the following set of processes, with the arrival times and the CPU burst times given in milliseconds.
-------------------------------------
Process    Arrival-Time    Burst-Time
-------------------------------------
 P1             0             5
 P2             1             3
 P3             2             3
 P4             4             1
-------------------------------------
What is the average turnaround time for these processes with the preemptive shortest remaining time first (SRTF) algorithm?
(a) 5.50    (b) 5.75    (c) 6.00    (d) 6.25

Ans: option (a)

Gantt chart (SRTF): P1 (0-1), P2 (1-4), P4 (4-5), P3 (5-8), P1 (8-12)

Calculate the Turn Around Time (TAT) for each process as shown in the table below.
TAT = Completion Time - Arrival Time
-------------------------------------
Pro    AT      BT   CT     TAT(CT-AT)
-------------------------------------
 P1     0       5   12        12
 P2     1       3    4         3
 P3     2       3    8         6
 P4     4       1    5         1
-------------------------------------
Avg TAT = (12+3+6+1)/4 = 5.50
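The SRTF schedule above can be verified with a unit-time simulation. A minimal Python sketch:

```python
def srtf_average_tat(jobs):
    """jobs: list of (name, arrival, burst). Runs preemptive shortest-
    remaining-time-first one time unit at a time; returns the average
    turnaround time (completion - arrival)."""
    arrival = {name: arr for name, arr, _ in jobs}
    remaining = {name: burst for name, _, burst in jobs}
    completion, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        run = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[run] -= 1
        time += 1
        if remaining[run] == 0:
            del remaining[run]
            completion[run] = time
    return sum(completion[n] - arr for n, arr, _ in jobs) / len(jobs)

jobs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 3), ("P4", 4, 1)]
print(srtf_average_tat(jobs))  # 5.5
```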

Process Synchronization

Process Synchronization is a way to coordinate processes that use
shared data. It occurs in an operating system among cooperating
processes. Cooperating processes are processes that share
resources. While executing many concurrent processes, process
synchronization helps to maintain shared-data consistency and
correct cooperating-process execution. Processes have to be
scheduled to ensure that concurrent access to shared data does
not create inconsistencies. Data inconsistency can result in what
is called a race condition. A race condition occurs when two or
more operations are executed at the same time, are not scheduled
in the proper sequence, or do not exit the critical section
correctly.

• When two or more processes cooperate with each other, their
order of execution must be preserved; otherwise there can be
conflicts in their execution and incorrect outputs can be
produced.
• A cooperative process is one which can affect the execution of
another process or be affected by the execution of another
process. Such processes need to be synchronized so that their
order of execution can be guaranteed.
• The procedure involved in preserving the appropriate order of
execution of cooperative processes is known as Process
Synchronization. There are various synchronization
mechanisms that are used to synchronize the processes.

Race Condition
A race condition typically occurs when two or more threads try to
read, write, and possibly make decisions based on memory that
they are accessing concurrently.

Critical Section
The regions of a program that access shared resources and may
cause race conditions are called critical sections. To avoid race
conditions among the processes, we need to ensure that only one
process at a time can execute within the critical section.
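As an illustrative sketch (not from the slides), the following Python threading example protects a critical section with a lock, so four threads incrementing a shared counter never interleave their read-modify-write steps:

```python
import threading

counter = 0                      # shared data
lock = threading.Lock()          # guards the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # only one thread may execute this block at a time
            counter += 1         # read-modify-write on the shared variable

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                   # 400000: no updates are lost
```

Without the lock, `counter += 1` is a non-atomic load, add, and store, so concurrent threads could lose updates: exactly the race condition described above.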

How Process Synchronization Works?

For example, suppose process A is changing the data in a memory
location while another process B is trying to read the data from
the same memory location. There is a high probability that the
data read by the second process will be erroneous.

[Figure: processes A and B accessing the same memory location]

Sections of a Program
Here are the four essential elements of the critical section:
• Entry Section: The part of the process which decides whether a
particular process may enter the critical section.
• Critical Section: This part allows one process at a time to
enter and modify the shared variable.
• Exit Section: The exit section allows the other processes that
are waiting in the entry section to enter the critical section.
It also ensures that a process that has finished its execution
is removed through this section.
• Remainder Section: All other parts of the code, which are not
in the critical, entry, or exit sections, are known as the
remainder section.
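The four sections above can be sketched in Python (a hypothetical `worker` function, using `threading.Lock` to play the role of the entry and exit sections):

```python
import threading

lock = threading.Lock()
shared = []                      # data guarded by the critical section

def worker(item):
    lock.acquire()               # Entry section: wait for permission to enter
    try:
        shared.append(item)      # Critical section: one thread modifies shared data
    finally:
        lock.release()           # Exit section: let a waiting thread enter
    local_result = item * 2      # Remainder section: touches no shared data
    return local_result

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))            # [0, 1, 2, 3, 4, 5, 6, 7]
```

The `try`/`finally` ensures the exit section runs even if the critical section raises, so no waiting thread is blocked forever.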

What is a Process?
A process is a program in execution. A process is not the same as
its program code, but a lot more than that. A process is an
'active' entity, as opposed to a program, which is considered a
'passive' entity. Attributes held by a process include its
hardware state, memory, CPU, etc.

• Process memory is divided into four sections for efficient
working:
• The Text section is made up of the compiled program code, read
in from non-volatile storage when the program is launched.
• The Data section is made up of the global and static variables,
allocated and initialized prior to executing main.
• The Heap is used for dynamic memory allocation, and is managed
via calls to new, delete, malloc, free, etc.
• The Stack is used for local variables. Space on the stack is
reserved for local variables when they are declared.

Basically there are two types of process:
1. Independent process.
2. Cooperating process.
a) Independent Process ---> a process which does not need any
other external factor to trigger it is an independent process.
b) Cooperating Process ---> a process which works on the
occurrence of an event, and whose outcome affects some part of
the rest of the system, is a cooperating process.
• For execution, a process should be a mixture of CPU-bound and
I/O-bound activity.
• CPU bound: the time a process resides in the processor and
performs its execution.
• I/O bound: the time in which a process performs input/output
operations, e.g. taking input from the keyboard or displaying
output on the monitor.
