
Operating System

Lecture # 15, 16, 17

Recap of Lecture # 08, 09 & 10


- Process concept
- Process states
- Process scheduling concepts
- Process creation and termination

Overview of Lecture
- Basic concepts of CPU scheduling
- Scheduling criteria
- Preemptive and non-preemptive algorithms
- First-Come-First-Serve scheduling algorithm

Basic Concepts
The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. In a uniprocessor system, only one process may run at a time; any other processes must wait until the CPU is free and can be rescheduled. In multiprogramming, a process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU would then sit idle; all this waiting time is wasted. Multiprogramming entails productive usage of this time. When one process has to wait, the OS takes the CPU away from that process and gives the CPU to another process. Almost all computer resources are scheduled before use.

Life of a Process
Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst. An I/O burst follows that, and so on. Eventually, the last CPU burst will end with a system request to terminate execution, rather than with another I/O burst. An I/O-bound program would typically have many very short CPU bursts. A CPU-bound program might have a few very long CPU bursts. This distribution can help us select an appropriate CPU-scheduling algorithm.

CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The short-term scheduler (i.e., the CPU scheduler) selects a process in the ready queue and gives it the CPU. It selects from among the processes in memory that are ready to execute, and invokes the dispatcher to have the CPU allocated to the selected process. A ready queue can be implemented as a FIFO queue, a tree, or simply an unordered linked list. The records (nodes) in the ready queue are generally the process control blocks (PCBs) of processes.
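Since the ready queue commonly holds PCBs in a linked list, a minimal sketch in C may help. The structure and field names here are illustrative, not from any real kernel:

/* A simplified PCB; a real PCB also holds register state, memory
 * maps, open files, and so on. */
struct pcb {
    int pid;
    struct pcb *next;    /* link to the next PCB in the ready queue */
};

/* FIFO ready queue: enqueue at the tail, dequeue from the head. */
struct ready_queue {
    struct pcb *head, *tail;
};

void enqueue(struct ready_queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else q->head = p;
    q->tail = p;
}

struct pcb *dequeue(struct ready_queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;    /* NULL when the queue is empty */
}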

Dispatcher
The dispatcher is a kernel module that takes control of the CPU from the current process and gives it to the process selected by the short-term scheduler. This function involves:
- Switching the context (i.e., saving the context of the current process and restoring the context of the newly selected process)
- Switching to user mode
- Jumping to the proper location in the user program to restart that program

The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency (typically a few microseconds).

Preemptive and Non-Preemptive Scheduling


CPU scheduling can take place under the following circumstances:

1. When a process switches from the running state to the waiting state (for example, when it issues an I/O request)
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, on completion of I/O)
4. When a process terminates

Under circumstances 1 and 4, there is no choice in terms of scheduling; a new process must be selected for execution. There is a choice under circumstances 2 and 3. When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is non-preemptive; otherwise it is preemptive.

Scheduling Criteria
The scheduling criteria include:

CPU utilization: We want to keep the CPU as busy as possible. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per unit time, called throughput. We want to maximize throughput.

Turnaround time: The interval from the time of submission to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. We want to minimize the turnaround time.

Waiting time: Waiting time is the time spent waiting in the ready queue. We want to minimize the waiting time to increase CPU efficiency.

Response time: The time from the submission of a request until the first response is produced. Thus response time is the amount of time it takes to start responding, not the time it takes to output that response. Response time should be minimized.
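These metrics are related in a simple way: for a process that does no I/O, waiting time is just turnaround time minus the CPU burst. A small sketch in C, with field names chosen for illustration:

/* Timing metrics for a process that performs no I/O (times in ticks). */
struct proc_times {
    int arrival;      /* time the process entered the system  */
    int first_run;    /* time the process first got the CPU   */
    int completion;   /* time the process finished            */
    int burst;        /* total CPU time consumed              */
};

int turnaround_time(const struct proc_times *p) { return p->completion - p->arrival; }
int waiting_time(const struct proc_times *p)    { return turnaround_time(p) - p->burst; }
int response_time(const struct proc_times *p)   { return p->first_run - p->arrival; }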

Optimization Criteria
- Maximize CPU utilization
- Maximize throughput
- Minimize turnaround time
- Minimize waiting time
- Minimize response time

Scheduling Algorithms
Commonly used short-term scheduling algorithms:
- First-Come-First-Served (FCFS) Scheduling
- Shortest Job First (SJF) Scheduling
- Shortest Remaining Time First (SRTF) Scheduling
- Priority Scheduling
- Round-Robin Scheduling
- Multilevel Queues Scheduling
- Multilevel Feedback Queues Scheduling
- UNIX System V Scheduling

First-Come, First-Served (FCFS) Scheduling

The process that requests the CPU first (i.e., enters the ready queue first) is allocated the CPU first. The implementation of the FCFS policy is managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue, and the running process is removed from the queue. The average waiting time under the FCFS policy is not minimal and may vary substantially if the process CPU-burst times vary greatly. FCFS is a non-preemptive scheduling algorithm. We use the following system state to demonstrate the working of this algorithm. For simplicity, we assume that all processes are in the ready queue at time 0.

Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive in the system in the order: P1, P2, P3. They are served in the order: P1, P2, P3. The Gantt chart for the schedule is shown in Figure 14.3.

Here are the waiting times for the three processes and the average waiting time per process.
Waiting times: P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17

Suppose that processes arrive in the order: P2, P3, P1. The Gantt chart for the schedule is shown in Figure 14.4:

Here are the waiting times for the three processes and the average waiting time per process.
Waiting times: P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3

When the FCFS scheduling algorithm is used, the convoy effect occurs: short processes wait behind a long process for the CPU and enter the ready queue in a convoy after completing their I/O. This results in lower CPU and device utilization than would be possible if the shorter processes were allowed to go first.
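The two orderings above can be checked with a short C function. Since all processes arrive at time 0, each process simply waits for the bursts of those ahead of it (the burst values are taken from the example; the function name is illustrative):

#include <stdio.h>

/* Average FCFS waiting time when all processes arrive at time 0. */
double fcfs_average_wait(const int burst[], int n) {
    int wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;        /* process i waits for all earlier bursts */
        wait += burst[i];
    }
    return (double)total / n;
}

int main(void) {
    int order1[] = {24, 3, 3};   /* P1, P2, P3 -> prints 17.0 */
    int order2[] = {3, 3, 24};   /* P2, P3, P1 -> prints 3.0  */
    printf("%.1f\n", fcfs_average_wait(order1, 3));
    printf("%.1f\n", fcfs_average_wait(order2, 3));
    return 0;
}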

Shortest-Job-First (SJF) Scheduling

This algorithm associates with each process the length of the latter's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If two processes have the same length next CPU burst, FCFS scheduling is used to break the tie. The real difficulty with the SJF algorithm is knowing the length of the next CPU request. For long-term scheduling in a batch system, we can use as the length the process time limit that a user specifies when submitting the job. For short-term CPU scheduling, there is no way to know the length of the next CPU burst. One approach is to approximate SJF scheduling by assuming that the next CPU burst will be similar in length to the previous ones. The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts. Let tn be the length of the nth CPU burst and τn+1 be our predicted value for the next CPU burst. Then, for a parameter α with 0 ≤ α ≤ 1, the prediction is:

τn+1 = α tn + (1 − α) τn
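This predictor is straightforward to implement. Below is a minimal sketch in C; the function name and the choice of α = 0.5 are illustrative, not from the lecture:

/* Exponentially averaged prediction of the next CPU burst.
 * tau is the previous prediction, t the last measured burst, and
 * alpha (0..1) weights recent history; alpha = 0.5 is a common choice. */
double predict_next_burst(double tau, double t, double alpha) {
    return alpha * t + (1.0 - alpha) * tau;
}

For example, starting from an initial guess τ0 = 10 with α = 0.5, measured bursts of 6 and 4 yield predictions τ1 = 8 and τ2 = 6.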

Priority Scheduling
SJF is a special case of the general priority-scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority (by convention, the smallest integer denotes the highest priority). Equal-priority processes are scheduled in FCFS order. The SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst of a process, the lower its priority, and vice versa. Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority-scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process; a non-preemptive algorithm will simply put the new process at the head of the ready queue.
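The selection step itself is simple. A minimal sketch in C (the function and array names are illustrative):

/* Return the index of the ready process with the highest priority
 * (smallest priority number). Strict '<' preserves FCFS order among
 * equal-priority processes when the array is kept in arrival order. */
int pick_highest_priority(const int prio[], int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (prio[i] < prio[best])
            best = i;
    return best;
}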

Why is SJF optimal?

SJF is an optimal algorithm because it decreases the wait times for short processes much more than it increases the wait times for long processes. Let's consider the example shown in Figure 16.1, in which the next CPU bursts of P1, P2, and P3 are 5, 3, and 2, respectively. The first Gantt chart shows execution of the processes according to the longest-job-first algorithm, resulting in waiting times for P1, P2, and P3 of 0, 5, and 8 time units. The second Gantt chart shows execution of the processes according to the shortest-job-first algorithm, resulting in waiting times for P1, P2, and P3 of 5, 2, and 0. Note that the waiting time for P2 has decreased from 5 to 2 and that of P3 has decreased from 8 to 0. The increase in the wait time for P1 is from 0 to 5, which is much smaller than the combined decrease in the wait times for P2 and P3.

Round-Robin Scheduling

The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum (or time slice), is defined. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to one time quantum. To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and then dispatches the process. One of two things will then happen. The process may have a CPU burst of less than one time quantum, in which case the process itself releases the CPU voluntarily. Otherwise, the timer goes off and causes an interrupt to the operating system; a context switch is executed, the current process is put at the tail of the ready queue, and the scheduler selects the next process.
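The following sketch in C simulates this behavior, under the simplifying assumptions that all processes are ready at time 0 and perform no I/O (cyclically scanning a fixed array while skipping finished entries then behaves like the circular queue):

#include <stdio.h>

/* Simulate round-robin over the given CPU bursts and print each
 * process's completion time. burst[] is modified in place. */
void round_robin(int burst[], int n, int quantum) {
    int remaining = n, t = 0;
    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;               /* already finished */
            int slice = burst[i] < quantum ? burst[i] : quantum;
            t += slice;                                /* run for one slice */
            burst[i] -= slice;
            if (burst[i] == 0) {
                printf("P%d completes at time %d\n", i + 1, t);
                remaining--;
            }
        }
    }
}

int main(void) {
    int burst[] = {6, 3, 1, 7};   /* the workload discussed below */
    round_robin(burst, 4, 2);     /* quantum of 2 time units */
    return 0;
}

Running it with a quantum of 6 or more lets each of these processes finish within its first turn, which matches the observation about quantum size made next.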

The turnaround time of a process under round robin also depends on the size of the time quantum. In Figure 16.3 we show a workload of four processes P1, P2, P3, and P4 with next CPU bursts of 6, 3, 1, and 7 time units. The graph in the figure shows that the best (smallest) turnaround time is achieved when the quantum size is 6 or greater. Note that most of the given processes finish their next CPU bursts with a quantum of 6 or greater. We can make a general statement that the round-robin algorithm gives the smallest average turnaround time when the quantum value is chosen such that most of the processes finish their next CPU bursts within the quantum.

We now consider the following system workload to illustrate the working of the round-robin algorithm. Execution of P1 through P4 with a quantum of 20 is shown in Figure 16.4. In the table, original CPU bursts are shown in bold and remaining CPU bursts (after a process has used the CPU for one quantum) are shown in a non-bold font.

Figure 16.5 shows the wait and turnaround times for the four processes. The average wait time per process comes out to be 73 time units for round robin and 38 for SJF. Typically, RR has a higher average turnaround time than SJF, but better response time. In time-sharing systems, a shorter response time for a process is more important than a shorter turnaround time. Thus, the round-robin scheduler matches the requirements of time-sharing systems better than the SJF algorithm does. The SJF scheduler is better suited for batch systems, in which minimizing the turnaround time is the main criterion.

Multilevel Queue Scheduling

Another class of scheduling algorithms has been created for situations in which processes are easily classified into different groups. For example, a common division is made between foreground (or interactive) processes and background (or batch) processes. These two types of processes have different response-time requirements and so might have different scheduling needs. In addition, foreground processes may have priority over background processes. A multilevel queue-scheduling algorithm partitions the ready queue into several separate queues, as shown in Figure 16.5. Each queue has its own priority and scheduling algorithm. Processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type.
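A minimal sketch of the foreground/background split in C follows; the fixed capacity, type names, and fixed-priority selection between the queues are illustrative assumptions, not from the lecture:

#define QCAP 64

/* A simple FIFO of process IDs; one per queue level. */
struct fifo { int pid[QCAP]; int head, tail; };

static void fifo_push(struct fifo *q, int pid) { q->pid[q->tail++ % QCAP] = pid; }
static int  fifo_empty(const struct fifo *q)   { return q->head == q->tail; }
static int  fifo_pop(struct fifo *q)           { return q->pid[q->head++ % QCAP]; }

/* Fixed-priority selection between the levels: a background process
 * runs only when the foreground queue is empty. */
int mlq_pick(struct fifo *fg, struct fifo *bg) {
    if (!fifo_empty(fg)) return fifo_pop(fg);
    if (!fifo_empty(bg)) return fifo_pop(bg);
    return -1;   /* no ready process */
}

Each queue could also run its own algorithm internally, for example round robin for the foreground queue and FCFS for the background queue.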

The Thread Concept

There are two main issues with processes:
1. The fork() system call is expensive: it requires a memory-to-memory copy of the executable image of the calling process and allocation of kernel resources to the child process.
2. An inter-process communication (IPC) mechanism is needed for cooperating processes to exchange data, which requires intervention by the kernel.
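Threads address both issues: creating a thread does not copy the process image, and threads in the same process share memory directly. Here is a minimal sketch using POSIX threads; the function and variable names are illustrative:

#include <pthread.h>
#include <stdio.h>

/* The thread runs in the same address space as its creator, so data
 * can be shared through ordinary variables with no kernel-mediated IPC. */
void *worker(void *arg) {
    int *value = (int *)arg;
    printf("worker sees value = %d\n", *value);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int shared = 42;

    /* Creating a thread is much cheaper than fork(): no memory-to-memory
     * copy of the executable image is needed. */
    if (pthread_create(&tid, NULL, worker, &shared) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);   /* wait for the worker to finish */
    return 0;
}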


Summary
- Basic concepts of CPU scheduling
- Scheduling criteria
- Preemptive and non-preemptive scheduling
- FCFS, SJF, priority, round-robin, and multilevel queue scheduling
- The thread concept
