
CPU Scheduling

General rule — keep the CPU busy; an idle CPU is a wasted CPU

Major source of CPU idleness: I/O (or waiting for it)

Many programs have a characteristic CPU–I/O burst cycle — alternating phases of CPU activity and I/O inactivity

CPU-bound programs have fewer, longer CPU bursts

I/O-bound programs have more, shorter CPU bursts

Core Definitions

CPU scheduling (a.k.a. short-term scheduling) is the act of selecting the next process for the CPU to “service” once the current process leaves the CPU idle

Many algorithms for making this selection

Implementation-wise, CPU scheduling manipulates the operating system’s various PCB queues

The dispatcher is the software that performs the dirty work of passing the CPU to the next selected process — the time to do this is called the dispatch latency

Preemptive vs. Cooperative Scheduling

Two levels at which a program can relinquish the CPU

First level = cooperative scheduling

Process explicitly goes from running to waiting (e.g., system call for I/O or awaiting child termination), relinquishing the CPU

Process terminates

Cooperative scheduling (or “cooperative multitasking”) was used in older personal computer operating systems (< Windows 95, < Mac OS X), primarily due to what PC hardware could do

Second level = preemptive scheduling

Interrupt causes a process to go from running to ready state

Interrupt (e.g., asynchronous completion of I/O) causes a process to go from waiting to ready state

Key word is “interrupt” — this is a hardware-level feature that is required for preemptive scheduling

Prevents a process from “running away” with the CPU

But brings up new issues of its own:

Coordinating access to shared data

Can you interrupt an interrupt?

Scheduling Criteria

There is no “one, true scheduling algorithm,” because the “goodness” of an algorithm can be measured by many, sometimes contradictory, criteria:

CPU utilization — Give the CPU steady work

Throughput — “Work” completed per unit time; one definition of work is a completed process

Turnaround time — Real time to process completion

Waiting time — Time spent in the ready queue

Response time — Time from request to response

Note how the purpose of a system determines the appropriateness of a criterion: throughput is applicable mainly when the individual runs of a program correspond to completions of a task, while response time is crucial to interactive systems (e.g., graphical user interfaces)

Typical goals: maximize CPU utilization and throughput, but minimize the time-based measures (turnaround, waiting, response)

Other choices for optimization:

Optimize the average measure, or the min/max (i.e., how bad can the “worst-case scenario” be?)

Minimize variance — a meaningful measure for interactive systems, resulting in better predictability, but not much work has been done in this area thus far

First-Come, First-Served (FCFS) Scheduling

Simple premise: the sooner a process asks for the CPU, the sooner it gets it; subsequent processes wait in line until the ones before them finish up (or fall into an I/O wait state)

This is a cooperative algorithm — the CPU can’t be taken away from a process

Simple implementation: FIFO queue

Analysis: order significantly affects the average waiting time; certain combinations of CPU- and I/O-bound processes decrease utilization
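To make the order effect concrete, here is a minimal sketch (the burst times are made up for illustration) that computes the FCFS average waiting time for the same three processes in two arrival orders:

def fcfs_average_wait(bursts):
    # Under FCFS, each process waits for the total of all bursts scheduled before it.
    wait = 0
    elapsed = 0
    for burst in bursts:
        wait += elapsed
        elapsed += burst
    return wait / len(bursts)

print(fcfs_average_wait([24, 3, 3]))   # long burst first -> 17.0
print(fcfs_average_wait([3, 3, 24]))   # long burst last  -> 3.0

With the long burst first, the short jobs sit stuck behind it (the convoy effect); with it last, the average waiting time drops from 17 to 3.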

Shortest-Job-First Scheduling (SJF)

Requires knowledge of CPU burst durations: give the CPU to the process with the shortest next CPU burst; provably minimizes the average waiting time

But how the heck do we know the length of the next CPU burst? Better fit for batch systems (long-term scheduling), where users can assign time limits to jobs

CPU scheduling can approximate SJF by predicting the next burst length — generally via exponential average
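As a rough sketch of that prediction, the exponential average computes: next estimate = alpha * last measured burst + (1 - alpha) * previous estimate. The alpha of 0.5, the initial guess of 10, and the measured bursts below are all assumptions for illustration:

def predict_next_burst(last_burst, prev_estimate, alpha=0.5):
    # Exponential average: recent history weighted by alpha, older history by (1 - alpha).
    return alpha * last_burst + (1 - alpha) * prev_estimate

estimate = 10.0                            # arbitrary initial guess
for measured in [6, 4, 6, 4, 13, 13, 13]:  # hypothetical measured bursts
    estimate = predict_next_burst(measured, estimate)
    print(f"measured {measured:2d} -> next estimate {estimate:.2f}")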

In the preemptive flavor, we change processes if the next CPU burst < what’s left of the current one

Priority Scheduling

Assign a priority p to a process (typically lower p = higher priority, but this isn’t set in stone), and give the CPU to the process with the highest priority

Note how SJF is a special case of priority scheduling: p is the inverse of the next predicted CPU burst

Processes of equal priority are scheduled FCFS

Priorities range from internally-calculated metrics (time limits, memory requirements, open files, I/O-to-CPU burst ratio) to external factors

Comes in both preemptive and cooperative flavors:

Preemptive version interrupts the currently running process when a higher-priority process comes in

Cooperative version puts the higher-priority process at the top of the queue and waits for the currently running process to relinquish the CPU

Key issue: indefinite blocking or starvation of a process — low-priority processes may wait forever if higher-priority ones keep showing up

Address starvation through aging: gradually increase a process’s priority as its waiting time increases — caps the maximum waiting time
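A small sketch of the idea, with an assumed aging threshold and a plain list standing in for the ready queue (lower number = higher priority, matching the convention above):

AGING_THRESHOLD = 5   # hypothetical number of scheduling rounds before a priority bump

def pick_next(ready):
    # ready: list of dicts with "name", "priority", "waited" fields.
    for proc in ready:
        proc["waited"] += 1
        if proc["waited"] >= AGING_THRESHOLD and proc["priority"] > 0:
            proc["priority"] -= 1          # aging: promote a long-waiting process
            proc["waited"] = 0
    chosen = min(ready, key=lambda p: p["priority"])   # ties resolve in list (FCFS) order
    ready.remove(chosen)
    return chosen

ready = [{"name": "A", "priority": 3, "waited": 0},
         {"name": "B", "priority": 1, "waited": 0}]
print(pick_next(ready)["name"])   # "B": highest priority (lowest number) wins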

Round-Robin (RR) Scheduling

Necessarily preemptive: defines a time quantum or time slice; processes never have the CPU for > 1 quantum

Implementation: maintain the ready queue as a circular FIFO, then traverse each PCB

Two possibilities:

Current process relinquishes in < 1 quantum

1 quantum passes, resulting in a timer interrupt

In both cases, the “done” process is moved to the tail of the queue, and the new head becomes current

Average waiting time is longer with RR, in exchange for better response time with a sufficiently small quantum

Note how RR is effectively FCFS if the quantum is sufficiently large, unless CPU bursts are really, really long

With RR, the cost of a context switch gains significance:

We want quantum > context-switch time, preferably by multiple orders of magnitude (i.e., milli- vs. µ-seconds)

But again, not too large! Or else we’re back at FCFS

Rule of thumb: choose the time quantum so that 80% of CPU bursts are shorter than that quantum

Interesting tidbit: turnaround time is not necessarily proportional (direct or inverse) to the time quantum
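The mechanics above can be sketched in a few lines; the workload and the quantum are assumptions chosen only to show the circular queue in action:

from collections import deque

def round_robin(bursts, quantum):
    # bursts maps process name to remaining CPU time; the ready queue is a circular FIFO.
    queue = deque(bursts.items())
    clock = 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # at most one quantum per turn
        clock += run
        remaining -= run
        print(f"t={clock:3d}: ran {name} for {run}")
        if remaining > 0:
            queue.append((name, remaining))  # unfinished: back to the tail

round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)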

Multilevel Queue Scheduling

Our first “composite” algorithm: partition processes into different queues, each with its own scheduling algorithm (as appropriate for that queue)

Canonical example: interactive processes use RR in one queue, batch processes use FCFS in another

Now of course we have to schedule among queues:

Priority scheduling — queues have preset priorities

RR scheduling — each queue is given some quantum during which its processes do work
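A bare-bones sketch of scheduling among queues by preset priority; the queue names and the strict-priority rule are assumptions for illustration (the per-queue policies would run inside each queue):

from collections import deque

queues = {
    "interactive": deque(),   # would be scheduled RR internally
    "batch": deque(),         # would be scheduled FCFS internally
}

def pick_queue():
    # Strict priority among queues: batch only runs when interactive is empty.
    if queues["interactive"]:
        return "interactive"
    if queues["batch"]:
        return "batch"
    return None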

Multilevel Feedback-Queue Scheduling

Multilevel queues + the ability for a process to move to another queue

For example, track CPU burst times and move CPU-bound processes to a lower-priority queue; vice versa for I/O-bound processes

Use aging to prevent starvation: long wait times for a process move it to a higher-priority queue

Lots of parameters to play with: number of queues, scheduling algorithms per queue, queue selection rules
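One possible sketch of those parameters, assuming a three-level setup where using a full quantum demotes a process and aging promotes it back to the top:

from collections import deque

NUM_QUEUES = 3
QUANTA = [2, 4, 8]                       # hypothetical quantum granted at each level
queues = [deque() for _ in range(NUM_QUEUES)]

def next_process():
    # Always serve the highest-priority non-empty queue.
    for level, q in enumerate(queues):
        if q:
            return level, q.popleft()
    return None, None

def used_full_quantum(proc, level):
    # Likely CPU-bound: demote one level (the bottom level keeps it).
    queues[min(level + 1, NUM_QUEUES - 1)].append(proc)

def waited_too_long(proc):
    # Aging: a long-waiting process goes back to the highest-priority queue.
    queues[0].append(proc)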

Multiple-Processor Scheduling

Potential for better overall performance

Focus on homogeneous multiple processors: allows any available processor to run any available process

Two approaches — most modern OSes do SMP:

Asymmetric multiprocessing has a single master processor (i.e., the one running OS code), relegating other processors to run user code only

Symmetric multiprocessing (SMP) allows each processor to make decisions for itself (“self-scheduling”)

Processor affinity — Processes tend to perform better if they stay on the same CPU, particularly due to caching

Soft affinity tries to keep processes on the same CPU, but doesn’t absolutely guarantee it

Hard affinity disallows process migration
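As a concrete illustration of hard affinity, Python on Linux exposes os.sched_setaffinity; the sketch below pins the calling process to CPU 0 (the CPU number is an arbitrary choice, and the call does not exist on all platforms):

import os

if hasattr(os, "sched_setaffinity"):                        # Linux-specific API
    print("allowed CPUs before:", os.sched_getaffinity(0))  # 0 = the calling process
    os.sched_setaffinity(0, {0})                            # restrict to CPU 0 only
    print("allowed CPUs after: ", os.sched_getaffinity(0))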

Load balancing — Ideally, all CPUs have about the same amount of work at any given time; counteracts processor affinity somewhat

Process migration may be push (overloaded CPUs dump on idle ones) or pull (idle CPUs yank someone else’s work)

Many systems do both (Linux, ULE in BSD)

Symmetric multithreading (SMT) — Hardware ability to present a single physical CPU as multiple logical CPUs: the OS doesn’t really need to know, but it may help to be aware of which physical CPU has which logical CPU

Thread Scheduling

Contention scope — determines a thread’s “competition” for the CPU: either with other threads within the same process (process-contention scope) or with other threads in the entire system (system-contention scope)

Contention scope is typically bound to the operating system’s threading model (many-to-one and many-to-many use process-contention scope; one-to-one uses system-contention scope)

Scheduling Examples

Solaris: Priority-based with 4 classes (real time, system, interactive, time sharing); multilevel feedback queue for the time-sharing and interactive classes; the lower the priority, the higher the time quantum

Windows XP: Priority-based with preemption; 32 priority levels split into a variable class (1–15) and a real-time class (16–31); the priority 0 thread does memory management; a special idle thread gets run if no ready threads are found

Linux: Priority-based with preemption; 141 priority levels split into real-time (0–99) and nice (100–140); the higher the priority, the higher the time quantum; for SMP, each CPU has its own runqueue

Mac OS X: Policy-based with preemption; priority is embedded in the scheduling policy — a standard policy uses a system-defined fair algorithm; a time constraint policy is for real-time needs; and a precedence policy allows externally-set priorities

Evaluating Scheduling Algorithms

As you’ve seen, different algorithms have different strengths, and no single one is ideal for absolutely every situation

So, we need techniques for quantifying the characteristics of each algorithm in order to make an informed choice

First things first — we need to specify the quantitative metrics for a “good” algorithm for our particular need:

“Maximum CPU utilization with maximum response time of 1 second”

“Maximum throughput with turnaround time linearly proportional to execution time”

Evaluation Techniques

Deterministic modeling: Real calculations on exact cases; simple and accurate, but requires exact information

Analytic evaluation: Superclass of deterministic modeling — reasoned, direct study of an algorithm’s properties

Queueing models: Model system resources as servers with queueing properties, then solve Little’s formula
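For example, Little’s formula n = λ × W relates the average queue length to the average arrival rate and the average waiting time; the numbers below are hypothetical:

arrival_rate = 7.0     # processes arriving per second (assumed)
average_wait = 2.0     # seconds each process waits on average (assumed)
print("average processes in the queue:", arrival_rate * average_wait)   # 14.0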

Simulations: Create a model of the system, then process statistical or real data (trace tapes)

Implementations: “Just do it” — then measure